MSCOCO Dataset Evaluation: caption_evaluation

I. Result format: for evaluation, prepare the generated captions in the following format (save them to a file with json.dump):

[{
    "image_id" : int,
    "caption"  : str,
}]

(The result file is a JSON list of such entries; an "id" field is assigned to each entry automatically by loadRes.)
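
A minimal sketch of writing such a result file (the image_id values and file name below are only illustrative):

import json

# hypothetical generated captions; each image_id must exist in the
# annotation file you evaluate against
results = [
    {"image_id": 404464, "caption": "a black and white photo of a street"},
    {"image_id": 380932, "caption": "a group of people standing in a room"},
]

with open('captions_val2014_fakecap_results.json', 'w') as f:
    json.dump(results, f)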

II. Evaluation demos / code

The evaluation code provided here can be used to obtain results on the publicly available COCO validation set. It computes multiple common metrics, including BLEU, METEOR, ROUGE-L, and CIDEr (references and descriptions of each metric are included).


1. coco

The code linked from the official site http://cocodataset.org/#download lives at: https://github.com/cocodataset/cocoapi
Its COCO evaluation code is installed together with cocoapi.
However, this COCOeval only handles the keypoints and instances tasks; it cannot be used for captions.

demo.py

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
import numpy as np

annType = ['segm', 'bbox', 'keypoints']
annType = annType[1]      # specify type here
prefix = 'person_keypoints' if annType == 'keypoints' else 'instances'
print('Running demo for *%s* results.' % (annType))

# initialize COCO ground truth api
dataDir = './coco'
dataType = 'val2014'
annFile = '%s/annotations/%s_%s.json' % (dataDir, prefix, dataType)
cocoGt = COCO(annFile)

# initialize COCO detections api
resFile = './coco/results/%s_%s_fakecap_results.json'
resFile = resFile % (prefix, dataType)
cocoDt = cocoGt.loadRes(resFile)

# evaluate on the first 100 images only
imgIds = sorted(cocoGt.getImgIds())
imgIds = imgIds[0:100]
imgId = imgIds[np.random.randint(100)]  # one random image (kept from the original notebook)

# run evaluation
cocoEval = COCOeval(cocoGt, cocoDt, annType)
cocoEval.params.imgIds = imgIds
cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()

2. coco-caption

The code linked from the official site http://cocodataset.org/#captions-eval lives at: https://github.com/tylin/coco-caption
It contains COCO's caption-specific evaluation code.

demo.py : see https://github.com/tylin/coco-caption/blob/master/cocoEvalCapDemo.ipynb
from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap
import json

dataDir = '.'
dataType = 'val2014'
algName = 'fakecap'
annFile = '%s/annotations/captions_%s.json' % (dataDir, dataType)
subtypes = ['results', 'evalImgs', 'eval']
[resFile, evalImgsFile, evalFile] = \
    ['%s/results/captions_%s_%s_%s.json' % (dataDir, dataType, algName, subtype)
     for subtype in subtypes]

# create the coco (ground truth) and cocoRes (results) objects
coco = COCO(annFile)
cocoRes = coco.loadRes(resFile)

# create the cocoEval object from coco and cocoRes
cocoEval = COCOEvalCap(coco, cocoRes)

# evaluate only on the images present in the result file;
# remove this line to evaluate the full validation set
cocoEval.params['image_id'] = cocoRes.getImgIds()

# run evaluation and print the scores
cocoEval.evaluate()
for metric, score in cocoEval.eval.items():
    print('%s: %.3f' % (metric, score))

# demo how to use evalImgs to retrieve a low-score result
evals = [eva for eva in cocoEval.evalImgs if eva['CIDEr'] < 30]
print('ground truth captions')
imgId = evals[0]['image_id']
annIds = coco.getAnnIds(imgIds=imgId)
anns = coco.loadAnns(annIds)
coco.showAnns(anns)

print('\n' + 'generated caption (CIDEr score %0.1f)' % (evals[0]['CIDEr']))
annIds = cocoRes.getAnnIds(imgIds=imgId)
anns = cocoRes.loadAnns(annIds)
coco.showAnns(anns)

# display the image (requires skimage and matplotlib)
img = coco.loadImgs(imgId)[0]
# I = io.imread('%s/images/%s/%s' % (dataDir, dataType, img['file_name']))
# plt.imshow(I)
# plt.axis('off')
# plt.show()

# plot score histogram
# ciderScores = [eva['CIDEr'] for eva in cocoEval.evalImgs]
# plt.hist(ciderScores)
# plt.title('Histogram of CIDEr Scores', fontsize=20)
# plt.xlabel('CIDEr score', fontsize=20)
# plt.ylabel('result counts', fontsize=20)
# plt.show()

# save evaluation results to the ./results folder
json.dump(cocoEval.evalImgs, open(evalImgsFile, 'w'))
json.dump(cocoEval.eval, open(evalFile, 'w'))

Notes:

1. If you cannot access the official site, the same content is reproduced at the end of this post: http://blog.csdn.net/ccbrid/article/details/79368639

2. Reference result format: https://github.com/tylin/coco-caption/blob/master/results/captions_val2014_fakecap_results.json

3. coco-caption is used directly as a package and does not need to be installed (see the sketch after this list).

4. Note that Python 2.7 is required; you can forcibly port the code to Python 3 syntax, but many errors will crop up.
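
A minimal sketch for note 3, assuming coco-caption has been cloned locally (the path below is hypothetical):

import sys

# hypothetical path to a local clone of https://github.com/tylin/coco-caption
sys.path.insert(0, '/path/to/coco-caption')

from pycocoevalcap.eval import COCOEvalCap  # importable without any installation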


III. Evaluation metrics

eval{
    "Bleu_1"  : float,      (BLEU is commonly used to evaluate machine translation)
    "Bleu_2"  : float,
    "Bleu_3"  : float,
    "Bleu_4"  : float,
    "METEOR"  : float,
    "ROUGE_L" : float,      (ROUGE is commonly used to evaluate summarization)
    "CIDEr"   : float,
}

(The key names above match what cocoEval.eval actually contains; note the "Bleu_*" capitalization.)


IV. Caveats

When using the 2014 image captioning data, the test set (used for the competition) ships without reference captions; according to the official paper, this is to prevent overfitting.

You can therefore make your own split (commonly train:val:test = 8:1:1). The original train set has 82,783 images and the original val set has 40,504; for example, keep train intact and split val into 20,252 validation and 20,252 test images.
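
A minimal sketch of such a split (file paths are illustrative; it divides the val2014 annotation file in half by image):

import json

# paths are illustrative
with open('annotations/captions_val2014.json') as f:
    data = json.load(f)

# sort images for a deterministic split, then cut the list in half
images = sorted(data['images'], key=lambda im: im['id'])
half = len(images) // 2
splits = {'val': images[:half], 'test': images[half:]}

for name, imgs in splits.items():
    ids = set(im['id'] for im in imgs)
    subset = {
        'info': data.get('info', {}),
        'licenses': data.get('licenses', []),
        'images': imgs,
        'annotations': [a for a in data['annotations'] if a['image_id'] in ids],
    }
    with open('annotations/captions_%s_split.json' % name, 'w') as f:
        json.dump(subset, f)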


V. Other references

1. MS COCO dataset detection evaluation (Detection Evaluation) (from the official site)

http://blog.csdn.net/u014734886/article/details/78831884

2. MS COCO dataset keypoint evaluation (Keypoint Evaluation) (from the official site)

http://blog.csdn.net/u014734886/article/details/78837961

Posts 1 and 2 above come from the same blog // that author, however, never wrote a caption post.


3. Another related evaluation codebase, which wraps cocoeval but has different format requirements: https://github.com/Maluuba/nlg-eval
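
A minimal usage sketch of nlg-eval, following its README (the file names are illustrative; each file holds one caption per line, with hypothesis and reference lines aligned):

from nlgeval import compute_metrics

# hypothesis.txt: one generated caption per line
# references1.txt, references2.txt: aligned reference captions, one per line
metrics_dict = compute_metrics(hypothesis='hypothesis.txt',
                               references=['references1.txt', 'references2.txt'])
print(metrics_dict)  # Bleu_1..4, METEOR, ROUGE_L, CIDEr, plus embedding-based metrics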

