YOLOv3: computing mAP on the COCO and VOC datasets

Computing mAP on the VOC dataset

1. First, download the VOC dataset

wget https://pjreddie.com/media/files/VOCtrainval_11-May-2012.tar

wget https://pjreddie.com/media/files/VOCtrainval_06-Nov-2007.tar

wget https://pjreddie.com/media/files/VOCtest_06-Nov-2007.tar

tar xf VOCtrainval_11-May-2012.tar

tar xf VOCtrainval_06-Nov-2007.tar

tar xf VOCtest_06-Nov-2007.tar

This creates a VOCdevkit folder in the current directory.
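After extraction, the layout should look roughly like the standard VOC structure (top two levels only):

VOCdevkit/
    VOC2007/   Annotations/  ImageSets/  JPEGImages/  SegmentationClass/  SegmentationObject/
    VOC2012/   Annotations/  ImageSets/  JPEGImages/  SegmentationClass/  SegmentationObject/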

2. Convert the VOC XML annotations to darknet's txt label format

Use the voc_label.py script from the scripts directory of the darknet (YOLOv3) repository, shown below; a small worked example of the coordinate conversion follows it.

import xml.etree.ElementTree as ET
import pickle
import os
from os import listdir, getcwd
from os.path import join

sets=[('2012', 'train'), ('2012', 'val'), ('2007', 'train'), ('2007', 'val'), ('2007', 'test')]

classes = ["aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"]

# convert a VOC box (xmin, xmax, ymin, ymax) into normalized (x_center, y_center, width, height)
def convert(size, box):
    dw = 1./(size[0])
    dh = 1./(size[1])
    x = (box[0] + box[1])/2.0 - 1
    y = (box[2] + box[3])/2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x*dw
    w = w*dw
    y = y*dh
    h = h*dh
    return (x,y,w,h)

# read one VOC XML annotation and write the corresponding darknet-format label file
def convert_annotation(year, image_id):
    in_file = open('VOCdevkit/VOC%s/Annotations/%s.xml'%(year, image_id))
    out_file = open('VOCdevkit/VOC%s/labels/%s.txt'%(year, image_id), 'w')
    tree = ET.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)
    for obj in root.iter('object'):
        difficult = obj.find('difficult').text
        cls = obj.find('name').text
        if cls not in classes or int(difficult)==1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text), float(xmlbox.find('ymin').text), float(xmlbox.find('ymax').text))
        bb = convert((w,h), b)
        out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')

wd = getcwd()

# build the image list file and label files for each (year, split)
for year, image_set in sets:
    if not os.path.exists('VOCdevkit/VOC%s/labels/'%(year)):
        os.makedirs('VOCdevkit/VOC%s/labels/'%(year))
    image_ids = open('VOCdevkit/VOC%s/ImageSets/Main/%s.txt'%(year, image_set)).read().strip().split()
    list_file = open('%s_%s.txt'%(year, image_set), 'w')
    for image_id in image_ids:
        list_file.write('%s/VOCdevkit/VOC%s/JPEGImages/%s.jpg\n'%(wd, year, image_id))
        convert_annotation(year, image_id)
    list_file.close()

os.system("cat 2007_train.txt 2007_val.txt 2012_train.txt 2012_val.txt > train.txt")
os.system("cat 2007_train.txt 2007_val.txt 2007_test.txt 2012_train.txt 2012_val.txt > train.all.txt")
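As a quick check of the convert() helper above (the numbers are made up for illustration): for a 640x480 image with a box of xmin=48, xmax=240, ymin=96, ymax=288, the normalized (x_center, y_center, width, height) comes out to roughly (0.223, 0.398, 0.3, 0.4):

# Illustrative check; assumes the convert() function defined in voc_label.py above.
print(convert((640, 480), (48.0, 240.0, 96.0, 288.0)))
# -> (0.2234375, 0.3979166666666667, 0.3, 0.4)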

3. After running the script, ls in the current directory shows the following files:

ls
2007_test.txt    VOCdevkit
2007_train.txt   voc_label.py
2007_val.txt     VOCtest_06-Nov-2007.tar
2012_train.txt   VOCtrainval_06-Nov-2007.tar
2012_val.txt     VOCtrainval_11-May-2012.tar

4. Merge the 2007 and 2012 datasets

cat 2007_train.txt 2007_val.txt 2012_*.txt > train.txt

VOC2007 trainval plus VOC2012 trainval form the training set (train.txt), and the VOC2007 test split (2007_test.txt) serves as the test set.
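As an optional sanity check (the expected counts assume the complete VOC2007+2012 downloads: 5,011 + 11,540 trainval images and 4,952 test images), you can count the lines in the generated lists:

# Optional sanity check of the generated image lists (counts assume the full VOC2007+2012 data).
for f in ("train.txt", "2007_test.txt"):
    with open(f) as fh:
        print(f, sum(1 for _ in fh))  # expect 16551 and 4952, respectively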

5. Edit the voc.data file (make sure train and valid point at the list files generated above)

classes= 20
train = /train.txt
valid = 2007_test.txt
names = data/voc.names
backup = backup
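Note that data/voc.names (shipped with the darknet repository) must list the same 20 classes, one per line and in the same order as the classes list in voc_label.py, since the class ids written into the label files are indices into that list. Its first lines look like:

aeroplane
bicycle
bird
...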

6. Run inference on the test set

./darknet detector valid cfg/voc.data cfg/yolov3-voc.cfg backup/yolov3-voc_final.weights -out "" -gpu 0 -thresh .5

This generates one txt file per class (20 files) under darknet/results, containing the detection results.
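Each of these per-class files is in the plain VOC detection format that voc_eval parses in the next step: one detection per line, consisting of the image id, the confidence, and the box corners (xmin ymin xmax ymax). The lines below are illustrative only:

person.txt (example):
000001 0.871234 58.2 110.5 213.7 340.1
000004 0.534210 12.0 45.3 96.8 201.9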

7. Compute per-class AP and mAP

Update the file paths in the script below before running it. It prints each class's AP and the overall mAP, and writes an annots.pkl cache file to the current directory; if you later need to rerun the script to recompute mAP, delete annots.pkl first. Here voc_eval is the standard VOC evaluation helper (for example, voc_eval.py from the py-faster-rcnn repository) placed next to this script.

#coding=utf-8
from voc_eval import voc_eval
import os

current_path = os.getcwd()
results_path = current_path + "/results"
sub_files = os.listdir(results_path)

mAP = []
for i in range(len(sub_files)):
    # with "-out \"\"" each result file is named <class>.txt, so the stem is the class name
    class_name = sub_files[i].split(".txt")[0]
    rec, prec, ap = voc_eval('/.../darknet-master/results/{}.txt', '/.../darknet-master/vocdata/VOCdevkit/VOC2007/Annotations/{}.xml', '/.../darknet-master/vocdata/VOCdevkit/VOC2007/ImageSets/Main/test.txt', class_name, '.')
    print("{} :\t {} ".format(class_name, ap))
    mAP.append(ap)

mAP = tuple(mAP)

print("***************************")
print("mAP :\t {}".format( float( sum(mAP)/len(mAP)) ))

Computing the AP of a single class

# encoding: utf-8
from voc_eval import voc_eval

rec, prec, ap = voc_eval('/.../darknet-master/results/{}.txt', '/.../VOCdevkit/VOC2007/Annotations/{}.xml', '/.../VOCdevkit/VOC2007/ImageSets/Main/test.txt', 'person', '.')
print('rec', rec)
print('prec', prec)
print('ap', ap)
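One detail worth noting: the widely used py-faster-rcnn implementation of voc_eval defaults to the all-points interpolated AP (use_07_metric=False). If you want the classic 11-point VOC2007 metric instead, pass the flag explicitly. A minimal sketch, assuming that implementation's signature voc_eval(detpath, annopath, imagesetfile, classname, cachedir, ovthresh=0.5, use_07_metric=False):

# Sketch only; assumes the py-faster-rcnn voc_eval signature noted above.
rec, prec, ap = voc_eval('/.../darknet-master/results/{}.txt',
                         '/.../VOCdevkit/VOC2007/Annotations/{}.xml',
                         '/.../VOCdevkit/VOC2007/ImageSets/Main/test.txt',
                         'person', '.', ovthresh=0.5, use_07_metric=True)
print('ap (11-point metric)', ap)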

Computing mAP on the COCO dataset

1. First, download the COCO dataset

Open a terminal in the darknet-master directory and run:

cp scripts/get_coco_dataset.sh data

cd data

bash get_coco_dataset.sh

This creates a coco folder under darknet-master/data containing all the images and annotations (roughly 29 GB).

2. Edit the cfg/coco.data file (trainvalno5k.txt and 5k.txt are the image list files produced by get_coco_dataset.sh)

classes= 80
train = data/coco/trainvalno5k.txt
valid = data/coco/5k.txt
names = data/coco.names
backup = backup

3. Run inference and generate the JSON results file

./darknet detector valid cfg/coco.data cfg/yolov3.cfg yolov3.weights

This runs on the 5,000 validation images and writes coco_results.json to the results folder.
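For reference, coco_results.json follows the standard COCO detection-results format that pycocotools' loadRes() expects: a flat JSON array with one entry per detection, each holding an image_id, a category_id, a bbox given as [x, y, width, height] in pixels, and a confidence score. The entry below is illustrative only:

[
  {"image_id": 139, "category_id": 1, "bbox": [412.8, 157.6, 53.1, 138.2], "score": 0.89},
  ...
]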

4. Run the Python script to compute mAP

#-*- coding:utf-8 -*-
import matplotlib.pyplot as plt
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
import numpy as np
import skimage.io as io
import pylab, json

pylab.rcParams['figure.figsize'] = (10.0, 8.0)

def get_img_id(file_name):
    # collect the set of image_ids that appear in the detection results file
    ls = []
    myset = []
    annos = json.load(open(file_name, 'r'))
    for anno in annos:
        ls.append(anno['image_id'])
    myset = {}.fromkeys(ls).keys()  # de-duplicate the image ids
    return myset

if __name__ == '__main__':
    annType = ['segm', 'bbox', 'keypoints']  # set iouType to 'segm', 'bbox' or 'keypoints'
    annType = annType[1]  # specify type here: bbox
    cocoGt_file = '/.../data/coco2014/annotations/instances_val2014.json'
    cocoGt = COCO(cocoGt_file)  # load the ground-truth COCO annotations
    cocoDt_file = '/.../darknet/results/coco_results.json'
    imgIds = get_img_id(cocoDt_file)
    print(len(imgIds))
    cocoDt = cocoGt.loadRes(cocoDt_file)  # load the detection results
    imgIds = sorted(imgIds)  # sort the image ids from the results file
    imgIds = imgIds[0:5000]  # evaluate on at most 5000 images
    cocoEval = COCOeval(cocoGt, cocoDt, annType)
    cocoEval.params.imgIds = imgIds  # restrict evaluation to these images
    cocoEval.evaluate()
    cocoEval.accumulate()
    cocoEval.summarize()

Change the paths in the script to match your own data locations.

Error: ImportError: No module named pycocotools.coco
Fix: the package is not installed; install it with pip install pycocotools.

Error: ImportError: No module named skimage.io
Fix: pip install scikit-image

5. Results (608x608 input resolution)

$ python calcocomap.py
loading annotations into memory...
Done (t=3.49s)
creating index...
index created!
4991
Loading and preparing results...
DONE (t=2.42s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=29.24s).
Accumulating evaluation results...
DONE (t=3.07s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.334
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.585
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.345
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.194
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.365
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.439
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.291
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.446
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.470
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.304
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.502
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.593

