How to compute mAP?

1. VOCdevkit/VOC2007/ImageSets/Main/test.txt, val.txt, train.txt and trainval.txt are generated by voc2yolo4.py.
The split can be changed by adjusting the trainval_percent and train_percent ratios;
the ratios used in the experiments are trainval_percent=0.9 and train_percent=1.
From my reading of the code, the k-means clustering results do not require any change here.
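As a rough sketch of how the two ratios interact (my own reconstruction, not the actual voc2yolo4.py code): trainval_percent carves trainval out of all ids and leaves the rest for test, then train_percent carves train out of trainval, so train_percent=1 leaves val empty.

```python
import random

def split_dataset(ids, trainval_percent=0.9, train_percent=1.0, seed=0):
    """Split image ids into trainval/train/val/test (sketch of the
    voc2yolo4.py logic, not the repo's exact code)."""
    random.seed(seed)
    ids = list(ids)
    n = len(ids)
    tv = random.sample(range(n), int(n * trainval_percent))   # trainval indices
    tr = random.sample(tv, int(len(tv) * train_percent))      # train indices
    trainval = [ids[i] for i in tv]
    train = [ids[i] for i in tr]
    val = [ids[i] for i in tv if i not in tr]
    test = [ids[i] for i in range(n) if i not in tv]
    return trainval, train, val, test

trainval, train, val, test = split_dataset(range(100))
# with train_percent=1.0, every trainval id goes to train, so val is empty
```

This is consistent with the counts recorded later in this log, where val.txt contains 0 ids.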
————————————2021.4.18——————————————————

D:\ProgramData\Anaconda3\envs\pytorch\python.exe D:/PythonStudySpace/yolov4-pytorch/FPS_test.py
Loading weights into state dict...
Finished!
model_data/yolo4_weights.pth model, anchors, and classes loaded.
2.951487748622894 seconds, 0.3388121805576121FPS, @batch_size 1

Process finished with exit code 0


D:\ProgramData\Anaconda3\envs\pytorch\python.exe D:/PythonStudySpace/yolov4-pytorch/kmeans_for_anchors.py
acc:87.22%
[[ 15.15428571  73.9375    ]
 [ 16.04571429 140.96875   ]
 [ 22.88        85.71875   ]
 [ 23.32571429  40.828125  ]
 [ 54.08        49.359375  ]
 [ 66.85714286  89.78125   ]
 [ 67.74857143  93.84375   ]
 [ 68.64        85.71875   ]
 [ 71.90857143  72.71875   ]]


(pytorch) ubuntu-gpu@UbuntuGPU:~/wangmiao/yolov4-pytorch-master/yolov4-pytorch-master$ python FPS_test.py
Loading weights into state dict...
Finished!
logs/Epoch70-Total_Loss0.9002-Val_Loss2.2391.pth model, anchors, and classes loaded.
0.030595953464508056 seconds, 32.68406069319006FPS, @batch_size 1
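For reference, numbers like "0.0306 seconds, 32.68 FPS" are just the mean latency over repeated forward passes and its reciprocal. A minimal timing sketch (this is not the repo's FPS_test.py, and the lambda below is a stand-in workload for model inference):

```python
import time

def measure_fps(infer, n_warmup=10, n_runs=100):
    """Time an inference callable and return (mean latency, FPS)."""
    for _ in range(n_warmup):          # warm-up runs exclude one-off setup cost
        infer()
    start = time.time()
    for _ in range(n_runs):
        infer()
    latency = (time.time() - start) / n_runs
    return latency, 1.0 / latency

# dummy workload standing in for a model forward pass
latency, fps = measure_fps(lambda: sum(i * i for i in range(1000)))
print(f"{latency} seconds, {fps}FPS, @batch_size 1")
```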

VOCdevkit/VOC2007/ImageSets/Main/test.txt: 352
./VOCdevkit/VOC2007/JPEGImages/testImages/"+image_id+".jpg: 353 images

Count the number of files in a directory:

ls -l |grep "^-"|wc -l

train and val size 1052
train size 1052: 353

Count the number of subdirectories in a directory:

ls -l |grep "^d"|wc -l

Count the number of files in a directory, including files in subdirectories:

ls -lR|grep "^-"|wc -l
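The same three counts can be obtained in Python, which avoids parsing ls output (a small helper of my own):

```python
import os

def count_entries(path):
    """Return (files_here, dirs_here, files_recursive) for a directory,
    mirroring the three ls | grep | wc -l pipelines above."""
    entries = os.listdir(path)
    files_here = sum(os.path.isfile(os.path.join(path, e)) for e in entries)
    dirs_here = sum(os.path.isdir(os.path.join(path, e)) for e in entries)
    files_recursive = sum(len(fs) for _, _, fs in os.walk(path))
    return files_here, dirs_here, files_recursive
```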
ubuntu-gpu@UbuntuGPU:~/wangmiao/yolov4-pytorch-master/VOCdevkit/VOC2007/JPEGImages$ ls -l |grep "^-"|wc -l
825
ubuntu-gpu@UbuntuGPU:~/wangmiao/yolov4-pytorch-master/VOCdevkit/VOC2007/JPEGImages$ ls -l |grep "^d"|wc -l
0
ubuntu-gpu@UbuntuGPU:~/wangmiao/yolov4-pytorch-master/VOCdevkit/VOC2007/JPEGImages$ ls -lR|grep "^-"|wc -l
1178
ubuntu-gpu@UbuntuGPU:~/wangmiao/yolov4-pytorch-master/VOCdevkit/VOC2007/JPEGImages/testImages$ ls -lR|grep "^-"|wc -l
353

ubuntu-gpu@UbuntuGPU:~/wangmiao/yolov4-pytorch-master/VOCdevkit/VOC2007/Annotations$ ls -l |grep "^-"|wc -l
826
ubuntu-gpu@UbuntuGPU:~/wangmiao/yolov4-pytorch-master/VOCdevkit/VOC2007/Annotations/Annotations$ ls -l |grep "^-"|wc -l
353
ubuntu-gpu@UbuntuGPU:~/wangmiao/yolov4-pytorch-master/VOCdevkit/VOC2007/Annotations/Annotations1$ ls -l |grep "^-"|wc -l
352

xmlfilepath=r'./Annotations'
saveBasePath=r"./ImageSets/Main/"

ftrainval = open(os.path.join(saveBasePath,'trainval.txt'), 'w')
#ftest = open(os.path.join(saveBasePath,'test.txt'), 'w')
ftrain = open(os.path.join(saveBasePath,'train.txt'), 'w')
fval = open(os.path.join(saveBasePath,'val.txt'), 'w')

VOCdevkit/VOC2007/ImageSets/Main/test.txt: 352
VOCdevkit/VOC2007/ImageSets/Main/train.txt: 741

——————————————————————————————————
2. The images used at this step
kmeans_for_anchors.py produces yolo_anchors.txt.

./VOCdevkit/VOC2007/JPEGImages/testImages/"+image_id+".jpg
python get_dr_txt.py produces ./input/detection-results
./input/images-optional

3. VOCdevkit/VOC2007/ImageSets/Main/train.txt is built from:

VOCdevkit/VOC2007/JPEGImages
VOCdevkit/VOC2007/Annotations/

python get_gt_txt.py produces ./input/ground-truth

4.VOCdevkit/VOC2007/ImageSets/Main/

test.txt: 352
val.txt: 0
train.txt: 741
trainval.txt: 741

II. Obtaining clustered anchors via k-means
Original anchors:

12, 16,  19, 36,  40, 28,  36, 75,  76, 55,  72, 146,  142, 110,  192, 243,  459, 401

Via kmeans_for_anchors.py -> yolo_anchors.txt

15,73, 16,140, 22,85, 23,40, 54,49, 62,89, 67,92, 68,86, 72,73

The new clusters differ somewhat from the original anchors, so I will rerun the experiment for a comparison.
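For context, kmeans_for_anchors.py clusters the (width, height) of all ground-truth boxes using 1 - IoU as the distance and reports the mean best IoU as "acc" (the 87.22% printed earlier). A minimal sketch of that procedure (function names and the deterministic init are mine; the repo's script uses a random init):

```python
import numpy as np

def iou_wh(boxes, clusters):
    """IoU between (w, h) box shapes and (w, h) cluster centers,
    treating every box as anchored at the same top-left corner."""
    inter = np.minimum(boxes[:, None, 0], clusters[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], clusters[None, :, 1])
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (clusters[:, 0] * clusters[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=9):
    """Cluster box shapes with distance = 1 - IoU.
    Deterministic init from the first k boxes for reproducibility;
    the repo's script uses a random init instead."""
    boxes = np.asarray(boxes, dtype=float)
    clusters = boxes[:k].copy()
    prev = None
    while True:
        assign = np.argmin(1 - iou_wh(boxes, clusters), axis=1)
        if prev is not None and (assign == prev).all():
            break
        prev = assign
        for j in range(k):
            members = boxes[assign == j]
            if len(members):
                clusters[j] = np.median(members, axis=0)
    acc = iou_wh(boxes, clusters).max(axis=1).mean()  # the "acc" the script prints
    return clusters, acc
```

The returned centers play the role of the nine (w, h) pairs written to yolo_anchors.txt.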

Load the trained weights into yolo.py, switch to the new weight file, then run prediction and the mAP calculation.
Epoch99-Total_Loss0.8858-Val_Loss1.5478.pth
This checkpoint has a relatively low loss and good results, so I use it for prediction.
To keep the input folder from the previous training run, rename it with mv input input1.

Then I repeated the steps above, but python get_map.py kept failing while generating the mAP results.
I suspected the input folder was generated incorrectly, so I changed the file paths and reran.
This time get_gt_txt.py produced 741 files,
while get_dr_txt.py produced 352.
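The 741-vs-352 mismatch means the two scripts read different id lists (train.txt vs test.txt). A quick way to pin down which files differ is to diff the id sets of the two output folders (a helper of my own, not part of the repo):

```python
import os

def diff_result_folders(gt_dir="./input/ground-truth",
                        dr_dir="./input/detection-results"):
    """Return (ids only in ground-truth, ids only in detection-results)."""
    gt = {f[:-4] for f in os.listdir(gt_dir) if f.endswith(".txt")}
    dr = {f[:-4] for f in os.listdir(dr_dir) if f.endswith(".txt")}
    return sorted(gt - dr), sorted(dr - gt)
```

get_map.py needs a matching pair of txt files for every image id, so both lists should be empty before computing mAP.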

——————————2021.4.18——————————————

New input:

detection-results:741
ground-truth:352
images-optional:352

Original input:

yolo = mAP_Yolo()
image_ids = open('VOCdevkit/VOC2007/ImageSets/Main/test.txt').read().strip().split()

if not os.path.exists("./input"):
    os.makedirs("./input")
if not os.path.exists("./input/detection-results"):
    os.makedirs("./input/detection-results")
if not os.path.exists("./input/images-optional"):
    os.makedirs("./input/images-optional")


for image_id in tqdm(image_ids):
    image_path = "./VOCdevkit/VOC2007/JPEGImages/testImages/"+image_id+".jpg"
    image = Image.open(image_path)
    # when enabled, the saved images can be used to visualize the later mAP calculation
    image.save("./input/images-optional/"+image_id+".jpg")
    yolo.detect_image(image_id,image)

detection-results:1092
ground-truth:1092
images-optional:1092

get_dr_txt.py
         f = open("./input/detection-results/"+image_id+".txt","w")

—————————————————————————————————
Experiment results:

0.00% = Lockingplate AP 	||	score_threhold=0.5 : F1=nan ; Recall=0.00% ; Precision=0.00%
D:/PythonStudySpace/yolov4-pytorch/get_map.py:706: RuntimeWarning: invalid value encountered in true_divide
  F1 = np.array(rec)*np.array(prec)/(np.array(prec)+np.array(rec))*2
0.33% = Upperlever AP 	||	score_threhold=0.5 : F1=0.02 ; Recall=33.33% ; Precision=0.82%
D:/PythonStudySpace/yolov4-pytorch/get_map.py:706: RuntimeWarning: invalid value encountered in true_divide
  F1 = np.array(rec)*np.array(prec)/(np.array(prec)+np.array(rec))*2
74.80% = lockingplate AP 	||	score_threhold=0.5 : F1=0.72 ; Recall=67.28% ; Precision=77.22%
92.82% = normal AP 	||	score_threhold=0.5 : F1=0.90 ; Recall=96.46% ; Precision=85.18%
85.47% = truncatedplugdoorhandle AP 	||	score_threhold=0.5 : F1=0.87 ; Recall=84.85% ; Precision=88.89%
85.04% = upperlever AP 	||	score_threhold=0.5 : F1=0.85 ; Recall=85.25% ; Precision=85.25%
mAP = 48.35%
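The RuntimeWarning above is triggered on line 706 of get_map.py: when a class has Precision = Recall = 0 (as for Lockingplate), the denominator P + R is 0 and the division yields nan. A guarded version of the same formula that returns 0 instead:

```python
import numpy as np

def f1_score(rec, prec):
    """F1 = 2 * P * R / (P + R), returning 0 instead of nan when P + R == 0."""
    rec = np.asarray(rec, dtype=float)
    prec = np.asarray(prec, dtype=float)
    denom = prec + rec
    # inner where avoids the divide-by-zero warning; outer where picks 0 there
    return np.where(denom > 0, 2 * prec * rec / np.where(denom > 0, denom, 1), 0.0)
```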

72.63% = lockingplate AP 	||	score_threhold=0.5 : F1=0.68 ; Recall=78.41% ; Precision=60.00%
86.20% = normal AP 	||	score_threhold=0.5 : F1=0.88 ; Recall=88.99% ; Precision=86.87%
69.40% = truncatedplugdoorhandle AP 	||	score_threhold=0.5 : F1=0.74 ; Recall=77.78% ; Precision=70.00%
55.58% = upperlever AP 	||	score_threhold=0.5 : F1=0.49 ; Recall=51.35% ; Precision=46.34%
mAP = 70.95%

test:601

On Sunday night I re-checked the txt files of the test set and opened the documents to verify: everything is lowercase, and the label files do not use any uppercase.

truncated plug door handle
upper lever
locking plate
normal

These four label types represent truncated plug door handle faults, upper lever faults, locking plate faults, and all normal parts, respectively.
So where do the extra uppercase classes in the final mAP result come from?
1. The test set has already been checked.
2. The label files have also been checked.
The input folder is organized as:
–input
--------detection-results: label files in txt format, 1-1179
----------------------------- e.g. truncated plug door handle 0.9873 85 524 258 634
--------ground-truth: label files in txt format, 1-1179
----------------------------- e.g. truncated plug door handle 989 377 1169 503
--------images-optional: images in jpg format, 1179 in total
I went through these files and found no uppercase class names, so where does the uppercase in the final result come from?
By checking the results file, I found:
Lockingplate: 1, errors: 663 (already fixed on the server)
Upperlever: 3, errors: 97, 181, 488 (already fixed on the server)
lockingplate: 272
normal: 989
truncatedpIugdoorhandIe: 1, errors: 33 -----> fix this directly
truncatedplugdoorhandle: 66
upperlever: 122
The problem lies in ground-truth.
Ctrl+Shift+F: global text search (case sensitivity can be toggled)
Ctrl+Shift+N: global file-name search
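Instead of eyeballing the results file, the stray classes (Lockingplate, Upperlever, and the I-for-l typo in truncatedpIugdoorhandIe) can be surfaced by tallying class names across all ground-truth files; names with very low counts are almost always typos. A helper sketch (names and paths are assumptions based on the folder layout above):

```python
import os
from collections import Counter

def class_counts(gt_dir="./input/ground-truth"):
    """Tally class names across all ground-truth txt files.
    Each line is '<class name> xmin ymin xmax ymax', so the class
    is everything except the last four tokens."""
    counts = Counter()
    for name in os.listdir(gt_dir):
        if not name.endswith(".txt"):
            continue
        with open(os.path.join(gt_dir, name)) as f:
            for line in f:
                tokens = line.split()
                if len(tokens) > 4:
                    counts[" ".join(tokens[:-4])] += 1
    return counts

# classes with a count of 1-3 are the likely typos (wrong case, I vs l, ...)
```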

Create a tar.gz archive:

tar -zcvf renwolesshel.tar.gz /renwolesshel

Extract a tar.gz archive:

tar zxvf renwolesshel.tar.gz

Create a tar.bz2 archive:

tar -jcvf renwolesshel.tar.bz2 /renwolesshel

Extract a tar.bz2 archive:

tar jxvf renwolesshel.tar.bz2

Create a zip archive:

zip -q -r renwolesshel.zip renwolesshel/

Extract a zip archive:

unzip renwolesshel.zip

1. python get_dr_txt.py
2. python get_gt_txt.py
3. mv input.tar.gz ./input3.tar.gz
4. tar -zcvf input.tar.gz ./input
   zip -r input.zip ./input
5. sz input.zip

detection-results
Problems are showing up again; from now on I will record every run.

Check the following:

image_ids = open('VOCdevkit/VOC2007/ImageSets/Main/test.txt').read().strip().split()
VOCdevkit/VOC2007/ImageSets/Main/test.txt: 352
VOCdevkit/VOC2007/ImageSets/Main/train.txt: 741

if not os.path.exists("./input"):
    os.makedirs("./input")
if not os.path.exists("input/ground-truth"):
    os.makedirs("input/ground-truth")
if not os.path.exists("./input/images-optional"):
    os.makedirs("./input/images-optional")

"./VOCdevkit/VOC2007/JPEGImages/"+image_id+".jpg"

————————————————2021.5.5—————————————————

annotations:1169
-------------------------
train and val size 1052
train size 1052

Epoch100-Total_Loss0.6700-Val_Loss2.3610.pth   
Epoch100-Total_Loss0.6705-Val_Loss2.1259.pth   
Epoch100-Total_Loss0.6740-Val_Loss1.6421.pth  
Epoch100-Total_Loss0.6782-Val_Loss2.0120.pth   
Epoch100-Total_Loss0.6836-Val_Loss1.5088.pth   
Epoch100-Total_Loss0.6877-Val_Loss1.8490.pth 
Epoch100-Total_Loss0.7128-Val_Loss2.1981.pth   
Epoch100-Total_Loss0.8817-Val_Loss1.4321.pth  
-----------------kmeans-----------------------------
Epoch100-Total_Loss0.6700-Val_Loss2.3610.pth   
Epoch100-Total_Loss0.6705-Val_Loss2.1259.pth   
Epoch100-Total_Loss0.6740-Val_Loss1.6421.pth   
Epoch100-Total_Loss0.6782-Val_Loss2.0120.pth  
Epoch100-Total_Loss0.6836-Val_Loss1.5088.pth   
Epoch100-Total_Loss0.6877-Val_Loss1.8490.pth   
Epoch100-Total_Loss0.7128-Val_Loss2.1981.pth   
Epoch100-Total_Loss0.7498-Val_Loss1.7838.pth  <------
Epoch100-Total_Loss0.8817-Val_Loss1.4321.pth

————————————————2021.5.6—————————————————

Epoch100-Total_Loss0.7498-Val_Loss1.7838.pth  

————————————————2021.5.12————————————————————
Dataset: 1179 (images in the original dataset) - 3 (unrecognizable images) = 1176
Training set: 823 images
Test set: 353 images
