mAP in FCIS

Accuracy is evaluated by mean average precision, mAP^r, at mask-level IoU (intersection-over-union) thresholds of 0.5 and 0.7.

  1. The network outputs multiple detections, each with a score; sort them by score in descending order.
  2. For each prediction, compute its IoU with the ground truth. If the IoU exceeds the threshold (0.5/0.7), mark it as a true positive; otherwise (or if that ground-truth instance has already been detected), mark it as a false positive.
  3. With 5 predictions, fp and tp are each 1×5 vectors of 0s and 1s. Taking cumulative sums shows how fp and tp evolve as each additional prediction is included.
    For example, if fp is 0 0 0 1 1 and tp is 1 1 1 0 0, the cumulative sums are fp = 0 0 0 1 2 and tp = 1 2 3 3 3. As predictions are added, fp rises while tp stays flat.
  4. Computing recall and precision at each step gives 5 recall-precision pairs, which can be plotted as a curve.
    Recall = tp / number of ground-truth instances
    Precision = tp / number of predictions (tp + fp)
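The steps above can be sketched with NumPy. The tp/fp vectors and the ground-truth count are the hypothetical example from step 3, not values from a real FCIS run:

```python
import numpy as np

# Hypothetical example: 5 predictions, already sorted by descending score.
# 1 in tp means the prediction matched a ground-truth mask with IoU above
# the threshold; the corresponding fp entry is 0, and vice versa.
tp = np.array([1, 1, 1, 0, 0])
fp = np.array([0, 0, 0, 1, 1])
num_gt = 3  # assumed number of ground-truth instances for this class

# Running totals: after the k-th prediction, how many TPs/FPs so far.
tp_cum = np.cumsum(tp)  # [1 2 3 3 3]
fp_cum = np.cumsum(fp)  # [0 0 0 1 2]

recall = tp_cum / num_gt                # tp / #ground truth
precision = tp_cum / (tp_cum + fp_cum)  # tp / #predictions so far

print(recall)     # [0.333... 0.666... 1. 1. 1.]
print(precision)  # [1. 1. 1. 0.75 0.6]
```

Plotting `recall` on the x-axis against `precision` on the y-axis gives the recall-precision curve described in step 4.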
  5. AP (average precision) is computed with the formula below: it averages the interpolated precision over a fixed set of recall levels (in the FCIS code the range is np.arange(0., 1.1, 0.1)).

AP = (1/11) · Σ_{r ∈ {0, 0.1, ..., 1.0}} Pinterp(r)

Pinterp(r) is the interpolated precision at recall level r, computed as follows: among all recalls greater than or equal to r, take the maximum of the corresponding precisions.

Pinterp(r) = max_{r̃ ≥ r} p(r̃)

The reason: "The intention in interpolating the precision/recall curve in this way is to reduce the impact of the 'wiggles' in the precision/recall curve, caused by small variations in the ranking of examples. It should be noted that to obtain a high score, a method must have precision at all levels of recall – this penalises methods which retrieve only a subset of examples with high precision."
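The 11-point interpolated AP can be sketched as below. This is a simplified illustration of the scheme described above, not the actual FCIS evaluation code; the input curve reuses the hypothetical 5-prediction example:

```python
import numpy as np

def interpolated_ap(recall, precision):
    """11-point interpolated AP (PASCAL VOC style).

    For each reference recall level r in {0, 0.1, ..., 1.0}, take the
    maximum precision over all points with recall >= r, then average
    the 11 interpolated values.
    """
    ap = 0.0
    for r in np.arange(0., 1.1, 0.1):
        above = recall >= r
        p_interp = np.max(precision[above]) if above.any() else 0.0
        ap += p_interp / 11.0
    return ap

# Hypothetical recall/precision pairs from 5 sorted predictions:
recall = np.array([1/3, 2/3, 1.0, 1.0, 1.0])
precision = np.array([1.0, 1.0, 1.0, 0.75, 0.6])
print(interpolated_ap(recall, precision))  # ~1.0, since precision is 1.0 at every recall level
```

Because some point reaches precision 1.0 at full recall, the interpolated precision is 1.0 at every reference level here; the late false positives do not lower AP, which is exactly the smoothing effect the interpolation is meant to have.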

  6. The fp/tp/AP above are all computed per class; mAP is the mean of the APs over all classes.
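The final averaging step is just an unweighted mean over classes. A minimal sketch with made-up per-class AP values (the class names and numbers are purely illustrative):

```python
import numpy as np

# Hypothetical per-class APs at a single IoU threshold (e.g. 0.5):
ap_per_class = {"person": 0.72, "car": 0.65, "dog": 0.58}

# mAP is the unweighted mean of the per-class APs.
mAP = float(np.mean(list(ap_per_class.values())))
print(round(mAP, 3))  # 0.65
```

In practice this is done once per IoU threshold, yielding the mAP^r@0.5 and mAP^r@0.7 numbers reported by FCIS.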
