Dice loss

Reposted from: 咖啡味儿的咖啡
https://blog.csdn.net/wangdongwei0/article/details/84576044

First, define a measure of similarity between two contour regions. Let A and B denote the point sets contained in the two contour regions; the Dice similarity coefficient (DSC) is defined as:

    \mathrm{DSC}(A, B) = \frac{2\,|A \cap B|}{|A| + |B|}

The loss is then:

    \text{Dice Loss} = 1 - \mathrm{DSC}(A, B)
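To make the definition concrete, here is a minimal NumPy sketch with made-up toy masks:

    import numpy as np

    # Two tiny binary masks: A has 3 foreground pixels, B has 2,
    # and they overlap in exactly 2 pixels.
    A = np.array([[1, 1, 0],
                  [1, 0, 0]])
    B = np.array([[1, 1, 0],
                  [0, 0, 0]])

    intersection = np.sum(A * B)                    # |A ∩ B| = 2
    dsc = 2. * intersection / (A.sum() + B.sum())   # 2*2 / (3+2) = 0.8
    print(dsc)      # 0.8
    print(1 - dsc)  # Dice loss = 0.2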

As this shows, the Dice loss can actually be split into two parts, one for the foreground and one for the background, but in practice we usually only care about the foreground (object) loss. A Keras implementation for the binary case:


    from keras import backend as K

    def dice_coef(y_true, y_pred, smooth=1):
        # Sum over the spatial and channel axes, keeping the batch axis
        intersection = K.sum(y_true * y_pred, axis=[1, 2, 3])
        union = K.sum(y_true, axis=[1, 2, 3]) + K.sum(y_pred, axis=[1, 2, 3])
        # Per-sample Dice coefficient, averaged over the batch
        return K.mean((2. * intersection + smooth) / (union + smooth), axis=0)

    def dice_coef_loss(y_true, y_pred):
        return 1 - dice_coef(y_true, y_pred, smooth=1)
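As a usage sketch, assuming a hypothetical Keras segmentation model named `model` (not part of the original post), the loss plugs into `model.compile` directly, with `dice_coef` doubling as a metric:

    # `model` is a hypothetical Keras segmentation model
    model.compile(optimizer='adam',
                  loss=dice_coef_loss,
                  metrics=[dice_coef])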

The code above only implements the dice loss for binary segmentation. What should a multi-class dice loss look like?

Note: the correctness of the following has not been verified yet...


    from keras import backend as K

    # y_true and y_pred should be one-hot encoded
    # y_true.shape = (None, Width, Height, Channel)
    # y_pred.shape = (None, Width, Height, Channel)
    def dice_coef(y_true, y_pred, smooth=1):
        n_channels = K.int_shape(y_pred)[-1]
        mean_dice = 0
        for i in range(n_channels):
            # Each channel slice is 3-D, so sum over the spatial axes only
            intersection = K.sum(y_true[:, :, :, i] * y_pred[:, :, :, i], axis=[1, 2])
            union = K.sum(y_true[:, :, :, i], axis=[1, 2]) + K.sum(y_pred[:, :, :, i], axis=[1, 2])
            mean_dice += (2. * intersection + smooth) / (union + smooth)
        # Average over channels, then over the batch
        return K.mean(mean_dice / n_channels, axis=0)

    def dice_coef_loss(y_true, y_pred):
        return 1 - dice_coef(y_true, y_pred, smooth=1)
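The per-channel loop can also be written without Python-level iteration: summing over only the spatial axes yields a (batch, channels) grid of Dice scores that can then be averaged. A minimal loop-free sketch, assuming the same one-hot layout and the `K` backend import from above:

    def dice_coef_vectorized(y_true, y_pred, smooth=1):
        # Sum over the spatial axes only, keeping the batch and channel axes
        intersection = K.sum(y_true * y_pred, axis=[1, 2])
        union = K.sum(y_true, axis=[1, 2]) + K.sum(y_pred, axis=[1, 2])
        # Per-sample, per-channel Dice scores, shape (batch, channels)
        dice = (2. * intersection + smooth) / (union + smooth)
        # Average over channels, then over the batch
        return K.mean(K.mean(dice, axis=-1), axis=0)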

 
