[Paper Notes] The Unreasonable Effectiveness of Deep Features as a Perceptual Metric

Title

The Unreasonable Effectiveness of Deep Features as a Perceptual Metric


Information

Paper: https://arxiv.org/abs/1801.03924?context=cs

GitHub:

  • PyTorch version: https://github.com/richzhang/PerceptualSimilarity
  • TensorFlow version: https://github.com/alexlee-gk/lpips-tensorflow

Summary

The authors observe that traditional metrics for measuring the difference between images do not agree with human perception. Comparing these traditional methods against deep networks, they find that networks trained for visual tasks are perceptually far more accurate, and on this basis they propose the Learned Perceptual Image Patch Similarity (LPIPS) metric.

Research Objective

Propose a metric that accurately measures the perceptual difference between images.

Problem Statement

Widely used image-similarity metrics such as L2/PSNR/SSIM/FSIM do not agree with human perception.



Method(s)

The authors compute a perceptual distance with a deep network.



Given a reference patch x and a distorted patch x0, the distance under a deep network F is computed as follows (a minimal sketch in code follows the equation below):

  1. Extract features from L layers and unit-normalize them along the channel dimension; denote the results \hat{y}^l and \hat{y}^l_0 for layer l, of shape H_l × W_l × C_l.
  2. Scale the activations of layer l channel-wise by a vector w_l of dimension C_l.
  3. Compute the squared l2 distance at each spatial position.
  4. Average over the spatial dimensions and sum over the layers.

Written out (Eq. 1 in the paper):

d(x, x_0) = \sum_l \frac{1}{H_l W_l} \sum_{h,w} \lVert w_l \odot (\hat{y}^l_{hw} - \hat{y}^l_{0hw}) \rVert_2^2

Setting w_l = 1 for every layer l is equivalent to computing the cosine distance.
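
A minimal sketch of steps 1-4 in plain PyTorch (not the authors' code; the per-layer features and the weight vectors are assumed to be given):

import torch

def lpips_distance(feats_x, feats_y, weights):
    # feats_*: lists of L feature maps, each of shape (C_l, H_l, W_l)
    # weights: list of per-channel weight vectors w_l, each of shape (C_l,)
    total = 0.0
    for fx, fy, w in zip(feats_x, feats_y, weights):
        # step 1: unit-normalize every spatial position along the channel dimension
        fx = fx / (fx.norm(dim=0, keepdim=True) + 1e-10)
        fy = fy / (fy.norm(dim=0, keepdim=True) + 1e-10)
        # steps 2-3: scale the channels by w_l, then take the squared l2 distance
        diff = (w[:, None, None] * (fx - fy)) ** 2
        # step 4: average over H_l x W_l and accumulate (sum) across layers
        total = total + diff.sum(dim=0).mean()
    return total

# w_l = 1 recovers the cosine-distance case mentioned above
feats_x = [torch.randn(64, 56, 56), torch.randn(192, 28, 28)]
feats_y = [torch.randn_like(f) for f in feats_x]
weights = [torch.ones(f.shape[0]) for f in feats_x]
print(lpips_distance(feats_x, feats_y, weights))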

Evaluation

LPIPS is compared against both traditional metrics and CNN-based methods (see the paper for the result figures). The CNN-based variants obtain their weights in one of three ways; a sketch of the lin setup follows the list.

  • lin: take a pretrained network, freeze it, and train only the linear layers on top
  • tune: take a pretrained network and fine-tune all of its layers
  • scratch: initialize the weights from a Gaussian and train the network from scratch
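
As a rough sketch of what the lin setting means in training terms (hypothetical setup; get_network and LinLayers are the modules shown in the Code section below):

import torch

backbone = get_network('alex')              # pretrained feature extractor
for p in backbone.parameters():
    p.requires_grad = False                 # frozen in the lin setting
lin = LinLayers(backbone.n_channels_list)   # the per-layer 1x1 convs on top
for p in lin.parameters():
    p.requires_grad = True                  # only these weights are learned
optimizer = torch.optim.Adam(lin.parameters(), lr=1e-4)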

Conclusion

  1. Models trained to solve vision tasks acquire the ability to judge perceptual similarity; the more useful a model's features are for classification and detection, the stronger its perceptual ability.
  2. The paper also releases a dataset of 484k human perceptual judgments.

Code

A simple, readable PyTorch implementation, taken from https://github.com/S-aiueo32/lpips-pytorch.
The outer interface for using the LPIPS loss looks like this:

from lpips_pytorch import LPIPS

# define as a criterion module
criterion = LPIPS(
    net_type='alex',  # choose a network type from ['alex', 'squeeze', 'vgg']
    version='0.1'  # Currently, v0.1 is supported
)
loss = criterion(x, y)
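
One caveat: the pretrained LPIPS weights expect inputs normalized to [-1, 1], so images loaded in [0, 1] should be rescaled first. With dummy tensors for illustration:

import torch

x = torch.rand(1, 3, 64, 64) * 2 - 1  # (B, C, H, W), scaled to [-1, 1]
y = torch.rand(1, 3, 64, 64) * 2 - 1
loss = criterion(x, y)                # a scalar LPIPS distance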

The LPIPS module itself is built as follows.

import torch
from torch import nn

# get_network, LinLayers and get_state_dict come from the repo's other modules
# (LinLayers and get_state_dict are shown further below)


class LPIPS(nn.Module):
    r"""Creates a criterion that measures
    Learned Perceptual Image Patch Similarity (LPIPS).

    Arguments:
        net_type (str): the network type to compare the features: 
                        'alex' | 'squeeze' | 'vgg'. Default: 'alex'.
        version (str): the version of LPIPS. Default: 0.1.
    """
    def __init__(self, net_type: str = 'alex', version: str = '0.1'):

        assert version in ['0.1'], 'v0.1 is only supported now'

        super(LPIPS, self).__init__()

        # pretrained network
        self.net = get_network(net_type)  # fetch the pretrained feature network, e.g. AlexNet
        '''
        AlexNet(
          (layers): Sequential(
            (0): Conv2d(3, 64, kernel_size=(11, 11), stride=(4, 4), padding=(2, 2))
            (1): ReLU(inplace)
            (2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
            (3): Conv2d(64, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
            (4): ReLU(inplace)
            (5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
            (6): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
            (7): ReLU(inplace)
            (8): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
            (9): ReLU(inplace)
            (10): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
            (11): ReLU(inplace)
            (12): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
          )
        )
        '''

        # linear layers: one frozen 1x1 conv per feature layer (5 for AlexNet, matching
        # self.net.n_channels_list); they implement the per-channel weights w_l that are
        # applied to each layer's feature difference before spatial averaging (code below)
        self.lin = LinLayers(self.net.n_channels_list)
        self.lin.load_state_dict(get_state_dict(net_type, version))  # load the linear weights released in the authors' GitHub repo

    def forward(self, x: torch.Tensor, y: torch.Tensor):
        feat_x, feat_y = self.net(x), self.net(y)  # extract features; input (B, C, H, W), output: 5 per-layer maps whose (C_l, H_l, W_l) differ

        diff = [(fx - fy) ** 2 for fx, fy in zip(feat_x, feat_y)]  # squared difference of the (unit-normalized) features at each of the 5 layers
        res = [l(d).mean((2, 3), True) for d, l in zip(diff, self.lin)]  # weight the channels with w_l (the 1x1 conv), then average over the spatial dimensions

        return torch.sum(torch.cat(res, 0), 0, True)  # sum the per-layer distances
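
Note that self.net is expected to return activations that are already unit-normalized along the channel dimension (step 1 of the method); in this repo that is handled inside the network wrappers by a small helper, roughly:

import torch

def normalize_activation(x: torch.Tensor, eps: float = 1e-10) -> torch.Tensor:
    # divide each spatial position's feature vector by its l2 norm over the channels
    norm_factor = torch.sqrt(torch.sum(x ** 2, dim=1, keepdim=True))
    return x / (norm_factor + eps)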

LinLayers is implemented as follows (a note on what the 1x1 convolution computes follows the snippet):

from typing import Sequence

from torch import nn


class LinLayers(nn.ModuleList):
    def __init__(self, n_channels_list: Sequence[int]):
        super(LinLayers, self).__init__([
            nn.Sequential(
                nn.Identity(),  # no-op placeholder so the Conv2d sits at index 1, matching the checkpoint's key names (e.g. '0.1.weight')
                nn.Conv2d(nc, 1, 1, 1, 0, bias=False)
            ) for nc in n_channels_list
        ])

        for param in self.parameters():
            param.requires_grad = False  # freeze the weights; the linear layers are not trained here
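
The bias-free 1x1 convolution with a single output channel is just one learned weight per input channel: applied to a squared feature difference, it computes sum_c w_c (fx_c - fy_c)^2 at every spatial position, i.e. the w_l-weighted squared l2 distance. A quick check with hypothetical shapes:

import torch

c = 64
conv = torch.nn.Conv2d(c, 1, 1, 1, 0, bias=False)
d = torch.rand(1, c, 8, 8)                           # stands in for a squared feature difference
out = conv(d)                                        # shape (1, 1, 8, 8)
manual = (conv.weight * d).sum(dim=1, keepdim=True)  # explicit channel-weighted sum
assert torch.allclose(out, manual, atol=1e-6)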

get_state_dict is implemented as follows:

import torch
from collections import OrderedDict


def get_state_dict(net_type: str = 'alex', version: str = '0.1'):
    # build url; it must point at the raw file (a github.com/.../tree/... URL
    # returns an HTML page rather than the .pth weight file)
    url = 'https://github.com/richzhang/PerceptualSimilarity/' \
        + f'raw/master/models/weights/v{version}/{net_type}.pth'

    # download the weights from the authors' open-source GitHub repo
    old_state_dict = torch.hub.load_state_dict_from_url(
        url, progress=True,
        map_location=None if torch.cuda.is_available() else torch.device('cpu')
    )


    # rename keys
    new_state_dict = OrderedDict()
    for key, val in old_state_dict.items():
        new_key = key
        new_key = new_key.replace('lin', '')
        new_key = new_key.replace('model.', '')
        new_state_dict[new_key] = val

    return new_state_dict
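
For illustration, the renaming maps an upstream key such as 'lin0.model.1.weight' (the key format assumed here) to '0.1.weight', which is exactly how the ModuleList above names the Conv2d inside its first Sequential:

key = 'lin0.model.1.weight'
print(key.replace('lin', '').replace('model.', ''))  # -> '0.1.weight'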
    
Printing the LinLayers module built for AlexNet gives:

'''
LinLayers(
  (0): Sequential(
    (0): Identity()
    (1): Conv2d(64, 1, kernel_size=(1, 1), stride=(1, 1), bias=False)
  )
  (1): Sequential(
    (0): Identity()
    (1): Conv2d(192, 1, kernel_size=(1, 1), stride=(1, 1), bias=False)
  )
  (2): Sequential(
    (0): Identity()
    (1): Conv2d(384, 1, kernel_size=(1, 1), stride=(1, 1), bias=False)
  )
  (3): Sequential(
    (0): Identity()
    (1): Conv2d(256, 1, kernel_size=(1, 1), stride=(1, 1), bias=False)
  )
  (4): Sequential(
    (0): Identity()
    (1): Conv2d(256, 1, kernel_size=(1, 1), stride=(1, 1), bias=False)
  )
)
'''
