Paper Reading Notes: "Scale-recurrent Network for Deep Image Deblurring"

Paper: https://arxiv.org/pdf/1802.01770.pdf

Code: https://github.com/jiangsutx/SRN-Deblur

In this paper, the authors explore a more effective network structure for multi-scale image deblurring and propose a new scale-recurrent network (SRN).

Scale-recurrent Structure

In well-established multi-scale methods, the solver and its parameters at each scale are usually the same. Intuitively this is a natural choice, since at every scale the goal is to solve the same problem. It has also been found that varying the parameters across scales can introduce instability and the extra problem of an unrestricted solution space. Another concern is that input images may have different resolutions and motion scales. If parameter tuning were allowed at each scale, the solution might overfit to a specific image resolution or motion scale.

The paper therefore proposes sharing network weights across scales, which significantly reduces training difficulty and brings a clear stability benefit. The advantages are twofold. First, it greatly reduces the number of trainable parameters. Even with the same amount of training data, the recurrent use of shared weights works much like reusing the data several times to learn the parameters, which effectively amounts to data augmentation across scales. Second, the proposed structure can incorporate recurrent modules, whose hidden state implicitly captures useful information from one scale and passes it on to benefit restoration at the next.
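As a rough sketch of this formulation (my paraphrase of the paper's scale-recurrent equation; the notation may differ slightly from the original), scale $i$ is processed as

$$
I^{(i)},\ h^{(i)} = \mathrm{Net}_{SR}\!\left(B^{(i)},\ I^{(i+1)\uparrow},\ h^{(i+1)\uparrow};\ \theta_{SR}\right),
$$

where $B^{(i)}$ and $I^{(i)}$ are the blurry input and the estimated sharp image at scale $i$ (with $i=1$ the finest scale), $h^{(i)}$ is the hidden state passed between scales, $(\cdot)^{\uparrow}$ denotes upsampling the coarser-scale result to the current resolution, and $\theta_{SR}$ are the network parameters shared by all scales.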

Encoder-decoder ResBlock Network

Inspired by the recent success of encoder-decoder structures in various computer vision tasks, the authors also explore an effective way to adapt them to image deblurring. The paper shows that directly applying an existing encoder-decoder structure does not produce optimal results. Their encoder-decoder ResBlock network, in contrast, amplifies the merits of various CNN structures and keeps training feasible. It also yields a very large receptive field, which is of vital importance for deblurring images with large motion.
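To make the block design concrete, here is a minimal PyTorch-style sketch of a ResBlock together with an encoder stage (EBlock) and a decoder stage (DBlock) in the spirit of the paper. The kernel sizes, channel widths, and number of ResBlocks per stage are illustrative assumptions, not necessarily the paper's exact configuration:

```python
import torch.nn as nn

class ResBlock(nn.Module):
    """Two convolutions with an identity skip connection (no batch norm)."""
    def __init__(self, channels, kernel_size=5):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Conv2d(channels, channels, kernel_size, padding=pad)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size, padding=pad)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return x + self.conv2(self.relu(self.conv1(x)))

class EBlock(nn.Module):
    """Encoder stage: a strided convolution to downsample, then a few ResBlocks."""
    def __init__(self, in_ch, out_ch, num_res=3):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(in_ch, out_ch, 5, stride=2, padding=2),
                                  nn.ReLU(inplace=True))
        self.res = nn.Sequential(*[ResBlock(out_ch) for _ in range(num_res)])

    def forward(self, x):
        return self.res(self.down(x))

class DBlock(nn.Module):
    """Decoder stage: a few ResBlocks, then a transposed convolution to upsample."""
    def __init__(self, in_ch, out_ch, num_res=3):
        super().__init__()
        self.res = nn.Sequential(*[ResBlock(in_ch) for _ in range(num_res)])
        self.up = nn.Sequential(nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1),
                                nn.ReLU(inplace=True))

    def forward(self, x):
        return self.up(self.res(x))
```

Stacking a few such EBlocks, placing a recurrent unit (the paper uses a ConvLSTM) at the bottleneck, and mirroring the EBlocks with DBlocks connected through skip connections gives the encoder-decoder ResBlock network; the repeated downsampling plus stacked ResBlocks is what produces the large receptive field mentioned above.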


The network structure is shown below:

The overall architecture of the proposed network, called SRN-DeblurNet, is shown in Fig. 3. It takes as input a sequence of blurry images downsampled from the input image at different scales and produces a set of corresponding sharp images. The sharp image at full resolution is the final output.

[Fig. 3: overall architecture of SRN-DeblurNet]
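As a hedged illustration of how the image pyramid, the upsampled previous estimate, and the shared weights fit together at inference time, here is a small Python sketch of the coarse-to-fine loop; `deblur_net`, the number of scales, and the scale ratio are assumptions for illustration, not the paper's exact interface:

```python
import torch.nn.functional as F

def run_srn(blurry, deblur_net, num_scales=3, ratio=0.5):
    """Coarse-to-fine pass: the same network (shared weights) is applied at every scale.

    blurry:     full-resolution blurry image, shape (N, C, H, W)
    deblur_net: assumed callable taking (blurry_i, previous_estimate, hidden_state) and
                returning (sharp_i, new_hidden_state); it must accept hidden_state=None
                at the coarsest scale, and hidden_state is assumed to be a tensor here.
    """
    # Build the image pyramid: index 0 is full resolution, the last entry is the coarsest.
    pyramid = [blurry if i == 0 else
               F.interpolate(blurry, scale_factor=ratio ** i,
                             mode='bilinear', align_corners=False)
               for i in range(num_scales)]

    estimate, hidden = None, None
    for b_i in reversed(pyramid):   # process from the coarsest scale to the finest
        if estimate is None:
            estimate = b_i          # at the coarsest scale, start from the blurry input itself
        else:
            # Upsample the previous scale's estimate and hidden state to this resolution.
            estimate = F.interpolate(estimate, size=b_i.shape[-2:],
                                     mode='bilinear', align_corners=False)
            hidden = F.interpolate(hidden, size=b_i.shape[-2:],
                                   mode='bilinear', align_corners=False)
        estimate, hidden = deblur_net(b_i, estimate, hidden)

    return estimate                 # sharp image at full resolution
```

Note that the same `deblur_net` (and therefore the same set of weights) is called at every scale, which is exactly the weight sharing discussed in the Scale-recurrent Structure section.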

The backbone above is the same as the ones covered in my two earlier posts, "Paper Reading Notes: 'Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring'" and "Paper Reading Notes: 'Deep Stacked Hierarchical Multi-patch Network for Image Deblurring'".

Since it also uses an encoder and a decoder, it ends up looking very similar to the "Deep Stacked Hierarchical Multi-patch Network for Image Deblurring" paper: one unrolls the structure across scales while the other does not. The same recipe keeps showing up.

 

 

