READING NOTE: Wider or Deeper: Revisiting the ResNet Model for Visual Recognition

TITLE: Wider or Deeper: Revisiting the ResNet Model for Visual Recognition

AUTHOR: Zifeng Wu, Chunhua Shen, Anton van den Hengel

ASSOCIATION: The University of Adelaide

FROM: arXiv:1611.10080

CONTRIBUTIONS

  1. A further developed, intuitive view of ResNets is introduced, which helps to understand their behaviour and to find possible directions for further improvement.
  2. A group of relatively shallow convolutional networks is proposed based on this new understanding. Some of them achieve state-of-the-art results on the ImageNet classification dataset.
  3. The impact of using different networks on the performance of semantic image segmentation is evaluated; used as pre-trained features, these networks can substantially boost existing algorithms.

SUMMARY

[Figure 1]

For residual unit $i$, let $y_{i-1}$ be its input, and let $f_i(\cdot)$ be its trainable non-linear mapping, also named Block $i$. The output of unit $i$ is recursively defined as

$$y_i = f_i(y_{i-1}, \omega_i) + y_{i-1}$$

where $\omega_i$ denotes the trainable parameters, and $f_i(\cdot)$ is often two or three stacked convolution stages in a ResNet building block. The top-left network can then be formulated as

$$y_2 = y_1 + f_2(y_1, \omega_2) = y_0 + f_1(y_0, \omega_1) + f_2\big(y_0 + f_1(y_0, \omega_1), \omega_2\big)$$
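
As a concrete illustration, here is a minimal PyTorch sketch of such a residual unit; the pre-activation layer ordering, channel counts, and 3×3 convolutions are assumptions for illustration rather than the exact blocks from the paper:

```python
import torch
import torch.nn as nn

# A minimal residual unit y_i = f_i(y_{i-1}, w_i) + y_{i-1}, where f_i is
# two stacked convolution stages. The pre-activation (BN-ReLU-Conv) ordering
# and the channel counts are illustrative assumptions, not the exact blocks
# used in the paper.
class ResidualUnit(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, y_prev):
        # identity shortcut plus the trainable mapping f_i
        return y_prev + self.f(y_prev)

# Stacking two units reproduces the unrolled expression above:
# y_2 = y_0 + f_1(y_0) + f_2(y_0 + f_1(y_0)).
unit1, unit2 = ResidualUnit(64), ResidualUnit(64)
y0 = torch.randn(1, 64, 32, 32)
y2 = unit2(unit1(y0))
```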

Thus, in an SGD iteration, the backward gradients are:

$$\Delta\omega_2 = \frac{\partial f_2}{\partial \omega_2}\,\Delta y_2$$

$$\Delta y_1 = \Delta y_2 + \frac{\partial f_2}{\partial y_1}\,\Delta y_2$$

$$\Delta\omega_1 = \frac{\partial f_1}{\partial \omega_1}\,\Delta y_2 + \frac{\partial f_1}{\partial \omega_1}\,\frac{\partial f_2}{\partial y_1}\,\Delta y_2$$

Ideally, when the effective depth $l \geq 2$, both terms of $\Delta\omega_1$ are non-zero, as the bottom-left case illustrates. However, when the effective depth $l = 1$, the second term goes to zero, as illustrated by the bottom-right case. If this happens, we say that the ResNet is over-deepened: it cannot be trained in a fully end-to-end manner, even with the shortcut connections.
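
The two-term decomposition of $\Delta\omega_1$ can be checked numerically with autograd. The sketch below uses scalar residual functions $f_i(y, \omega) = \omega\,\tanh(y)$, an illustrative assumption chosen only to expose the chain-rule structure:

```python
import torch

# Toy numeric check of the two-term decomposition of Δw_1, using scalar
# "residual units" f_i(y, w) = w * tanh(y). Functions and values are
# illustrative assumptions; only the chain-rule structure matters.
y0 = torch.tensor(0.5)
w1 = torch.tensor(0.3, requires_grad=True)
w2 = torch.tensor(0.7, requires_grad=True)

y1 = y0 + w1 * torch.tanh(y0)      # y_1 = y_0 + f_1(y_0, w_1)
y2 = y1 + w2 * torch.tanh(y1)      # y_2 = y_1 + f_2(y_1, w_2)
y2.backward()                      # corresponds to setting Δy_2 = 1

# Manual decomposition of Δw_1 into its two gradient paths:
df1_dw1 = torch.tanh(y0)                       # ∂f_1/∂w_1
df2_dy1 = w2 * (1 - torch.tanh(y1) ** 2)       # ∂f_2/∂y_1
term1 = df1_dw1                                # path through the shortcut
term2 = df1_dw1 * df2_dy1                      # path through Block 2
print(w1.grad.item(), (term1 + term2).item())  # the two values should match
```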

To summarize, shortcut connections enable us to train wider and deeper networks. Once they grow past some point, we face a dilemma between width and depth. From that point on, going deeper actually yields a wider network whose extra features are not completely end-to-end trained; going wider literally yields a wider network, without changing its end-to-end characteristic.

The authors designed three kinds of network structures, as illustrated in the following figure,

[Figure 2: the three proposed network structures]

and the classification performance on the ImageNet validation set is shown below:

[Figure 3: classification results on the ImageNet validation set]
