SENet <- Inception, ResNet <- VGG

Deeper architectures

https://arxiv.org/pdf/1709.01507.pdf

Keywords: learning, representation

  • VGGNets [11] and Inception models [5] showed that increasing the depth of a network could significantly increase the quality of representations that it was capable of learning.
  • By regulating the distribution of the inputs to each layer, Batch Normalization (BN) [6] added stability to the learning process in deep networks and produced smoother optimisation surfaces [12].
  • Building on these works, ResNets demonstrated that it was possible to learn considerably deeper and stronger networks through the use of identity-based skip connections [13], [14]. (A minimal sketch of such a block follows this list.)
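To make the two ingredients above concrete, here is a minimal PyTorch sketch of a ResNet-style block that combines Batch Normalization with an identity-based skip connection. The class name `BasicResidualBlock` and the channel counts are illustrative assumptions, not taken from any of the cited papers.

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Two 3x3 convs with BN, plus an identity skip connection (ResNet-style)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)   # BN regulates the distribution of layer inputs
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x                           # identity-based skip connection
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + identity                   # residual addition enables much deeper networks
        return self.relu(out)

# quick shape check
x = torch.randn(1, 64, 32, 32)
print(BasicResidualBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```

Because the skip path is an identity mapping, gradients can flow directly to earlier layers, which is what allows these networks to be trained at considerably greater depth.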

An alternative but closely related line of research has focused on methods to improve the functional form of the computational elements contained within a network.

  • Grouped convolutions have proven to be a popular approach for increasing the cardinality of learned transformations [18], [19].
  • More flexible compositions of operators can be achieved with multi-branch convolutions [5], [6], [20], [21], which can be viewed as a natural extension of the grouping operator. (A minimal sketch of both ideas follows this list.)
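The following PyTorch sketch illustrates both ideas under assumed channel sizes: a grouped convolution via the `groups` argument, and a small Inception-style multi-branch block whose parallel outputs are concatenated. The class name `MultiBranchBlock` and the branch widths are hypothetical, chosen only for the example.

```python
import torch
import torch.nn as nn

# Grouped convolution: `groups` splits the channels into independent groups,
# increasing the cardinality of the learned transformation (ResNeXt-style).
grouped_conv = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=32)

class MultiBranchBlock(nn.Module):
    """Inception-style multi-branch composition: parallel convs, concatenated outputs."""
    def __init__(self, in_ch: int):
        super().__init__()
        self.branch1x1 = nn.Conv2d(in_ch, 32, kernel_size=1)
        self.branch3x3 = nn.Conv2d(in_ch, 32, kernel_size=3, padding=1)
        self.branch5x5 = nn.Conv2d(in_ch, 32, kernel_size=5, padding=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat(
            [self.branch1x1(x), self.branch3x3(x), self.branch5x5(x)], dim=1
        )

x = torch.randn(1, 64, 32, 32)
print(grouped_conv(x).shape)          # torch.Size([1, 64, 32, 32])
print(MultiBranchBlock(64)(x).shape)  # torch.Size([1, 96, 32, 32])
```

Viewed this way, a grouped convolution is the special case where every branch applies the same operator to its own slice of channels, while a multi-branch block lets each branch use a different operator.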
