Residual Block and Residual Bottleneck for Residual Networks in TFLearn

Residual Block

tflearn.layers.conv.residual_block (incoming, nb_blocks, out_channels, downsample=False, downsample_strides=2, activation='relu', batch_norm=True, bias=True, weights_init='variance_scaling', bias_init='zeros', regularizer='L2', weight_decay=0.0001, trainable=True, restore=True, reuse=False, scope=None, name='ResidualBlock')

A residual block as described in MSRA’s Deep Residual Network paper. Full pre-activation architecture is used here.

Input
4-D Tensor [batch, height, width, in_channels].

Output
4-D Tensor [batch, new height, new width, out_channels].

Arguments
incoming: Tensor. Incoming 4-D Layer.
nb_blocks: int. Number of layer blocks.
out_channels: int. The number of convolutional filters of the convolution layers.
downsample: bool. If True, apply downsampling using 'downsample_strides' for strides.
downsample_strides: int. The strides to use when downsampling.
activation: str (name) or function (returning a Tensor). Activation applied to this layer (see tflearn.activations). Default: 'relu'.
batch_norm: bool. If True, apply batch normalization.
bias: bool. If True, a bias is used.
weights_init: str (name) or Tensor. Weights initialization (see tflearn.initializations). Default: 'variance_scaling'.
bias_init: str (name) or tf.Tensor. Bias initialization (see tflearn.initializations). Default: 'zeros'.
regularizer: str (name) or Tensor. Add a regularizer to this layer's weights (see tflearn.regularizers). Default: 'L2'.
weight_decay: float. Regularizer decay parameter. Default: 0.0001.
trainable: bool. If True, weights will be trainable.
restore: bool. If True, this layer's weights will be restored when loading a model.
reuse: bool. If True and 'scope' is provided, this layer's variables will be reused (shared).
scope: str. Define this layer scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
name: A name for this layer (optional). Default: 'ResidualBlock'.
References
Deep Residual Learning for Image Recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 2015.
Identity Mappings in Deep Residual Networks. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 2016.
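
A minimal usage sketch, in the spirit of TFLearn's CIFAR-10 residual network example (assumes TFLearn with a TensorFlow 1.x backend; the depth and filter counts shown are illustrative, not prescriptive):

import tflearn

# 32x32 RGB input, e.g. CIFAR-10.
net = tflearn.input_data(shape=[None, 32, 32, 3])
net = tflearn.conv_2d(net, 16, 3, regularizer='L2', weight_decay=0.0001)
# Stack residual blocks: two blocks with 16 output channels, then blocks that
# downsample (stride 2) while widening to 32 and 64 channels.
net = tflearn.residual_block(net, 2, 16)
net = tflearn.residual_block(net, 1, 32, downsample=True)
net = tflearn.residual_block(net, 1, 64, downsample=True)
# Full pre-activation blocks leave the final batch norm and ReLU to be applied here.
net = tflearn.batch_normalization(net)
net = tflearn.activation(net, 'relu')
net = tflearn.global_avg_pool(net)
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='momentum', loss='categorical_crossentropy')
model = tflearn.DNN(net)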

Residual Bottleneck
tflearn.layers.conv.residual_bottleneck (incoming, nb_blocks, bottleneck_size, out_channels, downsample=False, downsample_strides=2, activation='relu', batch_norm=True, bias=True, weights_init='variance_scaling', bias_init='zeros', regularizer='L2', weight_decay=0.0001, trainable=True, restore=True, reuse=False, scope=None, name='ResidualBottleneck')

A residual bottleneck block as described in MSRA’s Deep Residual Network paper. Full pre-activation architecture is used here.

Input
4-D Tensor [batch, height, width, in_channels].

Output
4-D Tensor [batch, new height, new width, out_channels].

Arguments
incoming: Tensor. Incoming 4-D Layer.
nb_blocks: int. Number of layer blocks.
bottleneck_size: int. The number of convolutional filters of the bottleneck convolutional layer.
out_channels: int. The number of convolutional filters of the layers surrounding the bottleneck layer.
downsample: bool. If True, apply downsampling using 'downsample_strides' for strides.
downsample_strides: int. The strides to use when downsampling.
activation: str (name) or function (returning a Tensor). Activation applied to this layer (see tflearn.activations). Default: 'relu'.
batch_norm: bool. If True, apply batch normalization.
bias: bool. If True, a bias is used.
weights_init: str (name) or Tensor. Weights initialization (see tflearn.initializations). Default: 'variance_scaling'.
bias_init: str (name) or tf.Tensor. Bias initialization (see tflearn.initializations). Default: 'zeros'.
regularizer: str (name) or Tensor. Add a regularizer to this layer's weights (see tflearn.regularizers). Default: 'L2'.
weight_decay: float. Regularizer decay parameter. Default: 0.0001.
trainable: bool. If True, weights will be trainable.
restore: bool. If True, this layer's weights will be restored when loading a model.
reuse: bool. If True and 'scope' is provided, this layer's variables will be reused (shared).
scope: str. Define this layer scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
name: A name for this layer (optional). Default: 'ResidualBottleneck'.
References
Deep Residual Learning for Image Recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 2015.
Identity Mappings in Deep Residual Networks. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 2016.
Links
http://arxiv.org/pdf/1512.03385v1.pdf
Identity Mappings in Deep Residual Networks: http://arxiv.org/abs/1603.05027
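
A minimal usage sketch for the bottleneck variant, in the spirit of TFLearn's MNIST residual network example (assumes TFLearn with a TensorFlow 1.x backend; the unit counts and channel sizes shown are illustrative):

import tflearn

# 28x28 grayscale input, e.g. MNIST.
net = tflearn.input_data(shape=[None, 28, 28, 1])
net = tflearn.conv_2d(net, 64, 3, activation='relu', bias=False)
# Each bottleneck unit squeezes to bottleneck_size with a 1x1 conv, applies a 3x3 conv,
# then expands back to out_channels with a 1x1 conv. Here: three 16->64 units, then
# a downsampling unit and two more units widened to 128 channels.
net = tflearn.residual_bottleneck(net, 3, 16, 64)
net = tflearn.residual_bottleneck(net, 1, 32, 128, downsample=True)
net = tflearn.residual_bottleneck(net, 2, 32, 128)
net = tflearn.batch_normalization(net)
net = tflearn.activation(net, 'relu')
net = tflearn.global_avg_pool(net)
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='momentum', loss='categorical_crossentropy')
model = tflearn.DNN(net)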
