Deconvolution, upsampling convolution, and transposed convolution all refer to the same thing:
an upsampling operation on a feature map. It is not a true deconvolution, because it cannot restore the previous values, only the shape, so formally it should be called transposed convolution. Some tasks, e.g. semantic segmentation, need the output of the net and the original image to have the same size.
Restoring the shape requires some tricks; refer to this blog.
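A minimal sketch of this point (assuming TensorFlow 1.x, where tf.contrib.layers is available): a strided convolution followed by a transposed convolution with the same kernel size, stride, and padding gets back the input's shape, but not its values.

import numpy as np
import tensorflow as tf

#a 1*4*4*1 input; conv with stride 2 and SAME padding gives 1*2*2*1
x = tf.constant(np.random.rand(1, 4, 4, 1), dtype=tf.float32)
conv = tf.contrib.layers.conv2d(x, num_outputs=1, kernel_size=3,
                                stride=2, padding='SAME')
#the transposed convolution with the same parameters maps 1*2*2*1 back to 1*4*4*1
deconv = tf.contrib.layers.conv2d_transpose(conv, num_outputs=1, kernel_size=3,
                                            stride=2, padding='SAME')
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    x_val, deconv_val = sess.run([x, deconv])
    print(x_val.shape, deconv_val.shape)   #both (1, 4, 4, 1): the shape is restored
    print(np.allclose(x_val, deconv_val))  #False: the values are not restored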
Transposed convolution is also applied in DCGAN.
Different deep learning libraries compute the transposed convolution in different styles; take tf.contrib.layers.conv2d_transpose() as an example (a minimal call is sketched after the parameter list):
inputs: A 4-D Tensor of type float and shape [batch, height, width, in_channels] for NHWC data format or [batch, in_channels, height, width] for NCHW data format.
num_outputs: integer, the number of output filters.
kernel_size: a list of length 2 holding the [kernel_height, kernel_width] of the filters. Can be an int if both values are the same.
stride: a list of length 2: [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value.
padding: one of 'VALID' or 'SAME'.
data_format: A string. NHWC (default) and NCHW are supported.
activation_fn: activation function, set to None to skip it and maintain a linear activation.
......
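Putting the listed parameters together in one minimal call (a sketch; the input shape and the names feats and up are made up for illustration):

import tensorflow as tf

#NHWC input: a batch of 8 feature maps of size 16*16 with 32 channels
feats = tf.placeholder(tf.float32, [8, 16, 16, 32])
up = tf.contrib.layers.conv2d_transpose(inputs=feats,
                                        num_outputs=16,      #number of output filters
                                        kernel_size=[3, 3],  #[kernel_height, kernel_width]
                                        stride=2,            #same stride for height and width
                                        padding='SAME',
                                        data_format='NHWC',
                                        activation_fn=None)  #skip the nonlinearity
print(up.get_shape())  #(8, 32, 32, 16): height and width are doubled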
In fact, when doing the transposed convolution you only need to follow the input/output size relation of the corresponding forward convolution.
For example, in a convolution operation:
Under SAME padding:
$new\_height = \mathrm{ceil}(height / stride)$
Under VALID padding:
$new\_height = \mathrm{ceil}((height - kernel + 1) / stride)$
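These two relations are easy to check numerically (a plain-Python sketch, assuming Python 3; conv_out_size is a hypothetical helper name):

import math

def conv_out_size(size, kernel, stride, padding):
    #output size of a forward convolution; a transposed convolution
    #maps sizes in the opposite direction under the same parameters
    if padding == 'SAME':
        return math.ceil(size / stride)
    if padding == 'VALID':
        return math.ceil((size - kernel + 1) / stride)

print(conv_out_size(6, 4, 2, 'SAME'))    #3, so a transposed conv maps 3 -> 6
print(conv_out_size(28, 5, 1, 'VALID'))  #24, so a transposed conv maps 24 -> 28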
Here is an example:
#assume the size of inputs is batch*3*3*512
transpose1 = tf.contrib.layers.conv2d_transpose(inputs=inputs,
                                                num_outputs=256,
                                                kernel_size=4, stride=(2, 2), padding='SAME')
#after the first transposed convolution, the size of transpose1 is batch*6*6*256, which follows the rule described above
transpose2 = tf.contrib.layers.conv2d_transpose(inputs=transpose1,
                                                num_outputs=128,
                                                kernel_size=4, stride=(2, 2), padding='SAME')
#after the second transposed convolution, the size of transpose2 is batch*12*12*128, which follows the rule described above
transpose3 = tf.contrib.layers.conv2d_transpose(inputs=transpose2,
                                                num_outputs=64,
                                                kernel_size=4, stride=(2, 2), padding='SAME')
#after the third transposed convolution, the size of transpose3 is batch*24*24*64, which follows the rule described above
#if we experiment on the MNIST dataset, the images are 28*28 with 1 channel,
#so for the last step we use VALID padding
transpose4 = tf.contrib.layers.conv2d_transpose(inputs=transpose3,
                                                num_outputs=1,
                                                kernel_size=5, stride=(1, 1), padding='VALID')
#this satisfies 24 = ceil((28-5+1)/1) under VALID padding
This is the basic operation of "deconvolution"; the concrete DCGAN code is on my blog.
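To double-check the shapes above at runtime (a sketch, assuming inputs was defined as a placeholder of shape [None, 3, 3, 512] and transpose1 through transpose4 were built as shown):

import numpy as np
import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, 3, 3, 512])
#...build transpose1 through transpose4 as above...
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(transpose4, feed_dict={inputs: np.zeros((1, 3, 3, 512), np.float32)})
    print(out.shape)  #(1, 28, 28, 1): matches the 28*28 single-channel MNIST images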