1. conv1d
In NLP, and sometimes even in image processing, we may need one-dimensional convolution (conv1d). A 1-D convolution can be viewed as a simplification of 2-D convolution (conv2d): conv2d slides a window over a feature map along both the width and height directions, multiplying corresponding positions and summing, whereas conv1d slides the window along only the width (or, equivalently, only the height) direction and performs the same multiply-and-sum.
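To make the "slide along a single axis, multiply, and sum" operation concrete, here is a minimal single-channel NumPy sketch (the arrays x and w are purely illustrative):
import numpy as np
# A width-5 input and a width-3 kernel.
x = np.array([1., 2., 3., 4., 5.])
w = np.array([1., 0., -1.])
# Slide the kernel along the one axis; at each position, multiply
# element-wise with the window and sum ('VALID'-style, no padding).
out = [np.sum(x[i:i + len(w)] * w) for i in range(len(x) - len(w) + 1)]
print(out)  # [-2.0, -2.0, -2.0]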
The following is the API docstring for tf.nn.conv1d in TensorFlow:
Computes a 1-D convolution given 3-D input and filter tensors.

Given an input tensor of shape
  [batch, in_width, in_channels]
if data_format is "NHWC", or
  [batch, in_channels, in_width]
if data_format is "NCHW",
and a filter / kernel tensor of shape
[filter_width, in_channels, out_channels], this op reshapes
the arguments to pass them to conv2d to perform the equivalent
convolution operation.

Internally, this op reshapes the input tensors and invokes `tf.nn.conv2d`.
For example, if `data_format` does not start with "NC", a tensor of shape
  [batch, in_width, in_channels]
is reshaped to
  [batch, 1, in_width, in_channels],
and the filter is reshaped to
  [1, filter_width, in_channels, out_channels].
The result is then reshaped back to
  [batch, out_width, out_channels]
(where out_width is a function of the stride and padding as in conv2d) and
returned to the caller.

Args:
  value: A 3D `Tensor`. Must be of type `float32` or `float64`.
  filters: A 3D `Tensor`. Must have the same type as `input`.
  stride: An `integer`. The number of entries by which
    the filter is moved right at each step.
  padding: 'SAME' or 'VALID'
  use_cudnn_on_gpu: An optional `bool`. Defaults to `True`.
  data_format: An optional `string` from `"NHWC", "NCHW"`. Defaults
    to `"NHWC"`, the data is stored in the order of
    [batch, in_width, in_channels]. The `"NCHW"` format stores
    data as [batch, in_channels, in_width].
  name: A name for the operation (optional).

Returns:
  A `Tensor`. Has the same type as input.

Raises:
  ValueError: if `data_format` is invalid.
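As the docstring describes, conv1d is carried out internally by reshaping to a 2-D convolution. Below is a minimal sketch of that equivalence under the default NHWC format (TF 1.x API assumed; the tensor names are just illustrative):
import tensorflow as tf
import numpy as np
# Random data with shapes [batch, in_width, in_channels] and
# [filter_width, in_channels, out_channels].
value = np.random.rand(1, 10, 2).astype(np.float32)
filters = np.random.rand(3, 2, 4).astype(np.float32)
# Direct 1-D convolution.
y1 = tf.nn.conv1d(value, filters, stride=1, padding='VALID')
# Equivalent 2-D convolution: insert a height-1 axis, convolve, squeeze it back out.
value_2d = tf.expand_dims(value, 1)      # [batch, 1, in_width, in_channels]
filters_2d = tf.expand_dims(filters, 0)  # [1, filter_width, in_channels, out_channels]
y2 = tf.squeeze(tf.nn.conv2d(value_2d, filters_2d, strides=[1, 1, 1, 1], padding='VALID'), axis=1)
with tf.Session() as sess:
    r1, r2 = sess.run([y1, y2])
    print(r1.shape, np.allclose(r1, r2))  # (1, 8, 4) True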
2. Parameter details
Meaning of the conv1d parameters (using the NHWC format as an example, i.e., with the channel dimension last):
1. value: per the docstring, value has shape [batch, in_width, in_channels]. batch is the sample dimension (the number of samples), in_width is the width dimension (the width of each sample), and in_channels is the channel dimension (the number of channels per sample).
In fact, the shape can also be read as [batch, number of rows, number of columns], treating each sample as a 2-D array laid out flat; this reading makes the operation easier to picture (see the sketch after this list).
2. filters: per the docstring, filters has shape [filter_width, in_channels, out_channels]. Under the second reading of value, filter_width is the number of rows of value that the kernel covers at each step, and in_channels is the number of columns of value (it must match value's in_channels). out_channels is the number of output channels, which can be understood as the number of convolution kernels.
3. stride: an integer giving the step size, i.e., how far the window moves (downward) at each step. (The TensorFlow docs describe the move as "to the right"; under the row/column reading it can be viewed as moving downward.)
4. padding: same as in conv2d, 'SAME' or 'VALID', i.e., whether value is zero-padded at the bottom.
5. name: a name for the operation; optional.
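Under the row/column reading above, each step takes a block of filter_width rows by in_channels columns of value, multiplies it element-wise with each kernel, and sums. Here is a hand-rolled NumPy sketch of that procedure (the function name conv1d_by_hand is just illustrative), which also shows that with 'VALID' padding out_width = (in_width - filter_width) // stride + 1:
import numpy as np
def conv1d_by_hand(value, filters, stride):
    # value: [batch, in_width, in_channels]; filters: [filter_width, in_channels, out_channels]
    batch, in_width, in_channels = value.shape
    filter_width, _, out_channels = filters.shape
    out_width = (in_width - filter_width) // stride + 1  # 'VALID' padding
    out = np.zeros((batch, out_width, out_channels), dtype=value.dtype)
    for b in range(batch):
        for i in range(out_width):
            # filter_width rows by in_channels columns of the current sample.
            window = value[b, i * stride:i * stride + filter_width, :]
            for o in range(out_channels):
                out[b, i, o] = np.sum(window * filters[:, :, o])
    return out
Running this on the a and kernel arrays from the example below should reproduce the tf.nn.conv1d output shown there.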
3. Code example
A simple use of conv1d looks like this:
import tensorflow as tf
import numpy as np
# Define the input a: 1 sample, width 10, 2 channels.
a = np.array(np.arange(1, 1 + 20).reshape([1, 10, 2]), dtype=np.float32)
# Convolution kernel: filter_width 2, in_channels 2, out_channels 1 (a single kernel).
kernel = np.array(np.arange(1, 1 + 4), dtype=np.float32).reshape([2, 2, 1])
# Apply the 1-D convolution.
conv1d = tf.nn.conv1d(a, kernel, 1, 'VALID')
with tf.Session() as sess:
    # Initialize global variables (not strictly needed here, since the graph has no variables).
    tf.global_variables_initializer().run()
    # Print the convolution result.
    print(sess.run(conv1d))
The result is:
[[[ 30.]
  [ 50.]
  [ 70.]
  [ 90.]
  [110.]
  [130.]
  [150.]
  [170.]
  [190.]]]
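The first value can be checked by hand: the first window of a is [[1, 2], [3, 4]] and the kernel, viewed as [filter_width, in_channels], is [[1, 2], [3, 4]], so the first output is 1*1 + 2*2 + 3*3 + 4*4 = 30; each subsequent window shifts down one row and the result grows by 20. A quick NumPy check of the same arithmetic (outside of TensorFlow):
import numpy as np
a = np.arange(1, 21, dtype=np.float32).reshape(1, 10, 2)
kernel = np.arange(1, 5, dtype=np.float32).reshape(2, 2, 1)
# The same sliding-window multiply-and-sum that tf.nn.conv1d performs with 'VALID' padding.
manual = [np.sum(a[0, i:i + 2, :] * kernel[:, :, 0]) for i in range(9)]
print(manual)  # [30.0, 50.0, 70.0, 90.0, 110.0, 130.0, 150.0, 170.0, 190.0]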