Vision Layers
1. Convolution Layer
2. Pooling Layer
Activation / Neuron Layers
1. ReLU (Rectified-Linear and Leaky)
Loss Layers
Utility Layers
1. Slice
Splits the bottom into multiple tops as needed: along a given axis, the blob is cut into pieces at the given indices. For example, to split the channel axis with a total length of 50 at slice points 10, 30, and 40, the blob is cut into 4 pieces and 4 tops are produced. If the input has shape N*50*H*W, the tops have shapes N*10*H*W, N*20*H*W, N*10*H*W, and N*10*H*W. Usage example:
layer {
  name: "slice"
  type: "Slice"
  bottom: "input"
  top: "output1"
  top: "output2"
  top: "output3"
  top: "output4"
  slice_param {
    axis: 1  # the axis along which to slice
    slice_point: 10
    slice_point: 30
    slice_point: 40
  }
}
Note that if slice_point is given, the number of slice_point entries must equal the number of tops minus 1. slice_point gives the cut positions along axis; when it is not set, the axis is split into equal parts. The available parameters are:
message SliceParameter {
  // The axis along which to slice -- may be negative to index from the end
  // (e.g., -1 for the last axis).
  // By default, SliceLayer slices blobs along the "channels" axis (1).
  optional int32 axis = 3 [default = 1];
  repeated uint32 slice_point = 2;
  // DEPRECATED: alias for "axis" -- does not support negative indexing.
  optional uint32 slice_dim = 1 [default = 1];
}
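The slice_point arithmetic above can be made concrete with a small pure-Python sketch (the helper below is our illustration, not part of Caffe):

```python
def slice_points_to_sizes(axis_len, slice_points):
    """Convert Caffe-style slice_point values into the sizes of the
    resulting tops: the points are cut positions along the axis, so
    k points yield k + 1 tops. (Hypothetical helper, not Caffe code.)"""
    bounds = [0] + list(slice_points) + [axis_len]
    return [b - a for a, b in zip(bounds, bounds[1:])]

# The example from the text: axis length 50, slice_point 10, 30, 40
print(slice_points_to_sizes(50, [10, 30, 40]))  # → [10, 20, 10, 10]
```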
2. Concat
Concatenates its input layers along a given axis; it is the inverse of Slice. Usage example:
layer {
  name: "data_all"
  type: "Concat"
  bottom: "data_classfier"
  bottom: "data_boundingbox"
  bottom: "data_facialpoints"
  top: "data_all"
  concat_param {
    axis: 1  # axis to concatenate along; default 1 (channels), 0 concatenates along the batch (num) axis
  }
}
Parameter definition:
message ConcatParameter {
  // The axis along which to concatenate -- may be negative to index from the
  // end (e.g., -1 for the last axis). Other axes must have the
  // same dimension for all the bottom blobs.
  // By default, ConcatLayer concatenates blobs along the "channels" axis (1).
  optional int32 axis = 2 [default = 1];
  // DEPRECATED: alias for "axis" -- does not support negative indexing.
  optional uint32 concat_dim = 1 [default = 1];
}
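As a sanity check on the axis rule ("other axes must have the same dimension"), here is a hypothetical helper (not Caffe code) that computes the output shape of a Concat:

```python
def concat_shape(shapes, axis=1):
    """Output shape of Concat: all axes except `axis` must match across
    the bottoms; the `axis` dimensions are summed. (Illustrative sketch.)"""
    out = list(shapes[0])
    for s in shapes[1:]:
        # every non-concat axis must agree
        assert all(a == b for i, (a, b) in enumerate(zip(out, s)) if i != axis)
        out[axis] += s[axis]
    return out

# e.g. three bottoms concatenated along the channel axis
print(concat_shape([(32, 2, 1, 1), (32, 4, 1, 1), (32, 10, 1, 1)], axis=1))
# → [32, 16, 1, 1]
```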
3. Split
Copies a blob into several identical blobs for different consumer layers, which then share the same data. It takes no parameters.
4. Tile
Enlarges one axis of a blob by a factor of n by repetition. Note that the whole extent from the tiled axis onward is repeated as a block, not each element: tiling 1 2 3 4 along its own axis by a factor of 2 gives 1 2 3 4 1 2 3 4, while the per-element pattern 1 1 2 2 3 3 4 4 arises only when a trailing length-1 axis is tiled.
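Caffe's TileLayer copies the whole slab from the given axis onward, so whether you see block repetition (1 2 3 4 1 2 3 4) or per-element repetition (1 1 2 2 3 3 4 4) depends on which axis is tiled. A small pure-Python sketch of that copy loop (the flat row-major layout and function name are our illustration, not Caffe code):

```python
def tile(flat, shape, axis, tiles):
    """Repeat a flat row-major array `tiles` times along `axis`,
    mirroring TileLayer's inner copy loop. (Illustrative sketch.)"""
    inner = 1
    for d in shape[axis:]:
        inner *= d  # size of the slab that gets copied as a unit
    out = []
    for i in range(0, len(flat), inner):
        out.extend(flat[i:i + inner] * tiles)
    return out

# Tiling the axis itself repeats its whole extent:
print(tile([1, 2, 3, 4], (4,), axis=0, tiles=2))   # → [1, 2, 3, 4, 1, 2, 3, 4]
# Tiling a trailing length-1 axis repeats each element:
print(tile([1, 2, 3, 4], (4, 1), axis=1, tiles=2)) # → [1, 1, 2, 2, 3, 3, 4, 4]
```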
5. Reduction
Applies an operation such as sum or mean to the input blob along the axes specified by the parameters (put simply: it sums or averages the input feature map over the given tail axes). Usage example:
layer {
  name: "o_pseudoloss"
  type: "Reduction"
  bottom: "o"
  top: "o_pseudoloss"
  loss_weight: 1
  reduction_param {
    axis: 0         # default 0: reduce the entire blob to a scalar
    operation: SUM  # default SUM
  }
}
Parameters:
// Message that stores parameters used by ReductionLayer
message ReductionParameter {
  enum ReductionOp {
    SUM = 1;
    ASUM = 2;
    SUMSQ = 3;
    MEAN = 4;
  }
  optional ReductionOp operation = 1 [default = SUM]; // reduction operation
  // The first axis to reduce to a scalar -- may be negative to index from the
  // end (e.g., -1 for the last axis).
  // (Currently, only reduction along ALL "tail" axes is supported; reduction
  // of axis M through N, where N < num_axes - 1, is unsupported.)
  // Suppose we have an n-axis bottom Blob with shape:
  // (d0, d1, d2, ..., d(m-1), dm, d(m+1), ..., d(n-1)).
  // If axis == m, the output Blob will have shape
  // (d0, d1, d2, ..., d(m-1)),
  // and the ReductionOp operation is performed (d0 * d1 * d2 * ... * d(m-1))
  // times, each including (dm * d(m+1) * ... * d(n-1)) individual data.
  // If axis == 0 (the default), the output Blob always has the empty shape
  // (count 1), performing reduction across the entire input --
  // often useful for creating new loss functions.
  optional int32 axis = 2 [default = 0];
  optional float coeff = 3 [default = 1.0]; // coefficient for output
}
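The tail-axis behavior described in the comments above can be sketched in pure Python (illustrative helper, not Caffe code):

```python
def reduce_sum(flat, shape, axis=0):
    """Caffe-style Reduction with operation SUM: collapse all tail axes
    starting at `axis`. `flat` is the blob in row-major order.
    (Illustrative sketch, not Caffe code.)"""
    inner = 1
    for d in shape[axis:]:
        inner *= d  # number of elements folded into each output value
    return [sum(flat[i:i + inner]) for i in range(0, len(flat), inner)]

data = [1, 2, 3, 4, 5, 6]                # shape (2, 3)
print(reduce_sum(data, (2, 3), axis=1))  # → [6, 15]  (one sum per row)
print(reduce_sum(data, (2, 3), axis=0))  # → [21]     (whole blob to a scalar)
```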
6. Reshape
This one is simple: it works just like reshape in MATLAB, changing the blob's dimensions without changing its data. In reshape_param, a dim of 0 copies the corresponding bottom dimension and a dim of -1 infers that dimension from the remaining count.
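Assuming Caffe's ReshapeParameter convention that a dim of 0 copies the bottom dimension and -1 infers one dimension, the shape resolution can be sketched as follows (illustrative helper, not Caffe code):

```python
def reshape_dims(bottom_shape, dims):
    """Resolve Caffe ReshapeParameter dims: 0 copies the corresponding
    bottom dimension, -1 infers one dimension from the total element
    count. (Illustrative sketch, not Caffe code.)"""
    out = [bottom_shape[i] if d == 0 else d for i, d in enumerate(dims)]
    count = 1
    for d in bottom_shape:
        count *= d
    if -1 in out:
        known = 1
        for d in out:
            if d != -1:
                known *= d
        out[out.index(-1)] = count // known  # infer the remaining size
    return out

print(reshape_dims((2, 8), (0, 2, 4)))   # → [2, 2, 4]
print(reshape_dims((2, 3, 4), (0, -1)))  # → [2, 12]
```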
7. Eltwise
Performs an element-wise operation on its inputs, such as addition, multiplication, or maximum; the inputs must therefore have identical shapes.
layer {
  name: "eltwise_layer"
  type: "Eltwise"
  bottom: "A"
  bottom: "B"
  top: "diff"
  eltwise_param {
    operation: SUM
    coeff: 1   # computes 1*A + (-1)*B = A - B; with coeff omitted, plain A + B
    coeff: -1
  }
}
Note: the coeff parameter only applies to the SUM operation; it weights each bottom blob in the sum.
Parameters:
message EltwiseParameter {
  enum EltwiseOp {
    PROD = 0;
    SUM = 1;
    MAX = 2;
  }
  optional EltwiseOp operation = 1 [default = SUM]; // element-wise operation
  repeated float coeff = 2; // blob-wise coefficient for SUM operation
  // Whether to use an asymptotically slower (for >2 inputs) but stabler method
  // of computing the gradient for the PROD operation. (No effect for SUM op.)
  optional bool stable_prod_grad = 3 [default = true];
}
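The three operations and the coeff rule can be sketched in pure Python (illustrative, not Caffe code):

```python
def eltwise(op, bottoms, coeff=None):
    """Element-wise op over equally sized bottoms, sketching Caffe's
    Eltwise semantics; coeff applies only to SUM. (Illustrative.)"""
    if op == "SUM":
        c = coeff or [1.0] * len(bottoms)           # default weight 1 each
        return [sum(w * x for w, x in zip(c, xs)) for xs in zip(*bottoms)]
    if op == "PROD":
        out = list(bottoms[0])
        for b in bottoms[1:]:
            out = [x * y for x, y in zip(out, b)]
        return out
    if op == "MAX":
        return [max(xs) for xs in zip(*bottoms)]
    raise ValueError(op)

A, B = [1.0, 5.0, 2.0], [4.0, 3.0, 2.0]
print(eltwise("SUM", [A, B], coeff=[1, -1]))  # → [-3.0, 2.0, 0.0]  (A - B)
print(eltwise("MAX", [A, B]))                 # → [4.0, 5.0, 2.0]
```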
8. Flatten
Flattens an input of size n * c * h * w into a vector of size n * (c*h*w). It can be replaced by a Reshape: the first axis is kept and the rest is inferred automatically.
Parameter definition:
/// Message that stores parameters used by FlattenLayer
message FlattenParameter {
  // The first axis to flatten: all preceding axes are retained in the output.
  // May be negative to index from the end (e.g., -1 for the last axis).
  optional int32 axis = 1 [default = 1];
  // The last axis to flatten: all following axes are retained in the output.
  // May be negative to index from the end (e.g., the default -1 for the last
  // axis).
  optional int32 end_axis = 2 [default = -1];
}
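The interaction of axis and end_axis can be sketched as follows (illustrative helper, not Caffe code):

```python
def flatten_shape(shape, axis=1, end_axis=-1):
    """Output shape of Caffe's Flatten: collapse axes [axis, end_axis]
    into one dimension, keeping the axes outside that range.
    (Illustrative sketch, not Caffe code.)"""
    n = len(shape)
    axis, end_axis = axis % n, end_axis % n  # allow negative indexing
    merged = 1
    for d in shape[axis:end_axis + 1]:
        merged *= d
    return list(shape[:axis]) + [merged] + list(shape[end_axis + 1:])

# Defaults flatten n*c*h*w into n*(c*h*w):
print(flatten_shape((10, 3, 28, 28)))        # → [10, 2352]
# Flattening only the middle axes keeps the trailing one:
print(flatten_shape((2, 3, 4, 5), 1, 2))     # → [2, 12, 5]
```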
References
http://caffe.berkeleyvision.org/tutorial/
https://blog.csdn.net/chenzhi1992/article/details/52837462
https://www.jianshu.com/p/0ade01e9e48a