Caffe learning notes (1)

1. Convolution layer parameter settings; see ./models/bvlc_reference_caffenet/train_val.prototxt for the full definition

layer { name: "conv1" type: "Convolution" bottom: "data" top: "conv1" #滤波器学习速率和衰减速率设置 param { lr_mult: 1 decay_mult: 1 }
  #偏置学习速率和衰减速率设置
  param { lr_mult: 2 decay_mult: 0 }
  convolution_param { num_output: 96 # 卷积核数目 kernel_size: 11 # 卷积和大小11x11 stride: 4 # 滤波器移动步长,通常为1 weight_filler { type: "gaussian" #以高斯分布初始化滤波器参数 std: 0.01 # 高斯分布的标准差为0.01,默认为0 }
    bias_filler { type: "constant" # 以0初始化偏置 value: 0 }
  }
}
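
Given the 227x227 crops that bvlc_reference_caffenet feeds into conv1 (per the same train_val.prototxt), the spatial output size works out as:

output = (input - kernel_size) / stride + 1 = (227 - 11) / 4 + 1 = 55

so conv1 produces 96 feature maps of size 55x55.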

2. Pooling layer

layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3 # pool over a 3x3 region
    stride: 2      # step of 2; smaller than the kernel, so windows overlap (non-overlapping pooling sets stride equal to kernel_size)
  }
}
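
Continuing the worked example above: pool1 reduces conv1's 55x55 maps to (55 - 3) / 2 + 1 = 27, i.e. 96 feature maps of 27x27.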

3. Loss layers

Softmax
Layer type: SoftmaxWithLoss
The softmax loss layer computes the multinomial logistic loss of the softmax of its input; it is the standard choice for single-label classification.
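
A minimal sketch of its prototxt form (the fc8/label bottom blobs follow the CaffeNet train_val.prototxt referenced above):

layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc8"   # class scores
  bottom: "label" # ground-truth labels
  top: "loss"
}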

Sum-of-Squares / Euclidean
Layer type: EuclideanLoss
The Euclidean loss layer computes the sum of squared differences of its two inputs: E = 1/(2N) * sum_i ||x1_i - x2_i||^2.
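
A corresponding sketch for a regression setup (the blob names pred and label are illustrative assumptions, not from the original file):

layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "pred"  # predicted values
  bottom: "label" # regression targets
  top: "loss"
}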

4. Activation functions
ReLU / Rectified-Linear and Leaky-ReLU
ReLU(x)=max(0,x)

layer { name: "relu1" type: "ReLU" bottom: "conv1" top: "conv1" }

Sigmoid
S(x)=1/(1+exp(-x))

layer { name: "encode1neuron" bottom: "encode1" top: "encode1neuron" type: "Sigmoid" }

5. Fully connected layer (InnerProduct)

layer { name: "fc8" type: "InnerProduct" # learning rate and decay multipliers for the weights param { lr_mult: 1 decay_mult: 1 }
  # learning rate and decay multipliers for the biases
  param { lr_mult: 2 decay_mult: 0 }
  inner_product_param { num_output: 1000 #全连接层神经元节点数 weight_filler { type: "gaussian" std: 0.01 }
    bias_filler { type: "constant" value: 0 }
  }
  bottom: "fc7"
  top: "fc8"
}
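
For a sense of scale: assuming fc7 has 4096 outputs as in CaffeNet, fc8 stores 4096 * 1000 weights plus 1000 biases, i.e. 4096 * 1000 + 1000 = 4,097,000 parameters, and num_output: 1000 matches the 1000 ILSVRC classes.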
