Training LeNet on MNIST with Caffe
This post follows the official example to train LeNet.
Note that Caffe commands must be run from the Caffe root directory:
cd $CAFFE_ROOT
./data/mnist/get_mnist.sh
./examples/mnist/create_mnist.sh
Running these in order produces two directories under caffe/examples/mnist, mnist_train_lmdb and mnist_test_lmdb, which serve as the training and test sets.
The LeNet in Caffe is not the classic LeNet-5; its parameters differ in places. Taking caffe/examples/mnist/lenet_train_test.prototxt as the example (the local file also differs slightly from the version in the online tutorial), let's walk through how the network is defined. First, the network name:
name: "LeNet"
Next, feed the MNIST data we just generated into the network:
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
Concretely, this layer is named "mnist", has type "Data", and feeds two output blobs, "data" and "label". The include block introduced earlier appears with TRAIN and TEST, marking whether the layer runs during training or testing; the two copies differ only in which dataset they read (see data_param). transform_param rescales the input into [0, 1]: the factor 0.00390625 = 1/256, so a raw pixel value of 255 maps to 255/256 ≈ 0.996.
The network has two convolutional layers. The first is:
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
This layer takes the data blob as input and produces the conv1 blob: 20 output channels, a 5×5 kernel, and stride 1. The two lr_mult entries adjust the learning rates of the layer's learnable parameters: the weights use the same learning rate as the solver, while the biases use twice that rate, which often leads to better convergence. The weights are initialized with the "xavier" filler, and the biases are initialized to 0 (the "constant" filler).
The second convolutional layer sits after pooling layer 1 and feeds into pooling layer 2; apart from the number of outputs (num_output), which becomes 50, its parameters are the same, as the sketch below shows.
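For completeness, here is what that second convolutional layer should look like, following the same pattern as conv1 (this matches the layer in the official lenet_train_test.prototxt; only bottom, top, and num_output change):
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
Between the two convolutional layers sits the first pooling layer: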
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
The first pooling layer performs max pooling with kernel size 2 and stride 2, i.e. non-overlapping pooling. The second pooling layer is exactly the same, except that its input is conv2 and its output goes to fully connected layer 1; see the sketch below.
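For reference, the second pooling layer as it appears in the official file:
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
And the first fully connected layer itself: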
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
Fully connected layers are written much like convolutional layers; ip1 produces 500 outputs.
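As a sanity check (assuming the standard 1×28×28 MNIST input), the blob shapes work out as follows: each 5×5, stride-1 convolution shrinks a side to (n − 5)/1 + 1, and each 2×2, stride-2 pooling halves it:
data (1×28×28) → conv1 (20×24×24) → pool1 (20×12×12) → conv2 (50×8×8) → pool2 (50×4×4) → ip1 (500) → ip2 (10)
So ip1 actually sees 50 × 4 × 4 = 800 input values per image (ip2 is the 10-way output layer introduced next).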
After an activation layer (shown next), there is one more fully connected layer, which produces the final classification and therefore has 10 outputs.
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
ReLU is an element-wise operation, so it can be performed in place (in-place operations) to save memory; in practice this simply means the top and bottom use the same blob name. Other layer types, of course, must not reuse blob names.
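That final fully connected layer, ip2, mirrors ip1 with num_output changed to 10; in the official file it reads:
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}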
Finally, the loss layer:
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
The softmax_loss layer implements both the softmax and the multinomial logistic loss (this saves time and improves numerical stability). It takes the predicted scores and the label as its inputs; rather than producing further activations, it computes the loss value and, on the backward pass, propagates the gradient with respect to ip2.
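Concretely, for a single example with true label y, writing z for the 10-dimensional ip2 output, the layer computes
loss = −log( exp(z_y) / Σ_k exp(z_k) )
Folding the softmax into the loss lets the log cancel the exp analytically, which is where the extra numerical stability comes from.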
The following layer is used to report the accuracy during testing:
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
It is written much like the loss layer, but note that it must specify phase: TEST.
The solver file is $CAFFE_ROOT/examples/mnist/lenet_solver.prototxt:
# The train/test net protocol buffer definition
net: "examples/mnist/lenet_train_test.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# In the case of MNIST, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 100
# Carry out testing every 500 training iterations.
test_interval: 500
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005
# The learning rate policy
lr_policy: "inv"
gamma: 0.0001
power: 0.75
# Display every 100 iterations
display: 100
# The maximum number of iterations
max_iter: 10000
# snapshot intermediate results
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
# solver mode: CPU or GPU
solver_mode: GPU
These parameters were covered in the previous post in this series, "caffe学习(8)Solver 配置详解" (a detailed guide to solver configuration).
For a quick start, you can simply run:
cd $CAFFE_ROOT
./examples/mnist/train_lenet.sh
which in turn executes:
./build/tools/caffe train --solver=examples/mnist/lenet_solver.prototxt
that is, caffe train with the solver configuration file we examined above. The log first shows the solver file being parsed; then the network model lenet_train_test.prototxt is loaded and the network parameters are initialized:
I1108 16:08:29.103813 46285 layer_factory.hpp:77] Creating layer mnist
I1108 16:08:29.104310 46285 net.cpp:100] Creating Layer mnist
I1108 16:08:29.104336 46285 net.cpp:408] mnist -> data
I1108 16:08:29.104374 46285 net.cpp:408] mnist -> label
I1108 16:08:29.107558 46328 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_train_lmdb
Look closely and you will notice the network is initialized twice. That is because we train and test at the same time, and the two networks differ only in that the test network carries the "accuracy" layer. These messages spell out how the layers are connected and what their inputs and outputs are. Once initialization finishes, training starts in earnest:
I1108 16:08:29.156116 46285 net.cpp:283] Network initialization done.
I1108 16:08:29.156206 46285 solver.cpp:60] Solver scaffolding done.
I1108 16:08:29.156466 46285 caffe.cpp:251] Starting Optimization
I1108 16:08:29.156500 46285 solver.cpp:279] Solving LeNet
I1108 16:08:29.156512 46285 solver.cpp:280] Learning Rate Policy: inv
I1108 16:08:29.158172 46285 solver.cpp:337] Iteration 0, Testing net (#0)
I1108 16:08:31.021287 46285 solver.cpp:404] Test net output #0: accuracy = 0.0933
I1108 16:08:31.021385 46285 solver.cpp:404] Test net output #1: loss = 2.36349 (* 1 = 2.36349 loss)
As you can see, the model is tested immediately after the parameters are initialized: accuracy is 9.33%, a little below even the 10% chance level. As configured, a loss message is printed every 100 iterations, and every 500 iterations the model is tested and an accuracy message printed:
I1108 16:08:46.974346 46285 solver.cpp:337] Iteration 500, Testing net (#0)
I1108 16:08:48.808943 46285 solver.cpp:404] Test net output #0: accuracy = 0.9767
I1108 16:08:48.809048 46285 solver.cpp:404] Test net output #1: loss = 0.068445 (* 1 = 0.068445 loss)
I1108 16:08:48.823623 46285 solver.cpp:228] Iteration 500, loss = 0.0609579
I1108 16:08:48.823714 46285 solver.cpp:244] Train net output #0: loss = 0.0609579 (* 1 = 0.0609579 loss)
I1108 16:08:48.823740 46285 sgd_solver.cpp:106] Iteration 500, lr = 0.0192814
After just 500 iterations, accuracy has already reached 97.67%.
When the iteration limit is reached (reduced from 10,000 to 1,000 for this run), we get the final results:
I1108 16:09:04.727638 46285 solver.cpp:454] Snapshotting to binary proto file examples/mnist/lenet_iter_1000.caffemodel
I1108 16:09:04.754024 46285 sgd_solver.cpp:273] Snapshotting solver state to binary proto file examples/mnist/lenet_iter_1000.solverstate
I1108 16:09:04.770093 46285 solver.cpp:317] Iteration 1000, loss = 0.0819535
I1108 16:09:04.770177 46285 solver.cpp:337] Iteration 1000, Testing net (#0)
I1108 16:09:06.607952 46285 solver.cpp:404] Test net output #0: accuracy = 0.9844
I1108 16:09:06.608042 46285 solver.cpp:404] Test net output #1: loss = 0.0491373 (* 1 = 0.0491373 loss)
I1108 16:09:06.608055 46285 solver.cpp:322] Optimization Done.
I1108 16:09:06.608064 46285 caffe.cpp:254] Optimization Done.
Training leaves us with two files: lenet_iter_1000.caffemodel, the trained weights, and lenet_iter_1000.solverstate, the saved solver state.
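As a closing aside, the solver state is what lets you resume an interrupted run. A minimal sketch, assuming the iteration-1000 snapshot produced above:
cd $CAFFE_ROOT
# Resume optimization from the saved solver state; the solver picks up
# its learning rate schedule and momentum where the snapshot left off.
./build/tools/caffe train --solver=examples/mnist/lenet_solver.prototxt \
    --snapshot=examples/mnist/lenet_iter_1000.solverstate
The .caffemodel, by contrast, holds only the weights, and is what you would pass via --weights when fine-tuning or deploying the trained network.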