Training and Testing Caffe on Your Own Dataset

Running Caffe's bundled examples is a good start, but the point of learning Caffe is not to do a few exercises; it is to apply it to your own projects or research. This post walks through the whole pipeline: from your own raw images, to LMDB data, to training and testing a model.


The full code is available on my GitHub at https://github.com/EddyGao/Caffe-taobao_Image-Identification. Download myfile4, place it under caffe/examples/, and run demo.sh.

1. Prepare the data

1) We borrow a dataset that someone shared online: 1,000 images of 10 Taobao product categories, 100 images per class. You can of course collect your own set of images for whatever you want to recognize.

Download link (Baidu netdisk): https://pan.baidu.com/s/1o77w1wI

After downloading, extract the archive.

2) Create a folder myfile4 under caffe/examples, and inside it a folder data to hold the dataset. Copy the train and val folders from the downloaded dataset into data, giving:

caffe/examples/myfile4/data/train

caffe/examples/myfile4/data/val
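
As a quick sanity check that the copy worked, you can count the images from the Caffe root (the exact totals depend on how the download splits train and val):

find examples/myfile4/data/train -name '*.jpg' | wc -l
find examples/myfile4/data/val -name '*.jpg' | wc -l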

2. Convert to LMDB format

1) Create create_filelist.sh in the myfile4 folder:

#!/usr/bin/env sh
DATA=examples/myfile4/data
MY=examples/myfile4/data

echo "Create train.txt..."
rm -rf $MY/train.txt

find $DATA/train -name '15001*.jpg' | cut -d '/' -f4-5 | sed "s/$/ 0/" >> $MY/train.txt
find $DATA/train -name '15059*.jpg' | cut -d '/' -f4-5 | sed "s/$/ 1/" >> $MY/train.txt
find $DATA/train -name '62047*.jpg' | cut -d '/' -f4-5 | sed "s/$/ 2/" >> $MY/train.txt
find $DATA/train -name '68021*.jpg' | cut -d '/' -f4-5 | sed "s/$/ 3/" >> $MY/train.txt
find $DATA/train -name '73018*.jpg' | cut -d '/' -f4-5 | sed "s/$/ 4/" >> $MY/train.txt
find $DATA/train -name '73063*.jpg' | cut -d '/' -f4-5 | sed "s/$/ 5/" >> $MY/train.txt
find $DATA/train -name '80012*.jpg' | cut -d '/' -f4-5 | sed "s/$/ 6/" >> $MY/train.txt
find $DATA/train -name '92002*.jpg' | cut -d '/' -f4-5 | sed "s/$/ 7/" >> $MY/train.txt
find $DATA/train -name '92017*.jpg' | cut -d '/' -f4-5 | sed "s/$/ 8/" >> $MY/train.txt
find $DATA/train -name '95005*.jpg' | cut -d '/' -f4-5 | sed "s/$/ 9/" >> $MY/train.txt

echo "Create test.txt..."
rm -rf $MY/val.txt

find $DATA/val -name '15001*.jpg' | cut -d '/' -f4-5 | sed "s/$/ 0/" >> $MY/val.txt
find $DATA/val -name '15059*.jpg' | cut -d '/' -f4-5 | sed "s/$/ 1/" >> $MY/val.txt
find $DATA/val -name '62047*.jpg' | cut -d '/' -f4-5 | sed "s/$/ 2/" >> $MY/val.txt
find $DATA/val -name '68021*.jpg' | cut -d '/' -f4-5 | sed "s/$/ 3/" >> $MY/val.txt
find $DATA/val -name '73018*.jpg' | cut -d '/' -f4-5 | sed "s/$/ 4/" >> $MY/val.txt
find $DATA/val -name '73063*.jpg' | cut -d '/' -f4-5 | sed "s/$/ 5/" >> $MY/val.txt
find $DATA/val -name '80012*.jpg' | cut -d '/' -f4-5 | sed "s/$/ 6/" >> $MY/val.txt
find $DATA/val -name '92002*.jpg' | cut -d '/' -f4-5 | sed "s/$/ 7/" >> $MY/val.txt
find $DATA/val -name '92017*.jpg' | cut -d '/' -f4-5 | sed "s/$/ 8/" >> $MY/val.txt
find $DATA/val -name '95005*.jpg' | cut -d '/' -f4-5 | sed "s/$/ 9/" >> $MY/val.txt

echo "All done"
Run this script from the Caffe root directory:

# sh examples/myfile4/create_filelist.sh
When it finishes, you will find train.txt and val.txt in the examples/myfile4/data folder.
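
Each line of these files pairs an image path (relative to the data root) with its numeric label. Assuming the dataset's original filenames, the first lines of train.txt look roughly like this (the exact filenames will differ):

train/15001_1.jpg 0
train/15001_2.jpg 0
train/15059_1.jpg 1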

2) Create create_lmdb.sh in the myfile4 folder:

#!/usr/bin/env sh
MY=examples/myfile4

# Adjust these to your own Caffe path; they must point at the data/ folder above
TRAIN_DATA_ROOT=/home/ghz/caffe/examples/myfile4/data/
VAL_DATA_ROOT=/home/ghz/caffe/examples/myfile4/data/

echo "Create train lmdb.."
rm -rf $MY/img_train_lmdb
build/tools/convert_imageset \
--shuffle \
--resize_height=32 \
--resize_width=32 \
$TRAIN_DATA_ROOT \
$MY/data/train.txt \
$MY/img_train_lmdb

echo "Create test lmdb.."
rm -rf $MY/img_val_lmdb
build/tools/convert_imageset \
--shuffle \
--resize_height=32 \
--resize_width=32 \
$VAL_DATA_ROOT \
$MY/data/val.txt \
$MY/img_val_lmdb

echo "All Done.."
Run this script from the Caffe root directory:

# sh examples/myfile4/create_lmdb.sh
After it runs, the img_train_lmdb and img_val_lmdb folders appear under myfile4.
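
An LMDB "file" is actually a directory. A quick way to confirm the conversion succeeded is to list it and check that data.mdb has a non-trivial size:

ls -lh examples/myfile4/img_train_lmdb
# expected contents: data.mdb  lock.mdb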

3. Compute and save the image mean

Create create_meanfile.sh in myfile4:

#!/usr/bin/env sh
EXAMPLE=examples/myfile4
DATA=examples/myfile4
TOOLS=build/tools

$TOOLS/compute_image_mean $EXAMPLE/img_train_lmdb $DATA/mean.binaryproto

echo "Done."
Run it from the Caffe root (sh examples/myfile4/create_meanfile.sh); it generates mean.binaryproto, which the data layers below use for mean subtraction.

4. Create the model and write the configuration files

Create myfile4_train_test.prototxt in myfile4. The network follows Caffe's CIFAR-10 "quick" architecture: three convolution+pooling stages followed by two fully connected layers (the TEST data layer below still carries the copied name "cifar"):

name: "myfile4"
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    mean_file: "examples/myfile4/mean.binaryproto"
  }
  data_param {
    source: "examples/myfile4/img_train_lmdb"
    batch_size: 50
    backend: LMDB
  }
}
layer {
  name: "cifar"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    mean_file: "examples/myfile4/mean.binaryproto"
  }
  data_param {
    source: "examples/myfile4/img_val_lmdb"
    batch_size: 50
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 32
    pad: 2
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.0001
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "pool1"
  top: "pool1"
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 32
    pad: 2
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: AVE
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "conv3"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 64
    pad: 2
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu3"
  type: "ReLU"
  bottom: "conv3"
  top: "conv3"
}
layer {
  name: "pool3"
  type: "Pooling"
  bottom: "conv3"
  top: "pool3"
  pooling_param {
    pool: AVE
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool3"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 64
    weight_filler {
      type: "gaussian"
      std: 0.1
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "gaussian"
      std: 0.1
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
Then create myfile4_solver.prototxt. Note that test_iter times the TEST-phase batch_size is the number of validation images evaluated per test pass; here 2 × 50 = 100:

net: "examples/myfile4/myfile4_train_test.prototxt"
test_iter: 2
test_interval: 50
base_lr: 0.001
lr_policy: "step"
gamma: 0.1
stepsize: 400
momentum: 0.9
weight_decay: 0.004
display: 10
max_iter: 2000
snapshot: 2000
snapshot_prefix: "examples/myfile4/my"
solver_mode: CPU

5. Train and test

Run the following from the Caffe root:

# build/tools/caffe train -solver examples/myfile4/myfile4_solver.prototxt
This runs 2,000 iterations and saves a snapshot of the model at the end. If there are no errors, a my_iter_2000.caffemodel file will appear in the myfile4 folder when training completes.
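
If training gets interrupted, Caffe can resume from a saved solver state. This is a sketch assuming you lowered snapshot in the solver (say, to 500) so that intermediate states such as my_iter_500.solverstate exist:

build/tools/caffe train \
  -solver examples/myfile4/myfile4_solver.prototxt \
  -snapshot examples/myfile4/my_iter_500.solverstate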

6. Classify with the trained model

1) Create synset_words.txt in myfile4. classification.bin maps the i-th line of this file to label i, so the order must match the label assignments in create_filelist.sh:

biao
fajia
kuzi
xiangzi
yizi
dianshi
suannai
xiangshui
hufupin
xiezi
2) Create deploy.prototxt in myfile4. It mirrors the training network, but replaces the LMDB Data layers with a single Input layer, drops the weight fillers (the weights come from the trained .caffemodel), and ends with a plain Softmax rather than SoftmaxWithLoss and Accuracy, since no labels are available at inference time:

name: "myfile4"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 32 dim: 32 } }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 32
    pad: 2
    kernel_size: 5
    stride: 1
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "pool1"
  top: "pool1"
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 32
    pad: 2
    kernel_size: 5
    stride: 1
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: AVE
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "conv3"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 64
    pad: 2
    kernel_size: 5
    stride: 1
  }
}
layer {
  name: "relu3"
  type: "ReLU"
  bottom: "conv3"
  top: "conv3"
}
layer {
  name: "pool3"
  type: "Pooling"
  bottom: "conv3"
  top: "pool3"
  pooling_param {
    pool: AVE
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool3"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 64
  }
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
  }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "ip2"
  top: "prob"
}

3) Create a folder images inside myfile4 and put the images you want to classify there. Mine is images/111.jpg; test images can be taken from the downloaded dataset.

4) Create demo.sh in myfile4:

./build/examples/cpp_classification/classification.bin \
  examples/myfile4/deploy.prototxt \
  examples/myfile4/my_iter_2000.caffemodel \
  examples/myfile4/mean.binaryproto \
  examples/myfile4/synset_words.txt \
  examples/myfile4/images/111.jpg
Run demo.sh from the Caffe root:

# sh examples/myfile4/demo.sh
The recognition results are printed to the shell.
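
classification.bin prints the top 5 predictions as probability/label pairs, with labels read from synset_words.txt. The output looks roughly like this (the numbers here are illustrative, not real results):

---------- Prediction for examples/myfile4/images/111.jpg ----------
0.9235 - "biao"
0.0312 - "xiezi"
0.0151 - "kuzi"
0.0140 - "yizi"
0.0089 - "fajia"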

That completes training and testing Caffe on your own dataset. Feel free to reach out with any questions.
