Project repo: https://github.com/Robert-JunWang/Pelee
Pelee is a bit stronger than MobileNet-SSD.
The comparison above is taken from the author's own results.
S1. Clone the project and put it under the caffe/examples directory.
S2. Download the pretrained model.
The file marked by the red line is the model, a .caffemodel file.
S3. Open train_voc.py; it generates all the prototxt files needed for training. Change it to match your own setup.
model_meta = {  # dict mapping each --arch choice to the function that builds that network
    'pelee': Pelee,
    'ssd': VGG_SSD,
    'run': VGG_RUN
}
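Just so the dict above makes sense: further down, the script looks up the function chosen by --arch and calls it to build the network. Roughly like this (a sketch only, not the exact upstream code; the real call takes more arguments):
build_net = model_meta[args.arch]   # e.g. the Pelee function when --arch pelee
net = build_net()                   # returns the network definition used for the generated prototxt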
# Path to the pretrained weights; prefer an absolute path, since this script is run from the caffe/ directory
Default_weights_file = "models/peleenet_inet_acc7243.caffemodel"
parser = argparse.ArgumentParser(description='Pelee Training')
# Which entry of model_meta to use; set the default to 'pelee'
parser.add_argument('--arch', '-a', metavar='ARCH', default='pelee',
choices=model_meta.keys(),
help='model architecture: ' +
' | '.join(model_meta.keys()) +
' (default: pelee)')
# Initial learning rate (lr_base). weight-decay is the regularization term that guards against overfitting; 0.0005 is fine. For batch-size, simply use the largest value that does not blow your GPU memory.
parser.add_argument('--lr', '--learning-rate', default=0.001, type=float,
metavar='LR', help='initial learning rate (default: 0.001)')
parser.add_argument('--weight-decay', '--wd', default=0.0005, type=float,
metavar='W', help='weight decay (default: 5e-4)')
parser.add_argument('-b', '--batch-size', default=32, type=int,
metavar='N', help='mini-batch size (default: 32)')
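If the batch size you want does not fit in GPU memory, the SSD-style scripts usually fall back on gradient accumulation instead of shrinking the effective batch. A rough sketch (accum_batch_size is an assumed name and value, and args.batch_size refers to the parsed value further down; check your copy of train_voc.py):
accum_batch_size = 32                               # desired effective batch size (assumed)
iter_size = accum_batch_size // args.batch_size     # the solver accumulates gradients over iter_size passes
# e.g. -b 8 with accum_batch_size 32 gives iter_size 4, i.e. the same effective batch on less memory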
# Leave kernel-size alone; the default is 1
parser.add_argument('-k', '--kernel-size', default=1, type=int,
metavar='K', help='kernel size for CreateMultiBoxHead (default: 1)')
# Input image size; the default is 304
parser.add_argument('--image-size', default=304, type=int,
metavar='IMAGE_SIZE', help='image size (default: 304)')
# weights is the pretrained model file set just above
parser.add_argument('--weights', default=Default_weights_file, type=str,
metavar='WEIGHTS', help='initial weights file (default: {})'.format(Default_weights_file))
# Whether to start training right away after the .prototxt files are generated
parser.add_argument('--run-later', dest='run_soon', action='store_false',
help='start training later after generating all files')
# Iterations at which the learning rate is dropped, e.g. once at 20000 iterations, again at 80000, and so on
parser.add_argument('--step-value', '-s', nargs='+', type=int, default=[20000, 80000, 120000],
metavar='S', help='step value (default: [20000, 80000, 120000])')
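All of the values above feed the generated solver.prototxt. For reference, with Caffe's standard 'multistep' policy the relevant solver fields look roughly like this (a sketch; the script may add more fields such as momentum and snapshot settings):
solver_param = {
    'base_lr': args.lr,                  # initial learning rate
    'weight_decay': args.weight_decay,   # L2 regularization strength
    'lr_policy': "multistep",            # drop the learning rate at each iteration listed in stepvalue
    'stepvalue': args.step_value,        # e.g. [20000, 80000, 120000]
    'gamma': 0.1,                        # assumed decay factor: lr is multiplied by 0.1 at each step
}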
# optional name suffix for the generated files (the upstream spelling 'posfix' is kept as-is)
parser.add_argument('--posfix', '-p', metavar='POSFIX', default='', type=str)
parser.set_defaults(run_soon=True)  # by default, start training right after the .prototxt files are generated
args = parser.parse_args()  # parse the command line; args.* is used throughout the rest of the script
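With the defaults above in place, launching the generator from CAFFE_ROOT looks something like this (the examples/pelee path just reflects where step S1 put the repo; adjust it to your layout):
python examples/pelee/train_voc.py -a pelee -b 32 --lr 0.001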
# The directory which contains the caffe code.
# We assume you are running the script at the CAFFE_ROOT.
caffe_root = os.getcwd()
# Set true if you want to start training right after generating all files.
# run_soon = True
run_soon = args.run_soon
# Set true if you want to load from most recently saved snapshot.
# Otherwise, we will load from the pretrain_model defined below.
resume_training = True
# If true, Remove old model files.
remove_old_models = False
pretrain_model = args.weights
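For context, here is roughly what these settings do at the end of the script. This is a hedged sketch, not the verbatim upstream code; latest_snapshot and job_file are placeholder names, and the generated .sh job script is what actually calls caffe train:
train_src_param = '--weights="{}" '.format(pretrain_model)      # default: start from the pretrained weights
if resume_training and latest_snapshot is not None:             # if a snapshot exists, resume from it instead
    train_src_param = '--snapshot="{}" '.format(latest_snapshot)
if run_soon:
    subprocess.call(job_file, shell=True)                        # assumes subprocess is imported at the top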
# The database file for training data. Created by data/VOC0712/create_data.sh
train_data = "/home/mydataset/mydataset_trainval_lmdb"  # path to your own training LMDB
# The database file for testing data. Created by data/VOC0712/create_data.sh
test_data = "/home/mydataset/mydataset_test_lmdb"
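A small sanity check right after these two lines saves confusion later (os is already imported, see os.getcwd() above; the original SSD scripts do something similar with a check_if_exist helper):
for db in (train_data, test_data):
    assert os.path.exists(db), "LMDB not found: " + db + " (check the path or re-run create_data.sh)"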
# Specify the batch sampler.
resize_width = args.image_size  # note: width and height do not have to be equal; you can set them separately here
resize_height = args.image_size
resize = "{}x{}".format(resize_width, resize_height)
In the network definition that follows, you can set the per-channel mean values yourself. To be continued.
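To make that concrete: in SSD-style generation scripts the per-channel means and the resize settings usually sit in a transform_param dict along these lines. A sketch only; field names follow the Caffe SSD fork, P is caffe's layer-parameter module (from caffe import params as P), and [103.94, 116.78, 123.68] are the BGR means commonly used by the MobileNet/Pelee family, so swap in your own dataset's means if they differ:
train_transform_param = {
    'mirror': True,                            # random horizontal flips for augmentation
    'mean_value': [103.94, 116.78, 123.68],    # per-channel mean, BGR order
    'resize_param': {
        'prob': 1,
        'resize_mode': P.Resize.WARP,          # warp every image to resize_width x resize_height
        'height': resize_height,
        'width': resize_width,
    },
}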