Using the Python interface of Caffe's NetSpec (net_spec.py) to auto-generate a train.prototxt file

  • Using the generation of a ResNet-18 network as an example, this post shows how to build your own Caffe network in Python.
  • Difficulty 1: how do you find out which fields a function interface accepts, and how do you add the fields you need? All the interface parameters are defined in caffe/proto/caffe.proto. message LayerParameter holds the parameters that every layer has; optional marks an optional field and repeated marks a repeatable one, which means a single value can be written directly while multiple values are written as a list. ConvolutionParameter holds the parameters specific to the convolution layer, i.e. the extra parameters it has beyond the common LayerParameter fields; the other layers' interfaces follow the same pattern. An abridged excerpt of the two messages is shown below.
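For orientation, here is an abridged sketch of those two messages, roughly as they read in BVLC Caffe (most fields omitted; consult caffe.proto itself for the authoritative definitions):

message LayerParameter {
  optional string name = 1;    // layer name
  optional string type = 2;    // layer type, e.g. "Convolution"
  repeated string bottom = 3;  // repeated: a layer may list several bottoms
  repeated string top = 4;     // repeated: a layer may list several tops
  // ... fields shared by all layers: include, param, loss_weight, ...
  optional ConvolutionParameter convolution_param = 106;
  // ... one optional *_param message per layer type ...
}

message ConvolutionParameter {
  optional uint32 num_output = 1;
  optional bool bias_term = 2 [default = true];
  repeated uint32 pad = 3;
  repeated uint32 kernel_size = 4;
  repeated uint32 stride = 6;
  optional FillerParameter weight_filler = 7;
  optional FillerParameter bias_filler = 8;
  // ...
}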
n.data, n.label = L.Data(source=lmdb, name='data', include=dict(phase=0), backend=P.Data.LMDB, batch_size=batch_size, ntop=2, transform_param=dict(crop_size=224, mean_file=mean_file, mirror=True))
  • Difficulty 2: what are the left-hand values, and how are the layer, top, and bottom named? The call above works as follows. The two left-hand values n.data and n.label mean this layer has two tops, named data and label (hence ntop=2). Note that top and bottom are blobs, which carry data, whereas what we are creating here are Layers, so a top may share a name with a layer (they are different kinds of objects), but two layers must not share a name. The left-hand assignments accumulate into the network: if two assignments use the same attribute, the later one overwrites the earlier, and the layer written first no longer appears in the network (a small sketch of this follows the prototxt below). Here the optional name field names this layer explicitly; if omitted, the name of the first top is used by default, so with or without it the layer is named data. dict(...) fills in a nested message as a Python dictionary. The prototxt generated by this one call is:
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    mirror: true
    crop_size: 224
    mean_file: "/ai/zhaoliang/shiyan_2/mean.binaryproto"
  }
  data_param {
    source: "/ai/zhaoliang/shiyan_2/train_db"
    batch_size: 64
    backend: LMDB
  }
}
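A minimal sketch of the overwrite behavior mentioned above (hypothetical layer name fc; assumes n.data already exists): when nothing downstream references the earlier top, only the later assignment is serialized.

n.fc = L.InnerProduct(n.data, num_output=10)  # overwritten below; since no other layer
                                              # references this top, it vanishes from the net
n.fc = L.InnerProduct(n.data, num_output=20)  # only this layer appears in the prototxt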

n.conv1 = L.Convolution(n.data, kernel_size=7, stride=2, num_output=64, pad=3, weight_filler=dict(type='msra'), bias_term=False)
n.bn_conv1 = L.BatchNorm(n.conv1, batch_norm_param=dict(moving_average_fraction=0.90), in_place=True)  # in-place: top stays conv1
n.scale_conv1 = L.Scale(n.bn_conv1, scale_param=dict(bias_term=True), in_place=True)
  • Difficulty 3: how do you generate a layer whose top and bottom are the same blob? In the code above, the first positional argument of each call is the layer's bottom, i.e. where its data comes from.
    The second and third calls are special in that their in_place field is True, meaning the data flows from the bottom back out through the same blob, which is exactly the case where top and bottom are identical in the prototxt. The generated prototxt is:
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 64
    bias_term: false
    pad: 3
    kernel_size: 7
    stride: 2
    weight_filler {
      type: "msra"
    }
  }
}
layer {
  name: "bn_conv1"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
  batch_norm_param {
    moving_average_fraction: 0.899999976158
  }
}
layer {
  name: "scale_conv1"
  type: "Scale"
  bottom: "conv1"
  top: "conv1"
  scale_param {
    bias_term: true
  }
}

  • As you can see, once the in_place field is set, the layer's top and bottom are the same.
  • Difficulty 4: how do you generate a layer with multiple bottoms or tops?
    There are two ways: either pass a value or a list through the bottom keyword, or pass the bottoms one after another starting from the first positional argument, as in the second line of the code below; tops work the same way.
  • Note! The bottoms of this Eltwise are the convolution tops rather than the batchnorm/scale tops (those run in place on the same blobs), and the call must have a left-hand return value, because the return value is what accumulates into the network; any top passed through the top keyword is an additional top beyond the left-hand one.
n.elt2d = L.Eltwise(n.conv2b, bottom='conv2c', eltwise_param=dict(operation=1))
n.elt2d = L.Eltwise(n.conv2b, n.conv2c, eltwise_param=dict(operation=1))
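Either call should yield an Eltwise layer with two bottom entries, roughly as below (operation=1 is the SUM value of the EltwiseOp enum, alongside PROD=0 and MAX=2):

layer {
  name: "elt2d"
  type: "Eltwise"
  bottom: "conv2b"
  bottom: "conv2c"
  top: "elt2d"
  eltwise_param {
    operation: SUM
  }
}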
  • Note 1: to generate two identical param entries, write them as a list, since param is a repeated field, as in the code below.
n.fc1024_skin = L.InnerProduct(n.poolaa, param=[dict(lr_mult=1, decay_mult=1), dict(lr_mult=1, decay_mult=1)], inner_product_param=dict(num_output=5, weight_filler=dict(type='xavier'), bias_filler=dict(type='constant', value=0)))
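For reference, that call should generate a layer along these lines, with one param block emitted per list entry:

layer {
  name: "fc1024_skin"
  type: "InnerProduct"
  bottom: "poolaa"
  top: "fc1024_skin"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 1
  }
  inner_product_param {
    num_output: 5
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}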

  • Note 2: some prototxt fields are not strings, e.g. phase: TRAIN/TEST. Writing phase='Train' directly in the interface raises an error; since Phase is an enum in caffe.proto (TRAIN = 0, TEST = 1), pass the integer instead, i.e. phase=0 or phase=1 (pycaffe also exposes the constants caffe.TRAIN and caffe.TEST, which equal those integers); experiment to confirm.
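A minimal sketch of both spellings (the layers here are placeholders purely to show the field; caffe.TRAIN and caffe.TEST are plain ints equal to 0 and 1):

import caffe
from caffe import layers as L, params as P

n = caffe.NetSpec()
# Phase is an enum (TRAIN = 0, TEST = 1), so pass the integer value...
n.data, n.label = L.Data(source='train_db', backend=P.Data.LMDB,
                         batch_size=64, ntop=2, include=dict(phase=0))
# ...or the pycaffe constant, which is the same integer.
n.acc = L.Accuracy(n.data, n.label, include=dict(phase=caffe.TEST))
print(n.to_proto())  # the field serializes as phase: TRAIN / phase: TEST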
  • How to check that the generated model is correct: paste the generated prototxt into an online network-visualization page (e.g. Netscope) and inspect the structure; a programmatic check is sketched below.
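Alternatively, a quick programmatic sanity check (a sketch; constructing the net parses and validates the prototxt, and because of the Data layer it needs the LMDB path to exist):

import caffe

# Malformed prototxt, unknown fields, dangling bottoms, and shape
# mismatches all fail loudly when the net is constructed.
net = caffe.Net('/ai/zhaoliang/shiyan_2/train.prototxt', caffe.TRAIN)
print(net.blobs['data'].data.shape)  # e.g. (64, 3, 224, 224) for this net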

The complete code is as follows:

# -*- coding: utf-8 -*-
from caffe import layers as L,params as P,to_proto
import caffe
import os
path='/ai/zhaoliang/shiyan_2/'
train_lmdb=path+'train_db'
mean_file=path+'mean.binaryproto'
train_proto=path+'train.prototxt' 

def create_net(lmdb,batch_size,include_acc=False):
    n=caffe.NetSpec()
    
    n.data, n.label = L.Data(source=lmdb,name='data',include=dict(phase=0), backend=P.Data.LMDB, batch_size=batch_size, ntop=2,
        transform_param=dict(crop_size=224,mean_file=mean_file,mirror=True,))
 
    
    # Layer 1
    n.conv1=L.Convolution(n.data,kernel_size=7, stride=2,num_output=64, pad=3,weight_filler=dict(type='msra'),bias_term=False)
    n.bn_conv1=L.BatchNorm(n.conv1,batch_norm_param=dict(moving_average_fraction=0.90),in_place=True)
    n.scale_conv1=L.Scale(n.bn_conv1,scale_param=dict(bias_term=True),in_place=True)
    n.relu1 = L.ReLU(n.scale_conv1, in_place=True)  # in_place: the ReLU reads and writes the same blob, so n.relu1 and n.scale_conv1 are two handles on the conv1 blob
    n.pool1=L.Pooling(n.relu1, pool=P.Pooling.MAX, kernel_size=3, stride=2)
    
    # Layer 2
    n.conv2a=L.Convolution(n.pool1,kernel_size=3, stride=1,num_output=64, pad=1,weight_filler=dict(type='msra'),bias_term=False)
    n.bn2a = L.BatchNorm(n.conv2a, batch_norm_param=dict(moving_average_fraction=0.90), in_place=True)  # in-place: top stays conv2a
    n.scale2a = L.Scale(n.bn2a,scale_param=dict(bias_term=True), in_place=True)
    n.relu2a=L.ReLU(n.scale2a, in_place=True)

    n.conv2b=L.Convolution(n.pool1,kernel_size=1, stride=1,num_output=64, pad=0,weight_filler=dict(type='msra'),bias_term=False)
    n.bn2b = L.BatchNorm(n.conv2b, batch_norm_param=dict(moving_average_fraction=0.90), in_place=True)  # in-place: top stays conv2b
    n.scale2b = L.Scale(n.bn2b, scale_param=dict(bias_term=True),in_place=True)
    
    n.conv2c=L.Convolution(n.relu2a,kernel_size=1, stride=1,num_output=64, pad=0,weight_filler=dict(type='msra'),bias_term=False)
    n.bn2c = L.BatchNorm(n.conv2c, batch_norm_param=dict(moving_average_fraction=0.90), in_place=True)  # in-place: top stays conv2c
    n.scale2c = L.Scale(n.bn2c,scale_param=dict(bias_term=True),in_place=True)
    
    n.elt2d=L.Eltwise(n.conv2b,n.conv2c,eltwise_param=dict(operation=1))
    n.relu2d=L.ReLU(n.elt2d,in_place=True)
    # Layer 3
    n.conv3a=L.Convolution(n.relu2d,kernel_size=3, stride=1,num_output=64, pad=1,weight_filler=dict(type='msra'),bias_term=False)
    n.bn3a = L.BatchNorm(n.conv3a, batch_norm_param=dict(moving_average_fraction=0.90), in_place=True)  # in-place: top stays conv3a
    n.scale3a = L.Scale(n.bn3a,scale_param=dict(bias_term=True), in_place=True)
    n.relu3a=L.ReLU(n.scale3a, in_place=True)


    n.conv3b=L.Convolution(n.relu3a,kernel_size=1, stride=1,num_output=64, pad=0,weight_filler=dict(type='msra'),bias_term=False)
    n.bn3b = L.BatchNorm(n.conv3b, batch_norm_param=dict(moving_average_fraction=0.90), in_place=True)  # in-place: top stays conv3b
    n.scale3b = L.Scale(n.bn3b, scale_param=dict(bias_term=True),in_place=True)
    
    n.elt3d=L.Eltwise(n.conv3a,n.conv3b,eltwise_param=dict(operation=1))
    n.relu3d=L.ReLU(n.elt3d,in_place=True)
    # Layer 4
    n.conv4a=L.Convolution(n.relu3d,kernel_size=3, stride=2,num_output=128, pad=1,weight_filler=dict(type='msra'),bias_term=False)
    n.bn4a = L.BatchNorm(n.conv4a, batch_norm_param=dict(moving_average_fraction=0.90), in_place=True)  # in-place: top stays conv4a
    n.scale4a = L.Scale(n.bn4a,scale_param=dict(bias_term=True), in_place=True)
    n.relu4a=L.ReLU(n.scale4a, in_place=True)

    n.conv4b=L.Convolution(n.relu3d,kernel_size=1, stride=2,num_output=128, pad=0,weight_filler=dict(type='msra'),bias_term=False)
    n.bn4b = L.BatchNorm(n.conv4b, batch_norm_param=dict(moving_average_fraction=0.90), in_place=True)  # in-place: top stays conv4b
    n.scale4b = L.Scale(n.bn4b, scale_param=dict(bias_term=True),in_place=True)
    
    n.conv4c=L.Convolution(n.relu4a,kernel_size=3, stride=1,num_output=128, pad=1,weight_filler=dict(type='msra'),bias_term=False)
    n.bn4c = L.BatchNorm(n.conv4c, batch_norm_param=dict(moving_average_fraction=0.90), in_place=True)  # in-place: top stays conv4c
    n.scale4c = L.Scale(n.bn4c,scale_param=dict(bias_term=True),in_place=True)
    
    n.elt4d=L.Eltwise(n.conv4b,n.conv4c,eltwise_param=dict(operation=1))
    n.relu4d=L.ReLU(n.elt4d,in_place=True)
    # Layer 5
    n.conv5a=L.Convolution(n.relu4d,kernel_size=3, stride=1,num_output=128, pad=1,weight_filler=dict(type='msra'),bias_term=False)
    n.bn5a = L.BatchNorm(n.conv5a, batch_norm_param=dict(moving_average_fraction=0.90), in_place=True)  # in-place: top stays conv5a
    n.scale5a = L.Scale(n.bn5a,scale_param=dict(bias_term=True), in_place=True)
    n.relu5a=L.ReLU(n.scale5a, in_place=True)


    n.conv5b=L.Convolution(n.relu5a,kernel_size=3, stride=1,num_output=128, pad=1,weight_filler=dict(type='msra'),bias_term=False)
    n.bn5b = L.BatchNorm(n.conv5b, batch_norm_param=dict(moving_average_fraction=0.90), in_place=True)  # in-place: top stays conv5b
    n.scale5b = L.Scale(n.bn5b, scale_param=dict(bias_term=True),in_place=True)
    
    n.elt5d=L.Eltwise(n.conv5a,n.conv5b,eltwise_param=dict(operation=1))
    n.relu5d=L.ReLU(n.elt5d,in_place=True)
    # Layer 6
    n.conv6a=L.Convolution(n.relu5d,kernel_size=3, stride=2,num_output=256, pad=1,weight_filler=dict(type='msra'),bias_term=False)
    n.bn6a = L.BatchNorm(n.conv6a, batch_norm_param=dict(moving_average_fraction=0.90), in_place=True)  # in-place: top stays conv6a
    n.scale6a = L.Scale(n.bn6a,scale_param=dict(bias_term=True), in_place=True)
    n.relu6a=L.ReLU(n.scale6a, in_place=True)

    n.conv6b=L.Convolution(n.relu5d,kernel_size=1, stride=2,num_output=256, pad=0,weight_filler=dict(type='msra'),bias_term=False)
    n.bn6b = L.BatchNorm(n.conv6b, batch_norm_param=dict(moving_average_fraction=0.90), in_place=True)  # in-place: top stays conv6b
    n.scale6b = L.Scale(n.bn6b, scale_param=dict(bias_term=True),in_place=True)
    
    n.conv6c=L.Convolution(n.relu6a,kernel_size=3, stride=1,num_output=256, pad=1,weight_filler=dict(type='msra'),bias_term=False)
    n.bn6c = L.BatchNorm(n.conv6c, batch_norm_param=dict(moving_average_fraction=0.90), in_place=True)  # in-place: top stays conv6c
    n.scale6c = L.Scale(n.bn6c,scale_param=dict(bias_term=True),in_place=True)
    
    n.elt6d=L.Eltwise(n.conv6b,n.conv6c,eltwise_param=dict(operation=1))
    n.relu6d=L.ReLU(n.elt6d,in_place=True)
    # Layer 7
    n.conv7a=L.Convolution(n.relu6d,kernel_size=3, stride=1,num_output=256, pad=1,weight_filler=dict(type='msra'),bias_term=False)
    n.bn7a = L.BatchNorm(n.conv7a, batch_norm_param=dict(moving_average_fraction=0.90), in_place=True)  # in-place: top stays conv7a
    n.scale7a = L.Scale(n.bn7a,scale_param=dict(bias_term=True), in_place=True)
    n.relu7a=L.ReLU(n.scale7a, in_place=True)


    n.conv7b=L.Convolution(n.relu7a,kernel_size=3, stride=1,num_output=256, pad=1,weight_filler=dict(type='msra'),bias_term=False)
    n.bn7b = L.BatchNorm(n.conv7b, batch_norm_param=dict(moving_average_fraction=0.90), in_place=True)  # in-place: top stays conv7b
    n.scale7b = L.Scale(n.bn7b, scale_param=dict(bias_term=True),in_place=True)
    
    n.elt7d=L.Eltwise(n.conv7a,n.conv7b,eltwise_param=dict(operation=1))
    n.relu7d=L.ReLU(n.elt7d,in_place=True)
    # Layer 8
    n.conv8a=L.Convolution(n.relu7d,kernel_size=3, stride=2,num_output=512, pad=1,weight_filler=dict(type='msra'),bias_term=False)
    n.bn8a = L.BatchNorm(n.conv8a, batch_norm_param=dict(moving_average_fraction=0.90), in_place=True)  # in-place: top stays conv8a
    n.scale8a = L.Scale(n.bn8a,scale_param=dict(bias_term=True), in_place=True)
    n.relu8a=L.ReLU(n.scale8a, in_place=True)

    n.conv8b=L.Convolution(n.relu7d,kernel_size=1, stride=2,num_output=512, pad=0,weight_filler=dict(type='msra'),bias_term=False)
    n.bn8b = L.BatchNorm(n.conv8b, batch_norm_param=dict(moving_average_fraction=0.90), in_place=True)  # in-place: top stays conv8b
    n.scale8b = L.Scale(n.bn8b, scale_param=dict(bias_term=True),in_place=True)
    
    n.conv8c=L.Convolution(n.relu8a,kernel_size=3, stride=1,num_output=512, pad=1,weight_filler=dict(type='msra'),bias_term=False)
    n.bn8c = L.BatchNorm(n.conv8c, batch_norm_param=dict(moving_average_fraction=0.90), in_place=True)  # in-place: top stays conv8c
    n.scale8c = L.Scale(n.bn8c,scale_param=dict(bias_term=True),in_place=True)
    
    n.elt8d=L.Eltwise(n.conv8b,n.conv8c,eltwise_param=dict(operation=1))
    n.relu8d=L.ReLU(n.elt8d,in_place=True)
    # Layer 9
    n.conv9a=L.Convolution(n.relu8d,kernel_size=3, stride=1,num_output=512, pad=1,weight_filler=dict(type='msra'),bias_term=False)
    n.bn9a = L.BatchNorm(n.conv9a, batch_norm_param=dict(moving_average_fraction=0.90), in_place=True)  # in-place: top stays conv9a
    n.scale9a = L.Scale(n.bn9a,scale_param=dict(bias_term=True), in_place=True)
    n.relu9a=L.ReLU(n.scale9a, in_place=True)


    n.conv9b=L.Convolution(n.relu9a,kernel_size=3, stride=1,num_output=512, pad=1,weight_filler=dict(type='msra'),bias_term=False)
    n.bn9b = L.BatchNorm(n.conv9b, batch_norm_param=dict(moving_average_fraction=0.90), in_place=True)  # in-place: top stays conv9b
    n.scale9b = L.Scale(n.bn9b, scale_param=dict(bias_term=True),in_place=True)
    
    n.elt9d=L.Eltwise(n.conv9a,n.conv9b,eltwise_param=dict(operation=1))
    n.relu9d=L.ReLU(n.elt9d,in_place=True)


    # Layer 10
    n.poolaa=L.Pooling(n.relu9d, pool=P.Pooling.AVE, kernel_size=2, stride=1)
    n.fc1024_skin=L.InnerProduct(n.poolaa,param=[dict(lr_mult=1,decay_mult=1),dict(lr_mult=1,decay_mult=1)],inner_product_param=dict(num_output=5,weight_filler=dict(type='xavier'),bias_filler=dict(type='constant',value=0)))
    n.fc31_skin=L.InnerProduct(n.fc1024_skin,param=[dict(lr_mult=1,decay_mult=1),dict(lr_mult=1,decay_mult=1)],inner_product_param=dict(num_output=5,weight_filler=dict(type='xavier'),bias_filler=dict(type='constant',value=0)))
    n.loss = L.SoftmaxWithLoss(n.fc31_skin, n.label)
    n.acctop1=L.Accuracy(n.fc31_skin,n.label,include=dict(phase=1),accuracy_param=dict(top_k=1))
    return n.to_proto()

def write_net():
    if not os.path.exists(train_proto):
        os.mknod(train_proto)
    with open(train_proto, 'w') as f:
        f.write(str(create_net(train_lmdb,batch_size=64)))

if __name__ == '__main__':
    write_net()
