Abstract: This article walks through the PyTorch -> Caffe -> om model conversion workflow.
Baseline: PytorchToCaffe
The main functional code lives in:
PytorchToCaffe
+-- Caffe
| +-- caffe.proto
| +-- layer_param.py
+-- example
| +-- resnet_pytorch_2_caffe.py
+-- pytorch_to_caffe.py
For direct use, refer to resnet_pytorch_2_caffe.py: if every operation in your network is already implemented in the baseline, you can convert straight to a Caffe model.
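For reference, a minimal conversion script in the style of resnet_pytorch_2_caffe.py looks roughly like this (model choice and file names here are illustrative; trans_net, save_prototxt, and save_caffemodel are the baseline repo's entry points):

import torch
from torchvision.models import resnet
import pytorch_to_caffe

name = 'resnet18'
net = resnet.resnet18(pretrained=True)
net.eval()  # inference mode so BatchNorm/Dropout behave deterministically
dummy = torch.ones([1, 3, 224, 224])  # fixes the input shape traced into the prototxt
pytorch_to_caffe.trans_net(net, dummy, name)
pytorch_to_caffe.save_prototxt('{}.prototxt'.format(name))
pytorch_to_caffe.save_caffemodel('{}.caffemodel'.format(name))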
If you run into an operation that has not been implemented, there are two cases to consider.
Taking argmax as an example, here is how to add such an operation.
First, check the parameters of the corresponding layer in Caffe. caffe.proto holds the layer and parameter definitions for the matching Caffe version; you can see that ArgMax defines three parameters, out_max_val, top_k, and axis:
message ArgMaxParameter {
  // If true produce pairs (argmax, maxval)
  optional bool out_max_val = 1 [default = false];
  optional uint32 top_k = 2 [default = 1];
  // The axis along which to maximise -- may be negative to index from the
  // end (e.g., -1 for the last axis).
  // By default ArgMaxLayer maximizes over the flattened trailing dimensions
  // for each index of the first / num dimension.
  optional int32 axis = 3;
}
These are consistent with the parameters listed in the Caffe operator boundary specification.
layer_param.py builds the instances of the parameter classes used during conversion and passes the operation parameters from PyTorch through to Caffe:
def argmax_param(self, out_max_val=None, top_k=None, dim=1):
    argmax_param = pb.ArgMaxParameter()
    if out_max_val is not None:
        argmax_param.out_max_val = out_max_val
    if top_k is not None:
        argmax_param.top_k = top_k
    if dim is not None:
        argmax_param.axis = dim
    self.param.argmax_param.CopyFrom(argmax_param)
pytorch_to_caffe.py defines the Rp class, which implements the mapping from a PyTorch operation to a Caffe operation:
class Rp(object):
    def __init__(self, raw, replace, **kwargs):
        self.obj = replace
        self.raw = raw

    def __call__(self, *args, **kwargs):
        if not NET_INITTED:
            return self.raw(*args, **kwargs)
        # walk up the call stack to find the nn.Module that invoked the op,
        # so the generated Caffe layer can be tied to a readable layer name
        for stack in traceback.walk_stack(None):
            if 'self' in stack[0].f_locals:
                layer = stack[0].f_locals['self']
                if layer in layer_names:
                    log.pytorch_layer_name = layer_names[layer]
                    print('984', layer_names[layer])
                    break
        out = self.obj(self.raw, *args, **kwargs)
        return out
When adding an operation, use the Rp class to replace the original operation:
torch.argmax = Rp(torch.argmax, torch_argmax)
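This is the same mechanism the baseline already uses for its built-in mappings; for instance, pytorch_to_caffe.py registers hooks along these lines (names as in the repo):

F.conv2d = Rp(F.conv2d, _conv2d)
F.relu = Rp(F.relu, _relu)
F.max_pool2d = Rp(F.max_pool2d, _max_pool2d)
torch.argmax = Rp(torch.argmax, torch_argmax)  # the new mapping added here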
Next, implement the operation itself:
def torch_argmax(raw, input, dim=1):
    x = raw(input, dim=dim)  # run the original PyTorch op
    layer_name = log.add_layer(name='argmax')
    top_blobs = log.add_blobs([x], name='argmax_blob')
    # record an equivalent Caffe ArgMax layer in the generated net
    layer = caffe_net.Layer_param(name=layer_name, type='ArgMax',
                                  bottom=[log.blobs(input)], top=top_blobs)
    layer.argmax_param(dim=dim)
    log.cnet.add_layer(layer)
    return x
This completes the PyTorch-to-Caffe conversion of the argmax operation.
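To sanity-check the new mapping, one can trace a tiny network through the converter; ArgMaxNet below is a hypothetical example, and the generated prototxt should then contain an ArgMax layer with axis: 1:

import torch
import torch.nn as nn
import pytorch_to_caffe

class ArgMaxNet(nn.Module):
    def __init__(self):
        super(ArgMaxNet, self).__init__()
        self.conv = nn.Conv2d(3, 10, kernel_size=3, padding=1)

    def forward(self, x):
        # 10-channel score map -> per-pixel class index via torch.argmax
        return torch.argmax(self.conv(x), dim=1)

net = ArgMaxNet().eval()
dummy = torch.ones(1, 3, 32, 32)
pytorch_to_caffe.trans_net(net, dummy, 'argmax_net')
pytorch_to_caffe.save_prototxt('argmax_net.prototxt')
pytorch_to_caffe.save_caffemodel('argmax_net.caffemodel')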
If the operation to be converted has no directly corresponding layer in Caffe, there are two main approaches:
1) Decompose the unsupported operation into supported ones on the PyTorch side:
Take nn.InstanceNorm2d: instance normalization is converted using BatchNorm, which supports neither affine=True nor track_running_stats=True and defaults to use_global_stats: false. Since om conversion requires use_global_stats: true, the model converts to Caffe fine but is unfriendly for the further om conversion.
InstanceNorm normalizes each channel of the feature map independently, so nn.InstanceNorm2d can be implemented as:
class InstanceNormalization(nn.Module):
    def __init__(self, dim, eps=1e-5):
        super(InstanceNormalization, self).__init__()
        self.gamma = nn.Parameter(torch.FloatTensor(dim))
        self.beta = nn.Parameter(torch.FloatTensor(dim))
        self.eps = eps
        self._reset_parameters()

    def _reset_parameters(self):
        self.gamma.data.uniform_()
        self.beta.data.zero_()

    def __call__(self, x):
        n = x.size(2) * x.size(3)
        t = x.view(x.size(0), x.size(1), n)
        mean = torch.mean(t, 2).unsqueeze(2).unsqueeze(3).expand_as(x)
        var = torch.var(t, 2).unsqueeze(2).unsqueeze(3).expand_as(x)
        gamma_broadcast = self.gamma.unsqueeze(1).unsqueeze(1).unsqueeze(0).expand_as(x)
        beta_broadcast = self.beta.unsqueeze(1).unsqueeze(1).unsqueeze(0).expand_as(x)
        out = (x - mean) / torch.sqrt(var + self.eps)
        out = out * gamma_broadcast + beta_broadcast
        return out
However, validation against the HiLens Caffe operator boundary showed that om model conversion does not support sum or mean operations over any dimension other than the channel dimension. To avoid these operations, we can re-implement nn.InstanceNorm2d with supported operators:
class InstanceNormalization(nn.Module):
    def __init__(self, dim, eps=1e-5):
        super(InstanceNormalization, self).__init__()
        self.gamma = torch.FloatTensor(dim)
        self.beta = torch.FloatTensor(dim)
        self.eps = eps
        self.adavg = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        n, c, h, w = x.shape
        # per-channel mean/variance via global average pooling, broadcast back
        # with Upsample (note: scale_factor=h assumes square maps, h == w)
        mean = nn.Upsample(scale_factor=h)(self.adavg(x))
        var = nn.Upsample(scale_factor=h)(self.adavg((x - mean).pow(2)))
        gamma_broadcast = self.gamma.unsqueeze(1).unsqueeze(1).unsqueeze(0).expand_as(x)
        beta_broadcast = self.beta.unsqueeze(1).unsqueeze(1).unsqueeze(0).expand_as(x)
        out = (x - mean) / torch.sqrt(var + self.eps)
        out = out * gamma_broadcast + beta_broadcast
        return out
Verification shows this is equivalent to the original operation and can be converted to a Caffe model.
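The equivalence can be spot-checked numerically against nn.InstanceNorm2d; a minimal sketch, using a square input because of the scale_factor=h broadcast above, with gamma/beta set to 1/0 to match the non-affine reference:

import torch
import torch.nn as nn

x = torch.randn(2, 8, 16, 16)         # square feature map (h == w)
ref = nn.InstanceNorm2d(8, eps=1e-5)  # affine=False by default
custom = InstanceNormalization(8)
custom.gamma.fill_(1.0)               # identity scale
custom.beta.fill_(0.0)                # zero shift
print(torch.allclose(ref(x), custom(x), atol=1e-4))  # expected: True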
2) Implement the operation in Caffe by composing existing operations:
During PyTorch-to-Caffe conversion, we found that operations involving a constant, such as featuremap + 6, raise a blob-not-found error. First, look at how pytorch_to_caffe.py converts the add operation:
def _add(input, *args):
    x = raw__add__(input, *args)
    if not NET_INITTED:
        return x
    layer_name = log.add_layer(name='add')
    top_blobs = log.add_blobs([x], name='add_blob')
    if log.blobs(args[0]) == None:
        log.add_blobs([args[0]], name='extra_blob')
    else:
        layer = caffe_net.Layer_param(name=layer_name, type='Eltwise',
                                      bottom=[log.blobs(input), log.blobs(args[0])], top=top_blobs)
        layer.param.eltwise_param.operation = 1  # sum is 1
        log.cnet.add_layer(layer)
    return x
You can see the missing-blob case is already detected; we only need to modify the log.blobs(args[0]) == None branch. A natural idea is to implement the constant add with a Scale layer:
def _add(input, *args):
    x = raw__add__(input, *args)
    if not NET_INITTED:
        return x
    layer_name = log.add_layer(name='add')
    top_blobs = log.add_blobs([x], name='add_blob')
    if log.blobs(args[0]) == None:
        # adding a constant: Scale layer with weight = 1 and bias = constant
        layer = caffe_net.Layer_param(name=layer_name, type='Scale',
                                      bottom=[log.blobs(input)], top=top_blobs)
        layer.param.scale_param.bias_term = True
        weight = torch.ones((input.shape[1]))
        bias = torch.tensor(args[0]).squeeze().expand_as(weight)
        layer.add_data(weight.cpu().data.numpy(), bias.cpu().data.numpy())
        log.cnet.add_layer(layer)
    else:
        layer = caffe_net.Layer_param(name=layer_name, type='Eltwise',
                                      bottom=[log.blobs(input), log.blobs(args[0])], top=top_blobs)
        layer.param.eltwise_param.operation = 1  # sum is 1
        log.cnet.add_layer(layer)
    return x
Similarly, a simple multiplication such as featuremap * 6 can be implemented the same way, as sketched below.
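A sketch of the corresponding _mul hook, assuming the raw operator is captured as raw__mul__ the same way raw__add__ is; the constant goes into the Scale weights instead of the bias, and Eltwise PROD is operation 0:

def _mul(input, *args):
    x = raw__mul__(input, *args)
    if not NET_INITTED:
        return x
    layer_name = log.add_layer(name='mul')
    top_blobs = log.add_blobs([x], name='mul_blob')
    if log.blobs(args[0]) == None:
        # multiplying by a constant: Scale layer with weight = constant, no bias
        layer = caffe_net.Layer_param(name=layer_name, type='Scale',
                                      bottom=[log.blobs(input)], top=top_blobs)
        layer.param.scale_param.bias_term = False
        weight = torch.ones(input.shape[1]) * args[0]
        layer.add_data(weight.cpu().data.numpy())
        log.cnet.add_layer(layer)
    else:
        layer = caffe_net.Layer_param(name=layer_name, type='Eltwise',
                                      bottom=[log.blobs(input), log.blobs(args[0])], top=top_blobs)
        layer.param.eltwise_param.operation = 0  # PROD is 0
        log.cnet.add_layer(layer)
    return x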
Constant tensors created on the fly, such as the zeros used to pad a feature map along the channel dimension, run into the same missing-blob problem; one workaround is to produce the padding with an all-zero convolution, whose output does carry a blob, and concatenate it:

zero = nn.Conv2d(in_channels, self.channel_pad, kernel_size=3, padding=1, bias=False)
nn.init.constant_(zero.weight, 0)  # freeze the conv to all-zero weights
pad_tensor = zero(x)               # all-zero padding with a traceable blob
x = torch.cat([x, pad_tensor], dim=1)
This article is shared from the Huawei Cloud community post "Pytorch->Caffe模型转换", original author: 杜甫盖房子.