Caffe2: Loading Pre-Trained Models

Caffe2 Loading Pre-Trained Models [7]

This section covers how to use pre-trained models. The IPython notebook is linked here.

Model Download

You can download a pre-trained model from the Model Zoo, or fetch one with Caffe2's models.download module. caffe2.python.models.download takes the model name as an argument. Have a look at which models are available, then substitute that name for squeezenet in the code below.

python -m caffe2.python.models.download -i squeezenet

Translator's note: if you are not sure why this is run with python -m, see this post.
If the download above succeeds, squeezenet will have been saved into your current folder. If you pass the -i flag instead, the model files are installed under /caffe2/python/models. You can also clone every model at once: git clone https://github.com/caffe2/models
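
The same download can also be driven from Python rather than the shell. This is only a minimal sketch (not part of the original tutorial); the model name and the install directory below are assumptions you should adjust to your setup.

import os
import subprocess

MODEL_NAME = "squeezenet"  # any name listed in the Model Zoo works here
# -i installs the files under caffe2/python/models inside the Caffe2 install
subprocess.check_call(
    ["python", "-m", "caffe2.python.models.download", "-i", MODEL_NAME])

# Assumed install location; adjust if your Caffe2 lives elsewhere
model_dir = os.path.expanduser("~/caffe2/caffe2/python/models/" + MODEL_NAME)
print(os.listdir(model_dir))  # expect init_net.pb and predict_net.pb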

Overview

In this tutorial we will use the squeezenet model to identify objects in images. If you read the preprocessing section earlier, you saw that we rescale and crop the image, convert it to CHW layout and to BGR, and end up with NCHW image data. We also subtract the dataset's image mean rather than simply subtracting 128.
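
As a quick preview, that whole transform boils down to a few NumPy operations. This is only a condensed sketch; it assumes img is already a float RGB image in HWC layout, rescaled and cropped to 227x227, and that mean has been loaded. The fully commented version appears later in this tutorial.

img = img.swapaxes(1, 2).swapaxes(0, 1)              # HWC -> CHW
img = img[(2, 1, 0), :, :]                           # RGB -> BGR
img = img * 255 - mean                               # scale to 0..255 and subtract the mean
img = img[np.newaxis, :, :, :].astype(np.float32)    # CHW -> NCHW (batch of one)
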
You will find that loading a pre-trained model is quite simple; it takes only a few lines of code:

  1. Read the protobuf files
with open("init_net.pb", "rb") as f:
    init_net = f.read()
with open("predict_net.pb", "rb") as f:
    predict_net = f.read()
  2. Load the blobs from the protobufs with the Predictor function
p = workspace.Predictor(init_net, predict_net)
  3. Run the net and get the results
results = p.run([img])

The result that comes back is a multidimensional array of probabilities: each entry is the probability the network assigns to one object class. When you test with the flower picture from earlier, the network should report a probability of over 95% that the image is a daisy.
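
Putting the three steps together gives a sketch like the one below. The .pb paths are assumptions (point them at wherever your squeezenet files landed), and img is assumed to be a preprocessed NCHW array as described above.

import numpy as np
from caffe2.python import workspace

with open("init_net.pb", "rb") as f:
    init_net = f.read()
with open("predict_net.pb", "rb") as f:
    predict_net = f.read()

p = workspace.Predictor(init_net, predict_net)
results = p.run([img])
print(np.asarray(results).shape)   # e.g. (1, 1, 1000, 1, 1) for squeezenet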

Configuration

The network is configured as follows:

# Where you installed Caffe2
CAFFE2_ROOT = "~/caffe2"
# Assumed to be a subdirectory of the Caffe2 installation
CAFFE_MODELS = "~/caffe2/caffe2/python/models"
# If you have a mean file, place it in the same directory as the model files
%matplotlib inline
from caffe2.proto import caffe2_pb2
import numpy as np
import skimage.io
import skimage.transform
from matplotlib import pyplot
import os
from caffe2.python import core, workspace
import urllib2
print("Required modules imported.")

Pass the path to an image, or the URL of an image on the web. The object codes follow AlexNet; for example, "985" means "daisy". The other codes are listed here (a short sketch that parses this code list into a lookup table follows the configuration block below).

IMAGE_LOCATION =  "https://cdn.pixabay.com/photo/2015/02/10/21/28/flower-631765_1280.jpg"

# Format:  folder,      INIT_NET,          predict_net,         mean      , input image size
MODEL = 'squeezenet', 'init_net.pb', 'predict_net.pb', 'ilsvrc_2012_mean.npy', 227

# AlexNet object codes
codes =  "https://gist.githubusercontent.com/aaronmarkham/cd3a6b6ac071eca6f7b4a6e40e6038aa/raw/9edb4038a37da6b5a44c3b5bc52e448ff09bfe5b/alexnet_codes"
print "Config set!"

Image Processing

def crop_center(img,cropx,cropy):
    y,x,c = img.shape
    startx = x//2-(cropx//2)
    starty = y//2-(cropy//2)    
    return img[starty:starty+cropy,startx:startx+cropx]

def rescale(img, input_height, input_width):
    print("Original image shape:" + str(img.shape) + " and remember it should be in H, W, C!")
    print("Model's input shape is %dx%d" % (input_height, input_width))
    aspect = img.shape[1]/float(img.shape[0])
    print("Original aspect ratio: " + str(aspect))
    if(aspect>1):
        # landscape orientation - wide image
        res = int(aspect * input_height)
        imgScaled = skimage.transform.resize(img, (input_height, res))
    if(aspect<1):
        # portrait orientation - tall image
        res = int(input_width/aspect)
        imgScaled = skimage.transform.resize(img, (res, input_width))
    if(aspect == 1):
        imgScaled = skimage.transform.resize(img, (input_height, input_width))
    pyplot.figure()
    pyplot.imshow(imgScaled)
    pyplot.axis('on')
    pyplot.title('Rescaled image')
    print("New image shape:" + str(imgScaled.shape) + " in HWC")
    return imgScaled
print "Functions set."

# set paths and variables from model choice and prep image
CAFFE2_ROOT = os.path.expanduser(CAFFE2_ROOT)
CAFFE_MODELS = os.path.expanduser(CAFFE_MODELS)

# The mean is best computed from the training set
MEAN_FILE = os.path.join(CAFFE_MODELS, MODEL[0], MODEL[3])
if not os.path.exists(MEAN_FILE):
    mean = 128
else:
    mean = np.load(MEAN_FILE).mean(1).mean(1)
    mean = mean[:, np.newaxis, np.newaxis]
print "mean was set to: ", mean

# The model's input image size
INPUT_IMAGE_SIZE = MODEL[4]

# Make sure all of the files exist
if not os.path.exists(CAFFE2_ROOT):
    print("Houston, you may have a problem.")
INIT_NET = os.path.join(CAFFE_MODELS, MODEL[0], MODEL[1])
print 'INIT_NET = ', INIT_NET
PREDICT_NET = os.path.join(CAFFE_MODELS, MODEL[0], MODEL[2])
print 'PREDICT_NET = ', PREDICT_NET
if not os.path.exists(INIT_NET):
    print(INIT_NET + " not found!")
else:
    print "Found ", INIT_NET, "...Now looking for", PREDICT_NET
    if not os.path.exists(PREDICT_NET):
        print "Caffe model file, " + PREDICT_NET + " was not found!"
    else:
        print "All needed files found! Loading the model in the next block."

# Load an image
img = skimage.img_as_float(skimage.io.imread(IMAGE_LOCATION)).astype(np.float32)
img = rescale(img, INPUT_IMAGE_SIZE, INPUT_IMAGE_SIZE)
img = crop_center(img, INPUT_IMAGE_SIZE, INPUT_IMAGE_SIZE)
print "After crop: " , img.shape
pyplot.figure()
pyplot.imshow(img)
pyplot.axis('on')
pyplot.title('Cropped')

# Convert HWC to CHW
img = img.swapaxes(1, 2).swapaxes(0, 1)
pyplot.figure()
for i in range(3):
    pyplot.subplot(1, 3, i+1)
    pyplot.imshow(img[i])
    pyplot.axis('off')
    pyplot.title('RGB channel %d' % (i+1))

# Convert RGB to BGR
img = img[(2, 1, 0), :, :]

# Subtract the mean
img = img * 255 - mean

# Add a batch dimension so the data is NCHW
img = img[np.newaxis, :, :, :].astype(np.float32)
print "NCHW: ", img.shape

Status output:

Functions set.
mean was set to:  128
INIT_NET =  /home/aaron/models/squeezenet/init_net.pb
PREDICT_NET =  /home/aaron/models/squeezenet/predict_net.pb
Found  /home/aaron/models/squeezenet/init_net.pb ...Now looking for /home/aaron/models/squeezenet/predict_net.pb
All needed files found! Loading the model in the next block.
Original image shape:(751, 1280, 3) and remember it should be in H, W, C!
Model's input shape is 227x227
Original aspect ratio: 1.70439414115
New image shape:(227, 386, 3) in HWC
After crop:  (227, 227, 3)
NCHW:  (1, 3, 227, 227)



Now that the image is ready, let's feed it to the CNN. Open the protobufs, load them into the workspace, and run the net.

# Initialize the network
with open(INIT_NET, "rb") as f:
    init_net = f.read()
with open(PREDICT_NET, "rb") as f:
    predict_net = f.read()
p = workspace.Predictor(init_net, predict_net)

# Run the prediction
results = p.run([img])

# Turn the results into a numpy array
results = np.asarray(results)
print "results shape: ", results.shape

The printed shape:

results shape:  (1, 1, 1000, 1, 1)

See that 1000? If our batch size were large, this array would be much bigger, but the middle dimension would still be 1000: it holds the probability the model predicts for every class. Now let's keep going; the tutorial's own code for picking the winning class follows the short sketch below.
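
Before the bookkeeping below, note that the same answer can be read out with a much shorter sketch (this is an alternative, not the original notebook's approach): flatten the probabilities and take the argmax.

probs = results.flatten()        # 1000 class probabilities
top = int(np.argmax(probs))
print top, " :: ", probs[top]    # for the daisy photo this should be roughly 985 and 0.98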

results = np.delete(results, 1)    # with no axis given, np.delete flattens results to 1-D (and drops the element at index 1)
index = 0
highest = 0
arr = np.empty((0,2), dtype=object)    # empty 0x2 array that will collect (class index, probability) pairs
arr[:,0] = int(10)     # assignments to an empty slice have no effect; they only hint at the column types
arr[:,1:] = float(10)
for i, r in enumerate(results):
    # the ImageNet index begins with 1
    i=i+1
    arr = np.append(arr, np.array([[i,r]]), axis=0)
    if (r > highest):
        highest = r
        index = i
print index, " :: ", highest
# top 3 results
# sorted(arr, key=lambda x: x[1], reverse=True)[:3]

# Fetch the code list
response = urllib2.urlopen(codes)
for line in response:
    code, result = line.partition(":")[::2]
    if (code.strip() == str(index)):
        print result.strip()[1:-2]

Final output:

985  ::  0.979059
daisy

Translator's note: to summarize the final result-processing block above: it flattens the result tensor, walks through the 1000 class probabilities to find the largest one, and then looks the winning index up in the AlexNet code list to print its label.
Please credit the source when reposting: http://www.jianshu.com/c/cf07b31bb5f2
