MobileNetv2-SSDLite Installation and Usage

MobileNetv2-SSDLite is an upgraded version of MobileNet-SSD, aimed mainly at mobile scenarios where inference speed matters.


Usage


git clone https://github.com/chuanqi305/MobileNetv2-SSDLite

cd MobileNetv2-SSDLite/ssdlite

wget http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz

tar -zvxf ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz

python dump_tensorflow_weights.py

python load_caffe_weights.py
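Both conversion scripts assume that TensorFlow (to read the downloaded checkpoint) and pycaffe (to write the Caffe weights) can be imported. A quick sanity check before running them; the pycaffe path below is a placeholder you should point at your own build:

export PYTHONPATH=/path/to/ssd-caffe/python:$PYTHONPATH   # placeholder path to your pycaffe build
python -c "import tensorflow, caffe"                      # both imports must succeed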

If load_caffe_weights.py fails with an out of memory error, remove the # in front of every engine: CAFFE line in deploy.prototxt.
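If you prefer to do the uncommenting from the command line, a one-liner along these lines should work (assuming GNU sed and that the commented lines read #engine: CAFFE):

sed -i 's/# *engine: CAFFE/engine: CAFFE/' deploy.prototxt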

Next, add ReLU6 layer support to the SSD Caffe you compiled earlier (see ssd-windows for the build method; it is a cross-platform approach), otherwise you will get this error:

Check failed: registry.count(type) == 1 (0 vs. 1) Unknown layer type: ReLU6

First open src\caffe\proto\caffe.proto, search for message LayerParameter, and add optional ReLU6Parameter relu6_param = 208; right after optional ReLUParameter relu_param = 123;

Then search for message ReLUParameter and add the following right after that message definition:

// Message that stores parameters used by ReLU6Layer
message ReLU6Parameter {
  enum Engine {
    DEFAULT = 0;
    CAFFE = 1;
    CUDNN = 2;
  }
  optional Engine engine = 2 [default = DEFAULT];
}

With the proto changes for ReLU6 support in place, next add the required header and implementation files; follow steps 1-3 below (a command-line fetch is sketched after the list).

1. In the include/caffe/layers folder of the SSD source tree, create relu6_layer.hpp and copy in the contents of https://github.com/chuanqi305/ssd/raw/ssd/include/caffe/layers/relu6_layer.hpp.

2. In the src/caffe/layers folder, create relu6_layer.cpp and copy in the contents of https://raw.githubusercontent.com/chuanqi305/ssd/ssd/src/caffe/layers/relu6_layer.cpp.

3. In the src/caffe/layers folder, create relu6_layer.cu and copy in the contents of https://raw.githubusercontent.com/chuanqi305/ssd/ssd/src/caffe/layers/relu6_layer.cu.

Job done; now rebuild SSD Caffe.
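The exact rebuild commands depend on how you built SSD Caffe in the first place; with an out-of-source CMake build (as in the ssd-windows instructions) it would look roughly like this, a sketch rather than the authoritative recipe:

cd build
cmake ..
make -j8    # the modified caffe.proto is run through protoc again during the build; on Windows, build the generated solution instead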

At this point, if you eagerly run demo_caffe.py you will be sorely disappointed: the bundled COCO model needs too much GPU memory; even the 11 GB of my 1080 Ti is not enough.

We therefore need to convert the COCO model into a VOC model. Fortunately the author provides a script; just run

python coco2voc.py

Once the conversion is done, run

python demo_caffe_voc.py

 

I also wrote a TensorFlow version of the demo myself, so I'll post it here as well:

import tensorflow as tf
import cv2
import numpy as np

def graph_create(graphpath):
    # Load the frozen TF graph and return the input/output tensors we need.
    with tf.gfile.FastGFile(graphpath, 'rb') as graphfile:
        graphdef = tf.GraphDef()
        graphdef.ParseFromString(graphfile.read())
        return tf.import_graph_def(graphdef, name='', return_elements=[
            'image_tensor:0', 'detection_boxes:0', 'detection_scores:0', 'detection_classes:0'])

image_tensor, box, score, cls = graph_create("ssdlite/ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb")
image_file = "images/004545.jpg"
with tf.Session() as sess:
    image = cv2.imread(image_file)
    # The exported graph expects a uint8 batch of shape [1, H, W, 3].
    image_data = np.expand_dims(image, axis=0).astype(np.uint8)

    b, s, c = sess.run([box, score, cls], {image_tensor: image_data})
    boxes = b[0]   # normalized [ymin, xmin, ymax, xmax]
    conf = s[0]
    clses = c[0]
    # writer = tf.summary.FileWriter('debug', sess.graph)

    # Only inspect the first few detections; they come back sorted by score.
    for i in range(8):
        bx = boxes[i]
        print(boxes[i])
        print(conf[i])
        print(clses[i])
        if conf[i] < 0.5:
            continue
        h = image.shape[0]
        w = image.shape[1]
        # Convert normalized coordinates to pixel coordinates.
        p1 = (int(w * bx[1]), int(h * bx[0]))
        p2 = (int(w * bx[3]), int(h * bx[2]))
        cv2.rectangle(image, p1, p2, (0, 255, 0))

    cv2.imshow("mobilenet-ssd", image)
    cv2.waitKey(0)
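To try it, save the script as, say, demo_tf.py (the file name is my own choice) and run it from the repository root so the relative paths to the frozen graph and the test image resolve; adjust image_file if you use a different picture:

python demo_tf.py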

 
