This mainly uses YOLOv3 through OpenCV's DNN module, in two versions: a CPU version and a GPU version.
The CPU version is actually very simple: it only needs a requirements file, and the host environment barely matters.
requirements.txt:
opencv-python==4.2.0.32
numpy
Next, an overview of what is in the project directory:
/docker-yolo
    /cfg
        voc.names
        yolov3.cfg
        yolov3.weights
    Dockerfile
    requirements.txt
    yolo.py
If you have read my other posts, it should be obvious where the files under cfg come from.
Now for yolo.py. It comes out of the earlier Flask version: I removed everything Flask-related and turned YOLO into a plain function. (The code is not runnable as-is; I have masked a few things I cannot publish.)
# -*- coding: utf-8 -*-
import os
import time
import json  # kept from the original; unused in this trimmed-down version
import cv2 as cv
import numpy as np


def initialization():
    yolo_dir = './cfg'
    # First model (the masked file names are kept as in the original post)
    weightsPath_1 = os.path.join(yolo_dir, 'yolov3_1.weights')
    configPath_1 = os.path.join(yolo_dir, 'yolov3_1.cfg')
    labelsPath_1 = os.path.join(yolo_dir, 'voc_x.names')
    CONFIDENCE_1 = 0.50
    THRESHOLD_1 = 0.45
    net_2 = cv.dnn.readNetFromDarknet(configPath_1, weightsPath_1)
    # Second model
    weightsPath_2 = os.path.join(yolo_dir, 'yolov3_2.weights')
    configPath_2 = os.path.join(yolo_dir, 'yolov3_2.cfg')
    labelsPath_2 = os.path.join(yolo_dir, 'coco_origin.names')
    CONFIDENCE_2 = 0.50
    THRESHOLD_2 = 0.45
    net_1 = cv.dnn.readNetFromDarknet(configPath_2, weightsPath_2)
    return net_2, net_1, CONFIDENCE_1, THRESHOLD_1, CONFIDENCE_2, THRESHOLD_2, labelsPath_1, labelsPath_2


def compute_iou(rec1, rec2):
    pass  # body masked in the original post


def NMS_2th(a, thresh):
    pass  # body masked in the original post
def yolo(imgPath, net_2, net_1, CONFIDENCE_1, THRESHOLD_1, CONFIDENCE_2, THRESHOLD_2, labelsPath_1, labelsPath_2):
    s = time.time()
    img = cv.imread(imgPath)
    blobImg = cv.dnn.blobFromImage(img, 1.0 / 255.0, (416, 416), (0, 0, 0), True, False)
    net_2.setInput(blobImg)
    # For GPU
    # net_2.setPreferableBackend(cv.dnn.DNN_BACKEND_CUDA)
    # net_2.setPreferableTarget(cv.dnn.DNN_TARGET_CUDA)
    outInfo = net_2.getUnconnectedOutLayersNames()
    layerOutputs = net_2.forward(outInfo)
    (H, W) = img.shape[:2]
    boxes = []
    confidences = []
    classIDs = []
    for out in layerOutputs:
        for detection in out:
            scores = detection[5:]
            classID = np.argmax(scores)
            confidence = scores[classID]
            if confidence > CONFIDENCE_1:
                # Detections are (centerX, centerY, w, h) relative to the blob; scale back to image size
                box = detection[0:4] * np.array([W, H, W, H])
                (centerX, centerY, width, height) = box.astype("int")
                x = int(centerX - (width / 2))
                y = int(centerY - (height / 2))
                boxes.append([x, y, int(width), int(height)])
                confidences.append(float(confidence))
                classIDs.append(classID)
    idxs = cv.dnn.NMSBoxes(boxes, confidences, CONFIDENCE_1, THRESHOLD_1)
    with open(labelsPath_1, 'rt') as f:
        labels_1 = f.read().rstrip('\n').split('\n')  # identifier masked in the original
    need_NMS_list = []
    if len(idxs) > 0:
        for i in idxs.flatten():
            (x, y) = (boxes[i][0], boxes[i][1])
            (w, h) = (boxes[i][2], boxes[i][3])
            # Clip boxes to the image boundary
            if x < 0:
                x = 0
            if y < 0:
                y = 0
            if x + w > W:
                w = W - x
            if y + h > H:
                h = H - y
            label = labels_1[classIDs[i]]
            # Encode each kept box as "x1,y1,x2,y2,label,conf" for the second-stage NMS
            need_NMS_list.append(str(x) + "," + str(y) + "," + str(x + w) + "," + str(y + h) + "," + label + "," + str(confidences[i]))
    need_NMS_list = NMS_2th(need_NMS_list, thresh=0.3)
    times = time.time() - s
    np.random.seed(42)
    lab = []
    loc = []
    data = {}
    info = []
    flag = 1000
    flag_str = ""
    if len(need_NMS_list) > 0:
        for NMS_2th_str in need_NMS_list:
            data_2th = NMS_2th_str.split(",")
            p1_x = int(data_2th[0])
            p1_y = int(data_2th[1])
            p2_x = int(data_2th[2])
            p2_y = int(data_2th[3])
            label = data_2th[4]
            prob = float(data_2th[5])
            info.append({"label": label, "confidences": prob})
    else:
        # Fall back to the second network when the first one finds nothing
        LABELS = open(labelsPath_2).read().strip().split("\n")
        nclass = len(LABELS)
        np.random.seed(42)
        net_1.setInput(blobImg)
        # For GPU
        # net_1.setPreferableBackend(cv.dnn.DNN_BACKEND_CUDA)
        # net_1.setPreferableTarget(cv.dnn.DNN_TARGET_CUDA)
        outInfo = net_1.getUnconnectedOutLayersNames()
        layerOutputs = net_1.forward(outInfo)
        (H, W) = img.shape[:2]
        boxes = []
        confidences = []
        classIDs = []
        for out in layerOutputs:
            for detection in out:
                scores = detection[5:]
                classID = np.argmax(scores)
                confidence = scores[classID]
                if confidence > CONFIDENCE_2:
                    box = detection[0:4] * np.array([W, H, W, H])
                    (centerX, centerY, width, height) = box.astype("int")
                    x = int(centerX - (width / 2))
                    y = int(centerY - (height / 2))
                    boxes.append([x, y, int(width), int(height)])
                    confidences.append(float(confidence))
                    classIDs.append(classID)
        idxs = cv.dnn.NMSBoxes(boxes, confidences, CONFIDENCE_2, THRESHOLD_2)
        with open(labelsPath_2, 'rt') as f:
            labels_2 = f.read().rstrip('\n').split('\n')  # identifier masked in the original
        flag = 1002
        if len(idxs) > 0:
            for i in idxs.flatten():
                (x, y) = (boxes[i][0], boxes[i][1])
                (w, h) = (boxes[i][2], boxes[i][3])
                if LABELS[classIDs[i]] == "****":  # label masked in the original
                    flag = 1001
        if flag == 1000:
            flag_str = "True"
        if flag == 1001:
            flag_str = "True"
        if flag == 1002:
            flag_str = "False"
        info.append({"label": "None", "*_confidences": 0.00})
    data['data'] = info
    return data


if __name__ == '__main__':
    upload_path = "0d23f90f-0d9e-49a4-91ef-a2974ee3f918.png"
    (net_2, net_1, CONFIDENCE_1, THRESHOLD_1,
     CONFIDENCE_2, THRESHOLD_2, labelsPath_1, labelsPath_2) = initialization()
    data = yolo(upload_path, net_2, net_1, CONFIDENCE_1, THRESHOLD_1,
                CONFIDENCE_2, THRESHOLD_2, labelsPath_1, labelsPath_2)
    print(data)
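The bodies of compute_iou and NMS_2th are masked in the original post, but from how they are called they amount to standard intersection-over-union plus a greedy second-stage NMS over the "x1,y1,x2,y2,label,conf" strings built above. A minimal sketch of that idea (my own reconstruction, not the author's masked code):

```python
def compute_iou(rec1, rec2):
    # rec = (x1, y1, x2, y2); returns intersection-over-union of two boxes
    ix1, iy1 = max(rec1[0], rec2[0]), max(rec1[1], rec2[1])
    ix2, iy2 = min(rec1[2], rec2[2]), min(rec1[3], rec2[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area1 = (rec1[2] - rec1[0]) * (rec1[3] - rec1[1])
    area2 = (rec2[2] - rec2[0]) * (rec2[3] - rec2[1])
    union = area1 + area2 - inter
    return inter / union if union > 0 else 0.0


def NMS_2th(a, thresh):
    # a: list of "x1,y1,x2,y2,label,conf" strings; greedy NMS, highest confidence first
    parsed = []
    for s in a:
        p = s.split(",")
        parsed.append((tuple(int(v) for v in p[:4]), float(p[5]), s))
    parsed.sort(key=lambda t: -t[1])
    keep = []
    for box, conf, raw in parsed:
        # keep a box only if it does not overlap a higher-confidence kept box too much
        if all(compute_iou(box, k[0]) < thresh for k in keep):
            keep.append((box, conf, raw))
    return [k[2] for k in keep]
```

This suppresses near-duplicate boxes across the two classes too, which the class-wise cv.dnn.NMSBoxes call alone does not do.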
Finally, the Dockerfile:
FROM python:3.6
WORKDIR /app
COPY . .
RUN pip install -i https://pypi.tuna.tsinghua.edu.cn/simple -r requirements.txt
ENTRYPOINT [ "python", "yolo.py" ]
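One caveat (my own note, not part of the original setup): depending on the base image, the prebuilt opencv-python wheel can fail at import time with a missing libGL.so.1 or libglib shared library. If that happens to you, a variant Dockerfile that installs those system libraries first might look like this (package names are the Debian ones; adjust for your base image):

```dockerfile
FROM python:3.6
WORKDIR /app
COPY . .
# opencv-python wheels link against these system libraries at import time
RUN apt-get update && apt-get install -y --no-install-recommends \
        libgl1-mesa-glx libglib2.0-0 \
    && rm -rf /var/lib/apt/lists/*
RUN pip install -i https://pypi.tuna.tsinghua.edu.cn/simple -r requirements.txt
ENTRYPOINT [ "python", "yolo.py" ]
```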
Building the image is just standard Docker:
# as root (su)
docker build -t yolo-cpu/v1.0 .
docker run -it yolo-cpu/v1.0
The hard parts are three things: finding a suitable base image, getting OpenCV to compile together with CUDA, and mounting the GPU devices into the container (`docker run --gpus` requires Docker 19.03+ with the NVIDIA Container Toolkit installed on the host).
First, the host machine environment:
Ubuntu 18.04, RTX 2080, NVIDIA driver 440.95.01, CUDA 10.1, cuDNN 7.6.5, Anaconda 5.2.0, Python 3.6.5, and an Intel i7-9700K CPU.
As before, start with requirements.txt:
numpy
cmake
Then the Dockerfile:
FROM nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04
WORKDIR /app
COPY . .
RUN sed -i s@/archive.ubuntu.com/@/mirrors.aliyun.com/@g /etc/apt/sources.list
RUN apt-get clean
RUN apt-get update && apt-get upgrade -y && apt-get install -y \
        python3.6 python3-pip build-essential \
        libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev \
        libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libdc1394-22-dev \
        libgtk-3-dev libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev \
        libgphoto2-dev libavresample-dev
RUN update-alternatives --install /usr/bin/python python /usr/bin/python2.7 1
RUN update-alternatives --install /usr/bin/python python /usr/bin/python3.6 2
RUN update-alternatives --list python
RUN python -V
RUN pip3 install -i https://mirrors.aliyun.com/pypi/simple/ -r requirements.txt
The base image is nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04, which ships with CUDA and cuDNN preinstalled.
The Dockerfile then copies in the project files (directory layout in a moment) and switches the apt sources to a mirror,
installs the build dependencies and, most importantly, Python 3,
sets Python 3 as the system default,
and installs pip, which then installs the requirements.
That completes the base image we need; the build command is:
docker build -t yolo-env/v1.0 .
The previous steps packaged up the basic environment, so from here we can simply build on top of it.
The Dockerfile is as follows:
FROM yolo-env/v1.0:latest
WORKDIR /app
COPY . .
ENV CUDA_HOME /usr/local/cuda
ENV PATH "/usr/local/cuda-10.1/bin:$PATH"
RUN chmod 777 script.sh
RUN ./script.sh
Again we copy in the project files and set the CUDA environment variables,
then run the script; I put everything related to compilation into that script.
script.sh is as follows:
#!/bin/bash
cd /app/opencv4.2/opencv-4.2.0/
rm -rf build/
mkdir build/
cd build/
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D PYTHON_DEFAULT_EXECUTABLE=$(python -c "import sys; print(sys.executable)") \
      -D PYTHON3_EXECUTABLE=$(python -c "import sys; print(sys.executable)") \
      -D PYTHON3_NUMPY_INCLUDE_DIRS=$(python -c "import numpy; print(numpy.get_include())") \
      -D PYTHON3_PACKAGES_PATH=$(python -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D CUDA_ARCH_BIN='7.5' \
      -D WITH_TBB=ON -D WITH_V4L=ON -D WITH_OPENGL=ON \
      -D WITH_CUDA=ON -D WITH_CUDNN=ON \
      -D ENABLE_FAST_MATH=1 -D CUDA_FAST_MATH=1 \
      -D CUDA_NVCC_FLAGS="-D_FORCE_INLINES" -D WITH_CUBLAS=1 \
      -D OPENCV_EXTRA_MODULES_PATH=/app/opencv4.2/opencv_contrib-4.2.0/modules \
      -D OPENCV_GENERATE_PKGCONFIG=ON ..
make -j8
make install
A few things to watch out for:
① -D CUDA_ARCH_BIN='7.5' must be set to your own GPU's compute capability (7.5 is for the RTX 2080; look yours up in NVIDIA's CUDA GPUs table).
② -D OPENCV_EXTRA_MODULES_PATH=/app/opencv4.2/opencv_contrib-4.2.0/modules must point at the location of the contrib modules.
③ Download these files from here:
OpenCV 4.2.0:
https://github.com/opencv/opencv/releases/tag/4.2.0
OpenCV Contrib 4.2.0:
https://github.com/opencv/opencv_contrib/releases/tag/4.2.0
ippicv_2019_lnx_intel64_general_20180723.tgz
Link: https://pan.baidu.com/s/1eyG7mqKLY6CvUQdh7CbUgA  extraction code: 1miu (thanks to the kind person who shared this)
The local directory layout looks like this:
/home/ubuntu/opencv4.2/
    opencv-4.2.0/
    opencv_contrib-4.2.0/
    ippicv_2019_lnx_intel64_general_20180723.tgz
④ In opencv4.2/opencv-4.2.0 there is a CMakeLists.txt; around line 17, add one line:
include_directories("modules")
⑤ Then download the following files:
boostdesc_bgm.i
boostdesc_bgm_bi.i
boostdesc_bgm_hd.i
boostdesc_lbgm.i
boostdesc_binboost_064.i
boostdesc_binboost_128.i
boostdesc_binboost_256.i
vgg_generated_120.i
vgg_generated_64.i
vgg_generated_80.i
vgg_generated_48.i
Click here (extraction code: z7dp) and copy them into opencv_contrib/modules/xfeatures2d/src.
⑥ If you run into problems, see: https://blog.csdn.net/Andrwin/article/details/108826443
At this point everything is basically in place. The full directory tree looks like this:
/Docker
    /CPU-yolo-docker
        /cfg
        ...
    /GPU-yolo-docker
        /init-docker
            Dockerfile
            requirements.txt
        /runtime-docker
            /cfg
                voc.names
                yolov3.cfg
                yolov3.weights
            /opencv4.2
                ippicv_2019_lnx_intel64_general_20180723.tgz
                /ippicv_2019_lnx_intel64_general_20180723
                    /ippicv_lnx
                        /icv
                        /iw
                        EULA.txt
                        support.txt
                        third-party-programs.txt
                /opencv_contrib-4.2.0
                    /.github
                    /doc
                    /modules
                    /samples
                    ...
                /opencv-4.2.0
                    /3rdparty
                    /apps
                    /build
                    /cmake
                    /data
                    /doc
                    /include
                    /modules
                    /platforms
                    ...
            213314.png
            Dockerfile
            script.sh
            yolo.py
From the directory containing the Dockerfile:
docker build -t yolo-gpu/v1.0 .
docker run -it --gpus all yolo-gpu/v1.0:latest
python yolo.py
The yolo.py file is the same as the one above; just uncomment those GPU lines.
And with that the Dockerfiles are done. This was my first time working with Docker + NVIDIA; there are quite a few pitfalls, and it took me about 8 hours to get everything working.
Addendum:
Along the way you may see some red error messages: this or that not found, downloads failing. That mostly doesn't matter. As long as the build runs to completion, whether it is docker building the image or gcc/make compiling OpenCV, you are usually fine.
My guess is that the failing pieces are components we never actually use; I don't know how to fix many of those errors, but they did not affect the final result.