Deploying Video Liveness Detection with Flask

1. Building the Docker Image

Because of slow network speeds in mainland China, the Docker image is built in several stages.

1.1 Base image: CUDA 10.1 + cuDNN 7 plus toolkits

# Dockerfile
FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
ENV PYTHONUNBUFFERED TRUE
RUN apt-get update && apt-get install -y sudo
RUN apt-get update && \
 DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y \
 fakeroot ca-certificates dpkg-dev g++ python3-dev \
 openjdk-8-jdk-headless \
 libglib2.0-dev libgl1-mesa-dev libxrender1 libgl1-mesa-glx libxext-dev \
 curl vim wget git \
 && rm -rf /var/lib/apt/lists/* \
 && cd /tmp \
 && curl -O https://bootstrap.pypa.io/get-pip.py \
 && python3 get-pip.py

RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1
RUN update-alternatives --install /usr/local/bin/pip pip /usr/local/bin/pip3 1

RUN pip install --no-cache-dir -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com numpy==1.18.5 insightface mtcnn scipy==1.4.1 matplotlib pillow opencv-python==4.5.1.48 opencv-contrib-python==4.5.2.52 django keras==2.2.4 jupyterlab imutils==0.5.4 jieba==0.42.1 uwsgi onnx==1.5.0 onnxruntime==1.7.0 pyyaml albumentations pretrainedmodels flask gunicorn gevent

RUN useradd -m model-server && mkdir -p /home/model-server/tmp

LABEL maintainer="[email protected]"

Build the image:

docker build -t cuda10.1_cudnn7:v1 .

1.2 Adding PyTorch

FROM cuda10.1_cudnn7:v1
RUN apt-get update
RUN pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
Build the image:

docker build -t cu101_torch:v1 .

PyTorch/CUDA version compatibility: Previous PyTorch Versions | PyTorch

1.3 Adding TensorFlow

FROM cu101_torch:v1
RUN apt-get update
RUN pip install --upgrade -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com tensorflow-gpu==2.3.0 tensorflow-serving-api-gpu==2.3.0
WORKDIR /home/model-server

Note: this stage also sets the working directory.

docker build -t cu101_torch_tf:v1 .

TensorFlow/CUDA version compatibility: Build from source | TensorFlow (google.cn)

1.4 Adding MXNet

FROM cu101_torch_tf:v1
RUN apt-get update
RUN pip install --upgrade mxnet-cu101mkl==1.5.0 -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com -f https://dist.mxnet.io/python/cu101mkl

Build the image (this tag is used as the base image in the later sections):

docker build -t torch_tf_mx:v1 .

MXNet version/GPU compatibility: Get Started | Apache MXNet

1.5 Adding multi-model-server (not needed for now)

FROM torch_tf_mx:v1

RUN apt-get update
RUN pip install --no-cache-dir -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com multi-model-server

COPY dockerd-entrypoint.sh /usr/local/bin/dockerd-entrypoint.sh
COPY config.properties /home/model-server
RUN chmod +x /usr/local/bin/dockerd-entrypoint.sh && chown -R model-server /home/model-server

#EXPOSE 8080 8081
USER model-server
ENV TEMP=/home/model-server/tmp
ENTRYPOINT ["/usr/local/bin/dockerd-entrypoint.sh"]
CMD ["serve"]

The config.properties and dockerd-entrypoint.sh files can be downloaded from: multi-model-server/docker at master · awslabs/multi-model-server (github.com)

When dockerd-entrypoint.sh is copied from Windows to a Linux system, the Windows line endings cause a "no such file" error at container start:

standard_init_linux.go:219: exec user process caused: no such file or directory

Install dos2unix and convert the files:

yum install dos2unix      # CentOS
apt-get install dos2unix  # Ubuntu

dos2unix Dockerfile
dos2unix dockerd-entrypoint.sh
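If dos2unix is not available, the same fix can be applied with a short Python snippet. This is a minimal sketch: it simply rewrites Windows CRLF line endings as Unix LF, which is what breaks the shebang line of the entrypoint script.

```python
# Rewrite Windows CRLF line endings as Unix LF, the same fix dos2unix applies.
def to_unix(path):
    with open(path, 'rb') as f:
        data = f.read()
    with open(path, 'wb') as f:
        f.write(data.replace(b'\r\n', b'\n'))

# to_unix('dockerd-entrypoint.sh')
```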

2. Building start.py

Unlike a Django deployment, where files and directories are split into modules and each feature lives in its own file, Flask keeps all of the main code in a single start.py file.

2.1 Imports

import flask
from flask import request, Flask

2.2 Defining the app

app = Flask(__name__)

2.3 URL route handlers

@app.route('/live_video/<name>', methods=['GET', 'POST'])
def live_videofile(name):
    if name == 'file':
        result = livenet.handle()
    elif name == 'base64':
        result = livenet.handle(datatype='base64')
    return result

@app.route('/live_picture/', methods=['GET', 'POST'])
def live_picture():
    result = livenet.handle(datatype='base64', filetype='picture')
    return result

Note: default keyword arguments let a single handler method cover several cases, which keeps the code compact.
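The pattern can be sketched with a stand-in class. LiveNet here, and the exact handle() signature, are assumptions for illustration rather than the actual model code; the point is that one method with default keyword arguments replaces three separate handlers.

```python
# Stand-in for the real liveness model; handle()'s defaults do the dispatching.
class LiveNet:
    def handle(self, datatype='file', filetype='video'):
        # A real implementation would decode the input and run inference here.
        return {'datatype': datatype, 'filetype': filetype}

livenet = LiveNet()

livenet.handle()                                       # video file upload
livenet.handle(datatype='base64')                      # video as base64 JSON
livenet.handle(datatype='base64', filetype='picture')  # picture as base64
```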

2.4 Reading request data

File upload:

data = flask.request.files.get('data')

JSON (base64) body:

request_data = json.loads(request.data.decode('utf-8'))
data = request_data["data"]
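Putting the JSON branch together, a small helper (the function name is an assumption) turns a raw request body into the base64-decoded video bytes:

```python
import base64
import json

# Parse a JSON request body of the form {"data": "<base64 video>"} into bytes.
def extract_payload(raw_body: bytes) -> bytes:
    request_data = json.loads(raw_body.decode('utf-8'))
    return base64.b64decode(request_data['data'])
```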

2.5 Saving the video file

File upload:

data.save(self.video_file)

JSON (base64) body:

with open(self.video_file, 'wb') as f:
    f.write(base64.b64decode(data))
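A slightly more defensive version of the JSON branch (a sketch; the function name and return value are assumptions) validates the base64 before writing, so a bad payload fails loudly instead of silently producing a corrupt video file:

```python
import base64
import binascii

def save_video(path, data):
    """Decode a base64 payload and write it to disk; returns the byte count."""
    try:
        video_bytes = base64.b64decode(data, validate=True)
    except binascii.Error:
        raise ValueError('payload is not valid base64')
    with open(path, 'wb') as f:
        f.write(video_bytes)
    return len(video_bytes)
```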

2.6 Logging to the console

Option 1: flask.current_app

from flask import current_app
current_app.logger.info('detection cost {:.3f} s'.format(time.time() - preprocess_start))

Option 2: logging

import logging
logging.info("inference is ok")

Option 3: app.logger

app.logger.error('this is error message')

2.7 Running the app

if __name__ == '__main__':
 app.run(debug=True)

3. The gunicorn.conf.py configuration file

import logging
import logging.handlers
from logging.handlers import WatchedFileHandler
import os
# import multiprocessing

workers = 5               # number of worker processes; tune to site traffic
worker_class = "gevent"   # gevent handles requests asynchronously for higher throughput
bind = "0.0.0.0:8100"     # bind address; the port must match docker run -p
backlog = 512             # listen queue size
# chdir = '/home/model-server/lives_flask'  # working directory gunicorn switches to
pidfile = 'gunicorn.pid'  # PID file location
worker_connections = 2000 # maximum number of concurrent connections
timeout = 30              # worker timeout in seconds
# workers = multiprocessing.cpu_count() * 2 + 1  # alternative worker count
loglevel = 'info'         # level of the error log; the access-log level cannot be set
access_log_format = '%(t)s %(p)s %(h)s "%(r)s" %(s)s %(L)s %(b)s %(f)s" "%(a)s"'  # access-log format; the error-log format cannot be set
accesslog = "/home/model-server/lives_flask/gunicorn_access.log"  # access log file
errorlog = "/home/model-server/lives_flask/gunicorn_error.log"    # error log file
reload = True             # restart workers automatically when the code changes
logging.getLogger(accesslog)
logging.getLogger(errorlog)

4. Creating the container

4.1 Option 1: write the CMD into a Dockerfile and build a new image, so the container starts the command automatically.

Dockerfile

# Dockerfile
FROM torch_tf_mx:v1
COPY lives /home/model-server/lives
# RUN chmod a+=wr /sever_yj/lives
WORKDIR /home/model-server/lives
CMD ["gunicorn", "start:app", "-c", "./gunicorn.conf.py"]
Build the image:

docker build -t live_flask:v1 .

Run the container:

docker run --name lives \
-d -p 8100:8100 \
live_flask:v1

4.2 Option 2: put everything into the docker run command (recommended)

nvidia-docker run -it --name lives \
-d \
-p 8100:8100 \
-v /home/yangjian/Project_serving/lives_flask/lives:/home/model-server/lives_flask \
-w /home/model-server/lives_flask \
torch_tf_mx:v1 \
gunicorn start:app -c gunicorn.conf.py

Flag reference:

  • -it : run the container in interactive mode

  • --name : name the container; if omitted, a random name is assigned

  • -d : run the container in the background and print its ID

  • -p : publish a port, in the format host_port:container_port

  • -v : mount a volume, in the format host_path:container_path

  • -m : set the container's maximum memory

  • -b : bind address and port (a gunicorn flag, not a docker run flag)

  • -w : set the working directory inside the container; this flag must appear before the image name. If it is omitted, the container uses the image's default working directory (for torch_tf_mx:v1 that is /home/model-server), so the gunicorn command would have to change to:

gunicorn lives_flask.start:app -c ./lives_flask/gunicorn.conf.py

Notes:

1. start.py is loaded as an import, so it is written as lives_flask.start, not lives_flask/start.

2. To use the GPU, the container must be started with nvidia-docker; otherwise it fails with:

gunicorn.errors.HaltServer: 

Because that message hides the real cause, open gunicorn_error.log to see it:

File "/usr/lib/python3.6/ctypes/__init__.py", line 348, in __init__
 self._handle = _dlopen(self._name, mode)
OSError: libcuda.so.1: cannot open shared object file: No such file or directory

3. Pinning a specific GPU:

nvidia-docker run -it --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=1

If you prefer not to use a config file, you can start gunicorn with flags instead:

gunicorn -w 2 -b 127.0.0.1:8100 start:app
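Once the container is up, the service can be exercised from any client. The sketch below (stdlib only; the host, port, URL path, and "data" key are assumptions that follow the examples above) builds the base64 JSON request for the video endpoint:

```python
import base64
import json
import urllib.request

def build_request(video_path, url='http://127.0.0.1:8100/live_video/base64'):
    """Build a POST request whose JSON body carries the base64-encoded video."""
    with open(video_path, 'rb') as f:
        body = json.dumps({'data': base64.b64encode(f.read()).decode('ascii')})
    return urllib.request.Request(url,
                                  data=body.encode('utf-8'),
                                  headers={'Content-Type': 'application/json'},
                                  method='POST')

# resp = urllib.request.urlopen(build_request('clip.mp4'))  # needs the server running
```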
