TensorFlow Serving Docker environment setup and model deployment

Setting up the TF Serving Docker environment

1 Installing Docker on Ubuntu

# pre setup
sudo apt-get update

sudo apt-get install apt-transport-https ca-certificates curl software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# install docker-ce
sudo apt-get update
sudo apt-get install docker-ce

# start docker service
sudo systemctl start docker

2 Pulling the TensorFlow Serving image

# pull tensorflow serving image
sudo docker pull tensorflow/serving

# some other commands to check Docker's status
# list local docker images
sudo docker images

# list running containers
sudo docker ps

# stop a running container by its container id
sudo docker stop [container id]

3 Starting TF Serving in Docker to serve a saved TF model

1) Serving a single model with TF Serving

sudo docker run -p 8501:8501 \
--mount type=bind,source=/home/shuai/model/sfew,target=/models/sfew \
-v /home/shuai/model/tmp:/tmp \
-t tensorflow/serving &

# -p: port mapping for TF Serving; 8501 is the REST API port
# --mount: source is the local model directory (its subdirectories are version numbers); target is the path inside the container that TF Serving watches (conventionally /models/[model name])
# -v: bind-mounts the local path /home/shuai/model/tmp to /tmp inside the container, so files there can be accessed from within the image
# -t: allocates a pseudo-TTY; tensorflow/serving is the image to run
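A quick way to confirm the container is actually serving is to hit the REST status endpoint. The sketch below assumes the single-model setup above (model name "sfew", REST port 8501); adjust both to match your own docker run command.

```python
# Sketch: query TF Serving's REST model-status endpoint.
# Assumes the container above: model "sfew" mapped to REST port 8501.
import json
import urllib.error
import urllib.request

def model_status_url(host="localhost", port=8501, model="sfew"):
    """Build the TF Serving REST endpoint that reports a model's status."""
    return f"http://{host}:{port}/v1/models/{model}"

try:
    with urllib.request.urlopen(model_status_url()) as resp:
        # A healthy server reports the version states of the model.
        print(json.loads(resp.read()))
except (urllib.error.URLError, OSError):
    print("TF Serving is not reachable; is the container running?")
```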

2) Serving multiple models with TF Serving

docker run -p 8501:8501 \
--mount type=bind,source=/home/shuai/model/tiny_face,target=/models/tiny_face \
--mount type=bind,source=/home/shuai/model/sfew,target=/models/sfew \
--mount type=bind,source=/home/shuai/model/model.conf,target=/config/model.conf \
-v /home/shuai/model/tmp:/tmp \
-t tensorflow/serving \
--model_config_file=/config/model.conf &

# --mount: two model directories are mounted into TF Serving here, plus a model config file that tells TF Serving which models it should maintain
# --model_config_file: specifies the model config file when starting TF Serving
With multiple models, the local directory layout looks like this:

model/
├── model.conf
├── sfew
│   └── 1
│       ├── saved_model.pb
│       └── variables
│           ├── variables.data-00000-of-00001
│           └── variables.index
├── tiny_face
│   └── 1
│       ├── saved_model.pb
│       └── variables
│           ├── variables.data-00000-of-00001
│           └── variables.index
└── tmp

model.conf looks like this:

model_config_list: {
  config: {
    name:  "sfew",
    base_path:  "/models/sfew",
    model_platform: "tensorflow",
    model_version_policy: {
        all: {}
    }
  },
  config: {
    name:  "tiny_face",
    base_path:  "/models/tiny_face",
    model_platform: "tensorflow",
    model_version_policy: {
        all: {}
    }
  }
}
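The all: {} policy above serves every version found under each base path. TF Serving's config also accepts a specific policy when you want to pin a version; a hedged sketch (version 1 matches the directory layout above):

```
model_config_list: {
  config: {
    name: "sfew",
    base_path: "/models/sfew",
    model_platform: "tensorflow",
    model_version_policy: {
      specific: {
        versions: 1
      }
    }
  }
}
```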

4 Verifying that TF Serving is working

curl -d '{"instances": ["/tmp/***.png"]}' \
-X POST http://localhost:8501/v1/models/sfew:predict

# Returns the model's output; the exact input/output format depends on the model itself
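The same predict call can be made from Python. A minimal stdlib-only sketch, again assuming the "sfew" model on port 8501; the file path in instances is a hypothetical placeholder, since the actual payload must match the model's input signature.

```python
# Sketch: the curl predict call rewritten with Python's stdlib.
# "/tmp/example.png" is a placeholder input; what the model actually
# expects depends entirely on its serving signature.
import json
import urllib.error
import urllib.request

def build_predict_request(model, instances, host="localhost", port=8501):
    """Return (url, body) for a TF Serving REST predict call."""
    url = f"http://{host}:{port}/v1/models/{model}:predict"
    body = json.dumps({"instances": instances}).encode("utf-8")
    return url, body

url, body = build_predict_request("sfew", ["/tmp/example.png"])
req = urllib.request.Request(url, data=body,
                             headers={"Content-Type": "application/json"})
try:
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))  # model-specific output
except (urllib.error.URLError, OSError):
    print("TF Serving is not reachable; is the container running?")
```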

5 To generate the TF model itself, refer to TensorFlow's official MNIST Serving example
