Getting Started with TensorFlow Serving, Part 1 (Deploying and Testing TensorFlow Serving with Docker on Ubuntu 16.04)

  1. Prerequisites:
    1. The NVIDIA driver, CUDA, and cuDNN are assumed to be installed
    2. Docker (docker-ce) is assumed to be installed
  2. Installing and testing the TensorFlow Serving Docker image
    1. Clone the source code used to verify that the installation works:
      git clone https://github.com/tensorflow/serving
    2. CPU deployment and testing
      1. Pull the image: docker pull tensorflow/serving:latest
      2. Running docker images should now show a tensorflow/serving entry
      3. Start the TF Serving CPU test service with Docker; either of the following two commands works:
        1. docker run -dt -p 8501:8501 -v "/{directory where you cloned the source in the previous step}/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu:/models/half_plus_two" -e MODEL_NAME=half_plus_two tensorflow/serving
        2. docker run -d -p 8501:8501 --mount type=bind,source=/home/bixian/work_space/tf_serving/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu/,target=/models/half_plus_two -e MODEL_NAME=half_plus_two -t --name testserver tensorflow/serving
      4. Query the running service
        1. curl -d '{"instances": [1.0, 2.0, 5.0]}' -X POST http://localhost:8501/v1/models/half_plus_two:predict
        2. If it returns { "predictions": [2.5, 3.0, 4.5] }, the test succeeded
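
        A note on where those expected values come from: the saved_model_half_plus_two test model computes y = x/2 + 2 for each input. The short Python check below reproduces that arithmetic locally (the helper name half_plus_two is illustrative, not part of TensorFlow Serving):

        ```python
        # saved_model_half_plus_two computes y = x/2 + 2 for every input value.
        # half_plus_two is a local sanity-check helper, not a TF Serving API.
        def half_plus_two(instances):
            return [x / 2.0 + 2.0 for x in instances]

        # Same inputs as the curl test above:
        print(half_plus_two([1.0, 2.0, 5.0]))  # [2.5, 3.0, 4.5]
        ```

        If the server's "predictions" list differs from this, the wrong model directory was mounted.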
    3. GPU deployment and testing
      1. Install nvidia-docker2
        1. If nvidia-docker 1.0 is installed, remove it first:
          docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
          sudo apt-get purge -y nvidia-docker
        2. Add the repository:

          curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
            sudo apt-key add -

          distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
          curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
            sudo tee /etc/apt/sources.list.d/nvidia-docker.list
          sudo apt-get update

        3. Install nvidia-docker2. If you have already configured /etc/docker/daemon.json, it is safe to let the installation overwrite it and re-apply your changes afterwards:
          sudo apt-get install -y nvidia-docker2
          You can also browse https://hub.docker.com/r/nvidia/cuda/ to find the nvidia-docker environment matching your own CUDA version
          sudo pkill -SIGHUP dockerd

        4. Check the installation: sudo apt show nvidia-docker2

        5. Pull an nvidia/cuda image
          My system is Ubuntu 16.04 with CUDA 9.0, so I pull 9.0-devel:
          docker pull nvidia/cuda:9.0-devel

        6. Test that nvidia-docker and the CUDA image run correctly:
          docker run --runtime=nvidia --rm nvidia/cuda:9.0-devel nvidia-smi

      2. Start the TensorFlow Serving GPU test service with Docker

        1. test_path=/{directory where you cloned the TF Serving source}/serving/tensorflow_serving/servables/tensorflow/testdata
          docker run --runtime=nvidia -p 8501:8501 \
          --mount type=bind,\
          source=$test_path/saved_model_half_plus_two_gpu,\
          target=/models/half_plus_two \
          -e MODEL_NAME=half_plus_two -t tensorflow/serving:1.12.0-gpu &

        2. Query the running service:
          curl -d '{"instances": [1.0, 2.0, 5.0]}' -X POST http://localhost:8501/v1/models/half_plus_two:predict
          If it returns { "predictions": [2.5, 3.0, 4.5] }, the test succeeded
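
        The curl request used in both tests can also be issued from Python with only the standard library. This is a sketch, assuming the container from the previous step is serving on localhost:8501; build_predict_request is an illustrative helper name, not a TF Serving API:

        ```python
        import json
        from urllib import request

        def build_predict_request(instances, host="localhost", port=8501,
                                  model="half_plus_two"):
            """Build a POST request for TF Serving's REST predict endpoint."""
            url = "http://%s:%d/v1/models/%s:predict" % (host, port, model)
            body = json.dumps({"instances": instances}).encode("utf-8")
            return request.Request(url, data=body,
                                   headers={"Content-Type": "application/json"})

        # With the server running, send the request and print the predictions:
        #   resp = json.load(request.urlopen(build_predict_request([1.0, 2.0, 5.0])))
        #   print(resp)  # expected: {"predictions": [2.5, 3.0, 4.5]}
        ```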
