Simple TensorFlow deployment

Deployment

Docker mode:

Pull the serving image:
docker pull tensorflow/serving

Start a container in detached (background) mode:

sudo docker run -d --name serving_base tensorflow/serving

Copy the model into the container (serving_base is the container started above):
sudo docker cp /home/hiicy/redldw/mpf/model/mobile serving_base:/models/mobile
Then commit a new image that serves your model, setting MODEL_NAME to match the model's directory name:
sudo docker commit --change "ENV MODEL_NAME mobile" serving_base my_serve

Finally, run a container from the committed image (MODEL_NAME is already baked in, and the model was copied under the image's default model base path /models):
sudo docker run -p 8501:8501 -t my_serve
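
To confirm the container is actually serving the model, you can query the REST model-status endpoint (a minimal sketch, assuming the container above is running locally and the model name is mobile):

import requests

# Port 8501 is the REST API exposed by the tensorflow/serving image.
# The model name is the MODEL_NAME committed above ("mobile").
resp = requests.get("http://localhost:8501/v1/models/mobile")
print(resp.json())  # the model state should be reported as AVAILABLE once loaded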

tensorflow-model-server:

Installation (Ubuntu):
Add the package source URL:

echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | sudo tee /etc/apt/sources.list.d/tensorflow-serving.list && \
        curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | sudo apt-key add -

Then:

sudo apt-get update && sudo apt-get install tensorflow-model-server

Upgrade to the latest version:

sudo apt-get upgrade tensorflow-model-server

Model conversion (Keras .h5 to SavedModel):

import tensorflow as tf

# Run in inference mode so layers such as dropout and batch norm behave deterministically.
tf.keras.backend.set_learning_phase(0)

# Load the trained Keras model (use a raw string for the Windows path).
model = tf.keras.models.load_model(r"F:\Resources\model\mobilevnet2.h5")

# Export into a numbered version subdirectory, which is what TensorFlow Serving
# expects (this matches the mobile/1 layout shown below).
export_path = r'F:\Resources\model\mobile\1'

# Save in SavedModel format using the TF 1.x session-based API.
with tf.keras.backend.get_session() as sess:
    tf.saved_model.simple_save(
        sess, export_path,
        inputs={"input_image": model.input},
        outputs={t.name: t for t in model.outputs}
    )
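
To verify the export before serving it, the SavedModel can be loaded back in a fresh session and its serving signature printed (a minimal TF 1.x sketch; the tag and signature key are the defaults written by simple_save):

import tensorflow as tf

export_path = r'F:\Resources\model\mobile\1'

# Load the SavedModel with the default serving tag and print its signature,
# which shows the exact input/output names TensorFlow Serving will expose.
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], export_path)
    print(meta_graph.signature_def['serving_default'])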

Model directory layout (SavedModel format):

mobile
└── 1
    ├── saved_model.pb
    └── variables
        ├── variables.data-00000-of-00001
        └── variables.index

If you update the export path in the script above to a new version directory (e.g. 2), TensorFlow Serving automatically detects the new model version under the mobile directory and updates the running server.
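
Concretely, the only change needed in the export script is the version number in the export path (a sketch; version 2 here is hypothetical):

# Bump the version directory and re-run the export script; by default the
# running server loads the newest version and retires the old one.
export_path = r'F:\Resources\model\mobile\2'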

Start the server:

tensorflow_model_server --model_base_path=/home/hiicy/redldw/mpf/model/mobile --rest_api_port=9000 --model_name=MobV2

--rest_api_port: TensorFlow Serving starts a gRPC ModelServer on port 8500 by default; this flag additionally exposes the REST API on port 9000.

--model_name: the name of the served model, used in the URL when you send POST requests to the server. You can pick any name.
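
Before writing a client, it can help to inspect the model's signature through the REST metadata endpoint (a sketch, assuming the server above is running locally on port 9000):

import requests

# The metadata endpoint reports the serving signature, including the
# input name ("input_image") expected in prediction requests.
resp = requests.get("http://localhost:9000/v1/models/MobV2/metadata")
print(resp.json())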

Test (testApi.py):

import argparse
import json

import numpy as np
import requests
from keras.applications import mobilenet_v2
from keras.preprocessing import image

# Parse the path of the image to classify.
ap = argparse.ArgumentParser()
ap.add_argument('-i', '--image', required=True,
                help='path to the input image')
args = vars(ap.parse_args())
image_path = args['image']

# Load and preprocess the image: resize to 224x224 and scale pixels to [0, 1].
img = image.img_to_array(image.load_img(image_path, target_size=(224, 224))) / 255.
img = img.astype('float16')  # smaller JSON payload

# Request body expected by TensorFlow Serving's REST predict API.
payload = {
    'instances': [{'input_image': img.tolist()}]
}

# POST to the model started above (--model_name=MobV2, --rest_api_port=9000).
r = requests.post('http://localhost:9000/v1/models/MobV2:predict', json=payload)
pred = json.loads(r.content.decode('utf-8'))

# Map the returned probabilities back to human-readable ImageNet labels.
print(json.dumps(mobilenet_v2.decode_predictions(np.array(pred['predictions']))[0]))
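
With the server from the previous step running, the script can be invoked like this (the image path is just an example):

python testApi.py -i /path/to/test.jpg

It prints the top ImageNet predictions returned by the MobV2 model.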

Reference: Deploying Keras models with TensorFlow Serving and Flask
