Deploying a Keras Model to a Production Environment [Source Code + Tutorial]

 

Project repository:

https://github.com/DataXujing/tensorflow-serving-Wechat

1. Required System Environment

(1) A production server running CentOS 6.5 (on Ubuntu the same steps work with only minor adjustments).
(2) The system Python is 2.x (the default installation on CentOS 6.5). Do not uninstall it: many system command-line tools depend on Python 2.x.
(3) A separate Python 3.6 environment must be installed for the application.

This article walks through deploying a Keras project trained on the MNIST dataset to a production environment, starting from scratch.

Required Python packages (an environment setup sketch follows the list):

  • tensorflow
  • Flask
  • gevent
  • gunicorn
  • keras
  • numpy
  • h5py
  • pillow
  • uwsgi
  • supervisor
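The tutorial does not spell out how the Python 3.6 environment is created, so the commands below are only a sketch. They assume python3.6 is already installed on the server, and they use the virtualenv name envKeras, which is the name the home setting in keras_uwsgi.ini (shown later) points to; adjust paths and names to your setup.

# hypothetical setup: create and activate a Python 3.6 virtualenv for the app
cd /home/soft/Keras_model
python3.6 -m venv envKeras            # or: virtualenv -p python3.6 envKeras
source envKeras/bin/activate

# install the packages listed above into the virtualenv
pip install tensorflow Flask gevent gunicorn keras numpy h5py pillow uwsgi supervisor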

2. Core Code

The project layout is as follows:

[Figure 1: project directory layout]

The Keras model is defined and trained as follows (train.py in the project structure):

'''
Gets to 98.78% test accuracy after 12 epochs
https://dataxujing.github.io/用CNN实现MNIST数据集/
'''
# Python 2/3 compatibility
from __future__ import print_function

import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
# used to plot the model architecture
from keras.utils.vis_utils import plot_model

batch_size = 128
num_classes = 10
epochs = 12

# input image size: 28x28 pixel images
img_rows, img_cols = 28, 28

# load and split the data
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# reshape into 4D tensors: "channels_last" expects (samples, rows, cols, channels),
# "channels_first" expects (samples, channels, rows, cols)
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

# normalize pixel values to [0, 1]
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# one-hot encode the labels
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

# build the model
model = Sequential()
# Conv2D + ReLU
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
# pooling
model.add(MaxPooling2D(pool_size=(2, 2)))
# dropout
model.add(Dropout(0.25))
# flatten
model.add(Flatten())
# fully connected layer
model.add(Dense(128, activation='relu'))
# dropout
model.add(Dropout(0.5))
# softmax output
model.add(Dense(num_classes, activation='softmax'))

# adaptive learning rate (Adadelta)
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])

# train
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))

# evaluate
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

# save the model
# serialize the architecture to JSON
model_json = model.to_json()
with open("model.json", "w") as json_file:
    json_file.write(model_json)
# serialize the weights to HDF5
model.save_weights("model.h5")
print("Saved model to disk")

# plot the model architecture
plot_model(model, to_file='model_lx.png', show_shapes=True)

The trained model (model.json and model.h5) is kept in the model folder of the project, where the Flask app loads it through the init() helper in load.py.
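Producing those files is just a matter of running train.py inside the Python 3.6 environment; a sketch, assuming the virtualenv layout used above:

# hypothetical run: train the CNN and serialize it
cd /home/soft/Keras_model
source envKeras/bin/activate
python train.py
# train.py writes model.json and model.h5 to the current directory;
# place them wherever load.py in the model/ folder expects to find them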

The Flask serving code (app.py in the project structure):

from flask import Flask, render_template, request
# scientific computing library for saving, reading, and resizing images
from scipy.misc import imsave, imread, imresize
import numpy as np
import keras.models
import re
import sys
import os
import base64

# tell our app where our saved model is
sys.path.append(os.path.abspath("./model"))
from load import *

# initialize our Flask app
app = Flask(__name__)

# load the model and the TensorFlow graph once at startup
global model, graph
model, graph = init()

# decode an image from its base64 data-URL representation into a PNG file
def convertImage(imgData1):
    # capture everything after the first comma, i.e. the raw base64 payload
    imgstr = re.search(b',(.*)', imgData1).group(1)
    with open('output.png', 'wb') as output:
        output.write(base64.b64decode(imgstr))

@app.route('/')
def index():
    return render_template("index.html")

@app.route('/predict/', methods=['GET', 'POST'])
def predict():
    # raw request body: a base64-encoded data URL of the drawn digit
    imgData = request.get_data()
    convertImage(imgData)
    # read the image into memory as grayscale
    x = imread('output.png', mode='L')
    # compute a bit-wise inversion so black becomes white and vice versa
    x = np.invert(x)
    # resize to the input size expected by the model
    x = imresize(x, (28, 28))
    x = x.reshape(1, 28, 28, 1)
    with graph.as_default():
        # perform the prediction
        out = model.predict(x)
        print(out)
        print(np.argmax(out, axis=1))
        # convert the response to a string
        response = np.array_str(np.argmax(out, axis=1))
        return response

if __name__ == "__main__":
    port = int(os.environ.get('PORT', 5060))
    app.run(host='0.0.0.0', port=port)

Running app.py directly lets you check that the trained Keras model is served correctly. For production, the Flask application is deployed on the CentOS server with nginx + uwsgi + supervisor.
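A quick way to exercise the /predict/ route without the web front end is to POST a base64 data URL directly, since convertImage() only keeps whatever follows the first comma. The snippet below is a sketch: test_digit.png is a hypothetical local digit image, and nginx must not already be bound to port 5060 (keras_nginx.conf uses the same port).

# in the project folder, with the Python 3.6 environment active
python app.py

# in another shell, send a digit image as a base64 data URL
curl --data "data:image/png;base64,$(base64 -w0 test_digit.png)" \
     http://127.0.0.1:5060/predict/
# the response body is the predicted class, e.g. [3]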

3. Installing and Configuring uwsgi

The uwsgi configuration is as follows (keras_uwsgi.ini in the project structure):

[uwsgi]
# application's base folder
base = /home/soft/Keras_model

# python module to import
app = app
module = %(app)

# virtualenv holding the Python 3.6 environment
home = %(base)/envKeras
#home = /root/anaconda3
pythonpath = %(base)

# address uwsgi listens on (nginx forwards requests here)
socket = 127.0.0.1:8001
# permissions for the socket
chmod-socket = 777

# the variable that holds the Flask application inside the imported module
callable = app

# location of the log file (%n expands to the ini file name)
logto = /home/soft/Keras_model/log/%n.log

chdir = /home/soft/Keras_model/

# number of worker processes
processes = 4
# number of threads per worker
threads = 2
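Before wiring in nginx and supervisor, the uwsgi configuration can be tried on its own; a sketch, assuming the paths above:

cd /home/soft/Keras_model
source envKeras/bin/activate
uwsgi --ini keras_uwsgi.ini &
# uwsgi speaks the uwsgi protocol on 127.0.0.1:8001 (not plain HTTP),
# so check the log rather than curl-ing the socket directly;
# %n expands to the ini file name, giving keras_uwsgi.log
tail -f log/keras_uwsgi.log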

4. Installing and Configuring nginx

First, install nginx on the server. On CentOS it can be installed with the following command:

yum install -y nginx

Open the server's address in a browser to check that the installation succeeded:

[Figure 2: nginx default welcome page confirming the installation]

Once nginx is installed, copy the configuration below (keras_nginx.conf in the project structure) into /etc/nginx/conf.d/:

server {
    listen      5060;
    server_name XXX.XXX.XXX.XXX;
    charset     utf-8;

    location / {
        include    uwsgi_params;
        # forward requests to the TCP socket uwsgi listens on
        uwsgi_pass 127.0.0.1:8001;
    }

    location ~ /static/ {
        root /home/soft/Keras_model;
    }
}
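Deploying the configuration is then a matter of copying the file and reloading nginx; a sketch for CentOS 6:

cp /home/soft/Keras_model/keras_nginx.conf /etc/nginx/conf.d/
# verify the syntax, then restart/reload nginx
nginx -t
service nginx restart        # or: nginx -s reload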

5. Installing and Configuring supervisor

supervisor starts the Flask application (via uwsgi) in the background and can be configured to bring it back up automatically after a crash or a server restart. Its configuration is as follows (keras_supervisor.conf in the project structure):

[program:app]
# command that starts the app
command=uwsgi --ini /home/soft/Keras_model/keras_uwsgi.ini
# working directory of the command
#directory=/home/soft/Keras_model
# (optionally) the user to run the command as
autostart=true
autorestart=true
# log file locations
stdout_logfile=/home/soft/Keras_model/log/uwsgi_supervisor.log
stderr_logfile=/home/soft/Keras_model/log/uwsgi_supervisor_error.log

[inet_http_server]
port = 127.0.0.1:9001

[supervisord]
nodaemon=true
user=root

[supervisorctl]
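With this configuration, supervisord is started against the file and controlled through the HTTP interface defined in [inet_http_server]; a sketch:

supervisord -c /home/soft/Keras_model/keras_supervisor.conf
# nodaemon=true keeps supervisord in the foreground, so use a second shell
# for the control commands below
supervisorctl -s http://127.0.0.1:9001 status
supervisorctl -s http://127.0.0.1:9001 restart app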

6. Testing

Finally, running the run.sh shell script starts our Keras-based Flask application. The contents of run.sh are:

#!/bin/bash
echo "Keras+Flask!"
# start the nginx reverse proxy
nginx
# supervisord launches uwsgi according to keras_uwsgi.ini (see keras_supervisor.conf)
# and keeps it running in the foreground (nodaemon=true)
supervisord -c /home/soft/Keras_model/keras_supervisor.conf

We can now test the MNIST handwritten-digit recognition application in a browser at the configured address (port 5060):
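If the page does not load, a few quick checks on the server (a sketch, assuming the paths used throughout) can narrow things down:

# are all three services running?
ps aux | grep -E "nginx|uwsgi|supervisord" | grep -v grep
# does nginx answer on the published port and proxy to uwsgi?
curl -I http://127.0.0.1:5060/
# application-level errors show up in the uwsgi log
tail -n 50 /home/soft/Keras_model/log/keras_uwsgi.log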
