Pitfalls of Keras and TensorFlow Serving

Reposted from: https://www.jianshu.com/p/91aae37f1da6

A colleague on the algorithm team recently trained a binary classification model with Keras.

A new requirement came in: run this Keras model on TensorFlow Serving (TensorFlow Serving is a system for serving models in production environments).

I had never touched Keras or TensorFlow before this, and the official tutorials plus a pile of blog and forum posts turned out to be somewhat outdated (the sample code for converting a Keras model to a TensorFlow model would not run), so the process was bumpy. I spent a day learning Keras and TensorFlow and writing a small demo, then traced through the Keras and TensorFlow source code; after two days the requirement was finally done. This post records the pitfalls along the way.

Converting a Keras model to a TensorFlow model

The method I used to convert the Keras model into a TensorFlow Serving model is as follows:

1. Obtain the trained Keras model file from the algorithm team (an HDF5 file)

This file should contain:

  • the model's architecture, so that the model can be reconstructed
  • the model's weights
  • the training configuration (loss function, optimizer, etc.)
  • the state of the optimizer, so that training can resume where it left off
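
For reference, here is a minimal sketch of how such a file is typically produced on the training side (the tiny stand-in architecture is hypothetical; use your real model):

from keras.layers import Dense
from keras.models import Sequential

# Hypothetical stand-in for the real two-class model.
model = Sequential()
model.add(Dense(2, activation='softmax', input_shape=(10,)))
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])

# model.save writes architecture, weights, training config, and
# optimizer state into a single HDF5 file.
model.save('weights.hdf5')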

2. Write the code that converts the Keras model into a TensorFlow Serving model

import os

import tensorflow as tf
from keras import backend as K
from keras.models import Sequential, Model
from os.path import isfile

def build_model():
    model = Sequential()
    # Omitted here; build the layers according to your actual model.
    return model

def save_model_to_serving(model, export_version, export_path='prod_models'):
    print(model.input, model.output)
    # Build a prediction SignatureDef from the Keras input/output tensors.
    signature = tf.saved_model.signature_def_utils.predict_signature_def(
        inputs={'voice': model.input}, outputs={'scores': model.output})
    # Export to <export_path>/<export_version>/, the layout Serving expects.
    export_path = os.path.join(
        tf.compat.as_bytes(export_path),
        tf.compat.as_bytes(str(export_version)))
    builder = tf.saved_model.builder.SavedModelBuilder(export_path)
    legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
    builder.add_meta_graph_and_variables(
        sess=K.get_session(),
        tags=[tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            'voice_classification': signature,
        },
        legacy_init_op=legacy_init_op)
    builder.save()

if __name__ == '__main__':
    model = build_model()
    model.compile(loss='categorical_crossentropy',
                  optimizer='xxx', # replace xxx with the optimizer actually used
                  metrics=['xxx']) # replace xxx with the metrics actually used
    model.summary()

    checkpoint_filepath = 'weights.hdf5'
    if isfile(checkpoint_filepath):
        print('Checkpoint file detected. Loading weights.')
        model.load_weights(checkpoint_filepath) # load the trained weights
    else:
        print('No checkpoint file detected. Starting from scratch.')

    export_path = "test_model"
    save_model_to_serving(model, "1", export_path)

The example above saves the model under the test_model directory, whose structure looks like this:

test_model/
└── 1
    ├── saved_model.pb
    └── variables
        ├── variables.data-00000-of-00001
        └── variables.index

saved_model.pb is the model that TensorFlow Serving can load and run.
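
Before starting the server, it is worth sanity-checking the export by loading it back in a fresh session (my own addition, not from the original post; the path assumes the layout above):

import tensorflow as tf

# Load the exported SavedModel back and print its signature to confirm
# the input/output names and shapes are what the client will expect.
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], 'test_model/1')
    print(meta_graph.signature_def['voice_classification'])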

3. Run the model

tensorflow_model_server --port=8500 --model_name="voice" --model_base_path="/home/yu/workspace/test/test_model/"

The standard output below shows the model coming up successfully (note that --model_base_path should be an absolute path):

2018-02-08 16:28:02.641662: I tensorflow_serving/model_servers/main.cc:149] Building single TensorFlow model file config:  model_name: voice model_base_path: /home/yu/workspace/test/test_model/
2018-02-08 16:28:02.641917: I tensorflow_serving/model_servers/server_core.cc:439] Adding/updating models.
2018-02-08 16:28:02.641976: I tensorflow_serving/model_servers/server_core.cc:490]  (Re-)adding model: voice
2018-02-08 16:28:02.742740: I tensorflow_serving/core/basic_manager.cc:705] Successfully reserved resources to load servable {name: voice version: 1}
2018-02-08 16:28:02.742800: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: voice version: 1}
2018-02-08 16:28:02.742815: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: voice version: 1}
2018-02-08 16:28:02.742867: I external/org_tensorflow/tensorflow/contrib/session_bundle/bundle_shim.cc:360] Attempting to load native SavedModelBundle in bundle-shim from: /home/yu/workspace/test/test_model/1
2018-02-08 16:28:02.742906: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:236] Loading SavedModel from: /home/yu/workspace/test/test_model/1
2018-02-08 16:28:02.755299: I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-02-08 16:28:02.795329: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:155] Restoring SavedModel bundle.
2018-02-08 16:28:02.820146: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:190] Running LegacyInitOp on SavedModel bundle.
2018-02-08 16:28:02.832832: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:284] Loading SavedModel: success. Took 89481 microseconds.
2018-02-08 16:28:02.834804: I tensorflow_serving/core/loader_harness.cc:86] Successfully loaded servable version {name: voice version: 1}
2018-02-08 16:28:02.836855: I tensorflow_serving/model_servers/main.cc:290] Running ModelServer at 0.0.0.0:8500 ...

4. Client code

from __future__ import print_function
from grpc.beta import implementations
import tensorflow as tf

from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2
import numpy as np

tf.app.flags.DEFINE_string('server', 'localhost:8500',
                           'PredictionService host:port')
tf.app.flags.DEFINE_string('voice', '', 'path to voice in wav format')
FLAGS = tf.app.flags.FLAGS

def get_melgram(path):
    melgram = ... # omitted here; compute the mel-spectrogram of the wav file
    return melgram

def main(_):
    host, port = FLAGS.server.split(':')
    channel = implementations.insecure_channel(host, int(port))
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
    # Send request

    # See prediction_service.proto for gRPC request/response details.
    data = get_melgram("T_1000001.wav")
    data = data.astype(np.float32)

    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'voice' # must match tensorflow_model_server --model_name="voice"
    request.model_spec.signature_name = 'voice_classification' # must match the key in signature_def_map
    request.inputs['voice'].CopyFrom(
          tf.contrib.util.make_tensor_proto(data, shape=[1, 1, 96, 89])) # shape must match the Keras model.input
    result = stub.Predict(request, 10.0)  # 10 secs timeout
    print(result)


if __name__ == '__main__':
  tf.app.run()
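
If you want to smoke-test the pipeline before wiring up real audio, a random tensor of the right shape will do (a hypothetical stand-in for get_melgram, not part of the original client):

import numpy as np

# Random data shaped like one mel-spectrogram sample: (1, 96, 89).
# make_tensor_proto(data, shape=[1, 1, 96, 89]) then adds the batch dimension.
data = np.random.rand(1, 96, 89).astype(np.float32)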

The client prints the following result:

outputs {
  key: "scores"
  value {
    dtype: DT_FLOAT
    tensor_shape {
      dim {
        size: 1
      }
      dim {
        size: 2
      }
    }
    float_val: 0.0341101661325
    float_val: 0.965889811516
  }
}

The two values float_val: 0.0341101661325 and float_val: 0.965889811516 are the scores we need, one per class.
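To consume these values in code rather than printing the whole protobuf, you can read them off the response (a small sketch, assuming the result object from the client above):

import numpy as np

scores = result.outputs['scores'].float_val  # e.g. [0.0341..., 0.9658...]
predicted_class = int(np.argmax(scores))     # index of the winning class
print(predicted_class, scores[predicted_class])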

Notes on converting a Keras model to a TensorFlow model

1. Saving a Keras model

You can use model.save(filepath) to save a Keras model and its weights into a single HDF5 file, which will contain:

  • the model's architecture, so that the model can be reconstructed
  • the model's weights
  • the training configuration (loss function, optimizer, etc.)
  • the state of the optimizer, so that training can resume where it left off
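
Because the file contains the full model, the counterpart load_model restores everything in one call (a short sketch; load_model is the standard Keras API):

from keras.models import load_model

# Restores architecture, weights, training config, and optimizer state,
# so no separate build/compile step is needed beforehand.
model = load_model('weights.hdf5')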

This HDF5 file can also be generated during training with a checkpoint callback:

from keras.callbacks import ModelCheckpoint

checkpoint_filepath = 'weights.hdf5'
# save_best_only=True keeps only the checkpoint with the best monitored metric.
checkpointer = ModelCheckpoint(filepath=checkpoint_filepath, verbose=1, save_best_only=True)
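
The callback only takes effect when passed to fit; a usage sketch (x_train and y_train are placeholders for your training data):

model.fit(x_train, y_train,
          validation_split=0.2,      # save_best_only monitors val_loss by default
          epochs=10,
          callbacks=[checkpointer])  # writes weights.hdf5 whenever val_loss improves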

2. Loading a Keras model

Loading a Keras model from a weights file looks like this (some of the code is omitted):

from keras.models import Sequential, Model

model = Sequential()
# ... add the model's layers here ...
model.compile(loss='categorical_crossentropy',
       optimizer='xxx', # replace xxx with the optimizer actually used
       metrics=['xxx']) # replace xxx with the metrics actually used
model.summary()
model.load_weights("xxx.h5") # load the weights from the HDF5 file

Pitfalls of converting a Keras model to a TensorFlow Serving model

Hopefully this saves newcomers some detours.

Pitfall 1: an outdated export method

Some methods are outdated, for example the one below:

from tensorflow_serving.session_bundle import exporter

export_path = ... # where to save the exported graph
export_version = ... # version number (integer)

saver = tf.train.Saver(sharded=True)
model_exporter = exporter.Exporter(saver)
signature = exporter.classification_signature(input_tensor=model.input,
                                              scores_tensor=model.output)
model_exporter.init(sess.graph.as_graph_def(),
                    default_graph_signature=signature)
model_exporter.export(export_path, tf.constant(export_version), sess)

If you use this outdated method, you will see a warning like this when exporting and serving the model:

WARNING:tensorflow:From test.py:107: Exporter.export (from tensorflow.contrib.session_bundle.exporter) is deprecated and will be removed after 2017-06-30.
Instructions for updating:
No longer supported. Switch to SavedModel immediately.

The warning makes it clear that this method is deprecated and no longer supported, and that we should switch to SavedModel.

The fix: use SavedModel

import os

import tensorflow as tf
from keras import backend as K

def save_model_to_serving(model, export_version, export_path='prod_models'):
    print(model.input, model.output)
    signature = tf.saved_model.signature_def_utils.predict_signature_def(
        inputs={'voice': model.input}, outputs={'scores': model.output})
    export_path = os.path.join(
        tf.compat.as_bytes(export_path),
        tf.compat.as_bytes(str(export_version)))
    builder = tf.saved_model.builder.SavedModelBuilder(export_path)
    legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
    builder.add_meta_graph_and_variables(
        sess=K.get_session(),
        tags=[tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            'voice_classification': signature, # key must match signature_name in the client
        },
        legacy_init_op=legacy_init_op)
    builder.save()
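
A note on versioning (my own observation, not from the original post): because the export path is <export_path>/<export_version>, bumping the version number gives Serving a new directory to pick up, and by default the server serves the latest version it finds under --model_base_path:

save_model_to_serving(model, "1", "prod_models")  # writes prod_models/1
save_model_to_serving(model, "2", "prod_models")  # writes prod_models/2; Serving switches to it automatically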
