Note: the TensorFlow API version used to export a model must match the version used to load it!
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
from tensorflow.saved_model.signature_def_utils import predict_signature_def
from tensorflow.saved_model import tag_constants
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
sess = tf.InteractiveSession()
x = tf.placeholder(tf.float32, [None, 784], name="Input")  # name the input op "Input"
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), 1))
tf.identity(y, name="Output")  # name the output op "Output"
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
tf.global_variables_initializer().run()
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    train_step.run({x: batch_xs, y_: batch_ys})
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval({x: mnist.test.images, y_: mnist.test.labels}))
# Save the model to a file
# Simple method:
tf.saved_model.simple_save(sess,
                           "./model_simple",
                           inputs={"Input": x},
                           outputs={"Output": y})
Code walkthrough:
x = tf.placeholder(tf.float32, [None, 784], name="Input")
# This names the input op "Input". Naming is optional: if omitted, the system assigns a default name such as "Placeholder". But when several ops need to be referenced later, giving each one an explicit name makes them much easier to work with.
tf.identity(y, name="Output")
# tf.identity wraps y in a new op named "Output", giving the output tensor a stable, predictable name.
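A tiny sketch of the difference, in a fresh graph for illustration only:

g = tf.Graph()
with g.as_default():
    p = tf.placeholder(tf.float32, [None, 784])                # no name given
    q = tf.placeholder(tf.float32, [None, 784], name="Input")  # explicit name
    print(p.name)                                # "Placeholder:0" (system default)
    print(q.name)                                # "Input:0"
    print(g.get_tensor_by_name("Input:0") is q)  # True: easy to look up later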
There are two ways to save a model: the simple method shown above and a more elaborate one (see the sketch below). The biggest advantage of the complex method is that you can define your own tags, which makes signature definition more flexible.
What tags are for: a single SavedModel can contain several different MetaGraphDefs, for example a CPU version and a GPU version of the graph, or separate training and serving versions. Tags distinguish these MetaGraphDefs, so at load time you can choose which computation graph to load by its tag. simple_save applies a default tag, "serve"; the constant tag_constants.SERVING can also be used.
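For reference, a minimal sketch of the complex method with SavedModelBuilder, relying on the predict_signature_def and tag_constants imports at the top of the script; it assumes the sess, x, y from the training code above, and "./model_complex" is just an example path:

builder = tf.saved_model.builder.SavedModelBuilder("./model_complex")
signature = predict_signature_def(inputs={"Input": x}, outputs={"Output": y})
builder.add_meta_graph_and_variables(
    sess,
    tags=[tag_constants.SERVING],  # any custom tag list can go here
    signature_def_map={"serving_default": signature})
builder.save()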
import numpy as np
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ["serve"], "./model_simple")
    x = sess.graph.get_tensor_by_name('Input:0')
    y = sess.graph.get_tensor_by_name('Output:0')
    batch_xs, batch_ys = mnist.test.next_batch(1)
    scores = sess.run(y, feed_dict={x: batch_xs})
    print("predict: %d, actual: %d" % (np.argmax(scores), np.argmax(batch_ys)))
For Java, add the Maven dependencies:
<dependency>
    <groupId>org.tensorflow</groupId>
    <artifactId>tensorflow</artifactId>
    <version>1.12.0</version>
</dependency>
<dependency>
    <groupId>org.tensorflow</groupId>
    <artifactId>proto</artifactId>
    <version>1.12.0</version>
</dependency>
ConfigProto configProto = ConfigProto.newBuilder()
        .setAllowSoftPlacement(true)
        .build();
SavedModelBundle model = SavedModelBundle.loader("/Users/liangshu/IdeaProjects/krs-mcs/data-krs-mcs-server/src/main/resources/model/tensorflow")
        .withConfigProto(configProto.toByteArray())
        .withTags("serve")
        .load();
SignatureDef modelSig = MetaGraphDef.parseFrom(model.metaGraphDef()).getSignatureDefOrThrow("serving_default");
int numInputs = modelSig.getInputsCount();
String inputTensorName = modelSig.getInputsMap().get("Input").getName();
String outputTensorName = modelSig.getOutputsMap().get("Output").getName();
System.out.println(String.format("numInputs: %d, inputTensorName: %s, outputTensorName: %s", numInputs, inputTensorName, outputTensorName));
Session tfSession = model.session();
float[][] a = new float[1][784]; // fill a[0] with the 784 pixel values of the image to classify
try (Tensor<?> t = Tensor.create(a)) {
    List<Tensor<?>> out = tfSession.runner().feed("Input", t).fetch("Output").run();
    for (Tensor<?> s : out) {
        float[][] rs = new float[1][10];
        s.copyTo(rs);
        for (float i : rs[0])
            System.out.println(i);
        s.close(); // output tensors must be closed as well
    }
}
WARNING: Resources consumed by the Tensor object must be explicitly freed by invoking the close() method when the object is no longer needed. For example, using a try-with-resources block:
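A minimal sketch of that pattern, reusing the "./model_simple" export and tensor shapes from the example above (SavedModelBundle, Graph, Session, and Tensor all implement AutoCloseable):

try (SavedModelBundle model = SavedModelBundle.load("./model_simple", "serve");
     Tensor<?> input = Tensor.create(new float[1][784])) {
    try (Tensor<?> output = model.session().runner()
            .feed("Input", input).fetch("Output").run().get(0)) {
        float[][] scores = new float[1][10];
        output.copyTo(scores);
    }
} // model (with its Graph and Session), input, and output are all closed here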
- In Java, TensorFlow requires resources to be released explicitly: Graph, Session, and Tensor objects must be closed once they are no longer needed, otherwise memory will leak. try-with-resources, as in the code above, simplifies the closing.
- In Java, Graph and Session are thread-safe, but Tensor is not.