How to solve the problem that restoring a TensorFlow model often fails and the saved model cannot be moved to another location

If you want to save a model trained with TensorFlow, you will probably reach for tf.train.Saver, but it comes with several annoyances (the typical Saver workflow these problems stem from is sketched right after the list):

  1. If you copy the saved model to another directory and then try to load it back into memory for further training or inference, the restore fails.
  2. If the computation graph built by your loading code differs even slightly from the graph used when the model was trained, some variables cannot be found and loading fails again.
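For context, here is a minimal sketch of the usual Saver-based workflow. This is not code from the original project; the single variable, the paths and the directory names are placeholders. Both problems show up in it: saver.restore only works if the graph built beforehand matches the training graph, and tf.train.latest_checkpoint relies on the paths recorded inside the checkpoint file.

import tensorflow as tf

# build the graph exactly as it was built for training
# (a single variable here just to keep the sketch self-contained)
w_out = tf.Variable(tf.zeros([800, 2]), name="w_out")
saver = tf.train.Saver()

# saving during training
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, "E:/some/dir/finemodel")

# restoring later: fails if the directory has been moved,
# or if the graph defined above differs from the training graph
with tf.Session() as sess:
    ckpt_path = tf.train.latest_checkpoint("E:/some/dir")
    saver.restore(sess, ckpt_path)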

The first problem is caused by the fact that the checkpoint file records the absolute path under which the model was originally generated, for example:

model_checkpoint_path: "E:/mylab/sdsdata/test/sample/cat2/models/model_warehouse/124154_910_460_1000_0.000500_2018-10-23-08-18-11/finemodel"
all_model_checkpoint_paths: "E:/mylab/sdsdata/test/sample/cat2/models/model_warehouse/124154_910_460_1000_0.000500_2018-10-23-08-18-11/finemodel"
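You can check this yourself with tf.train.get_checkpoint_state, which parses that checkpoint file. After copying the model directory somewhere else it still returns the old absolute path, which is why the restore then fails. A minimal sketch, where the directory name is only a placeholder:

import tensorflow as tf

# parse the "checkpoint" file inside the copied model directory
ckpt = tf.train.get_checkpoint_state("D:/copied_models/finemodel_dir")
if ckpt is not None:
    # prints the original absolute path recorded at save time,
    # not the location the files were copied to
    print(ckpt.model_checkpoint_path)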

Instead, we can save the model manually ourselves, so that it is trained once and can be loaded anywhere.

The code for saving the model is as follows:

            # Inside the training loop: run one training step and also fetch
            # the current values of every weight and bias tensor.
            _, sumarry, current_loss, wh1, wh2, w_out, b_b1, b_b2, b_out = sess.run(
                [train_op, merged_summary_op, loss_op,
                 weights['h1'], weights['h2'], weights['out'],
                 biases['b1'], biases['b2'], biases['out']],
                feed_dict={X: batch_x, Y: batch_y})
            # summary_writer.add_summary(sumarry, step)

            if min_loss > current_loss:
                min_loss = current_loss
                print("find new min loss : %f" % min_loss)
                # dump all the weights and biases as plain .npy files
                if min_loss < 2.0:  # avoid saving the model too often during the first 1000 steps
                    np.save("%s/wh1.npy" % model_full_dir, wh1)
                    np.save("%s/wh2.npy" % model_full_dir, wh2)
                    np.save("%s/w_out.npy" % model_full_dir, w_out)
                    np.save("%s/b_b1.npy" % model_full_dir, b_b1)
                    np.save("%s/b_b2.npy" % model_full_dir, b_b2)
                    np.save("%s/b_out.npy" % model_full_dir, b_out)

The code for reading the model back and loading it for inference is as follows:

def estimate(input_x, input_y, _model_dir):
    # Load the saved layer weights & biases from the .npy files
    w_h1 = np.load("%s/wh1.npy" % _model_dir)
    w_h2 = np.load("%s/wh2.npy" % _model_dir)
    w_out = np.load("%s/w_out.npy" % _model_dir)
    b_b1 = np.load("%s/b_b1.npy" % _model_dir)
    b_b2 = np.load("%s/b_b2.npy" % _model_dir)
    b_out = np.load("%s/b_out.npy" % _model_dir)
    # Network Parameters
    num_input = 800  # number of input features per sample
    num_classes = 2  # number of output classes

    # tf Graph input
    X = tf.placeholder("float", [None, num_input])
    Y = tf.placeholder("float", [None, num_classes])
    weights = {
        'h1': tf.Variable(w_h1),
        'h2': tf.Variable(w_h2),
        'out': tf.Variable(w_out)
    }
    biases = {
        'b1': tf.Variable(b_b1),
        'b2': tf.Variable(b_b2),
        'out': tf.Variable(b_out)
    }
    init = tf.global_variables_initializer()
    logits = neural_net(X, weights, biases)
    prediction = tf.nn.softmax(logits)
    correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
    pred_result = tf.argmax(prediction, 1)
    with tf.Session() as test_sess:
        test_sess.run(init)
        test_acc, pred_result_value = test_sess.run([accuracy, pred_result],
                                                    feed_dict={X: input_x, Y: input_y})
        # no explicit close needed: the with-block closes the session automatically
    return test_acc, pred_result_value
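A hypothetical call site for estimate is shown below. It assumes the six .npy files were copied to some arbitrary directory, that the inputs have 800 features, and that the labels are one-hot encoded over 2 classes; the toy data and the directory name are only placeholders.

import numpy as np

# toy inputs just to demonstrate the expected shapes;
# real data would come from your own dataset
test_x = np.random.rand(5, 800).astype(np.float32)
test_y = np.eye(2, dtype=np.float32)[np.random.randint(0, 2, size=5)]

# the directory can be anywhere on disk; it only has to contain the .npy files
test_acc, predictions = estimate(test_x, test_y, "D:/copied_models/run_001")
print("test accuracy: %f" % test_acc)
print("predicted classes:", predictions)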
