TensorFlow Tutorial: Deep Learning for Beginners

What is TensorFlow?

  • TensorFlow is an open-source deep learning library based on dataflow graphs, used to build neural networks.
  • "Tensor" refers to data represented as a multi-dimensional array; "flow" refers to the series of operations performed on those tensors.

The Concept of a Tensor

  • A multi-dimensional array used to represent high-dimensional data
  • Rank 0: scalar; rank 1: vector; rank 2: matrix (see the short example below)
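
For illustration, a minimal sketch (TensorFlow 1.x) of tensors of rank 0, 1, and 2 created with tf.constant:

import tensorflow as tf

scalar = tf.constant(3.0)                       # rank 0: scalar, shape ()
vector = tf.constant([1.0, 2.0, 3.0])           # rank 1: vector, shape (3,)
matrix = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # rank 2: matrix, shape (2, 2)

print(scalar.shape, vector.shape, matrix.shape)  # () (3,) (2, 2)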

TensorFlow Programming Basics

  • Programming mainly consists of two steps: (1) building the computational graph; (2) running the computational graph (a minimal sketch follows this list)
  1. Building a Computational Graph
    • Nodes: operations (+, -, *, /, etc.)
    • Edges: the data flow, with tensors as inputs and outputs
      Note: subgraphs made up of independent nodes can be computed in parallel, which improves efficiency
  2. Running a Computational Graph
    • Session: encapsulates the control and state of the TensorFlow runtime (which operations run, and in what order) and passes computed results on to the next operation
    • Create a session, run the graph, close the session
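
As a minimal sketch of these two steps (TensorFlow 1.x), the graph below is first declared (nothing is computed yet) and then evaluated inside a Session:

import tensorflow as tf

# Step 1: build the computational graph
a = tf.constant(2.0, name="a")
b = tf.constant(3.0, name="b")
c = tf.add(a, b, name="c")   # node: the add operation; edges: tensors a and b flow in, c flows out

# Step 2: run the computational graph
with tf.Session() as sess:
    print(sess.run(c))       # 5.0

Here the with block also takes care of closing the session.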

TensorFlow Tensor Types

  • Constants
    • tf.constant(value, dtype=None, shape=None, name='Const', verify_shape=False)
  • Variables
    • Add trainable parameters (nodes) to the graph
    • Must be initialized with tf.global_variables_initializer(); a new value can be set with assign()
    • var = tf.Variable(initial_value, dtype=None, name=None, validate_shape=True, trainable=True, …)
  • Placeholders
    • A promise that data will be fed in at run time
    • tf.placeholder(dtype, shape=None, name=None)
    • tf.placeholder_with_default(input, shape, name=None)
      A combined sketch of all three types follows this list.
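
The following minimal sketch (TensorFlow 1.x) shows the three tensor types side by side:

import tensorflow as tf

# Constant: a fixed value baked into the graph
c = tf.constant([1.0, 2.0], name="c")

# Variable: a trainable parameter; must be initialized before use and can be reassigned
v = tf.Variable([0.5, 0.5], dtype=tf.float32, name="v")

# Placeholder: a value that is fed in at run time
p = tf.placeholder(tf.float32, shape=[2], name="p")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())    # initialize all variables
    print(sess.run(c))                              # [1. 2.]
    print(sess.run(v.assign([1.0, 2.0])))           # assign a new value: [1. 2.]
    print(sess.run(p, feed_dict={p: [3.0, 4.0]}))   # [3. 4.]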

Saving and Loading TensorFlow Models

  1. What a saved model consists of:
    • Meta graph: stores the complete TensorFlow graph structure, in a file with the .meta extension.
    • Checkpoint files: binary files containing the trained parameters of the model, named like mymodel.data-00000-of-00001 and mymodel.index.
  2. Saving a model:
    • saver = tf.train.Saver()
    • saver = tf.train.Saver(max_to_keep=4, keep_checkpoint_every_n_hours=2)  # keeps at most the 4 latest checkpoints, plus one checkpoint every 2 hours
    • saver.save(sess, 'my-test-model', global_step=1000)  # saves the model at global step 1000
    • saver.save(sess, 'my-model', global_step=step, write_meta_graph=False)  # the graph structure itself is written only once
  3. Loading a model:
    • Recreate the graph structure: new_saver = tf.train.import_meta_graph('my_test_model-1000.meta')
    • Load the parameters: new_saver.restore(sess, tf.train.latest_checkpoint('./'))
  4. Working with the restored model (e.g., resuming training):
    • Note: if the graph contains placeholders, data still has to be fed to them. When a TensorFlow model is saved, the values fed to placeholders are not saved (the placeholder nodes themselves are).
    • graph = tf.get_default_graph()
    • w1 = graph.get_tensor_by_name("w1:0")
    • op_to_restore = graph.get_tensor_by_name("op_to_restore:0")
    • A variable created inside with tf.name_scope('module_name') is accessed as 'module_name/variable_name:0'; without a name scope it is simply 'variable_name:0'. The variable name is whatever was passed as the name argument.

Below is a simple linear model, split into several parts: defining, training, saving, and visualizing the model; restoring the model; and continuing training:

# Creating a simple linear model

import tensorflow as tf

tf.reset_default_graph()

# Input placeholders
with tf.name_scope('input'):
    x = tf.placeholder(tf.float32, name = "x")
    y = tf.placeholder(tf.float32, name = "y")

# Model parameter variable
with tf.name_scope('model'):
    W = tf.Variable([1.], dtype=tf.float32, name="W")
    b = tf.Variable([1.], dtype=tf.float32, name="b")
    pred = tf.add(tf.multiply(W,x),b, name="pred")

# Loss function
with tf.name_scope('loss'):
    loss = tf.reduce_sum(tf.square(pred-y), name = "loss") #Sum of Squared Error

#Creating an instance of gradient descent optimizer
with tf.name_scope('train'):
    optimizer = tf.train.GradientDescentOptimizer(0.01) 
    train = optimizer.minimize(loss)   

#Save the linear model
with tf.name_scope('save'):
    saver = tf.train.Saver()
    tf.add_to_collection('train_op', train)

#Visualize the graph
with tf.name_scope('visualization'):
    loss_summary = tf.summary.scalar("loss", loss)
    #merged_summary=tf.summary.merge([loss_summary])
    merged_summary = tf.summary.merge_all()

# Running the Computational Graph
with tf.Session() as sess:
    #initialization
    sess.run(tf.global_variables_initializer()) 
    
    # visualize the model
    writer = tf.summary.FileWriter("model", tf.get_default_graph())

    for i in range(100):
        loss_, _, summary  = sess.run([loss, train, merged_summary], {x:[1., 2., 3., 4.], y:[2., 4., 6., 8.]})
        writer.add_summary(summary, i)
        saver.save(sess, 'model/model', global_step=100)  # writes checkpoint files model/model-100.*

    print(sess.run([W, b]), loss_)
    writer.close()
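
Since the FileWriter above writes its events file to the model directory, the recorded loss curve and the graph can be viewed (assuming TensorBoard is installed) by running tensorboard --logdir model and opening the address it prints in a browser.
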
# Load the simple linear model and utilize it to predict
import tensorflow as tf

tf.reset_default_graph()  # start from a clean graph before importing the saved one

with tf.Session() as sess:
    # Recreate the graph from the meta file and restore the trained parameters
    new_saver = tf.train.import_meta_graph('model/model-100.meta')
    new_saver.restore(sess, 'model/model-100')
    print(sess.run('model/W:0'))
    # Feed the restored input placeholder by name and evaluate the prediction op
    pred_ = sess.run('model/pred:0', feed_dict={'input/x:0': [1., 2., 3., 4.]})
    print(pred_)
# Load the simple linear model and carry out the subsequent training
import tensorflow as tf

tf.reset_default_graph()  # start from a clean graph before importing the saved one

with tf.Session() as sess:
    # Restore the graph and parameters, then fetch the training op stored in the collection
    new_saver = tf.train.import_meta_graph('model/model-100.meta')
    new_saver.restore(sess, 'model/model-100')
    train_op = tf.get_collection('train_op')[0]
    for i in range(100):
        loss_, _ = sess.run(['loss/loss:0', train_op], {'input/x:0': [1., 2., 3., 4.], 'input/y:0': [2., 4., 6., 8.]})
    print(sess.run(['model/W:0', 'model/b:0']), loss_)

