TensorFlow workflow

1 Data and preprocessing
2 Decide on the model architecture
3 Implement the model
3.1 Input placeholders
3.2 Embedding layer
3.3 Convolution and pooling layers
3.4 Dropout layer
3.5 Scores and predictions
3.6 Loss and accuracy
3.7 Visualizing the network with TensorBoard
4 Training
4.1 Instantiate the model and minimize the loss (optimizer, gradient computation)
4.2 Summaries
4.3 Checkpointing
saver = tf.train.Saver(tf.global_variables())  # tf.all_variables() is deprecated
4.4 Initializing the variables
4.5 Defining a single training step
4.6 Training loop
5 Visualizing results in TensorBoard
You can get up and running quickly with the code at https://github.com/dennybritz/cnn-text-classification-tf (text classification); illustrative sketches of steps 3 and 4 are given right below.
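
To make steps 3.1–3.6 concrete, here is a minimal sketch of the model graph for a text CNN with a single filter size, loosely following the structure of the repository linked above. All hyperparameters (sequence_length, num_classes, vocab_size, embedding_size, filter_size, num_filters) and variable names are illustrative assumptions, not values from the original post:

import tensorflow as tf

# Illustrative hyperparameters (assumptions, not taken from the original post)
sequence_length = 56    # padded sentence length
num_classes = 2
vocab_size = 10000
embedding_size = 128
filter_size = 3
num_filters = 100

# 3.1 Input placeholders
input_x = tf.placeholder(tf.int32, [None, sequence_length], name="input_x")
input_y = tf.placeholder(tf.float32, [None, num_classes], name="input_y")
dropout_keep_prob = tf.placeholder(tf.float32, name="dropout_keep_prob")

# 3.2 Embedding layer: map word ids to dense vectors
W_embed = tf.Variable(
    tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0), name="W_embed")
embedded = tf.nn.embedding_lookup(W_embed, input_x)      # [batch, seq_len, embed]
embedded_expanded = tf.expand_dims(embedded, -1)         # add channel dim for conv2d

# 3.3 Convolution + max-pooling over the whole sentence
filter_shape = [filter_size, embedding_size, 1, num_filters]
W_conv = tf.Variable(tf.truncated_normal(filter_shape, stddev=0.1), name="W_conv")
b_conv = tf.Variable(tf.constant(0.1, shape=[num_filters]), name="b_conv")
conv = tf.nn.conv2d(embedded_expanded, W_conv, strides=[1, 1, 1, 1], padding="VALID")
h = tf.nn.relu(tf.nn.bias_add(conv, b_conv))
pooled = tf.nn.max_pool(h,
                        ksize=[1, sequence_length - filter_size + 1, 1, 1],
                        strides=[1, 1, 1, 1], padding="VALID")
h_flat = tf.reshape(pooled, [-1, num_filters])

# 3.4 Dropout
h_drop = tf.nn.dropout(h_flat, dropout_keep_prob)

# 3.5 Scores and predictions
W_out = tf.Variable(tf.truncated_normal([num_filters, num_classes], stddev=0.1), name="W_out")
b_out = tf.Variable(tf.constant(0.1, shape=[num_classes]), name="b_out")
scores = tf.nn.xw_plus_b(h_drop, W_out, b_out, name="scores")
predictions = tf.argmax(scores, 1, name="predictions")

# 3.6 Loss and accuracy
losses = tf.nn.softmax_cross_entropy_with_logits(logits=scores, labels=input_y)
loss = tf.reduce_mean(losses)
correct = tf.equal(predictions, tf.argmax(input_y, 1))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")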
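
Continuing from the graph above, steps 4.1–4.6 and step 5 might look like the following sketch. The Adam optimizer, the runs/ output directories, the checkpoint interval, and the dummy training batch are all assumptions made so the sketch runs end to end:

import os
import numpy as np

# 4.1 Instantiate the optimizer and compute/apply gradients
global_step = tf.Variable(0, name="global_step", trainable=False)
optimizer = tf.train.AdamOptimizer(1e-3)
grads_and_vars = optimizer.compute_gradients(loss)
train_op = optimizer.apply_gradients(grads_and_vars, global_step=global_step)

# 4.2 Summaries for loss and accuracy
loss_summary = tf.summary.scalar("loss", loss)
acc_summary = tf.summary.scalar("accuracy", accuracy)
train_summary_op = tf.summary.merge([loss_summary, acc_summary])

# 4.3 Checkpointing (tf.global_variables() replaces the deprecated tf.all_variables())
checkpoint_dir = "runs/checkpoints"
os.makedirs(checkpoint_dir, exist_ok=True)
saver = tf.train.Saver(tf.global_variables(), max_to_keep=5)

# Dummy training data so the sketch runs end to end (replace with real batches)
x_train = np.random.randint(0, vocab_size, size=(64, sequence_length))
y_train = np.eye(num_classes)[np.random.randint(0, num_classes, size=64)]

with tf.Session() as sess:
    # 4.4 Initializing the variables
    sess.run(tf.global_variables_initializer())
    summary_writer = tf.summary.FileWriter("runs/summaries", sess.graph)

    # 4.5 Defining a single training step
    def train_step(x_batch, y_batch):
        feed_dict = {input_x: x_batch, input_y: y_batch, dropout_keep_prob: 0.5}
        _, step, summaries, cur_loss, cur_acc = sess.run(
            [train_op, global_step, train_summary_op, loss, accuracy], feed_dict)
        summary_writer.add_summary(summaries, step)
        return cur_loss, cur_acc

    # 4.6 Training loop: iterate over batches, checkpoint every 100 steps
    for _ in range(200):
        cur_loss, cur_acc = train_step(x_train, y_train)
        current_step = tf.train.global_step(sess, global_step)
        if current_step % 100 == 0:
            print("step %d: loss %.4f, acc %.4f" % (current_step, cur_loss, cur_acc))
            saver.save(sess, os.path.join(checkpoint_dir, "model"),
                       global_step=current_step)
    summary_writer.close()

After a run, `tensorboard --logdir=runs/summaries` shows the loss and accuracy curves written by the FileWriter, which is what step 5 of the outline refers to.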


The simplest possible TensorFlow code workflow (a linear-regression example) is given below:
import numpy
import tensorflow as tf

learning_rate = 0.01
training_epochs = 1000
display_step = 50

# Training Data
train_X = numpy.asarray([3.3,4.4,5.5,6.71,6.93,4.168,9.779,6.182,7.59,2.167,
                         7.042,10.791,5.313,7.997,5.654,9.27,3.1])
train_Y = numpy.asarray([1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221,
                         2.827,3.465,1.65,2.904,2.42,2.94,1.3])
n_samples = train_X.shape[0]

# tf Graph Input
X = tf.placeholder("float")
Y = tf.placeholder("float")

W = tf.Variable([0.3], name="weight")
b = tf.Variable([-0.3], name="bias")

# Construct a linear model
pred = tf.add(tf.multiply(X, W), b)

# Mean squared error
cost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples)
# Gradient descent
#  Note, minimize() knows to modify W and b because Variable objects are trainable=True by default
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Initializing the variables
init = tf.global_variables_initializer()

# Launch the graph
sess = tf.Session()
# with tf.Session() as sess:
sess.run(init)

# Fit all training data
for epoch in range(training_epochs):
    for (x, y) in zip(train_X, train_Y):
        sess.run(optimizer, feed_dict={X: x, Y: y})
# Inspect the final trained parameters
print(sess.run(W))
print(sess.run(b))
# Each sess.run(pred, ...) returns an array of shape (1,), so take column 0
predict_result = []
for (x, y) in zip(train_X, train_Y):
    predict_result.append(sess.run(pred, feed_dict={X: x, Y: y}))
predict_result_list = numpy.asarray(predict_result)[:, 0]
for (label_y, predict_y) in zip(train_Y, predict_result_list):
    print("label_y is %f and predict_y is %f" %(label_y, predict_y))
References:
http://blog.csdn.net/u012052268/article/details/77862202#36-loss-%E5%92%8C-accuracy
https://blog.csdn.net/u014595019/article/details/52677412
https://blog.csdn.net/Toormi/article/details/53609245
https://blog.csdn.net/geyunfei_/article/details/78782804
