Getting Started with TensorFlow

Table of Contents

    • I. The TensorFlow workflow
      • 1 Tensor
      • 2 Session
      • 3 Variable
      • 4 placeholder
      • 5 My First Demo
      • 6 Defining a neural layer
      • 7 Visualization with matplotlib
      • 8 Using TensorBoard
      • 9 Monitoring training with TensorBoard
      • 10 dropout
      • 11 Convolutional neural networks
    • II. Summary of the TensorFlow code skeleton
    • III. Commonly used functions
      • `tf.nn.embedding_lookup`
      • `tf.variable_scope()` and `tf.get_variable()`
      • `tf.device()`

First, two asides:
(1) The pip command to install a specific TensorFlow version is pip install tensorflow==1.4.0 (1.4.0 is just an example).
(2) Installing the GPU build of TensorFlow: reference link 1, reference link 2, official tutorial.
There are a few pitfalls during installation:

  • Make sure the cuDNN version matches your CUDA version.
  • The TensorFlow version must in turn match cuDNN, or you will get errors: with cuDNN 5.0, for example, TensorFlow 1.2 works but 1.4 does not, because 1.4 depends on cuDNN 6.0.
  • pip downloads of TensorFlow are frequently interrupted, which is very annoying. You can instead download the wheel from https://pypi.python.org/pypi/tensorflow-gpu/1.2.0 (the 1.2.0 in the URL is the version; change it to whatever you need), then install it with pip install <downloaded filename>.

For more introductory material, see: tensorflow笔记:常用函数说明 (a separate post on commonly used functions).

I. The TensorFlow workflow

A TensorFlow program has two main phases: building the model, then training it.
In the model-building phase, we construct a graph (Graph) that describes the model. A graph can be thought of as a flowchart: it lays out the path the data takes from input, through intermediate processing, to output.


Note that no actual computation happens at this stage. Only after the model is built does training begin; that is when real data is fed in and operations such as gradient computation run. So how do we build this abstract model?
This is where several TensorFlow concepts come in: Tensor, Variable, and placeholder; for the training phase we also need Session. Let's go through these concepts first.

1 Tensor

Tensor literally means "tensor", but in my understanding it is essentially a matrix: it is how TensorFlow represents matrices. There are many ways to create a Tensor; the simplest looks like:

import tensorflow as tf  # omitted from all code below; assume it has already been imported
a = tf.zeros(shape=[1,2])

Note, however, that before execution starts all of this is abstract: at this point a merely says "this should be a 1*2 zero matrix". No value has been assigned and no memory allocated, so printing it now gives:

print(a)
#===>Tensor("zeros:0", shape=(1, 2), dtype=float32)

Only after execution starts can we get the actual value of a:

sess = tf.InteractiveSession()
print(sess.run(a))
#===>[[ 0.  0.]]

This touches on the Session concept, which we cover next.

2 Session

A session is, as the name says, a conversation with the runtime. My understanding: a Session is what realizes the abstract model. Why did the earlier code need a session in several places? Because the model is abstract; only by executing it do we get concrete values. The same goes for training parameters, making predictions, and even querying a variable's actual value, as the examples below show.

Everything defined in TensorFlow only truly runs once it goes through session.run.

# session tutorial
import tensorflow as tf

matrix1 = tf.constant([[3,3]])

matrix2 = tf.constant([[2],[2]])

product = tf.matmul(matrix1,matrix2)  #matrix multiply np.dot(m1,m2)

#method 1
sess = tf.Session()
result = sess.run(product)
print (result)
sess.close()

#method 2: the with-block closes the session automatically
with tf.Session() as sess:
    result2 = sess.run(product)
    print (result2)

3 Variable

As the name suggests, a Variable holds the trainable parameters of the graph: matrices, vectors, and so on. For example, to express the model described above, the formula is

y = ReLU(Wx + b)

(ReLU is an activation function; see here for details.) Here W and b are the parameters I want to train, so both can be represented as Variables. The Variable constructor has many other options, which we skip for now; passing in just a Tensor is enough.
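As a minimal sketch (the sizes and names here are illustrative, not taken from the tutorial code below), W and b for the model above could be declared like this:

import tensorflow as tf

# hypothetical sizes: x has 2 features, the layer has 3 units
x = tf.placeholder(tf.float32, [None, 2])
W = tf.Variable(tf.random_uniform([2, 3], -1.0, 1.0), name='W')
b = tf.Variable(tf.zeros([3]), name='b')
y = tf.nn.relu(tf.matmul(x, W) + b)  # y = ReLU(Wx + b)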

Defining a variable requires:

  • declaring it with tf.Variable,
  • finalizing the definition by running global_variables_initializer,
  • and materializing it through a session; only after run has executed is it a real variable.

When is global_variables_initializer needed? Whenever the graph defines any tf.Variable; run it before the variables are used.

# variable tutorial



import tensorflow as tf

state = tf.Variable(0,name = 'counter')
#print (state.name)
one = tf.constant(1)

new_value = tf.add(state,one)  ##add
update = tf.assign(state,new_value)  ##assignment

init = tf.global_variables_initializer() ## required whenever variables are defined

with tf.Session() as sess:
    sess.run(init)
    for _ in range(3):
        sess.run(update)
        print (sess.run(state))

4 placeholder

A placeholder is another abstract concept, used to describe the format of the input and output data. It tells the system: "a value/vector/matrix goes here; I can't give you concrete numbers yet, but I will supply them when the program actually runs", like x and y in the formula above. Since there are no concrete values, only the shape needs to be specified.

Values are assigned only at execution time:

# placeholder
import  tensorflow as tf

input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)

output = tf.multiply(input1,input2)

with tf.Session() as sess:
    print (sess.run(output,feed_dict={input1:[7.],input2:[2.]}))

5 My First Demo

Let's run our first demo:

import tensorflow as tf
import numpy as np

# create data
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data*0.1+0.3

# create tensorflow structure start #
Weights = tf.Variable(tf.random_uniform([1],-1.0,1.0))
biases = tf.Variable(tf.zeros([1]))

y = Weights*x_data+biases

loss = tf.reduce_mean(tf.square(y-y_data))   #cost function
optimizer = tf.train.GradientDescentOptimizer(0.5)  # gradient descent; 0.5 is the learning rate
train = optimizer.minimize(loss)  # minimize the cost function

init = tf.global_variables_initializer()  # initialize all the variables defined above
# create tensorflow structure end #

sess = tf.Session()
sess.run(init)  # very important: this actually initializes the network

for step in range(201):
    sess.run(train)
    if step % 20 == 0:
        print (step,sess.run(Weights),sess.run(biases))

6 Defining a neural layer

The helper below adds one fully connected layer (weights, biases, and an optional activation):

# defining a neural layer
import  tensorflow as tf
import numpy as np

def add_layer(inputs, in_size, out_size, activation_function=None):  # no activation_function means a linear layer
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)

    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)

    return  outputs

def neural_network():
    # generate the data
    x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
    noise = np.random.normal(0, 0.05, x_data.shape)
    y_data = np.square(x_data) - 0.5 + noise

    # define two layers
    xs = tf.placeholder(tf.float32, [None, 1])  # None leaves the number of samples unconstrained
    ys = tf.placeholder(tf.float32, [None, 1])
    l1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
    prediction = add_layer(l1, 10, 1, activation_function=None)

    # define gradient descent
    loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
    train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)


    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init)

    for i in range(1000):
        sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
        if i % 50 == 0:
            print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))



if __name__ == '__main__':
    neural_network()



7 Visualization with matplotlib

plt.ion() keeps the plot window updating continuously, since plt.show() alone would block.

# View more python learning tutorial on my Youtube and Youku channel!!!

# Youtube video tutorial: https://www.youtube.com/channel/UCdyjiB5H8Pu7aDTNVXTTpcg
# Youku video tutorial: http://i.youku.com/pythontutorial

"""
Please note, this code is only for python 3+. If you are using python 2+, please modify the code accordingly.
"""
from __future__ import print_function
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

def add_layer(inputs, in_size, out_size, activation_function=None):
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs

# Make up some real data
x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise

##plt.scatter(x_data, y_data)
##plt.show()

# define placeholder for inputs to network
xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])
# add hidden layer
l1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
# add output layer
prediction = add_layer(l1, 10, 1, activation_function=None)

# the error between prediction and real data
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys-prediction), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
# important step
sess = tf.Session()
# tf.initialize_all_variables() no long valid from
# 2017-03-02 if using tensorflow >= 0.12
if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:
    init = tf.initialize_all_variables()
else:
    init = tf.global_variables_initializer()
sess.run(init)

# plot the real data
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.scatter(x_data, y_data)
plt.ion()
plt.show()


for i in range(1000):
    # training
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        # to visualize the result and improvement
        try:
            ax.lines.remove(lines[0])
        except Exception:
            pass
        prediction_value = sess.run(prediction, feed_dict={xs: x_data})
        # plot the prediction
        lines = ax.plot(x_data, prediction_value, 'r-', lw=5)
        plt.pause(1)

8 Using TensorBoard

Demo code:

import tensorflow as tf

with tf.name_scope('graph') as scope:
     matrix1 = tf.constant([[3., 3.]],name ='matrix1')  #1 row by 2 column
     matrix2 = tf.constant([[2.],[2.]],name ='matrix2') # 2 row by 1 column
     product = tf.matmul(matrix1, matrix2,name='product')
  
sess = tf.Session()

writer = tf.summary.FileWriter("logs/", sess.graph)

init = tf.global_variables_initializer()

sess.run(init)

One thing is new here compared with the earlier code: tf.name_scope names a scope. In the code above, under the graph scope there are three ops (matrix1, matrix2, and product), each named through the name parameter of the tf function that creates it. These names are what TensorBoard displays; see the graph below.

Run the code above and check the current directory: you will find a newly generated folder named logs. We then need to run TensorBoard from a terminal to get a local link: run tensorboard --logdir logs, copy the link it prints, and paste it into Chrome (Firefox also worked for me).
(PS: if you use PyCharm, run the project first, then run tensorboard --logdir logs in the built-in terminal and open the copied link in a browser.)

The run looks like this (ignore the warnings in the middle; I saved the code above as 1.py):

The last line prints the link; copy it and open it, preferably in Chrome (Firefox also worked).

To use this feature, the essential pattern is to wrap the part you want displayed in:
with tf.name_scope('scope name'), and then pass name='some name' to any op whose name you want shown.

Code:

from __future__ import print_function
import tensorflow as tf


def add_layer(inputs, in_size, out_size, activation_function=None):
    # add one more layer and return the output of this layer
    with tf.name_scope('layer'):
        with tf.name_scope('weights'):
            Weights = tf.Variable(tf.random_normal([in_size, out_size]), name='W')
        with tf.name_scope('biases'):
            biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, name='b')
        with tf.name_scope('Wx_plus_b'):
            Wx_plus_b = tf.add(tf.matmul(inputs, Weights), biases)
        if activation_function is None:
            outputs = Wx_plus_b
        else:
            outputs = activation_function(Wx_plus_b, )
        return outputs


# define placeholder for inputs to network
with tf.name_scope('inputs'):
    xs = tf.placeholder(tf.float32, [None, 1], name='x_input')
    ys = tf.placeholder(tf.float32, [None, 1], name='y_input')

# add hidden layer
l1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
# add output layer
prediction = add_layer(l1, 10, 1, activation_function=None)

# the error between prediction and real data
with tf.name_scope('loss'):
    loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction),
                                        reduction_indices=[1]))

with tf.name_scope('train'):
    train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

sess = tf.Session()

# tf.train.SummaryWriter soon be deprecated, use following
if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:  # tensorflow version < 0.12
    writer = tf.train.SummaryWriter('logs/', sess.graph)
else: # tensorflow version >= 0.12
    writer = tf.summary.FileWriter("logs/", sess.graph)

# tf.initialize_all_variables() no long valid from
# 2017-03-02 if using tensorflow >= 0.12
if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:
    init = tf.initialize_all_variables()
else:
    init = tf.global_variables_initializer()
sess.run(init)

# direct to the local dir and run this in terminal:
# $ tensorboard --logdir=logs

The resulting graph:

Run it the same way as above: after the program finishes, a logs directory is produced; then run tensorboard --logdir logs.

9 Monitoring training with TensorBoard

# the key commands:
tf.summary.histogram(layer_name + '/weights', Weights)  # tensorflow >= 0.12; track a variable
tf.summary.scalar('loss', loss)  # tensorflow >= 0.12; track the loss
merged = tf.summary.merge_all()  # tensorflow >= 0.12; merge all summaries before use
writer = tf.summary.FileWriter("logs/", sess.graph)  # tensorflow >= 0.12; set the output path
rs = sess.run(merged, feed_dict={xs: x_data, ys: y_data})  # run all the summaries
writer.add_summary(rs, i)

Note: the current TensorFlow seems to require both histogram and scalar summaries to be present in order to run; I still need to verify this.
Complete code:

from __future__ import print_function
import tensorflow as tf
import numpy as np


def add_layer(inputs, in_size, out_size, n_layer, activation_function=None):
    # add one more layer and return the output of this layer
    layer_name = 'layer%s' % n_layer
    with tf.name_scope(layer_name):
        with tf.name_scope('weights'):
            Weights = tf.Variable(tf.random_normal([in_size, out_size]), name='W')
            tf.summary.histogram(layer_name + '/weights', Weights)
        with tf.name_scope('biases'):
            biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, name='b')
            tf.summary.histogram(layer_name + '/biases', biases)
        with tf.name_scope('Wx_plus_b'):
            Wx_plus_b = tf.add(tf.matmul(inputs, Weights), biases)
        if activation_function is None:
            outputs = Wx_plus_b
        else:
            outputs = activation_function(Wx_plus_b, )
        tf.summary.histogram(layer_name + '/outputs', outputs)
    return outputs


# Make up some real data
x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise

# define placeholder for inputs to network
with tf.name_scope('inputs'):
    xs = tf.placeholder(tf.float32, [None, 1], name='x_input')
    ys = tf.placeholder(tf.float32, [None, 1], name='y_input')

# add hidden layer
l1 = add_layer(xs, 1, 10, n_layer=1, activation_function=tf.nn.relu)
# add output layer
prediction = add_layer(l1, 10, 1, n_layer=2, activation_function=None)

# the error between prediction and real data
with tf.name_scope('loss'):
    loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction),
                                        reduction_indices=[1]))
    tf.summary.scalar('loss', loss)

with tf.name_scope('train'):
    train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

sess = tf.Session()
merged = tf.summary.merge_all()

writer = tf.summary.FileWriter("logs/", sess.graph)

init = tf.global_variables_initializer()
sess.run(init)

for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        result = sess.run(merged, feed_dict={xs: x_data, ys: y_data})
        writer.add_summary(result, i)

# direct to the local dir and run this in terminal:
# $ tensorboard --logdir logs

The resulting charts:


10 dropout

For details on what dropout does, see:

  • Understanding dropout (理解dropout)

tf.nn.dropout(Wx_plus_b, keep_prob)  # keeps each element with probability keep_prob; the rest are zeroed
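A minimal standalone sketch of the semantics (the tensor shape and values here are illustrative): each element survives with probability keep_prob and is zeroed otherwise, and survivors are scaled by 1/keep_prob so the expected sum is unchanged.

import tensorflow as tf

x = tf.ones([1, 10])
keep_prob = tf.placeholder(tf.float32)
dropped = tf.nn.dropout(x, keep_prob)  # surviving entries are scaled to 1/keep_prob

with tf.Session() as sess:
    print(sess.run(dropped, feed_dict={keep_prob: 0.5}))  # roughly half zeros, survivors 2.0
    print(sess.run(dropped, feed_dict={keep_prob: 1.0}))  # nothing dropped, all 1.0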

Complete code:

from __future__ import print_function
import tensorflow as tf
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer

# load data
digits = load_digits()
X = digits.data
y = digits.target
y = LabelBinarizer().fit_transform(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3)


def add_layer(inputs, in_size, out_size, layer_name, activation_function=None, ):
    # add one more layer and return the output of this layer
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, )
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    # here to dropout
    Wx_plus_b = tf.nn.dropout(Wx_plus_b, keep_prob)
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b, )
    tf.summary.histogram(layer_name + '/outputs', outputs)
    return outputs


# define placeholder for inputs to network
keep_prob = tf.placeholder(tf.float32)
xs = tf.placeholder(tf.float32, [None, 64])  # 8x8
ys = tf.placeholder(tf.float32, [None, 10])

# add output layer
l1 = add_layer(xs, 64, 50, 'l1', activation_function=tf.nn.tanh)
prediction = add_layer(l1, 50, 10, 'l2', activation_function=tf.nn.softmax)

# the loss between prediction and real data
cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction),
                                              reduction_indices=[1]))  # loss
tf.summary.scalar('loss', cross_entropy)
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

sess = tf.Session()
merged = tf.summary.merge_all()
# summary writer goes in here
train_writer = tf.summary.FileWriter("logs/train", sess.graph)
test_writer = tf.summary.FileWriter("logs/test", sess.graph)

# tf.initialize_all_variables() no long valid from
# 2017-03-02 if using tensorflow >= 0.12
if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:
    init = tf.initialize_all_variables()
else:
    init = tf.global_variables_initializer()
sess.run(init)
for i in range(500):
    # here to determine the keeping probability
    sess.run(train_step, feed_dict={xs: X_train, ys: y_train, keep_prob: 0.5})
    if i % 50 == 0:
        # record loss
        train_result = sess.run(merged, feed_dict={xs: X_train, ys: y_train, keep_prob: 1})
        test_result = sess.run(merged, feed_dict={xs: X_test, ys: y_test, keep_prob: 1})
        train_writer.add_summary(train_result, i)
        test_writer.add_summary(test_result, i)

11 Convolutional neural networks

A CNN for MNIST digit classification: two convolution + max-pooling layers followed by two fully connected layers.

from __future__ import print_function
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
# number 1 to 10 data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

def compute_accuracy(v_xs, v_ys):
    global prediction
    y_pre = sess.run(prediction, feed_dict={xs: v_xs, keep_prob: 1})
    correct_prediction = tf.equal(tf.argmax(y_pre,1), tf.argmax(v_ys,1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    result = sess.run(accuracy, feed_dict={xs: v_xs, ys: v_ys, keep_prob: 1})
    return result

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W):
    # stride [1, x_movement, y_movement, 1]
    # Must have strides[0] = strides[3] = 1
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    # stride [1, x_movement, y_movement, 1]
    return tf.nn.max_pool(x, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')

# define placeholder for inputs to network
xs = tf.placeholder(tf.float32, [None, 784])/255.   # 28x28
ys = tf.placeholder(tf.float32, [None, 10])
keep_prob = tf.placeholder(tf.float32)
x_image = tf.reshape(xs, [-1, 28, 28, 1])
# print(x_image.shape)  # [n_samples, 28,28,1]

## conv1 layer ##
W_conv1 = weight_variable([5,5, 1,32]) # patch 5x5, in size 1, out size 32
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1) # output size 28x28x32
h_pool1 = max_pool_2x2(h_conv1)                                         # output size 14x14x32

## conv2 layer ##
W_conv2 = weight_variable([5,5, 32, 64]) # patch 5x5, in size 32, out size 64
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2) # output size 14x14x64
h_pool2 = max_pool_2x2(h_conv2)                                         # output size 7x7x64

## fc1 layer ##
W_fc1 = weight_variable([7*7*64, 1024])
b_fc1 = bias_variable([1024])
# [n_samples, 7, 7, 64] ->> [n_samples, 7*7*64]
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
# fix overfitting problem
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

## fc2 layer ##
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
prediction = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)


# the error between prediction and real data
cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction),
                                              reduction_indices=[1]))       # loss
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)

sess = tf.Session()
# important step
# tf.initialize_all_variables() no long valid from
# 2017-03-02 if using tensorflow >= 0.12
if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:
    init = tf.initialize_all_variables()
else:
    init = tf.global_variables_initializer()
sess.run(init)

for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={xs: batch_xs, ys: batch_ys, keep_prob: 0.5})
    if i % 50 == 0:
        print(compute_accuracy(
            mnist.test.images[:1000], mnist.test.labels[:1000]))

II. Summary of the TensorFlow code skeleton

# load data


# define placeholder for inputs to network

# add output layer

# the loss between prediction and real data

# summary merge & summary writer goes in here

# initialize all variables (tf.global_variables_initializer)

# run the training loop
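As a sketch, here is the skeleton filled in with the regression example used earlier in this post (a single linear layer, kept deliberately small; variable names follow the earlier sections):

import tensorflow as tf
import numpy as np

# load data
x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
y_data = np.square(x_data) - 0.5 + np.random.normal(0, 0.05, x_data.shape)

# define placeholder for inputs to network
xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])

# add output layer
W = tf.Variable(tf.random_normal([1, 1]))
b = tf.Variable(tf.zeros([1, 1]) + 0.1)
prediction = tf.matmul(xs, W) + b

# the loss between prediction and real data
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
tf.summary.scalar('loss', loss)
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

# summary merge & summary writer
sess = tf.Session()
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter('logs/', sess.graph)

# initialize all variables
sess.run(tf.global_variables_initializer())

# run the training loop
for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        writer.add_summary(sess.run(merged, feed_dict={xs: x_data, ys: y_data}), i)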

III. Commonly used functions

tf.nn.embedding_lookup

Looks up the corresponding word vectors in an embedding matrix by index.
The demo below shows how it works:

#!/usr/bin/env python
# coding=utf-8
import tensorflow as tf
import numpy as np

input_ids = tf.placeholder(dtype=tf.int32, shape=[None])

embedding = tf.Variable(np.identity(5, dtype=np.int32))
input_embedding = tf.nn.embedding_lookup(embedding, input_ids)

sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
print(embedding.eval())
print(sess.run(input_embedding, feed_dict={input_ids: [1, 2, 3, 0, 3, 2, 1]}))


The code first uses a placeholder to define an unknown variable input_ids that holds the indices, plus a known variable embedding, a 5*5 identity matrix.
Simply put, the lookup gathers the rows of embedding selected by the ids in input_ids. For example, with input_ids=[1, 3, 4], the rows at indices 1, 3, and 4 are collected into a matrix and returned.
The result:


embedding = [[1 0 0 0 0]
             [0 1 0 0 0]
             [0 0 1 0 0]
             [0 0 0 1 0]
             [0 0 0 0 1]]
input_embedding = [[0 1 0 0 0]
                   [0 0 1 0 0]
                   [0 0 0 1 0]
                   [1 0 0 0 0]
                   [0 0 0 1 0]
                   [0 0 1 0 0]
                   [0 1 0 0 0]]

In particular, if input_ids is fed in the following format:

input_embedding = tf.nn.embedding_lookup(embedding, input_ids)
print(sess.run(input_embedding, feed_dict={input_ids:[[1, 2], [2, 1], [3, 3]]}))

the output takes the following form:

[[[0 1 0 0 0]
  [0 0 1 0 0]]
 [[0 0 1 0 0]
  [0 1 0 0 0]]
 [[0 0 0 1 0]
  [0 0 0 1 0]]]

Comparing the two results, this is just like indexing an np.array with an array of indices. One detail to note: the dtype of the returned tensor matches the dtype of the tensor being looked up, regardless of the dtype of ids.
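To illustrate that last point, a small sketch (the values are made up): int32 ids looked up in a float32 table come back as float32.

import numpy as np
import tensorflow as tf

params = tf.constant(np.arange(10, dtype=np.float32).reshape(5, 2))  # float32 table
ids = tf.constant([0, 4], dtype=tf.int32)
looked_up = tf.nn.embedding_lookup(params, ids)
print(looked_up.dtype)  # float32 -- follows params, not ids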

tf.variable_scope() and tf.get_variable()

Reference links: 1. Sharing Variables | 2. Variable sharing in TensorFlow
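Since this section only links out for details, here is a minimal sketch of the sharing mechanism (the scope and variable names are illustrative): tf.get_variable creates a variable on first use, and with reuse=True the same scope hands back the existing variable instead of creating a new one.

import tensorflow as tf

def linear(x):
    # created once inside the scope, then shared on reuse
    W = tf.get_variable('W', shape=[2, 2],
                        initializer=tf.random_normal_initializer())
    return tf.matmul(x, W)

with tf.variable_scope('shared'):
    y1 = linear(tf.ones([1, 2]))
with tf.variable_scope('shared', reuse=True):
    y2 = linear(tf.ones([1, 2]))  # reuses 'shared/W'; no new variable is created

Without reuse=True, the second call would raise an error because 'shared/W' already exists.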

tf.device()

Normally, if your TensorFlow is the GPU build and your machine has a qualifying graphics card, the model runs on the GPU by default with no extra configuration; the runtime log indicates the device placement.
To switch to the CPU, call tf.device(device_name), where device_name looks like /cpu:0; the 0 is a device index. TF does not distinguish CPU device indices, so 0 is fine. GPU indices do matter: /gpu:0 and /gpu:1 denote two different cards.
In some cases, even when running the model on a GPU, we store certain Tensors in main memory because they are too large for the GPU's memory; RAM is usually far larger than VRAM, so we explicitly pin them to the CPU device. You will see this pattern in some code, e.g.:

with tf.device('/cpu:0'):
    build_CNN()  # this CNN's Tensors now live in main memory, not GPU memory

Note that while this eases the pressure on GPU memory, transferring data from main memory to the GPU is very slow, so it often slows things down overall.
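As a small sketch (assuming a GPU build), log_device_placement makes the session print which device each op was placed on, so you can verify the effect of tf.device:

import tensorflow as tf

with tf.device('/cpu:0'):
    a = tf.constant([1.0, 2.0], name='a')  # pinned to the CPU / main memory
b = a * 2  # placed automatically (on the GPU if one is available)

config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(b))  # the log lists the device chosen for each op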
