"TensorFlow from Basics to Practice" 02: Neural Network Implementation and Classification with Convolutional Neural Networks

Diligence is the path through the mountain of books; hard work is the boat across the boundless sea of learning.

Starlight never fails those who press on.

1. Neural Network Classification Task

1.1 Basic Structure

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("data/", one_hot=True)

Extracting data/train-images-idx3-ubyte.gz
Extracting data/train-labels-idx1-ubyte.gz
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz

[Figure: single-layer neural network]

A neural network is a layered structure: each layer helps with feature extraction, searching for the most useful intermediate features.

1.2 Building the Network

Parameter settings

The hidden layer has 50 units, meaning the first layer maps the 784 input pixels to 50 features.

numClasses = 10 
inputSize = 784 
numHiddenUnits = 50 
trainingIterations = 10000 
batchSize = 100 
X = tf.placeholder(tf.float32, shape = [None, inputSize])
y = tf.placeholder(tf.float32, shape = [None, numClasses])

Parameter initialization
A two-layer network needs two weight matrices and two bias vectors (W1, B1 and W2, B2).

W1 = tf.Variable(tf.truncated_normal([inputSize, numHiddenUnits], stddev=0.1))
B1 = tf.Variable(tf.constant(0.1, shape=[numHiddenUnits]))  # one bias per hidden unit
W2 = tf.Variable(tf.truncated_normal([numHiddenUnits, numClasses], stddev=0.1))
B2 = tf.Variable(tf.constant(0.1, shape=[numClasses]))      # one bias per class

Network structure
Define each layer's output. The hidden layer uses the ReLU activation; the final layer outputs raw logits, since the softmax is applied inside the loss function.

hiddenLayerOutput = tf.matmul(X, W1) + B1
hiddenLayerOutput = tf.nn.relu(hiddenLayerOutput)
finalOutput = tf.matmul(hiddenLayerOutput, W2) + B2  # raw logits: no ReLU here, the loss applies the softmax

1.3 Iterative Training

Specify the loss function. A classification task that ends with a softmax is a special case: it uses the classification cross-entropy, tf.nn.softmax_cross_entropy_with_logits, where labels is the ground truth and logits is the model's prediction.

Then specify an optimizer to minimize that loss.

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels = y, logits = finalOutput))
opt = tf.train.GradientDescentOptimizer(learning_rate = .1).minimize(loss)
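
To see concretely what this loss computes, here is a minimal NumPy sketch of softmax cross-entropy for a single example (an illustration added for this write-up, not part of the original code):

import numpy as np

def softmax_cross_entropy(logits, one_hot_labels):
    shifted = logits - np.max(logits)                   # shift for numerical stability
    probs = np.exp(shifted) / np.sum(np.exp(shifted))   # softmax probabilities
    return -np.sum(one_hot_labels * np.log(probs))      # cross-entropy against the true label

print(softmax_cross_entropy(np.array([2.0, 1.0, 0.1]),
                            np.array([1.0, 0.0, 0.0])))  # ~0.417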

Compute the accuracy: tf.argmax recovers the predicted and true class indices, tf.equal compares them, and casting to float and averaging gives the fraction of correct predictions.

correct_prediction = tf.equal(tf.argmax(finalOutput,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
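
For intuition, the same computation on a toy batch in NumPy (an added illustration):

import numpy as np
preds = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])  # model outputs for 3 examples
truth = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 0.0]])  # one-hot labels
correct = np.argmax(preds, 1) == np.argmax(truth, 1)    # [True, True, False]
print(correct.astype(np.float32).mean())                # 0.6666667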

Compared with the earlier logistic regression model, this network adds a hidden layer and an activation function.

sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)

for i in range(trainingIterations):
    batch = mnist.train.next_batch(batchSize)
    batchInput = batch[0]
    batchLabels = batch[1]
    _, trainingLoss = sess.run([opt, loss], feed_dict={X: batchInput, y: batchLabels})
    if i%1000 == 0:
        trainAccuracy = accuracy.eval(session=sess, feed_dict={X: batchInput, y: batchLabels})
        print ("step %d, training accuracy %g"%(i, trainAccuracy))

Training output:

step 0, training accuracy 0.13
step 1000, training accuracy 0.79
step 2000, training accuracy 0.83
step 3000, training accuracy 0.88
step 4000, training accuracy 0.91
step 5000, training accuracy 0.87
step 6000, training accuracy 0.89
step 7000, training accuracy 0.84
step 8000, training accuracy 0.89
step 9000, training accuracy 1

1.4 Defining a Network with Two Hidden Layers

Make sure the shapes of the matrices being multiplied line up.


numHiddenUnitsLayer2 = 100
trainingIterations = 10000

X = tf.placeholder(tf.float32, shape = [None, inputSize])
y = tf.placeholder(tf.float32, shape = [None, numClasses])

W1 = tf.Variable(tf.random_normal([inputSize, numHiddenUnits], stddev=0.1))
B1 = tf.Variable(tf.constant(0.1, shape=[numHiddenUnits]))
W2 = tf.Variable(tf.random_normal([numHiddenUnits, numHiddenUnitsLayer2], stddev=0.1))
B2 = tf.Variable(tf.constant(0.1, shape=[numHiddenUnitsLayer2]))
W3 = tf.Variable(tf.random_normal([numHiddenUnitsLayer2, numClasses], stddev=0.1))
B3 = tf.Variable(tf.constant(0.1, shape=[numClasses]))

hiddenLayerOutput = tf.matmul(X, W1) + B1
hiddenLayerOutput = tf.nn.relu(hiddenLayerOutput)
hiddenLayer2Output = tf.matmul(hiddenLayerOutput, W2) + B2
hiddenLayer2Output = tf.nn.relu(hiddenLayer2Output)
finalOutput = tf.matmul(hiddenLayer2Output, W3) + B3

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels = y, logits = finalOutput))
opt = tf.train.GradientDescentOptimizer(learning_rate = .1).minimize(loss)

correct_prediction = tf.equal(tf.argmax(finalOutput,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)

for i in range(trainingIterations):
    batch = mnist.train.next_batch(batchSize)
    batchInput = batch[0]
    batchLabels = batch[1]
    _, trainingLoss = sess.run([opt, loss], feed_dict={X: batchInput, y: batchLabels})
    if i%1000 == 0:
        train_accuracy = accuracy.eval(session=sess, feed_dict={X: batchInput, y: batchLabels})
        print ("step %d, training accuracy %g"%(i, train_accuracy))

testInputs = mnist.test.images
testLabels = mnist.test.labels
acc = accuracy.eval(session=sess, feed_dict = {X: testInputs, y: testLabels})
print("testing accuracy: {}".format(acc))

step 0, training accuracy 0.1
step 1000, training accuracy 0.97
step 2000, training accuracy 0.98
step 3000, training accuracy 1
step 4000, training accuracy 0.99
step 5000, training accuracy 1
step 6000, training accuracy 0.99
step 7000, training accuracy 1
step 8000, training accuracy 0.99
step 9000, training accuracy 1
testing accuracy: 0.9700999855995178

For a plain fully connected network like this, adding layers helps only up to a point, roughly the first eight layers; beyond that, performance stops improving and degrades.

2. Convolutional Neural Networks

The convolution and pooling layers extract features from the raw image.

The fully connected layers then combine the extracted features and use the combined features to perform the classification.

With 'SAME' padding and stride 1, convolution does not change the spatial size of the image, while pooling downsamples it (verified in the shape check after the pooling code below).

import tensorflow as tf
import random
import numpy as np
import matplotlib.pyplot as plt
import datetime
%matplotlib inline

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("data/", one_hot=True)

Extracting data/train-images-idx3-ubyte.gz
Extracting data/train-labels-idx1-ubyte.gz
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz

2.1 Convolution

  • Each channel is convolved separately, then the results are combined
  • The kernel slides across the spatial plane

The input to a convolutional network must be a volume (height × width × channels); here it is 28×28×1. Roughly square inputs generally work best.

Define the input and labels

tf.reset_default_graph() 
sess = tf.InteractiveSession()
x = tf.placeholder("float", shape = [None, 28,28,1]) #shape in CNNs is always None x height x width x color channels
y_ = tf.placeholder("float", shape = [None, 10]) #shape is always None x number of classes

Define the convolution kernel

The first convolution kernel is set to [5, 5, 1, 32]: [5, 5] is the kernel's spatial shape; 1 is the number of input channels (1 for a grayscale input, 3 for color), which must match the input image; 32 is the number of filters, producing 32 output feature maps.

The convolution computes inner products over and over; a bias parameter must also be added. Here it is initialized to a small constant (0.1), with one value per output channel.

W_conv1 = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))#shape is filter x filter x input channels x output channels
b_conv1 = tf.Variable(tf.constant(.1, shape = [32])) #shape of the bias just has to match output channels of the filter

2.2 Pooling: take the maximum within each region


Run the convolution with tf.nn.conv2d: input is the input tensor, filter is the kernel, and strides gives the step size in each of the four dimensions [batch, height, width, channels]. The first and last entries are normally 1 (never skip examples or channels); the middle two are the spatial strides for height and width, and since images are usually square they are kept equal, typically both 1 or both 2. padding='SAME' pads the borders when the kernel would run past the edge; this is rarely changed.

After the convolution's inner products, add the bias term, then apply the activation function.
Each convolution is followed by one activation.

Specify the max-pooling layer: ksize uses the same [batch, height, width, channels] layout, with the middle two values giving the window over which the maximum is taken. Setting the middle two values of strides to 2 moves the window two pixels at a time, halving the spatial size.

h_conv1 = tf.nn.conv2d(input=x, filter=W_conv1, strides=[1, 1, 1, 1], padding='SAME') + b_conv1
h_conv1 = tf.nn.relu(h_conv1)
h_pool1 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
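
As a quick check of the size claims above, printing the static shapes of the tensors just defined shows the 'SAME'-padded stride-1 convolution preserving 28×28 while the 2×2 pooling halves it (these two print lines are an added illustration):

print(h_conv1.get_shape())  # (?, 28, 28, 32): 'SAME' padding, stride 1 keeps the spatial size
print(h_pool1.get_shape())  # (?, 14, 14, 32): 2x2 max-pooling with stride 2 halves it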

Wrap these operations up as helper functions:

def conv2d(x, W):
    return tf.nn.conv2d(input=x, filter=W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

2.3 Second Convolution and Pooling Layer

#Second Conv and Pool Layers
W_conv2 = tf.Variable(tf.truncated_normal([5, 5, 32, 64], stddev=0.1))
b_conv2 = tf.Variable(tf.constant(.1, shape = [64]))
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)

2.4 Defining the Fully Connected Layers

First, work out what the features look like after the final convolution and pooling step.

The final pooling output is 7×7×64; the weight matrix W converts it into a 1024-dimensional feature vector.

Use reshape to stretch the 7×7×64 volume into a single long row, i.e. shape [-1, 7*7*64].

Then set up the fully connected layer architecture on top.

Dropout kills off a fraction of the neurons at each training step to prevent overfitting. It is added after the fully connected layer and requires a keep probability: 0.5 during training here, and 1.0 during evaluation (see the training loop below).


#First Fully Connected Layer
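# after two rounds of 2x2 max-pooling, the 28x28 input is 7x7 with 64 channels, hence 7*7*64 features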
W_fc1 = tf.Variable(tf.truncated_normal([7 * 7 * 64, 1024], stddev=0.1))
b_fc1 = tf.Variable(tf.constant(.1, shape = [1024]))
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

#Dropout Layer
keep_prob = tf.placeholder("float")
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

Then construct the second fully connected layer, mapping the 1024 features down to 10 dimensions, one for each class.

#Second Fully Connected Layer
W_fc2 = tf.Variable(tf.truncated_normal([1024, 10], stddev=0.1))
b_fc2 = tf.Variable(tf.constant(.1, shape = [10]))

#Final Layer
y = tf.matmul(h_fc1_drop, W_fc2) + b_fc2

2.5 Loss Function and an Adaptive Optimizer

The AdamOptimizer adapts its learning rate as training progresses (the TF1 default is 0.001), whereas gradient descent requires a manually chosen learning rate that stays fixed throughout. This is why AdamOptimizer is the more common choice.

crossEntropyLoss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels = y_, logits = y))
trainStep = tf.train.AdamOptimizer().minimize(crossEntropyLoss)
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

2.6 Iterative Training

Each batch must be reshaped to the [batchSize, 28, 28, 1] input format:

sess.run(tf.global_variables_initializer())

batchSize = 50
for i in range(1000):
    batch = mnist.train.next_batch(batchSize)
    trainingInputs = batch[0].reshape([batchSize,28,28,1])
    trainingLabels = batch[1]
    if i%100 == 0:
        trainAccuracy = accuracy.eval(session=sess, feed_dict={x:trainingInputs, y_: trainingLabels, keep_prob: 1.0})
        print ("step %d, training accuracy %g"%(i, trainAccuracy))
    trainStep.run(session=sess, feed_dict={x: trainingInputs, y_: trainingLabels, keep_prob: 0.5})

step 0, training accuracy 0.14
step 100, training accuracy 0.94
step 200, training accuracy 0.96
step 300, training accuracy 0.98
step 400, training accuracy 0.96
step 500, training accuracy 1
step 600, training accuracy 0.98
step 700, training accuracy 0.98
step 800, training accuracy 1
step 900, training accuracy 0.98

3. CIFAR Classification Task

Read in the data

Image preprocessing
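
The write-up breaks off here. As a minimal sketch of the two steps just named, assuming the standard CIFAR-10 "python version" batch files (the file path and helper name below are illustrative, not from the original):

import numpy as np
import pickle

def load_cifar_batch(path):
    # each CIFAR-10 python batch is a pickled dict holding
    # b'data' (10000 x 3072 uint8 rows) and b'labels' (10000 ints)
    with open(path, 'rb') as f:
        batch = pickle.load(f, encoding='bytes')
    # each row is the R, G, B planes of a 32x32 image; rearrange to NHWC
    images = batch[b'data'].reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1)
    labels = np.array(batch[b'labels'])
    return images, labels

images, labels = load_cifar_batch('cifar-10-batches-py/data_batch_1')
images = images.astype(np.float32) / 255.0  # simple preprocessing: scale pixels to [0, 1]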
