I originally wanted to hand-write gradient descent from the update formulas derived in my machine learning textbook, but the results were poor; after switching to TensorFlow's built-in gradient descent optimizer it worked. I also found that squared error as the cost function performs badly on MNIST: training accuracy hovered around 0.1 the whole time. On MNIST the loss has to be cross-entropy. There are several rules of thumb for the number of hidden nodes, and I picked a value in that neighborhood. The MNIST code comes first.
import numpy
import input_data
import tensorflow as tf
mnist = input_data.read_data_sets("MNIST_data/",one_hot=True)
input_size = 784
out_size = 10
hide_note = 60
x = tf.placeholder("float", [None, input_size])
y = tf.placeholder("float", [None, out_size])
v = tf.Variable(tf.random_normal([input_size, hide_note], stddev=0.1))
b = tf.Variable(tf.zeros([hide_note])+0.1)
w = tf.Variable(tf.random_normal([hide_note, out_size], stddev=0.1))
sita = tf.Variable(tf.zeros([out_size])+0.1)
a = tf.matmul(x,v)+b
a = tf.nn.relu(a)
y_ = tf.matmul(a, w) + sita
y_ = tf.nn.relu(y_)  # note: softmax_cross_entropy_with_logits normally expects raw logits; the ReLU here is kept from my original run
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=y_))
opt = tf.train.GradientDescentOptimizer(0.05).minimize(loss)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for i in range(5000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    _, new_loss = sess.run([opt, loss], feed_dict={x: batch_xs, y: batch_ys})
accuracy = 0
for i in range(1000):
    batch_xs, batch_ys = mnist.test.next_batch(1)
    indexs = sess.run(y_, feed_dict={x: batch_xs})
    answer = numpy.argmax(indexs)
    # print(indexs)
    print("test:", i, "predicted:", answer, "actual:", numpy.argmax(batch_ys))
    if answer == numpy.argmax(batch_ys):
        accuracy = accuracy + 1
print(accuracy / 1000)
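As a sanity check on the loss used above, what tf.nn.softmax_cross_entropy_with_logits computes for a one-hot label can be reproduced by hand in numpy. This is my own toy example with made-up logits, not real network output:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # shift by the max for numerical stability
    return e / e.sum()

def cross_entropy(label, logits):
    # cross-entropy between a one-hot label and the softmax of the logits
    return -np.sum(label * np.log(softmax(logits)))

logits = np.array([2.0, 1.0, 0.1])   # made-up network output
label = np.array([1.0, 0.0, 0.0])    # one-hot ground truth
print(cross_entropy(label, logits))  # ~0.417
```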
A few notes: do not initialize all the weights to zero; map each node's output through an activation function; the learning rate is 0.05, and a learning rate no larger than 0.1 is generally recommended. While debugging I learned how to use several TensorFlow matrix-operation functions and their pitfalls. One worth mentioning: in TensorFlow a vector (1-D array) cannot be multiplied with a matrix directly; you can either reshape it into a matrix first, or do an elementwise multiply followed by a sum.
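Both workarounds can be illustrated with a small numpy sketch (my own toy values; in TensorFlow, tf.reshape and tf.reduce_sum play the corresponding roles):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])            # a 1-D vector
M = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])               # a 3x2 matrix

# Option 1: reshape the vector into a 1x3 matrix, then matrix-multiply
r1 = np.matmul(v.reshape(1, -1), M)      # shape (1, 2)

# Option 2: elementwise multiply (broadcast), then sum over the shared axis
r2 = (v[:, None] * M).sum(axis=0)        # shape (2,)

print(r1.ravel(), r2)                    # both give [4. 5.]
```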
Next, exercise 5.5 from the machine learning textbook, with squared error as the cost function. Note the conversion of discrete values: here I convert them to vectors, i.e. a discrete attribute with three possible values becomes a three-dimensional unit vector, with a 1 in the dimension of the observed value and 0 elsewhere. Since the watermelon dataset has few examples I did the conversion by hand; the last column is the label.
0.697 0.460 1 0 0 1 0 0 1 0 0 1 0 0 1 0 0 1 0 1
0.774 0.376 0 1 0 1 0 0 0 1 0 1 0 0 1 0 0 1 0 1
0.634 0.264 0 1 0 1 0 0 1 0 0 1 0 0 1 0 0 1 0 1
0.608 0.318 1 0 0 1 0 0 0 1 0 1 0 0 1 0 0 1 0 1
0.556 0.215 0 0 1 1 0 0 1 0 0 1 0 0 1 0 0 1 0 1
0.403 0.237 1 0 0 0 0 1 1 0 0 1 0 0 0 1 0 0 1 1
0.481 0.149 0 1 0 0 0 1 1 0 0 0 1 0 0 1 0 0 1 1
0.437 0.211 0 1 0 0 0 1 1 0 0 1 0 0 0 1 0 1 0 1
0.666 0.091 0 1 0 0 0 1 0 1 0 0 1 0 0 1 0 1 0 0
0.243 0.267 1 0 0 0 1 0 0 0 1 1 0 0 0 0 1 0 1 0
0.245 0.057 0 0 1 0 1 0 0 0 1 0 0 1 0 0 1 1 0 0
0.343 0.099 0 0 1 1 0 0 1 0 0 0 0 1 0 0 1 0 1 0
0.639 0.161 1 0 0 0 0 1 1 0 0 0 1 0 1 0 0 1 0 0
0.657 0.198 0 0 1 0 0 1 0 1 0 0 1 0 1 0 0 1 0 0
0.360 0.370 0 1 0 0 0 1 1 0 0 1 0 0 0 1 0 0 1 0
0.593 0.042 0 0 1 1 0 0 1 0 0 0 0 1 0 0 1 1 0 0
0.719 0.103 1 0 0 1 0 0 0 1 0 0 1 0 0 1 0 1 0 0
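The one-hot conversion described above can be sketched in numpy; the attribute values here are illustrative English stand-ins, not the actual watermelon columns:

```python
import numpy as np

def one_hot(value, values):
    """Encode one discrete attribute value as a unit vector."""
    vec = np.zeros(len(values))
    vec[values.index(value)] = 1.0
    return vec

# a hypothetical three-valued attribute, e.g. color
print(one_hot("black", ["green", "black", "white"]))  # [0. 1. 0.]
```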
import numpy
import tensorflow as tf
test_data = numpy.loadtxt("test_data.txt")
train_data=test_data[:, 0:19]
train_label=test_data[:, 19:]
input_size = 19
out_size = 1
hide_note = 8
x = tf.placeholder("float", [None, input_size])
y = tf.placeholder("float", [None, out_size])
v = tf.Variable(tf.random_normal([input_size, hide_note], stddev=0.1))
b = tf.Variable(tf.zeros([hide_note])+0.1)
w = tf.Variable(tf.random_normal([hide_note, out_size], stddev=0.1))
sita = tf.Variable(tf.zeros([out_size])+0.1)
a = tf.matmul(x,v)+b
a = tf.nn.relu(a)
y_ = tf.matmul(a, w)+sita
y_ = tf.nn.relu(y_)
loss = 0.5 * tf.reduce_sum((y_ - y) * (y_ - y), [0, 1])  # squared-error cost from the textbook derivation
opt = tf.train.GradientDescentOptimizer(0.05).minimize(loss)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for i in range(1000):  # 1000 training iterations
    _, new_loss = sess.run([opt, loss], feed_dict={x: train_data, y: train_label})
indexs = sess.run(y_, feed_dict={x: train_data})
print("predicted:", indexs, "actual:", train_label)
You can check the final output against the labels yourself; accuracy on the training set reaches 100%. This program was adapted from the MNIST one above, and I learned a lot in the process of adapting it. Later I will look into where my hand-written gradient descent went wrong, and next time implement gradient descent by hand on the watermelon dataset.
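For reference, a hand-written gradient-descent step of the kind the textbook derives (squared error with a sigmoid output) can look like the sketch below. This is my own toy example on the OR function, not the watermelon data, and it shows the standard update rule rather than diagnosing my original attempt:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy data: 4 samples, 2 features, binary label (the OR function)
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
t = np.array([[0.0], [1.0], [1.0], [1.0]])

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(2, 1))   # small random init, not all zeros
b = np.full((1,), 0.1)
eta = 0.5                             # learning rate

for _ in range(2000):
    y = sigmoid(X @ w + b)            # forward pass
    grad = (y - t) * y * (1 - y)      # dE/dz for squared error + sigmoid
    w -= eta * (X.T @ grad)           # gradient step on the weights
    b -= eta * grad.sum(axis=0)       # gradient step on the bias

print((sigmoid(X @ w + b) > 0.5).astype(int).ravel())
```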