Implementing a Logistic Regression Classifier in TensorFlow

TensorFlow is Google's deep-learning framework. I had read through the official documentation before, but since "what you learn on paper is always shallow", I decided to implement things myself and write down the problems I ran into along the way, so I can look them up later. This post covers a logistic regression (softmax) classifier; the underlying machine-learning theory is simple, so I won't repeat it here.

Code:

import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
 
    
def train(batch_size=100, lr=0.5, iter_num=1000):
    mnist = input_data.read_data_sets('MNIST_data', one_hot=True)  # Datasets object with train, validation and test splits
    
    x = tf.placeholder(tf.float32, shape=(None, 28*28))
    y = tf.placeholder(tf.float32, shape=(None, 10))
    
    w = tf.Variable(tf.truncated_normal(shape=(28*28, 10), stddev=0.5), name='Weight')
    b = tf.Variable(tf.zeros(shape=[10]), name='Bias')
    variable_initiation = tf.global_variables_initializer()  # initialize_all_variables() is deprecated
    
    probability = tf.nn.softmax(tf.matmul(x, w) + b)
    # Cross-entropy: sum over the 10 classes, then average over the batch
    loss = tf.reduce_mean(-tf.reduce_sum(y * tf.log(probability), axis=1))
    trainer = tf.train.GradientDescentOptimizer(learning_rate=lr).minimize(loss)
    
    sess = tf.Session()
    sess.run(variable_initiation)
    
    for step in range(iter_num):
        batch_xs, batch_ys = mnist.train.next_batch(batch_size=batch_size)
        sess.run(trainer, feed_dict={x: batch_xs, y: batch_ys})
    # Evaluation: compare predicted and true class indices on the test set
    comparison = tf.equal(tf.argmax(probability, axis=1), tf.argmax(y, axis=1))
    accuracy = tf.reduce_mean(tf.cast(comparison, dtype=tf.float32))
    accuracy = sess.run(fetches=accuracy, feed_dict={x:mnist.test.images, y:mnist.test.labels})
    print('The accuracy is %.3f' % accuracy)
    sess.close()
    
if __name__ == '__main__':
    train()
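
One caveat about the loss above: tf.log(probability) returns -inf whenever a softmax output underflows to 0. A numerically safer alternative (a sketch of a drop-in replacement for the probability/loss lines, not the original formulation) computes the cross-entropy directly from the logits:

    logits = tf.matmul(x, w) + b
    probability = tf.nn.softmax(logits)  # still needed for the accuracy computation
    # Fuses softmax and log internally, avoiding log(0)
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))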

Notes: (1) There are two ways to fetch the output value of a tensor: call the symbolic tensor's eval() method, or pass the tensor to Session.run(); both require a live session.
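
A minimal sketch of both approaches (TF 1.x; the tensor and values are illustrative):

import tensorflow as tf

a = tf.constant([1.0, 2.0, 3.0])
mean_a = tf.reduce_mean(a)

sess = tf.Session()
print(mean_a.eval(session=sess))  # method 1: Tensor.eval(), session passed explicitly
print(sess.run(mean_a))           # method 2: Session.run(), can fetch several tensors at once
sess.close()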

(2) For y = tf.reduce_mean(x), the result y has the same dtype as the input x, so if x is an int32 tensor of {0, 1} values, the mean is computed in integer arithmetic and truncates to 0. This is why the code above casts the boolean comparison result to float32 before averaging.
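
A quick demonstration of the truncation (values are illustrative):

import tensorflow as tf

flags = tf.constant([1, 0, 1, 1], dtype=tf.int32)  # e.g. per-sample correct/incorrect flags
sess = tf.Session()
print(sess.run(tf.reduce_mean(flags)))                       # 0, integer mean truncates 3/4
print(sess.run(tf.reduce_mean(tf.cast(flags, tf.float32))))  # 0.75
sess.close()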

