2018-04-01 Building a Neural Network

The goal is to fit a curve such as y = x^2 - 0.5.
The rough structure looks like this:


(Figure: network structure — 1 input node, a 10-unit hidden layer, 1 output node)
import tensorflow as tf
import numpy as np


def add_layer(inputs, in_size, out_size, activation_function=None):
    # Weights: in_size x out_size matrix (rows x cols), randomly initialized
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    # biases: one row of out_size values, initialized slightly above zero
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs

# Training data: 300 points of y = x^2 - 0.5 plus Gaussian noise
x_data = np.linspace(-1, 1, 300, dtype=np.float32)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape).astype(np.float32)
y_data = np.square(x_data) - 0.5 + noise

# Placeholders for the data that will be fed in at run time
xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])

# Hidden layer: 1 -> 10 with ReLU; output layer: 10 -> 1, linear
l1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
prediction = add_layer(l1, 10, 1, activation_function=None)

# Mean squared error over the batch
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))

train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

init = tf.global_variables_initializer()  # important: variables must be initialized

sess = tf.Session()
sess.run(init)

for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))


loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))

Simply put, reduce_sum is a dimension-reducing sum. reduction_indices specifies whether to sum over rows or columns: 0 sums vertically (down each column), 1 sums horizontally (across each row), and by default it sums over all elements.
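As a quick illustration, here is a minimal sketch using the same TF 1.x API; the 2x3 matrix below is just made-up data, not part of the tutorial:

import tensorflow as tf

m = tf.constant([[1., 1., 1.],
                 [2., 2., 2.]])
total = tf.reduce_sum(m)                            # 9.0: sums every element (default)
col_sums = tf.reduce_sum(m, reduction_indices=[0])  # [3. 3. 3.]: down each column
row_sums = tf.reduce_sum(m, reduction_indices=[1])  # [3. 6.]: across each row

with tf.Session() as sess:
    print(sess.run([total, col_sums, row_sums]))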

sess.run(train_step, feed_dict={xs: x_data, ys: y_data})

Here feed_dict feeds the real data into the two placeholders xs and ys.
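A standalone sketch of how a placeholder and feed_dict work together (the placeholder a and the doubling op are hypothetical, purely for illustration):

import tensorflow as tf

a = tf.placeholder(tf.float32, [None, 1])  # shape [None, 1]: any number of rows, 1 column
b = a * 2

with tf.Session() as sess:
    # The placeholder has no value until feed_dict supplies one at run time
    print(sess.run(b, feed_dict={a: [[1.], [2.]]}))  # [[2.] [4.]]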
