tensorflow03

Basic TensorFlow operations

[Image 1]

Tensor concatenation and splitting

[Images 2–4]
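The slides above are not reproduced here, so as a stand-in, a minimal sketch of the two merge ops this topic covers: tf.concat joins along an existing axis, tf.stack creates a new one. The score-table shapes are illustrative assumptions, not from the slides:

import tensorflow as tf

a = tf.ones([4, 35, 8])                  # e.g. scores: 4 classes, 35 students, 8 subjects
b = tf.ones([2, 35, 8])                  # 2 more classes
print(tf.concat([a, b], axis=0).shape)   # [6, 35, 8]: other dims must already match

d = tf.ones([4, 35, 8])
print(tf.stack([a, d], axis=0).shape)    # [2, 4, 35, 8]: new axis; shapes must match exactly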

Splitting

[Images 5–6]
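Again as a hedged sketch of the splitting ops (tf.unstack splits an axis completely, tf.split allows even or uneven pieces); the shapes are illustrative:

import tensorflow as tf

x = tf.ones([2, 35, 8])

res = tf.unstack(x, axis=0)                              # list of 2 tensors, each [35, 8]
print(len(res), res[0].shape)

res = tf.split(x, num_or_size_splits=2, axis=2)          # two [2, 35, 4] pieces
print(len(res), res[0].shape)

res = tf.split(x, num_or_size_splits=[2, 2, 4], axis=2)  # uneven split along axis 2
print([r.shape for r in res])                            # [2,35,2], [2,35,2], [2,35,4]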

Data statistics

[Images 7–10]
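A minimal sketch of the statistics ops this section covers: norms, reductions, argmax, unique values, and the usual tf.equal accuracy idiom. The tensors and labels here are made up for illustration:

import tensorflow as tf

a = tf.random.normal([4, 10])

print(tf.norm(a))                    # L2 norm over all entries
print(tf.norm(a, ord=1, axis=1))     # L1 norm of each row

print(tf.reduce_min(a), tf.reduce_max(a), tf.reduce_mean(a))
print(tf.argmax(a, axis=1))          # column index of each row's maximum

b = tf.constant([4, 2, 2, 4, 3])
print(tf.unique(b))                  # unique values plus the index mapping back

# accuracy idiom: compare predictions against labels, count matches
y = tf.constant([0, 1, 2, 3], dtype=tf.int64)
correct = tf.reduce_sum(tf.cast(tf.equal(tf.argmax(a, axis=1), y), tf.int32))
print(int(correct), '/ 4 correct')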

Sorting

[Images 11–17]
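A minimal sketch of the sorting ops: tf.sort / tf.argsort for full sorts and tf.math.top_k for the largest k entries (the usual basis for top-k accuracy). Shapes are illustrative:

import tensorflow as tf

a = tf.random.shuffle(tf.range(5))
print(tf.sort(a, direction='DESCENDING'))      # values, largest first
print(tf.argsort(a, direction='DESCENDING'))   # indices that would sort a

b = tf.random.uniform([3, 4], maxval=10, dtype=tf.int32)
res = tf.math.top_k(b, k=2)                    # top-2 entries of each row
print(res.values)
print(res.indices)                             # indices feed the top-k accuracy metric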

Padding and copying

[Images 18–20]
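A minimal sketch of tf.pad, tf.tile, and tf.broadcast_to; the image-padding shapes are the usual MNIST-style illustration, not taken from the slides:

import tensorflow as tf

a = tf.reshape(tf.range(9), [3, 3])
print(tf.pad(a, [[1, 0], [0, 1]]))       # one zero row on top, one zero column on the right

x = tf.random.normal([4, 28, 28, 1])     # pad images 2 pixels on every spatial side
print(tf.pad(x, [[0, 0], [2, 2], [2, 2], [0, 0]]).shape)    # [4, 32, 32, 1]

print(tf.tile(a, [1, 2]).shape)          # [3, 6]: columns physically repeated twice
print(tf.broadcast_to(a, [2, 3, 3]).shape)   # [2, 3, 3]: virtual copy, no data duplicated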

Tensor clipping

[Image 21]
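Before the full training script, a minimal sketch of the clipping ops it relies on (values here are illustrative):

import tensorflow as tf

a = tf.range(10, dtype=tf.float32)
print(tf.clip_by_value(a, 2, 8))         # elementwise clamp into [2, 8]
print(tf.nn.relu(a - 5))                 # relu is just clipping at 0 from below

g = tf.random.normal([2, 2]) * 10
print(tf.norm(g))
print(tf.norm(tf.clip_by_norm(g, 15)))   # rescaled so the L2 norm is at most 15

# tf.clip_by_global_norm rescales a whole *list* of gradients by one shared
# factor, keeping their relative directions intact -- the script below uses it.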

import tensorflow as tf
from tensorflow.keras import datasets, optimizers
import os

os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
print(tf.__version__)

(x, y), _ = datasets.mnist.load_data()
x = tf.convert_to_tensor(x, dtype=tf.float32) / 50.  # /50, not /255: keeps inputs large so gradients grow enough to need clipping
y = tf.convert_to_tensor(y)
y = tf.one_hot(y, depth=10)
print('x:', x.shape, 'y:', y.shape)
train_db = tf.data.Dataset.from_tensor_slices((x,y)).batch(128).repeat(30)
x,y = next(iter(train_db))
print('sample:', x.shape, y.shape)
# print(x[0], y[0])



def main():

    # 784 => 512
    w1, b1 = tf.Variable(tf.random.truncated_normal([784, 512], stddev=0.1)), tf.Variable(tf.zeros([512]))
    # 512 => 256
    w2, b2 = tf.Variable(tf.random.truncated_normal([512, 256], stddev=0.1)), tf.Variable(tf.zeros([256]))
    # 256 => 10
    w3, b3 = tf.Variable(tf.random.truncated_normal([256, 10], stddev=0.1)), tf.Variable(tf.zeros([10]))



    optimizer = optimizers.SGD(learning_rate=0.01)  # 'lr' is deprecated in TF2


    for step, (x,y) in enumerate(train_db):

        # [b, 28, 28] => [b, 784]
        x = tf.reshape(x, (-1, 784))

        with tf.GradientTape() as tape:

            # layer1.
            h1 = x @ w1 + b1
            h1 = tf.nn.relu(h1)
            # layer2
            h2 = h1 @ w2 + b2
            h2 = tf.nn.relu(h2)
            # output
            out = h2 @ w3 + b3
            # out = tf.nn.relu(out)

            # compute loss
            # [b, 10] - [b, 10]
            loss = tf.square(y-out)
            # [b, 10] => [b]
            loss = tf.reduce_mean(loss, axis=1)
            # [b] => scalar
            loss = tf.reduce_mean(loss)



        # compute gradient
        grads = tape.gradient(loss, [w1, b1, w2, b2, w3, b3])
        # print('==before==')
        # for g in grads:
        #     print(tf.norm(g))
        
        # rescale all gradients jointly so their global L2 norm is at most 15
        grads, _ = tf.clip_by_global_norm(grads, 15)

        # print('==after==')
        # for g in grads:
        #     print(tf.norm(g))
        # update w' = w - lr*grad
        optimizer.apply_gradients(zip(grads, [w1, b1, w2, b2, w3, b3]))



        if step % 100 == 0:
            print(step, 'loss:', float(loss))




if __name__ == '__main__':
    main()
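Two notes on this script. Scaling the input by 1/50 rather than the usual 1/255 looks deliberate: the larger inputs make the early gradients big enough that tf.clip_by_global_norm(grads, 15) actually has something to clip (uncomment the before/after tf.norm prints to verify). And because global-norm clipping scales every gradient in the list by the same factor, the overall update direction is preserved, unlike clipping each tensor separately with tf.clip_by_norm.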

Advanced operations

[Image 22]
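A minimal sketch of the advanced ops this topic covers: where, gather_nd, boolean_mask, scatter_nd, and meshgrid. The tensors are illustrative:

import tensorflow as tf

a = tf.random.normal([3, 3])
mask = a > 0
print(tf.boolean_mask(a, mask))             # gather the entries where mask is True
indices = tf.where(mask)                    # coordinates of the True entries
print(tf.gather_nd(a, indices))             # same values, via explicit indices
print(tf.where(mask, a, tf.zeros_like(a)))  # keep positives, zero out the rest

# scatter_nd writes updates into an all-zero tensor of the given shape
print(tf.scatter_nd([[4], [3], [1]], [9., 10., 11.], [8]))

# meshgrid builds coordinate grids, e.g. to evaluate z = f(x, y) on a plane
pts = tf.linspace(-2., 2., 5)
px, py = tf.meshgrid(pts, pts)
print(px.shape, py.shape)                   # [5, 5] each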

Neural networks and fully-connected layers

Dataset loading

[Images 23–25]
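A minimal sketch of the loading pipeline: keras.datasets returns plain numpy arrays, and tf.data.Dataset wraps them with shuffle/map/batch (buffer and batch sizes here are illustrative):

import tensorflow as tf
from tensorflow.keras import datasets

(x, y), (x_test, y_test) = datasets.mnist.load_data()   # plain numpy arrays
print(x.shape, y.shape)                                  # (60000, 28, 28) (60000,)

def preprocess(x, y):
    x = tf.cast(x, tf.float32) / 255.                    # scale pixels to [0, 1]
    y = tf.one_hot(tf.cast(y, tf.int32), depth=10)
    return x, y

db = tf.data.Dataset.from_tensor_slices((x, y))
db = db.shuffle(10000).map(preprocess).batch(128)        # shuffle buffer, then batch
sample_x, sample_y = next(iter(db))
print(sample_x.shape, sample_y.shape)                    # [128, 28, 28] [128, 10]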

Fully-connected layers

[Images 26–28]
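A minimal sketch of the layers.Dense API, which packages the manual x @ w + b from the script above; the layer widths are illustrative:

import tensorflow as tf
from tensorflow.keras import layers, Sequential

x = tf.random.normal([4, 784])
fc = layers.Dense(512, activation='relu')    # out = relu(x @ kernel + bias)
print(fc(x).shape)                           # [4, 512]
print(fc.kernel.shape, fc.bias.shape)        # (784, 512) (512,)

model = Sequential([
    layers.Dense(256, activation='relu'),
    layers.Dense(128, activation='relu'),
    layers.Dense(10),                        # raw logits; no output activation here
])
model.build(input_shape=[None, 784])
model.summary()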

Output activations

[Image 29]
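A minimal sketch of the standard output mappings: sigmoid for independent (0, 1) outputs, softmax for mutually exclusive class probabilities, tanh for (-1, 1):

import tensorflow as tf

logits = tf.random.normal([2, 10])

print(tf.sigmoid(logits))                 # each entry independently squashed to (0, 1)
prob = tf.nn.softmax(logits, axis=1)      # rows sum to 1: mutually exclusive classes
print(tf.reduce_sum(prob, axis=1))        # ~[1., 1.]
print(tf.tanh(logits))                    # squashed to (-1, 1)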

Loss computation

[Image 30]
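A minimal sketch of the two losses: MSE against one-hot targets (as the training script above uses) and cross entropy fed with raw logits:

import tensorflow as tf

y = tf.one_hot(tf.constant([1, 3]), depth=10)
logits = tf.random.normal([2, 10])

# MSE between one-hot targets and predicted probabilities
mse = tf.reduce_mean(tf.losses.MSE(y, tf.nn.softmax(logits)))

# cross entropy: hand raw logits to the loss with from_logits=True, which is
# numerically more stable than applying softmax yourself first
ce = tf.reduce_mean(tf.losses.categorical_crossentropy(y, logits, from_logits=True))
print(float(mse), float(ce))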
