Advantages of TensorFlow 2.0

Point 1 (GPU acceleration)

The code is as follows:

import tensorflow as tf
import timeit

with tf.device('/cpu:0'):
    cpu_a = tf.random.normal([10000, 1000])
    cpu_b = tf.random.normal([1000, 2000])
    print(cpu_a.device, cpu_b.device)

with tf.device('/gpu:0'):
    gpu_a = tf.random.normal([10000, 1000])
    gpu_b = tf.random.normal([1000, 2000])
    print(gpu_a.device, gpu_b.device)


def cpu_run():
    with tf.device('/cpu:0'):
        c = tf.matmul(cpu_a, cpu_b)
    return c


def gpu_run():
    with tf.device('/gpu:0'):
        c = tf.matmul(gpu_a, gpu_b)
    return c


# Warm up: the first GPU run pays one-time device initialization costs,
# so it is timed separately from the real measurement below
cpu_time = timeit.timeit(cpu_run, number=10)
gpu_time = timeit.timeit(gpu_run, number=10)
print('warmup:', cpu_time, gpu_time)

cpu_time = timeit.timeit(cpu_run, number=10)
gpu_time = timeit.timeit(gpu_run, number=10)
print('run time:', cpu_time, gpu_time)

Results:

warmup: 1.2548112000000002 0.21691210000000005
run time: 1.1538101999999997 0.0005310999999998955

The warm-up pass primes the CPU and GPU first. In the timed run (run time), the CPU figure barely changes, while the GPU run time drops dramatically once the one-time initialization has been paid.
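One practical note: the benchmark above assumes a GPU exists. A minimal sketch of a portable variant, using `tf.config.list_physical_devices` to fall back to the CPU when no GPU is present (the shapes here are illustrative):

```python
import tensorflow as tf

# Query available physical GPUs; an empty list means CPU-only
gpus = tf.config.list_physical_devices('GPU')

# Choose a device string accordingly
device = '/gpu:0' if gpus else '/cpu:0'
with tf.device(device):
    x = tf.random.normal([100, 100])
print('running on:', x.device)
```

This way the same script runs unchanged on machines with and without a GPU.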

Point 2 (automatic differentiation)

The code is as follows:

import tensorflow as tf

x = tf.constant(1.0)
a = tf.constant(2.0)
b = tf.constant(3.0)
c = tf.constant(4.0)

with tf.GradientTape() as tape:
    tape.watch([a, b, c])
    y = a ** 2 * x + b * x + c

[dy_da, dy_db, dy_dc] = tape.gradient(y, [a, b, c])
print(float(dy_da), float(dy_db), dy_dc)

Output:

# This is my machine's GPU configuration
2019-11-06 10:37:12.163356: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1149] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 8792 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
# This is the actual output
4.0 1.0 tf.Tensor(1.0, shape=(), dtype=float32)
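Since y = a²x + bx + c, the gradients are dy/da = 2ax = 4, dy/db = x = 1, and dy/dc = 1, which matches the output. As a small variant (a minimal sketch): wrapping the coefficients in `tf.Variable` makes the tape track them automatically, so `tape.watch` is no longer needed:

```python
import tensorflow as tf

x = tf.constant(1.0)
# Variables are watched by GradientTape automatically
a = tf.Variable(2.0)
b = tf.Variable(3.0)
c = tf.Variable(4.0)

with tf.GradientTape() as tape:
    y = a ** 2 * x + b * x + c

dy_da, dy_db, dy_dc = tape.gradient(y, [a, b, c])
print(float(dy_da), float(dy_db), float(dy_dc))  # 4.0 1.0 1.0
```

Using `tf.Variable` is the usual pattern for trainable parameters, while `tape.watch` is reserved for taking gradients with respect to plain tensors.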

Point 3 (neural network APIs)

Here are a few of the neural network APIs:

tf.matmul
tf.nn.conv2d
tf.nn.relu
tf.nn.sigmoid
tf.nn.softmax
layers.Dense
layers.Conv2D
layers.SimpleRNN
layers.LSTM
