Author | 小宋是呢
Reposted from the author's CSDN blog
[Overview] In the early hours of yesterday morning, TensorFlow officially released version 2.0.
Many users say TensorFlow 2.0 is easier to use than PyTorch and are ready to switch fully to the newly upgraded deep-learning framework.
Recommended Windows installer: https://mirrors.tuna.tsinghua.edu.cn/anaconda/miniconda/Miniconda3-4.7.10-Windows-x86_64.exe
Recommended Ubuntu installer: https://mirrors.tuna.tsinghua.edu.cn/anaconda/miniconda/Miniconda3-4.7.10-Linux-x86_64.sh
Recommended macOS installer: https://mirrors.tuna.tsinghua.edu.cn/anaconda/miniconda/Miniconda3-4.7.10-MacOSX-x86_64.pkg
List conda environments: conda env list
Create a conda environment (env_name is the name of the new environment; choose any name you like): conda create -n env_name
Activate a conda environment (on Ubuntu and macOS, replace conda with source): conda activate env_name
Exit the current conda environment: conda deactivate
Install and uninstall Python packages: conda install numpy # conda uninstall numpy
List the Python packages installed in an environment: conda list -n env_name
Create an environment and install the CPU version of TensorFlow 2.0:
conda create -n TF_2C python=3.6
conda activate TF_2C
pip install tensorflow==2.0.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
Verify the installation in Python:
import tensorflow as tf
version = tf.__version__
gpu_ok = tf.test.is_gpu_available()  # checks whether a GPU is available to TensorFlow
print("tf version:", version, "\nuse GPU", gpu_ok)
Expected output:
tf version: 2.0.0
use GPU False
For the GPU version, create a separate environment and install the CUDA Toolkit and cuDNN through conda before installing tensorflow-gpu:
conda create -n TF_2G python=3.6
conda activate TF_2G
conda install cudatoolkit=10.0 cudnn
pip install tensorflow-gpu==2.0.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
Verify again, this time checking that the GPU is visible:
import tensorflow as tf
version = tf.__version__
gpu_ok = tf.test.is_gpu_available()
print("tf version:", version, "\nuse GPU", gpu_ok)
Expected output:
tf version: 2.0.0
use GPU True
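Note that tf.test.is_gpu_available was later deprecated. From TF 2.1 onward the recommended check is tf.config.list_physical_devices('GPU'); on TF 2.0 itself the same call lives under tf.config.experimental. A minimal sketch of the newer-style check:

```python
import tensorflow as tf

# On TF 2.0 the call is under tf.config.experimental;
# from TF 2.1 onward tf.config.list_physical_devices('GPU') works directly.
gpus = tf.config.experimental.list_physical_devices('GPU')
print("tf version:", tf.__version__)
print("GPUs visible:", len(gpus))
```

Unlike is_gpu_available, this returns the list of GPU devices rather than a boolean, so you can also see how many GPUs TensorFlow detects.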
Finally, a small end-to-end example: a linear model written as a tf.keras.Model subclass, trained with a manual gradient-descent loop.

import tensorflow as tf

X = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
y = tf.constant([[10.0], [20.0]])

class Linear(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(
            units=1,
            activation=None,
            kernel_initializer=tf.zeros_initializer(),
            bias_initializer=tf.zeros_initializer()
        )

    def call(self, input):
        output = self.dense(input)
        return output

# The code below has the same structure as in the previous section
model = Linear()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
for i in range(100):
    with tf.GradientTape() as tape:
        y_pred = model(X)  # call the model: y_pred = model(X), instead of writing y_pred = a * X + b explicitly
        loss = tf.reduce_mean(tf.square(y_pred - y))
    grads = tape.gradient(loss, model.variables)  # the model.variables property gives all of the model's variables directly
    optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))
    if i % 10 == 0:
        print(i, loss.numpy())
print(model.variables)
The output is as follows:
10 0.73648137
20 0.6172349
30 0.5172956
40 0.4335389
50 0.36334264
60 0.3045124
70 0.25520816
80 0.2138865
90 0.17925593
[&lt;tf.Variable ...&gt;, &lt;tf.Variable ...&gt;]
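For reference, the exact fit that the training loop above is approaching can be computed in closed form with NumPy. The system is underdetermined (2 samples, 3 weights plus a bias), so an exact fit exists, and gradient descent from a zero initialization converges toward the minimum-norm least-squares solution. A sketch:

```python
import numpy as np

# Same data as in the TensorFlow example above
X = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
y = np.array([10.0, 20.0])

# Append a column of ones so the bias is part of the parameter vector
A = np.hstack([X, np.ones((2, 1))])

# Minimum-norm least-squares solution of the underdetermined system
params, *_ = np.linalg.lstsq(A, y, rcond=None)
print("weights:", params[:3], "bias:", params[3])
print("residual:", A @ params - y)  # effectively zero: the fit is exact
```

This makes it easy to sanity-check the SGD loop: its loss should keep shrinking toward the zero residual achieved by this closed-form solution.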