Setting Up Colab to Use GPU and TPU

## TF 2.4.0
import os
import pickle  # needed for loading the data below

import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.layers import Embedding, SpatialDropout1D, LSTM, Dense
from tensorflow.keras.models import Sequential

## The following 6 lines are the GPU setup. If you use a GPU, uncomment these 6 lines and skip the TPU setup below.
# gpus = tf.config.list_physical_devices("GPU")
# print(gpus)
# if gpus:
#     gpu0 = gpus[0]  # if there are multiple GPUs, use only GPU 0
#     tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand
#     tf.config.set_visible_devices([gpu0], "GPU")

# Load the data
with open('w2v1000.pkl', 'rb') as f:
    data = pickle.load(f)  # renamed from `dict` to avoid shadowing the built-in
X = data['X']
Y = data['Y']
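The pickle file itself isn't shown in the post, but from how it is read you can infer its structure: a dict with an `X` array of padded word2vec sequences and a `Y` array of binary labels. A minimal sketch of how such a file might be produced, using dummy data whose shapes are assumptions chosen to match the LSTM's `input_shape=(1600, 150)` below:

```python
import pickle
import numpy as np

# Hypothetical dummy data: 4 samples, sequence length 1600,
# embedding dimension 150 (matching the LSTM input_shape).
X = np.zeros((4, 1600, 150), dtype=np.float32)
Y = np.array([0, 1, 0, 1], dtype=np.int32)  # binary labels for the sigmoid output

with open('w2v1000.pkl', 'wb') as f:
    pickle.dump({'X': X, 'Y': Y}, f)

# Reading it back follows exactly the loading code above.
with open('w2v1000.pkl', 'rb') as f:
    data = pickle.load(f)

print(data['X'].shape, data['Y'].shape)  # (4, 1600, 150) (4,)
```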

# TPU setup. Everything below is TPU configuration; comment out the 6 GPU lines above.
# All of this code can be copied verbatim -- just swap in your own model after the
# print statement inside the strategy scope.
tf.keras.backend.clear_session()
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
# This is the TPU initialization code that has to be at the beginning.
tf.tpu.experimental.initialize_tpu_system(resolver)
print("All devices: ", tf.config.list_logical_devices('TPU'))
# strategy = tf.distribute.experimental.TPUStrategy(resolver)  # older API name, replaced by the line below
strategy = tf.distribute.TPUStrategy(resolver)
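A common follow-up (not in the original post, just a sketch) is to scale the global batch size by `strategy.num_replicas_in_sync`, since each TPU core processes its own slice of the batch; a Colab TPU typically exposes 8 cores. The replica count here is an assumed value for illustration:

```python
# On real hardware this would be strategy.num_replicas_in_sync;
# 8 is the typical value for a Colab TPU.
num_replicas = 8  # assumption for illustration

per_replica_batch_size = 16
global_batch_size = per_replica_batch_size * num_replicas
print(global_batch_size)  # 128
```

Larger global batch sizes are one reason TPU epochs can be so much shorter than GPU epochs.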

with strategy.scope():  # build and compile the model inside the TPU strategy scope
  print("# Preparing model")  # replace the model after this print statement; nothing else needs to change
  model = Sequential()  # the example here is an LSTM model
  model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2, input_shape=(1600, 150), return_sequences=True))
  model.add(Dense(32))
  # model.add(Dense(2, activation='softmax'))
  model.add(Dense(1, activation='sigmoid'))
  model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
  model.summary()  # summary() prints directly and returns None, so don't wrap it in print()

# Model training -- write this as you normally would; nothing here is TPU-specific
model.fit(X, Y, epochs=10, batch_size=10, validation_split=0.2, workers=4,
          use_multiprocessing=True,
          callbacks=[EarlyStopping(monitor='loss', patience=7, min_delta=0.0001)])

# Save the model
model.save('w2vmodel1000.h5')

Below are the Colab test results:

[Image 1: Colab test output showing the listed TPU devices]

As you can see, the device type is TPU and it is extremely fast: the same model on a GPU (Tesla T4) takes over 800 seconds per epoch, so the TPU is 20 to 30 times faster -- and that's with a fairly small batch size. TPU is the GOAT!
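As a quick sanity check on those numbers (pure arithmetic, nothing measured here): 800+ seconds per epoch divided by a 20-30x speedup puts the TPU at roughly 27-40 seconds per epoch.

```python
gpu_seconds_per_epoch = 800  # reported Tesla T4 time per epoch
for speedup in (20, 30):
    # implied TPU time per epoch at each end of the reported speedup range
    print(round(gpu_seconds_per_epoch / speedup, 1))
```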
