g1 = tf.random.Generator.from_seed(1)
print(g1.normal(shape=[2, 3]))
g2 = tf.random.get_global_generator()
print(g2.normal(shape=[2, 3]))
Running this code raises: module 'tensorflow_core._api.v2.random' has no attribute 'Generator'.
It actually needs to be changed to:
g1 = tf.random.experimental.Generator.from_seed(1)
print(g1.normal(shape=[2, 3]))
g2 = tf.random.experimental.get_global_generator()
print(g2.normal(shape=[2, 3]))
Reason: in TensorFlow 2.1, Generator lives under tf.random.experimental, not directly under tf.random. Relevant API: https://tensorflow.google.cn/api_docs/python/tf/random/experimental/Generator
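If the same script has to run on both TensorFlow 2.1 and newer releases, a small fallback can paper over the move; this is my own sketch (the local alias Generator is mine), relying on the class having been promoted out of experimental in later versions:

import tensorflow as tf

try:
    Generator = tf.random.Generator  # newer releases
except AttributeError:
    Generator = tf.random.experimental.Generator  # TensorFlow 2.1

g1 = Generator.from_seed(1)
print(g1.normal(shape=[2, 3]))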
2020-04-27: one spot on the TensorFlow website differs from my local TensorFlow 2.1 install, recording it here:
In the section https://tensorflow.google.cn/guide/concrete_function#interoperability_with_tfkeras, the following code fails on my machine:
module = tf.Module()
module.linear = linear
module.variables
Running it raises: TypeError: '<' not supported between instances of 'InputLayer' and 'Sequential', i.e. an InputLayer instance cannot be ordered against a Sequential instance when the tracked variables are collected.
It actually has to be changed to:
module = tf.Module()
module = linear  # note: this rebinding discards the empty tf.Module() created above
module.variables
Running this gives the same result as the website.
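For completeness, here is a self-contained version of the workaround. The definition of linear is my assumption (a Sequential whose first layer is an explicit InputLayer, matching the two types named in the TypeError); the guide builds it in an earlier snippet:

import tensorflow as tf

# Assumed stand-in for the guide's `linear` (not copied from the guide).
linear = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(3,)),
    tf.keras.layers.Dense(1),
])

module = linear
print(module.variables)  # the Dense layer's kernel and bias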
2020-04-28: another spot on the TensorFlow website differs from my local TensorFlow 2.1 install, recording it here:
This code from https://tensorflow.google.cn/guide/distributed_training#examples_and_tutorials_3:
@tf.function
def train_step(dist_inputs):
  def step_fn(inputs):
    features, labels = inputs
    with tf.GradientTape() as tape:
      # training=True is only needed if there are layers with different
      # behavior during training versus inference (e.g. Dropout).
      logits = model(features, training=True)
      cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
          logits=logits, labels=labels)
      loss = tf.reduce_sum(cross_entropy) * (1.0 / global_batch_size)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(list(zip(grads, model.trainable_variables)))
    return cross_entropy

  per_example_losses = mirrored_strategy.run(step_fn, args=(dist_inputs,))
  mean_loss = mirrored_strategy.reduce(
      tf.distribute.ReduceOp.MEAN, per_example_losses, axis=0)
  return mean_loss

with mirrored_strategy.scope():
  for inputs in dist_dataset:
    print(train_step(inputs))
fails locally with: AttributeError: 'MirroredStrategy' object has no attribute 'run'.
The API must have changed again... Checking the docs at https://tensorflow.google.cn/api_docs/python/tf/distribute/MirroredStrategy#experimental_run_v2, run has to be replaced with experimental_run_v2:
@tf.function
def train_step(dist_inputs):
  def step_fn(inputs):
    features, labels = inputs
    with tf.GradientTape() as tape:
      # training=True is only needed if there are layers with different
      # behavior during training versus inference (e.g. Dropout).
      logits = model(features, training=True)
      cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
          logits=logits, labels=labels)
      loss = tf.reduce_sum(cross_entropy) * (1.0 / global_batch_size)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(list(zip(grads, model.trainable_variables)))
    return cross_entropy

  per_example_losses = mirrored_strategy.experimental_run_v2(
      step_fn, args=(dist_inputs,))
  mean_loss = mirrored_strategy.reduce(
      tf.distribute.ReduceOp.MEAN, per_example_losses, axis=0)
  return mean_loss
After this change, the output matches the website's.
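To keep one script working across versions, a small shim can pick whichever entry point exists; this is my own sketch (the name strategy_run is mine), based on Strategy.run superseding experimental_run_v2 in later releases:

import tensorflow as tf

mirrored_strategy = tf.distribute.MirroredStrategy()

# Prefer the stable entry point when present; fall back on TF 2.1.
if hasattr(mirrored_strategy, "run"):
    strategy_run = mirrored_strategy.run
else:
    strategy_run = mirrored_strategy.experimental_run_v2

# Then inside train_step:
#   per_example_losses = strategy_run(step_fn, args=(dist_inputs,))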