Keras Developer Docs 5: Customizing what happens in fit()

Contents

    • Introduction
    • Setup
    • A first simple example
    • Going lower-level
    • Supporting sample_weight and class_weight
    • Providing your own evaluation step
    • Wrapping up: an end-to-end GAN example
    • References

Introduction

When you're doing supervised learning, you can use fit() and everything works smoothly.

When you need to write your own training loop from scratch, you can use GradientTape and take control of every little detail.

But what if you need a custom training algorithm, and you still want to benefit from the convenient features of fit(), such as callbacks, built-in distribution support, or step fusing?

A core principle of Keras is progressive disclosure of complexity. You should always be able to get into lower-level workflows in a gradual way. You shouldn't fall off a cliff if the high-level functionality doesn't exactly match your use case. You should be able to gain more control over the small details while retaining a commensurate amount of high-level convenience.

When you need to customize what fit() does, you should override the training step function of the Model class. This is the function that fit() calls for every batch of data. You will then be able to call fit() as usual, and it will be running your own learning algorithm.

Note that this pattern does not prevent you from building models with the Functional API. You can do this whether you're building Sequential models, Functional API models, or subclassed models.

Let's see how that works.

Setup

Requires TensorFlow 2.2 or later.

import tensorflow as tf
from tensorflow import keras
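
To confirm the environment meets that requirement, a quick check:

print(tf.__version__)  # should print 2.2.0 or later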

A first simple example

Let's start from a simple example:

  • We create a new class that subclasses keras.Model.

  • We just override the method train_step(self, data).

  • We return a dictionary mapping metric names (including the loss) to their current value.

The input argument data is what gets passed to fit() as training data:

  • If you pass Numpy arrays, by calling fit(x, y, ...), then data will be the tuple (x, y).

  • If you pass a tf.data.Dataset, by calling fit(dataset, ...), then data will be what gets yielded by dataset at each batch.

In the body of the train_step method, we implement a regular training update, similar to what you are already familiar with. Importantly, we compute the loss via self.compiled_loss, which wraps the loss function(s) that were passed to compile().

Similarly, we call self.compiled_metrics.update_state(y, y_pred) to update the state of the metrics that were passed in compile(), and we query results from self.metrics at the end to retrieve their current value.

class CustomModel(keras.Model):
    def train_step(self, data):
        # Unpack the data. Its structure depends on your model and
        # on what you pass to `fit()`.
        x, y = data

        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute the loss value
            # (the loss function is configured in `compile()`)
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)

        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(y, y_pred)
        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}

Let's try this out:

import numpy as np

# Construct and compile an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Just use `fit` as usual
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
model.fit(x, y, epochs=3)
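
The same model can also be fed a tf.data.Dataset, in which case data inside train_step will be whatever the dataset yields per batch; a minimal sketch:

dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(32)
model.fit(dataset, epochs=1)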


Going lower-level

Naturally, you could just skip passing a loss function in compile(), and instead do everything manually in train_step. Likewise for metrics. Here's a lower-level example that only uses compile() to configure the optimizer:

mae_metric = keras.metrics.MeanAbsoluteError(name="mae")
loss_tracker = keras.metrics.Mean(name="loss")


class CustomModel(keras.Model):
    def train_step(self, data):
        x, y = data

        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute our own loss
            loss = keras.losses.mean_squared_error(y, y_pred)

        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)

        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))

        # Compute our own metrics
        loss_tracker.update_state(loss)
        mae_metric.update_state(y, y_pred)
        return {"loss": loss_tracker.result(), "mae": mae_metric.result()}


# Construct an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)

# We don't pass a loss or metrics here.
model.compile(optimizer="adam")

# Just use `fit` as usual -- you can use callbacks, etc.
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
model.fit(x, y, epochs=1)

Note that with this setup, you will need to manually call reset_states() on your metrics after each epoch, or between training and evaluation.
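
If you train for several epochs with this setup, one option is a small callback that resets the two module-level trackers at the start of each epoch. A hedged sketch, not part of the original example:

class ResetMetricsCallback(keras.callbacks.Callback):
    def on_epoch_begin(self, epoch, logs=None):
        # Clear the accumulated state so each epoch reports fresh averages
        loss_tracker.reset_states()
        mae_metric.reset_states()

model.fit(x, y, epochs=3, callbacks=[ResetMetricsCallback()])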

Supporting sample_weight and class_weight

You may have noticed that our first basic example didn't make any mention of sample weighting. If you want to support the fit() arguments sample_weight and class_weight, you'd simply do the following:

  • Unpack sample_weight from the data argument.
  • Pass it to compiled_loss and compiled_metrics (of course, you could also just apply it manually if you don't rely on compile() for losses and metrics).
  • That's it. That's the list.

class CustomModel(keras.Model):
    def train_step(self, data):
        # Unpack the data. Its structure depends on your model and
        # on what you pass to `fit()`.
        if len(data) == 3:
            x, y, sample_weight = data
        else:
            x, y = data

        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute the loss value.
            # The loss function is configured in `compile()`.
            loss = self.compiled_loss(
                y,
                y_pred,
                sample_weight=sample_weight,
                regularization_losses=self.losses,
            )

        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)

        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))

        # Update the metrics.
        # Metrics are configured in `compile()`.
        self.compiled_metrics.update_state(y, y_pred, sample_weight=sample_weight)

        # Return a dict mapping metric names to current value.
        # Note that it will include the loss (tracked in self.metrics).
        return {m.name: m.result() for m in self.metrics}


# Construct and compile an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# You can now use the sample_weight argument
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
sw = np.random.random((1000, 1))
model.fit(x, y, sample_weight=sw, epochs=3)
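
class_weight is routed through the same mechanism: fit() converts it into per-sample weights before data reaches train_step, so the unpacking above covers it as well. A hedged sketch with hypothetical integer class targets (the dict keys are class indices):

y_cls = np.random.randint(0, 2, size=(1000, 1))
model.fit(x, y_cls, class_weight={0: 1.0, 1: 2.0}, epochs=1)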


Providing your own evaluation step

What if you want to do the same for calls to model.evaluate()? Then you would override test_step in exactly the same way. Here's what it looks like:

class CustomModel(keras.Model):
    def test_step(self, data):
        # Unpack the data
        x, y = data
        # Compute predictions
        y_pred = self(x, training=False)
        # Updates the metrics tracking the loss
        self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        # Update the metrics.
        self.compiled_metrics.update_state(y, y_pred)
        # Return a dict mapping metric names to current value.
        # Note that it will include the loss (tracked in self.metrics).
        return {m.name: m.result() for m in self.metrics}


# Construct an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(loss="mse", metrics=["mae"])

# Evaluate with our custom test_step
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
model.evaluate(x, y)


Wrapping up: an end-to-end GAN example

Let's walk through an end-to-end example that leverages everything you just learned.

Let's consider:

  • A generator network meant to generate 28x28x1 images.
  • A discriminator network meant to classify 28x28x1 images into two classes ("fake" and "real").
  • One optimizer for each.
  • A loss function to train the discriminator.

from tensorflow.keras import layers

# Create the discriminator
discriminator = keras.Sequential(
    [
        keras.Input(shape=(28, 28, 1)),
        layers.Conv2D(64, (3, 3), strides=(2, 2), padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.Conv2D(128, (3, 3), strides=(2, 2), padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.GlobalMaxPooling2D(),
        layers.Dense(1),
    ],
    name="discriminator",
)

# Create the generator
latent_dim = 128
generator = keras.Sequential(
    [
        keras.Input(shape=(latent_dim,)),
        # We want to generate 128 coefficients to reshape into a 7x7x128 map
        layers.Dense(7 * 7 * 128),
        layers.LeakyReLU(alpha=0.2),
        layers.Reshape((7, 7, 128)),
        layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
        layers.LeakyReLU(alpha=0.2),
        layers.Conv2D(1, (7, 7), padding="same", activation="sigmoid"),
    ],
    name="generator",
)
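
To sanity-check the two networks, summary() works as usual:

discriminator.summary()  # ends in a single real/fake logit
generator.summary()      # ends in a 28x28x1 image with sigmoid outputs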

Here's a feature-complete GAN class, overriding compile() to use its own signature, and implementing the entire GAN algorithm in 17 lines of code in train_step:

class GAN(keras.Model):
    def __init__(self, discriminator, generator, latent_dim):
        super(GAN, self).__init__()
        self.discriminator = discriminator
        self.generator = generator
        self.latent_dim = latent_dim

    def compile(self, d_optimizer, g_optimizer, loss_fn):
        super(GAN, self).compile()
        self.d_optimizer = d_optimizer
        self.g_optimizer = g_optimizer
        self.loss_fn = loss_fn

    def train_step(self, real_images):
        if isinstance(real_images, tuple):
            real_images = real_images[0]
        # Sample random points in the latent space
        batch_size = tf.shape(real_images)[0]
        random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))

        # Decode them to fake images
        generated_images = self.generator(random_latent_vectors)

        # Combine them with real images
        combined_images = tf.concat([generated_images, real_images], axis=0)

        # Assemble labels discriminating real from fake images
        labels = tf.concat(
            [tf.ones((batch_size, 1)), tf.zeros((batch_size, 1))], axis=0
        )
        # Add random noise to the labels - important trick!
        labels += 0.05 * tf.random.uniform(tf.shape(labels))

        # Train the discriminator
        with tf.GradientTape() as tape:
            predictions = self.discriminator(combined_images)
            d_loss = self.loss_fn(labels, predictions)
        grads = tape.gradient(d_loss, self.discriminator.trainable_weights)
        self.d_optimizer.apply_gradients(
            zip(grads, self.discriminator.trainable_weights)
        )

        # Sample random points in the latent space
        random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))

        # Assemble labels that say "all real images"
        misleading_labels = tf.zeros((batch_size, 1))

        # Train the generator (note that we should *not* update the weights
        # of the discriminator)!
        with tf.GradientTape() as tape:
            predictions = self.discriminator(self.generator(random_latent_vectors))
            g_loss = self.loss_fn(misleading_labels, predictions)
        grads = tape.gradient(g_loss, self.generator.trainable_weights)
        self.g_optimizer.apply_gradients(zip(grads, self.generator.trainable_weights))
        return {"d_loss": d_loss, "g_loss": g_loss}

Let's test-drive it:

# Prepare the dataset. We use both the training & test MNIST digits.
batch_size = 64
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
all_digits = np.concatenate([x_train, x_test])
all_digits = all_digits.astype("float32") / 255.0
all_digits = np.reshape(all_digits, (-1, 28, 28, 1))
dataset = tf.data.Dataset.from_tensor_slices(all_digits)
dataset = dataset.shuffle(buffer_size=1024).batch(batch_size)

gan = GAN(discriminator=discriminator, generator=generator, latent_dim=latent_dim)
gan.compile(
    d_optimizer=keras.optimizers.Adam(learning_rate=0.0003),
    g_optimizer=keras.optimizers.Adam(learning_rate=0.0003),
    loss_fn=keras.losses.BinaryCrossentropy(from_logits=True),
)

# To limit execution time, we only train on 100 batches. You can train on
# the entire dataset. You will need about 20 epochs to get nice results.
gan.fit(dataset.take(100), epochs=1)
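
After training, you can sample images straight from the generator. A minimal sketch (visualization left out):

random_latent_vectors = tf.random.normal(shape=(6, latent_dim))
generated_images = gan.generator(random_latent_vectors)
print(generated_images.shape)  # (6, 28, 28, 1)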

The ideas behind deep learning are simple, so why should their implementation be painful?

References

  • Customizing what happens in fit()
