Machine Translation with TensorFlow 2.0 and Transformer: A Detailed Code Walkthrough

For the theory behind the Transformer and a reference implementation, see the author's post 《Transformer学习总结附TF2.0代码实现》. This post uses the IWSLT 2016 German-English dataset and walks through a Transformer-based machine translation implementation in TensorFlow 2.0 in detail.

Full code on GitHub: Machine-Translation-by-Transformer, which contains both a TensorFlow 1.13 and a TensorFlow 2.0 implementation.

The implementation consists of the following files:

  • hyperparams.py: all hyperparameters used by the project
  • prepro.py: generates the source- and target-language vocabulary files
  • data_load.py: all functions for loading and batching the data
  • transformer.py: the full Transformer implementation, the core of the code
  • train.py: the training script

Table of Contents

    • hyperparams.py
    • prepro.py
    • data_load.py
    • transformer.py
            • Positional encoding
            • Scaled dot-product attention
            • Multi-Head Attention
            • LayerNormalization
            • Point-wise feed-forward network
            • Encoder
            • Masking
            • Decoder
            • Transformer
    • train.py

hyperparams.py

'''All hyperparameters used in this project'''
class HyperParams:
    # data
    source_train = './de-en/train.tags.de-en.de'
    target_train = './de-en/train.tags.de-en.en'
    source_test = 'de-en/IWSLT16.TED.tst2014.de-en.de.xml'
    target_test = 'de-en/IWSLT16.TED.tst2014.de-en.en.xml'
    ckpt_path = './ckpt'

    # training
    batch_size = 32
    EPOCHS = 20
    dff = 2048
    logdir = './logdir'

    # model
    max_seq_len = 20        # maximum sentence length
    min_cnt = 20            # words occurring fewer than min_cnt times are replaced with <UNK>
    d_model = 512
    num_layers = 6
    num_heads = 8
    dropout_rate = 0.1

prepro.py

from __future__ import print_function
from hyperparams import HyperParams as hp
import codecs
import os
import regex
from collections import Counter


def make_vocab(fpath, fname):
    '''
    Build a vocabulary file from a raw text file.
    :param fpath: input file
    :param fname: name of the processed output file
    '''
    text = codecs.open(fpath, 'r', 'utf-8').read()
    text = regex.sub(r'[^\s\p{Latin}]', '', text)
    words = text.split()
    word2cnt = Counter(words)
    if not os.path.exists('./preprocessed'):
        os.mkdir('./preprocessed')
    with codecs.open('preprocessed/{}'.format(fname), 'w', 'utf-8') as fout:
        # write the special tokens first, so that <PAD>=0, <UNK>=1, <S>=2, </S>=3
        fout.write('{}\t1000000000\n{}\t1000000000\n{}\t1000000000\n{}\t1000000000\n'.
                   format('<PAD>', '<UNK>', '<S>', '</S>'))
        for word, cnt in word2cnt.most_common(len(word2cnt)):   # write words in descending order of frequency
            fout.write('{}\t{}\n'.format(word, cnt))


if __name__ == '__main__':
    make_vocab(hp.source_train, 'de.vocab.tsv')
    make_vocab(hp.target_train, 'en.vocab.tsv')   # the English vocab is built from the English training data
    print('Done')

data_load.py

First, define a function that assigns an id to every German word and returns two dictionaries: one mapping word to id, the other mapping id to word.

from __future__ import print_function
from hyperparams import HyperParams as hp
import codecs
import numpy as np
import regex
import tensorflow as tf

def load_de_vocab():
    '''Assigns an id to every German word and returns two dictionaries: word-to-id and id-to-word.'''
    vocab = [line.split()[0] for line in codecs.open('./preprocessed/de.vocab.tsv', 'r', 'utf-8').
        read().splitlines() if int(line.split()[1]) >= hp.min_cnt]
    word2idx = {word: idx for idx, word in enumerate(vocab)}
    idx2word = {idx: word for idx, word in enumerate(vocab)}
    return word2idx, idx2word

Then define the same function for English: assign an id to every English word and return the two dictionaries.

def load_en_vocab():
    vocab = [line.split()[0] for line in codecs.open('./preprocessed/en.vocab.tsv', 'r', 'utf-8').
        read().splitlines() if int(line.split()[1]) >= hp.min_cnt]
    word2idx = {word: idx for idx, word in enumerate(vocab)}
    idx2word = {idx: word for idx, word in enumerate(vocab)}
    return word2idx, idx2word

Next, define a function that takes the source- and target-language sentence lists (one sentence per element) and turns them into padded id arrays.

def create_data(source_sents, target_sents):
    de2idx, idx2de = load_de_vocab()
    en2idx, idx2en = load_en_vocab()
    # convert words to ids, appending an end-of-sentence token </S> to every sentence
    x_list, y_list, sources, targets = [], [], [], []
    for source_sent, target_sent in zip(source_sents, target_sents):
        x = [de2idx.get(word, 1) for word in (source_sent + u' </S>').split()]
        y = [en2idx.get(word, 1) for word in (target_sent + u' </S>').split()]
        if max(len(x), len(y)) <= hp.max_seq_len:
            x_list.append(np.array(x))
            y_list.append(np.array(y))
            sources.append(source_sent)
            targets.append(target_sent)
    # pad
    X = np.zeros([len(x_list), hp.max_seq_len], np.int32)
    Y = np.zeros([len(y_list), hp.max_seq_len], np.int32)
    for i, (x, y) in enumerate(zip(x_list, y_list)):
        X[i] = np.lib.pad(x, [0, hp.max_seq_len - len(x)], 'constant', constant_values=(0, 0))
        Y[i] = np.lib.pad(y, [0, hp.max_seq_len - len(y)], 'constant', constant_values=(0, 0))
    return X, Y, sources, targets

The function first uses the two functions defined earlier to build the word/id dictionaries for both languages.
It then iterates over the source and target sentence lists in parallel, one sentence pair at a time. An end-of-sentence token </S> is appended to each sentence to mark where it ends. Every word of the resulting sentence is looked up in the corresponding word/id dictionary and its id is appended to a new list; if the word is not in the dictionary, id 1 (the id of <UNK>) is used instead. This yields the two sentences represented as sequences of ids.
If neither sentence exceeds the configured maximum length hp.max_seq_len, the two id sequences are appended to x_list and y_list, and the original word-level sentences are appended to sources and targets.
The second half of the function is the padding step (for background on numpy's pad operation, see the author's post 《numpy–prod和pad运算》). Since x and y are one-dimensional, they can only be padded at the front and at the back, so the second argument of np.lib.pad is a two-element list: the first element is 0, meaning nothing is padded in front of x or y; the second element is hp.max_seq_len - len(x) (and hp.max_seq_len - len(y) respectively), i.e. exactly as many values are appended as are needed to reach the maximum sentence length. The value to pad with is given by constant_values: the padded ids are 0, which is the id of <PAD> in our vocabulary. After padding, all id sequences have the same length.
Finally, the function returns the equal-length id arrays X and Y together with the original sentence lists sources and targets.
X and Y both have shape [len(x_list), hp.max_seq_len], where len(x_list) is the number of sentences and hp.max_seq_len is the maximum sentence length.
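
As a quick illustration of the padding step, here is a standalone toy example (not part of the project files; the token ids are made up):

import numpy as np

x = np.array([5, 17, 42])            # a toy sentence of 3 token ids
max_seq_len = 6
# pad nothing in front, (max_seq_len - len(x)) zeros at the end
padded = np.lib.pad(x, [0, max_seq_len - len(x)], 'constant', constant_values=(0, 0))
print(padded)                        # [ 5 17 42  0  0  0]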

Next, load the training data:

def load_train_data():
    de_sents = [regex.sub(r'[^\s\p{Latin}]', '', line) for line in codecs.
        open(hp.source_train, 'r', 'utf-8').read().split('\n') if line and line[0] != '<']
    # lines starting with '<' are metadata, not actual sentence data
    en_sents = [regex.sub(r'[^\s\p{Latin}]', '', line) for line in codecs.
        open(hp.target_train, 'r', 'utf-8').read().split('\n') if line and line[0] != '<']
    X, Y, _, _ = create_data(de_sents, en_sents)
    return X, Y

Load the test data:

def load_test_data():
    def _refine(line):
        line = regex.sub('<[^>]+>', '', line)
        line = regex.sub(r'[^\s\p{Latin}]', '', line)
        return line.strip()
    # in the XML test files, only lines starting with '<seg' contain actual sentences
    de_sents = [_refine(line) for line in codecs.open(hp.source_test, 'r', 'utf-8').
        read().split('\n') if line and line[:4] == '<seg']
    en_sents = [_refine(line) for line in codecs.open(hp.target_test, 'r', 'utf-8').
        read().split('\n') if line and line[:4] == '<seg']
    X, Y, sources, targets = create_data(de_sents, en_sents)
    return X, sources, targets

Finally, produce the data one batch at a time:

def get_batch_data():
    '''Yields the training data one batch at a time.'''
    # Load data
    X, Y = load_train_data()
    # total batch count
    num_batch = len(X) // hp.batch_size
    # convert to tensor
    X = tf.convert_to_tensor(X, tf.int32)
    Y = tf.convert_to_tensor(Y, tf.int32)
    # build the dataset: shuffle, batch, and drop the last incomplete batch
    input_queues = tf.data.Dataset.from_tensor_slices((X, Y))
    input_queues = input_queues.shuffle(1000).batch(hp.batch_size, drop_remainder=True)
    return input_queues  # each element: ([batch_size, max_seq_len], [batch_size, max_seq_len])

transformer.py

This file is the core of the code: it implements the whole Transformer. For the theory, see 《Transformer学习总结附TF2.0代码实现》.

Positional encoding
import tensorflow as tf
import numpy as np

def positional_encoding(pos, d_model):
    '''
    :param pos: number of positions, i.e. the sentence-length dimension (i indexes the d_model dimension)
    :param d_model: dimensionality of the hidden state, i.e. num_units
    :return: positional encoding of shape [1, position_num, d_model]; the leading 1 broadcasts over batch_size
    '''
    def get_angles(position, i):
        # i corresponds to 2i or 2i+1 in the paper's formula
        # returns shape=[position_num, d_model]
        return position / np.power(10000., 2. * (i // 2.) / float(d_model))

    angle_rates = get_angles(np.arange(pos)[:, np.newaxis],
                             np.arange(d_model)[np.newaxis, :])
    # even indices (2i) are encoded with sin, odd indices (2i+1) with cos
    pe_sin = np.sin(angle_rates[:, 0::2])
    pe_cos = np.cos(angle_rates[:, 1::2])
    pos_encoding = np.concatenate([pe_sin, pe_cos], axis=-1)
    pos_encoding = tf.cast(pos_encoding[np.newaxis, ...], tf.float32)
    return pos_encoding
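
As a quick sanity check, a minimal sketch using the function above (the numbers are arbitrary):

pe = positional_encoding(50, 512)
print(pe.shape)   # (1, 50, 512): one broadcastable batch dimension, 50 positions, d_model=512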
Scaled dot-product attention
def scaled_dot_product_attention(q, k, v, mask):
    '''attention(Q, K, V) = softmax(Q * K^T / sqrt(dk)) * V'''
    # multiply query and key
    matmul_qk = tf.matmul(q, k, transpose_b=True)
    # scale by sqrt(dk)
    dk = tf.cast(tf.shape(q)[-1], tf.float32)
    scaled_attention = matmul_qk / tf.math.sqrt(dk)
    # apply the mask
    if mask is not None:
        # masked positions (mask=1, i.e. padding) get a large negative value (-1e9) added,
        # so that after the softmax their attention weight is practically 0
        scaled_attention += mask * -1e9
    # softmax to obtain the attention weights; masked positions become 0 after softmax
    attention_weights = tf.nn.softmax(scaled_attention)  # shape=[batch_size, seq_len_q, seq_len_k]
    # multiply by value
    outputs = tf.matmul(attention_weights, v)  # shape=[batch_size, seq_len_q, depth]
    return outputs, attention_weights
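
To see the effect of the mask, here is a toy check (a minimal sketch; the tensors are made up, with a batch of one and no head dimension):

q = tf.constant([[[1., 0.], [0., 1.]]])                 # shape=[1, 2, 2]
k = v = tf.constant([[[1., 0.], [0., 1.], [1., 1.]]])   # shape=[1, 3, 2]
mask = tf.constant([[[0., 0., 1.]]])                    # mask the 3rd key position (as for padding)
out, w = scaled_dot_product_attention(q, k, v, mask)
print(w)   # the attention weights in the last column are (almost) 0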
Multi-Head Attention

Multi-head attention consists of four parts: linear layers and splitting into heads; scaled dot-product attention; concatenation of the heads; a final linear layer.
Each multi-head attention block takes three inputs: Q (query), K (key) and V (value). They first go through linear layers and are then split into multiple heads.
Note: the scaled dot-product attention uses the mask, and the multi-head output is rearranged with tf.transpose before the heads are merged.
Instead of a single attention head, Q, K and V are split into several heads, which lets the model jointly attend to information from different representation subspaces. After the split each head has a reduced dimensionality, so the total computational cost is about the same as single-head attention with full dimensionality.

class MultiHeadAttention(tf.keras.layers.Layer):
    def __init__(self, d_model, num_heads):
        super(MultiHeadAttention, self).__init__()
        self.num_heads = num_heads
        self.d_model = d_model
        # d_model must be divisible by the number of heads
        assert d_model % num_heads == 0
        # dimensionality of each head after the split
        self.depth = d_model // num_heads
        self.wq = tf.keras.layers.Dense(d_model)
        self.wk = tf.keras.layers.Dense(d_model)
        self.wv = tf.keras.layers.Dense(d_model)
        self.dense = tf.keras.layers.Dense(d_model)

    def split_heads(self, x, batch_size):
        # split into heads and move the head dimension in front of seq_len; x has shape=[batch_size, seq_len, d_model]
        x = tf.reshape(x, [batch_size, -1, self.num_heads, self.depth])
        return tf.transpose(x, perm=[0, 2, 1, 3])

    def call(self, q, k, v, mask):
        batch_size = tf.shape(q)[0]
        # linear layers before the split: project q, k, v
        q = self.wq(q)  # shape=[batch_size, seq_len_q, d_model]
        k = self.wk(k)  # note: k and v go through their own projections wk and wv
        v = self.wv(v)
        # split into heads
        q = self.split_heads(q, batch_size)  # shape=[batch_size, num_heads, seq_len_q, depth]
        k = self.split_heads(k, batch_size)
        v = self.split_heads(v, batch_size)
        # scaled dot-product attention
        # scaled_attention shape=[batch_size, num_heads, seq_len_q, depth]
        # attention_weights shape=[batch_size, num_heads, seq_len_q, seq_len_k]
        scaled_attention, attention_weights = scaled_dot_product_attention(q, k, v, mask)
        # move the head dimension back
        scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3]) # shape=[batch_size, seq_len_q, num_heads, depth]
        # merge the heads
        concat_attention = tf.reshape(scaled_attention, (batch_size, -1, self.d_model)) # shape=[batch_size, seq_len_q, d_model]
        # final linear layer
        output = self.dense(concat_attention)
        return output, attention_weights
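
A shape check for the layer defined above (a minimal sketch with random inputs; the numbers are arbitrary):

mha = MultiHeadAttention(d_model=512, num_heads=8)
x = tf.random.uniform((4, 20, 512))   # [batch_size, seq_len, d_model]
out, attn = mha(x, x, x, mask=None)
print(out.shape)    # (4, 20, 512)
print(attn.shape)   # (4, 8, 20, 20) = [batch_size, num_heads, seq_len_q, seq_len_k]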
LayerNormalization
class LayerNormalization(tf.keras.layers.Layer):
    def __init__(self, epsilon=1e-8, **kwargs):
        super(LayerNormalization, self).__init__(**kwargs)
        self.epsilon = epsilon
    def build(self, input_shape):
        self.gamma = self.add_weight(name='gamma',
                                     shape=input_shape[-1:],
                                     initializer=tf.ones_initializer(),
                                     trainable=True)
        self.beta = self.add_weight(name='beta',
                                    shape=input_shape[-1:],
                                    initializer=tf.zeros_initializer(),
                                    trainable=True)
        super(LayerNormalization, self).build(input_shape)
    def call(self, x): # x shape=[batch_size, seq_len, d_model]
        mean = tf.keras.backend.mean(x, axis=-1, keepdims=True)
        std = tf.keras.backend.std(x, axis=-1, keepdims=True)
        return self.gamma * (x - mean) / (std + self.epsilon) + self.beta
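
A quick check that the layer normalizes over the last axis (a minimal sketch with random input):

ln = LayerNormalization()
x = tf.random.uniform((2, 5, 8)) * 10.        # [batch_size, seq_len, d_model]
y = ln(x)
print(tf.reduce_mean(y, axis=-1))             # approximately 0 for every position
print(tf.math.reduce_std(y, axis=-1))         # approximately 1 (up to the epsilon term)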
Point-wise feed-forward network
def point_wise_feed_forward(d_model, dff):
    return tf.keras.Sequential([
        tf.keras.layers.Dense(dff, activation=tf.nn.relu),
        tf.keras.layers.Dense(d_model)
    ])
Encoder

Encoder layer:
Each encoder layer contains the following sub-layers: multi-head attention (with a padding mask) and a point-wise feed-forward network.
Each sub-layer has a residual connection around it, followed by a layer normalization. Residual connections help avoid the vanishing-gradient problem in deep networks.
The output of each sub-layer is LayerNorm(x + Sublayer(x)); normalization is done over the d_model-dimensional vectors. The Transformer stacks num_layers such encoder layers.

class EncoderLayer(tf.keras.layers.Layer):
    def __init__(self, d_model, num_heads, dff, dropout_rate=0.1):
        super(EncoderLayer, self).__init__()
        self.mha = MultiHeadAttention(d_model, num_heads)
        self.ffn = point_wise_feed_forward(d_model, dff)
        self.layernorm1 = LayerNormalization()
        self.layernorm2 = LayerNormalization()
        self.dropout1 = tf.keras.layers.Dropout(dropout_rate)
        self.dropout2 = tf.keras.layers.Dropout(dropout_rate)
    def call(self, inputs, training, mask):
        # multi-head attention (in the encoder, Q = K = V)
        att_output, _ = self.mha(inputs, inputs, inputs, mask)
        att_output = self.dropout1(att_output, training=training)
        output1 = self.layernorm1(inputs + att_output)  # shape=[batch_size, seq_len, d_model]
        # feed forward network
        ffn_output = self.ffn(output1)
        ffn_output = self.dropout2(ffn_output, training=training)
        output2 = self.layernorm2(output1 + ffn_output)  # shape=[batch_size, seq_len, d_model]
        return output2

class Encoder(tf.keras.layers.Layer):
    def __init__(self, d_model, num_layers, num_heads, dff,
                 input_vocab_size, max_seq_len, dropout_rate=0.1):
        super(Encoder, self).__init__()
        self.num_layers = num_layers
        self.d_model = d_model
        self.emb = tf.keras.layers.Embedding(input_vocab_size, d_model)  # shape=[batch_size, seq_len, d_model]
        self.pos_encoding = positional_encoding(max_seq_len, d_model)  # shape=[1, max_seq_len, d_model]
        self.encoder_layer = [EncoderLayer(d_model, num_heads, dff, dropout_rate)
                              for _ in range(num_layers)]
        self.dropout = tf.keras.layers.Dropout(dropout_rate)
    def call(self, inputs, training, mask):
        # inputs shape=[batch_size, seq_len]
        seq_len = inputs.shape[1]  # actual sequence length
        word_embedding = self.emb(inputs)  # shape=[batch_size, seq_len, d_model]
        word_embedding *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
        emb = word_embedding + self.pos_encoding[:, :seq_len, :]
        x = self.dropout(emb, training=training)
        for i in range(self.num_layers):
            x = self.encoder_layer[i](x, training, mask)
        return x  # shape=[batch_size, seq_len, d_model]
Masking
# padding mask
def create_padding_mask(seq):
    '''To keep padded tokens from influencing the sentence representation, the padding positions
    must be masked out: positions that are 0 (padding) in the input produce a mask value of 1.
    Used in both the encoder and the decoder.'''
    seq = tf.cast(tf.math.equal(seq, 0), tf.float32)
    # add dimensions so the mask broadcasts against the attention matrix;
    # input shape=[batch_size, seq_len]; output shape=[batch_size, 1, 1, seq_len]
    return seq[:, np.newaxis, np.newaxis, :]

# look-ahead mask
def create_look_ahead_mask(size):
    '''Masks the tokens that have not been predicted yet: to predict the third word only the
    first and second words may be used; to predict the fourth word only the first three, and so on.
    Only used in the decoder.'''
    # build a strictly upper-triangular matrix whose upper-triangle entries are 1;
    # applying it to each sequence masks out all future positions
    mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
    return mask  # shape=[seq_len, seq_len]

def create_mask(inputs, targets):
    # the encoder only needs the padding mask
    encoder_padding_mask = create_padding_mask(inputs)
    # decoder_padding_mask, used by the second multi-head attention (over the encoder output)
    decoder_padding_mask = create_padding_mask(inputs)
    # look-ahead mask over the target, masking the not-yet-predicted words
    seq_mask = create_look_ahead_mask(tf.shape(targets)[1])
    # padding mask over the decoder input
    decoder_targets_padding_mask = create_padding_mask(targets)
    # combine the two decoder masks for the first (masked) multi-head attention
    look_ahead_mask = tf.maximum(decoder_targets_padding_mask, seq_mask)
    return encoder_padding_mask, look_ahead_mask, decoder_padding_mask
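
For intuition, here is what the two masks look like on toy inputs (a minimal sketch; 1 marks masked positions):

print(create_look_ahead_mask(4))
# [[0. 1. 1. 1.]
#  [0. 0. 1. 1.]
#  [0. 0. 0. 1.]
#  [0. 0. 0. 0.]]
print(create_padding_mask(tf.constant([[5, 3, 0, 0]])))
# [[[[0. 0. 1. 1.]]]]   shape=[1, 1, 1, 4]: the two trailing <PAD> positions are masked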
Decoder

Decoder layer:
Each decoder layer contains the following sub-layers:

  • Masked multi-head attention (with a padding mask and a look-ahead mask)
  • Multi-head attention (with a padding mask), where value and key come from the encoder output
    and query comes from the output of the masked multi-head attention layer
  • Point-wise feed-forward network

Each sub-layer has a residual connection around it, followed by a layer normalization; the output of each sub-layer is LayerNorm(x + Sublayer(x)), normalized over the d_model-dimensional vectors. The Transformer stacks num_layers such decoder layers. Residual connections help avoid the vanishing-gradient problem in deep networks.
When Q receives the output of the decoder's first attention block and K receives the encoder output, the attention weights express how important each encoder position is for the decoder input. In other words, the decoder predicts the next word by looking at the encoder output and self-attending to its own output.
Note: because padding comes at the end of a sentence, the combined look-ahead mask also covers the padding.
class DecoderLayer(tf.keras.layers.Layer):
    def __init__(self, d_model, num_heads, dff, dropout_rate=0.1):
        super(DecoderLayer, self).__init__()
        self.mha1 = MultiHeadAttention(d_model, num_heads)
        self.mha2 = MultiHeadAttention(d_model, num_heads)
        self.ffn = point_wise_feed_forward(d_model, dff)
        self.layernorm1 = LayerNormalization()
        self.layernorm2 = LayerNormalization()
        self.layernorm3 = LayerNormalization()
        self.dropout1 = tf.keras.layers.Dropout(dropout_rate)
        self.dropout2 = tf.keras.layers.Dropout(dropout_rate)
        self.dropout3 = tf.keras.layers.Dropout(dropout_rate)
    def call(self, inputs, encoder_out, training, look_ahead_mask, padding_mask):
        # masked multi-head attention: Q = K = V
        att_out1, att_weight1 = self.mha1(inputs, inputs, inputs, look_ahead_mask)
        att_out1 = self.dropout1(att_out1, training=training)
        att_out1 = self.layernorm1(inputs + att_out1)
        # multi-head attention: Q=att_out1, K = V = encoder_out
        att_out2, att_weight2 = self.mha2(att_out1, encoder_out, encoder_out, padding_mask)
        att_out2 = self.dropout2(att_out2, training=training)
        att_out2 = self.layernorm2(att_out1 + att_out2)
        # feed forward network
        ffn_out = self.ffn(att_out2)
        ffn_out = self.dropout3(ffn_out, training=training)
        output = self.layernorm3(att_out2 + ffn_out)
        return output, att_weight1, att_weight2

class Decoder(tf.keras.layers.Layer):
    def __init__(self, d_model, num_layers, num_heads, dff,
                 target_vocab_size, max_seq_len, dropout_rate=0.1):
        super(Decoder, self).__init__()
        self.d_model = d_model
        self.num_layers = num_layers
        self.word_embedding = tf.keras.layers.Embedding(target_vocab_size, d_model)
        self.pos_encoding = positional_encoding(max_seq_len, d_model)
        self.decoder_layers = [DecoderLayer(d_model, num_heads, dff, dropout_rate)
                               for _ in range(num_layers)]
        self.dropout = tf.keras.layers.Dropout(dropout_rate)
    def call(self, inputs, encoder_out, training, look_ahead_mask, padding_mask):
        seq_len = inputs.shape[1]
        attention_weights = {}
        word_embedding = self.word_embedding(inputs)
        word_embedding *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
        emb = word_embedding + self.pos_encoding[:, :seq_len, :]
        x = self.dropout(emb, training=training)
        for i in range(self.num_layers):
            x, att1, att2 = self.decoder_layers[i](x, encoder_out, training,
                                                   look_ahead_mask, padding_mask)
            attention_weights['decoder_layer{}_att_w1'.format(i+1)] = att1
            attention_weights['decoder_layer{}_att_w2'.format(i + 1)] = att2
        return x, attention_weights
Transformer

The Transformer consists of the encoder, the decoder and a final linear layer; the decoder output is passed through the linear layer to obtain the Transformer's output.

class Transformer(tf.keras.Model):
    def __init__(self, d_model, num_layers, num_heads, dff,
                 input_vocab_size, target_vocab_size, max_seq_len, dropout_rate=0.1):
        super(Transformer, self).__init__()
        self.encoder = Encoder(d_model, num_layers, num_heads, dff, input_vocab_size, max_seq_len, dropout_rate)
        self.decoder = Decoder(d_model, num_layers, num_heads, dff, target_vocab_size, max_seq_len, dropout_rate)
        self.final_layer = tf.keras.layers.Dense(target_vocab_size)
    def call(self, inputs, targets, training, encoder_padding_mask,
             look_ahead_mask, decoder_padding_mask):
        # first the encoder; output shape=[batch_size, seq_len_input, d_model]
        encoder_output = self.encoder(inputs, training, encoder_padding_mask)
        # then the decoder; output shape=[batch_size, seq_len_target, d_model]
        decoder_output, att_weights = self.decoder(targets, encoder_output, training,
                                                   look_ahead_mask, decoder_padding_mask)
        # finally, project to the output vocabulary
        final_out = self.final_layer(decoder_output) # shape=[batch_size, seq_len_target, target_vocab_size]
        return final_out, att_weights

That completes the Transformer model itself!
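
Before training, it is worth checking that everything wires together. A minimal sketch with a tiny configuration and random token ids (sample_transformer, temp_inputs and temp_targets are illustrative names, and all the numbers are arbitrary):

sample_transformer = Transformer(d_model=128, num_layers=2, num_heads=8, dff=512,
                                 input_vocab_size=8500, target_vocab_size=8000,
                                 max_seq_len=40)
temp_inputs = tf.random.uniform((64, 38), minval=0, maxval=8500, dtype=tf.int64)
temp_targets = tf.random.uniform((64, 36), minval=0, maxval=8000, dtype=tf.int64)
out, _ = sample_transformer(temp_inputs, temp_targets, training=False,
                            encoder_padding_mask=None, look_ahead_mask=None,
                            decoder_padding_mask=None)
print(out.shape)  # (64, 36, 8000) = [batch_size, seq_len_target, target_vocab_size]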

train.py

Finally, build the training procedure.
Step 1: define the learning-rate schedule and the optimizer.

import tensorflow as tf
from hyperparams import HyperParams as hp
from data_load import get_batch_data, load_de_vocab, load_en_vocab
from transformer import *
import time

'''Learning-rate schedule: lrate = d_model^-0.5 * min(step_num^-0.5, step_num * warmup_steps^-1.5)'''
class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
    def __init__(self, d_model, warmup_steps=4000):
        super(CustomSchedule, self).__init__()
        self.d_model = tf.cast(d_model, tf.float32)
        self.warmup_steps = warmup_steps
    def __call__(self, step):
        arg1 = tf.math.rsqrt(step)
        arg2 = step * (self.warmup_steps ** -1.5)
        return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)

# learning rate
lr = CustomSchedule(hp.d_model)
# optimizer
optimizer = tf.keras.optimizers.Adam(lr, beta_1=0.9, beta_2=0.98, epsilon=1e-8)
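
A quick look at the schedule (a minimal sketch; temp_lr and the step values are illustrative): the learning rate grows roughly linearly during the first warmup_steps and then decays as the inverse square root of the step number.

temp_lr = CustomSchedule(hp.d_model)
for step in [1.0, 1000.0, 4000.0, 20000.0]:
    print(step, float(temp_lr(tf.constant(step))))
# the values rise until step 4000 (the warmup) and decrease afterwards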

Step 2: define the loss and accuracy metrics.

'''Because the target sequences are padded, it is important to apply a padding mask when computing the loss:
padding positions get mask 0, real tokens get mask 1.'''
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction='none')
def loss_fun(y_true, y_pred):
    mask = tf.math.logical_not(tf.math.equal(y_true, 0))
    loss_ = loss_object(y_true, y_pred)
    mask = tf.cast(mask, dtype=loss_.dtype)
    loss_ *= mask
    # average only over the non-padding positions
    return tf.reduce_sum(loss_) / tf.reduce_sum(mask)
# metrics for tracking loss and accuracy
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_acc = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
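
To see the masking in action, a toy example (a minimal sketch; the logits are random, the vocabulary size of 10 is arbitrary, and the trailing zeros in y_true stand for padding):

y_true = tf.constant([[3, 1, 0, 0]])         # last two positions are <PAD>
y_pred = tf.random.uniform((1, 4, 10))       # [batch_size, seq_len, vocab_size] logits
print(float(loss_fun(y_true, y_pred)))       # only the first two positions contribute to the loss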

Step 3: build the model and create the checkpoint manager.

de2index, index2de = load_de_vocab()
en2index, index2en = load_en_vocab()
input_vocab_size = len(de2index)
target_vocab_size = len(en2index)

transformer = Transformer(hp.d_model,
                          hp.num_layers,
                          hp.num_heads,
                          hp.dff,
                          input_vocab_size,
                          target_vocab_size,
                          hp.max_seq_len,
                          hp.dropout_rate)

# create the checkpoint manager
ckpt = tf.train.Checkpoint(transformer=transformer,
                           optimizer=optimizer)
ckpt_manager = tf.train.CheckpointManager(ckpt,
                                          hp.ckpt_path,
                                          max_to_keep=3)
if ckpt_manager.latest_checkpoint:
    ckpt.restore(ckpt_manager.latest_checkpoint)
    print('Latest checkpoint restored')

Step 4: set up the training step.
The target is split into target_input and target_real. target_input is fed to the decoder, and target_real is the same sequence shifted one position to the left, so every position of target_input is paired with the next token that should be predicted.
For example, for the sentence "SOS A lion in the jungle is sleeping EOS":
target_input = "SOS A lion in the jungle is sleeping"
target_real = "A lion in the jungle is sleeping EOS"
The Transformer is an auto-regressive model: it predicts one token at a time and uses its output so far to decide the next step.
During training we use teacher forcing: regardless of what the model currently predicts, the correct tokens are fed to the next step. At inference time, the next word is instead predicted from the model's own previous outputs.
To keep the model from peeking at the expected output, the look-ahead mask is applied.

@tf.function
def train_step(inputs, targets):
    tar_inp = targets[:, :-1]
    tar_real = targets[:, 1:]
    # build the masks
    encoder_padding_mask, look_ahead_mask, decoder_padding_mask = create_mask(inputs, tar_inp)

    with tf.GradientTape() as tape:
        pred, _ = transformer(inputs,
                              tar_inp,
                              True,
                              encoder_padding_mask,
                              look_ahead_mask,
                              decoder_padding_mask)
        loss = loss_fun(tar_real, pred)
    # compute the gradients (outside the tape context)
    gradients = tape.gradient(loss, transformer.trainable_variables)
    # back-propagate
    optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))
    # record loss and accuracy
    train_loss(loss)
    train_acc(tar_real, pred)

Finally, train the model:

for epoch in range(hp.EPOCHS):
    start_time = time.time()
    # reset the metrics at the start of every epoch
    train_loss.reset_states()
    train_acc.reset_states()
    for step, (inputs, targets) in enumerate(get_batch_data()):
        train_step(inputs, targets)
        if step % 10 == 0:
            print(' epoch{},step:{}, loss:{:.4f}, acc:{:.4f}'.format(
                epoch, step, train_loss.result(), train_acc.result()
            ))
    if epoch % 2 == 0:
        ckpt_save_path = ckpt_manager.save()
        print('epoch{}, save model at {}'.format(epoch, ckpt_save_path))
    print('epoch:{}, loss:{:.4f}, acc:{:.4f}'.format(epoch, train_loss.result(), train_acc.result()))
    print('time in one epoch:{}'.format(time.time() - start_time))

Full code on GitHub: https://github.com/yuenoble/Machine-Translation-by-Transformer
