A walkthrough of the BERT source code on GitHub
BERT GitHub link: https://github.com/google-research/bert/tree/master
Baidu Netdisk link for the source code adapted to run on Windows, plus the data: https://pan.baidu.com/s/1APk9EIh_wuU41fMHSMz3Pg?pwd=s2k7
Extraction code: s2k7
python create_pretraining_data.py \
--input_file=./sample_text.txt \
--output_file=../GLUE/chineseoutput/tf_examples.tfrecord \
--vocab_file=../GLUE/BERT_BASE_DIR/uncased_L-12_H-768_A-12/vocab.txt \
--do_lower_case=True \
--max_seq_length=128 \
--max_predictions_per_seq=20 \
--masked_lm_prob=0.15 \
--random_seed=12345 \
--dupe_factor=5
The input text format is one sentence per line, with documents separated by blank lines. When a blank line is read, line.strip() leaves it empty, and a new (empty) list for the next document is appended via all_documents.append([]):
with tf.gfile.GFile(input_file, "r") as reader:
  while True:
    line = tokenization.convert_to_unicode(reader.readline())
    if not line:
      break
    line = line.strip()

    # Empty lines are used as document delimiters:
    # a blank line starts a new (empty) document list.
    if not line:
      all_documents.append([])
    tokens = tokenizer.tokenize(line)
    if tokens:
      all_documents[-1].append(tokens)
Target sequence length: max_num_tokens reserves three positions for [CLS] and the two [SEP] tokens, and with probability short_seq_prob a shorter target length is sampled so that pre-training also sees short sequences (reducing the mismatch with fine-tuning inputs):

# Account for [CLS], [SEP], [SEP]
max_num_tokens = max_seq_length - 3
target_seq_length = max_num_tokens
if rng.random() < short_seq_prob:
  target_seq_length = rng.randint(2, max_num_tokens)
Masking mechanism
for index in index_set:
  covered_indexes.add(index)

  masked_token = None
  # 80% of the time, replace with [MASK]
  if rng.random() < 0.8:
    masked_token = "[MASK]"
  else:
    # 10% of the time, keep original
    if rng.random() < 0.5:
      masked_token = tokens[index]
    # 10% of the time, replace with random word
    else:
      masked_token = vocab_words[rng.randint(0, len(vocab_words) - 1)]

  output_tokens[index] = masked_token
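The nested branches give an overall 80% [MASK] / 10% keep / 10% random split. A quick standalone check (using Python's random module rather than the repo's rng) illustrates this:

import random

rng = random.Random(12345)
counts = {"mask": 0, "keep": 0, "random": 0}
trials = 100000
for _ in range(trials):
    if rng.random() < 0.8:
        counts["mask"] += 1        # replaced with [MASK]
    elif rng.random() < 0.5:
        counts["keep"] += 1        # original token kept
    else:
        counts["random"] += 1      # replaced with a random vocab word
print({k: round(v / trials, 3) for k, v in counts.items()})
# roughly {'mask': 0.8, 'keep': 0.1, 'random': 0.1}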
Pre-training script:
python run_pretraining.py \
--input_file=../GLUE/chineseoutput/tf_examples.tfrecord \
--output_dir=../GLUE/pretraining_output \
--do_train=True \
--do_eval=True \
--bert_config_file=../GLUE/BERT_BASE_DIR/uncased_L-12_H-768_A-12/bert_config.json \
--init_checkpoint=../GLUE/BERT_BASE_DIR/uncased_L-12_H-768_A-12/bert_model.ckpt \
--train_batch_size=32 \
--max_seq_length=128 \
--max_predictions_per_seq=20 \
--num_train_steps=20 \
--num_warmup_steps=10 \
--learning_rate=2e-5
The parsed features for one batch (batch_size=32, max_seq_length=128, max_predictions_per_seq=20):
name = input_ids, shape = (32, 128)
name = input_mask, shape = (32, 128)
name = masked_lm_ids, shape = (32, 20)
name = masked_lm_positions, shape = (32, 20)
name = masked_lm_weights, shape = (32, 20)
name = next_sentence_labels, shape = (32, 1)
name = segment_ids, shape = (32, 128)
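These names and per-example shapes match the fixed-length feature spec that run_pretraining.py uses to decode the TFRecord (the batch dimension of 32 is added later by the input pipeline); roughly:

import tensorflow as tf  # TF 1.x, as used by the BERT repo

max_seq_length = 128
max_predictions_per_seq = 20

# Per-example feature spec for parsing each tf.Example from the TFRecord.
name_to_features = {
    "input_ids": tf.FixedLenFeature([max_seq_length], tf.int64),
    "input_mask": tf.FixedLenFeature([max_seq_length], tf.int64),
    "segment_ids": tf.FixedLenFeature([max_seq_length], tf.int64),
    "masked_lm_positions": tf.FixedLenFeature([max_predictions_per_seq], tf.int64),
    "masked_lm_ids": tf.FixedLenFeature([max_predictions_per_seq], tf.int64),
    "masked_lm_weights": tf.FixedLenFeature([max_predictions_per_seq], tf.float32),
    "next_sentence_labels": tf.FixedLenFeature([1], tf.int64),
}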
Embedding layer
embedding_lookup
① Word embedding
Output embedding_table, shape = [30522, 768]
Output embedding_output, shape = [32, 128, 768]
② embedding_postprocessor
Adds the position embeddings and the token_type_ids (segment) embeddings:
token_type_ids embedding table, shape = [2, 768]
position embedding table, shape = [512, 768]
The embedding layer output is word embedding + token_type_ids embedding + position embedding (followed by layer norm and dropout), shape = [32, 128, 768]
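A minimal NumPy sketch of how the three embeddings combine into the [32, 128, 768] output (random stand-in tables; the real embedding_lookup/embedding_postprocessor use trained variables and finish with layer norm and dropout):

import numpy as np

batch_size, seq_len, hidden = 32, 128, 768
vocab_size, type_vocab_size, max_position = 30522, 2, 512

rng = np.random.default_rng(0)
# Random stand-ins for the trained embedding tables.
word_table = rng.normal(scale=0.02, size=(vocab_size, hidden)).astype(np.float32)
token_type_table = rng.normal(scale=0.02, size=(type_vocab_size, hidden)).astype(np.float32)
position_table = rng.normal(scale=0.02, size=(max_position, hidden)).astype(np.float32)

input_ids = rng.integers(0, vocab_size, size=(batch_size, seq_len))  # token ids
token_type_ids = np.zeros((batch_size, seq_len), dtype=np.int64)     # all segment A here

word_emb = word_table[input_ids]                   # [32, 128, 768]
type_emb = token_type_table[token_type_ids]        # [32, 128, 768]
pos_emb = position_table[np.newaxis, :seq_len, :]  # [1, 128, 768], broadcast over the batch

embedding_output = word_emb + type_emb + pos_emb   # [32, 128, 768]
print(embedding_output.shape)                      # (32, 128, 768)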
Transformer layer
Input: the embedding layer output, shape = [32, 128, 768]
Output: the result after passing through the transformer layers, shape = [32, 128, 768]
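A NumPy shape sketch of one self-attention sub-layer in BERT-base (12 heads × 64 dims; random weights, with the residual connections, layer norm, and the 3072-dim feed-forward sub-layer of the real transformer_model omitted):

import numpy as np

batch, seq_len, hidden = 32, 128, 768
num_heads, head_size = 12, 64   # 12 * 64 = 768

rng = np.random.default_rng(0)
x = rng.normal(size=(batch, seq_len, hidden)).astype(np.float32)
w_q, w_k, w_v, w_o = (rng.normal(scale=0.02, size=(hidden, hidden)).astype(np.float32)
                      for _ in range(4))

def split_heads(t):
    # [batch, seq, hidden] -> [batch, heads, seq, head_size]
    return t.reshape(batch, seq_len, num_heads, head_size).transpose(0, 2, 1, 3)

q, k, v = split_heads(x @ w_q), split_heads(x @ w_k), split_heads(x @ w_v)
scores = q @ k.transpose(0, 1, 3, 2) / np.sqrt(head_size)   # [32, 12, 128, 128]
probs = np.exp(scores - scores.max(-1, keepdims=True))
probs /= probs.sum(-1, keepdims=True)                       # softmax over the key axis
context = (probs @ v).transpose(0, 2, 1, 3).reshape(batch, seq_len, hidden)
attention_output = context @ w_o                             # [32, 128, 768]
print(attention_output.shape)                                # (32, 128, 768)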
Pooled layer
Take the first position along the sequence dimension of the transformer output, i.e. the [CLS] token,
then pass it through a dense layer; output shape = [32, 768].
first_token_tensor = tf.squeeze(self.sequence_output[:, 0:1, :], axis=1)
self.pooled_output = tf.layers.dense(
    first_token_tensor,
    config.hidden_size,
    activation=tf.tanh,
    kernel_initializer=create_initializer(config.initializer_range))
get_masked_lm_output
① Gather the hidden states at the masked positions
name = masked_lm_positions, shape = (32, 20)
The input is the final transformer layer output, shape = [32, 128, 768].
Using the masked-position indices, gather the hidden states at those positions, shape = [32, 20, 768], then flatten them to shape = [640, 768].
② Pass the [640, 768] tensor through a dense layer and layer normalization; output shape = [640, 768].
③ Multiply the result by the (transposed) word embedding table to get logits, shape = [640, 30522]; a softmax then gives the probability distribution over the vocabulary.
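A NumPy shape sketch of steps ①–③ (random stand-in weights; the real get_masked_lm_output also applies GELU after the dense layer, adds a per-token output bias, and uses log_softmax):

import numpy as np

batch, seq_len, hidden = 32, 128, 768
num_masked, vocab_size = 20, 30522

rng = np.random.default_rng(0)
sequence_output = rng.normal(size=(batch, seq_len, hidden)).astype(np.float32)
masked_lm_positions = rng.integers(0, seq_len, size=(batch, num_masked))
embedding_table = rng.normal(scale=0.02, size=(vocab_size, hidden)).astype(np.float32)

# (1) Gather hidden states at the masked positions and flatten: [32, 20, 768] -> [640, 768].
masked_states = np.take_along_axis(
    sequence_output, masked_lm_positions[:, :, np.newaxis], axis=1)  # [32, 20, 768]
flat = masked_states.reshape(-1, hidden)                             # [640, 768]

# (2) Dense layer + simplified layer norm (no gain/bias).
w_dense = rng.normal(scale=0.02, size=(hidden, hidden)).astype(np.float32)
h = flat @ w_dense                                                   # [640, 768]
h = (h - h.mean(-1, keepdims=True)) / np.sqrt(h.var(-1, keepdims=True) + 1e-12)

# (3) Multiply by the transposed embedding table and softmax over the vocabulary.
logits = h @ embedding_table.T                                       # [640, 30522]
probs = np.exp(logits - logits.max(-1, keepdims=True))
probs /= probs.sum(-1, keepdims=True)
print(flat.shape, logits.shape)                                      # (640, 768) (640, 30522)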
get_next_sentence_output
The input is the [CLS] representation, i.e. the pooled layer output above, shape = [32, 768].
It is multiplied by a weight matrix of shape [2, 768] (transposed, plus a bias), giving logits of shape = [32, 2];
a softmax then gives the is-next / not-next classification probabilities, shape = [32, 2].
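A NumPy sketch of that shape flow (random stand-in weights; get_next_sentence_output stores a [2, 768] weight matrix and multiplies by its transpose):

import numpy as np

batch, hidden, num_classes = 32, 768, 2
rng = np.random.default_rng(0)
pooled_output = rng.normal(size=(batch, hidden)).astype(np.float32)
output_weights = rng.normal(scale=0.02, size=(num_classes, hidden)).astype(np.float32)
output_bias = np.zeros(num_classes, dtype=np.float32)

logits = pooled_output @ output_weights.T + output_bias               # [32, 2]
shifted = logits - logits.max(-1, keepdims=True)
log_probs = shifted - np.log(np.exp(shifted).sum(-1, keepdims=True))  # log_softmax, [32, 2]
print(logits.shape, log_probs.shape)                                  # (32, 2) (32, 2)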
The loss is cross-entropy for both tasks, then summed:

# Masked LM loss (get_masked_lm_output): negative log-likelihood at each masked
# position, weighted by masked_lm_weights (0 for padding predictions) and averaged.
per_example_loss = -tf.reduce_sum(log_probs * one_hot_labels, axis=[-1])
numerator = tf.reduce_sum(label_weights * per_example_loss)
denominator = tf.reduce_sum(label_weights) + 1e-5
masked_lm_loss = numerator / denominator

# Next-sentence-prediction loss (get_next_sentence_output): two-class cross-entropy.
log_probs = tf.nn.log_softmax(logits, axis=-1)
labels = tf.reshape(labels, [-1])
one_hot_labels = tf.one_hot(labels, depth=2, dtype=tf.float32)
per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
next_sentence_loss = tf.reduce_mean(per_example_loss)

total_loss = masked_lm_loss + next_sentence_loss
To be continued...