A TensorFlow implementation of skip-gram

The word2vec model comes in two flavors, skip-gram and cbow: skip-gram predicts the context words given the center word (target), while cbow predicts the center word (target) given its context.
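
Before getting to the TensorFlow code, the following minimal sketch (plain Python; the toy sentence and the window size of 2 are illustrative assumptions) shows how skip-gram (target, context) training pairs are typically generated from a sentence:

# A minimal sketch (illustrative only, not part of the implementation below):
# generate skip-gram (target, context) training pairs from a tokenized
# sentence, assuming a symmetric window of size 2.
def generate_pairs(tokens, window=2):
    pairs = []
    for i, target in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs

print(generate_pairs(['the', 'quick', 'brown', 'fox', 'jumps']))
# [('the', 'quick'), ('the', 'brown'), ('quick', 'the'), ('quick', 'brown'), ...]
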
This article describes a TensorFlow implementation of the skip-gram model with negative sampling. The main code is as follows:

import tensorflow as tf

def build_model(BATCH_SIZE, VOCAB_SIZE, EMBED_SIZE, NUM_SAMPLED, NUM_TRUE=1):
    '''
    Build the model (i.e. computational graph) and return the placeholders (input and output) and the loss 
    '''
    with tf.name_scope('data'):
        target_node = tf.placeholder(tf.int32, shape=[BATCH_SIZE], name='target_node')
        context_node = tf.placeholder(tf.int32, shape=[BATCH_SIZE, NUM_TRUE], name='context_node')
        negative_samples = (tf.placeholder(tf.int32, shape=[NUM_SAMPLED], name='negative_samples'),
            tf.placeholder(tf.float32, shape=[BATCH_SIZE, NUM_TRUE], name='true_expected_count'),
            tf.placeholder(tf.float32, shape=[NUM_SAMPLED], name='sampled_expected_count'))
    with tf.name_scope('target_embedding_matrix'):
        target_embed_matrix = tf.Variable(tf.random_uniform([VOCAB_SIZE, EMBED_SIZE], -1.0, 1.0), 
                            name='target_embed_matrix')
    # define the inference
    with tf.name_scope('loss'):
        target_embed = tf.nn.embedding_lookup(target_embed_matrix, target_node, name='embed')
        # nce_weight: context_embed
        nce_weight = tf.Variable(tf.truncated_normal([VOCAB_SIZE, EMBED_SIZE],
                                                    stddev=1.0 / (EMBED_SIZE ** 0.5)), 
                                                    name='nce_weight')
        nce_bias = tf.Variable(tf.zeros([VOCAB_SIZE]), name='nce_bias')
        loss = tf.reduce_mean(tf.nn.nce_loss(weights=nce_weight, 
                                            biases=nce_bias, 
                                            labels=context_node, 
                                            inputs=target_embed,
                                            sampled_values = negative_samples, 
                                            num_sampled=NUM_SAMPLED, 
                                            num_classes=VOCAB_SIZE), name='loss')

        loss_summary = tf.summary.scalar("loss_summary", loss)

    return target_node, context_node, negative_samples, loss
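
For reference, here is a minimal training-loop sketch that continues from build_model above. The hyperparameter values and the batch_gen generator (assumed to yield target indices, context indices and pre-computed sampler outputs) are hypothetical, not part of the original code:

target_node, context_node, negative_samples, loss = build_model(
    BATCH_SIZE=128, VOCAB_SIZE=10000, EMBED_SIZE=128, NUM_SAMPLED=64)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=1.0).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # batch_gen is a hypothetical generator yielding
    # (targets, contexts, (sampled_candidates, true_expected_count, sampled_expected_count))
    for step, (targets, contexts, sampled) in enumerate(batch_gen):
        _, batch_loss = sess.run(
            [optimizer, loss],
            feed_dict={target_node: targets,
                       context_node: contexts,
                       negative_samples[0]: sampled[0],
                       negative_samples[1]: sampled[1],
                       negative_samples[2]: sampled[2]})
        if step % 1000 == 0:
            print('step {}, loss {:.4f}'.format(step, batch_loss))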

tf.nn.nce_loss

tf.nn.nce_loss computes and returns the NCE loss (noise-contrastive estimation training loss); we use it to implement negative sampling in the skip-gram model. The objective to maximize for each (target, context) pair is
$$\log p(w_O|w_I)=\log\sigma({v^{'}}_{w_O}^\top v_{w_I})+\sum_{i=1}^k \mathbb{E}_{w_i\sim P_n(w)}[\log \sigma(-{v^{'}}_{w_i}^\top v_{w_I})]$$

def nce_loss(weights,
             biases,
             labels,
             inputs,
             num_sampled,
             num_classes,
             num_true=1,
             sampled_values=None,
             remove_accidental_hits=False,
             partition_strategy="mod",
             name="nce_loss"):
  logits, labels = _compute_sampled_logits(
      weights=weights,
      biases=biases,
      labels=labels,
      inputs=inputs,
      num_sampled=num_sampled,
      num_classes=num_classes,
      num_true=num_true,
      sampled_values=sampled_values,
      subtract_log_q=True,
      remove_accidental_hits=remove_accidental_hits,
      partition_strategy=partition_strategy,
      name=name)
  sampled_losses = sigmoid_cross_entropy_with_logits(
      labels=labels, logits=logits, name="sampled_losses")
  return _sum_rows(sampled_losses)

Here, weights is a tensor of shape [num_classes, dim], or a list of tensors whose concatenation has shape [num_classes, dim]. num_classes is the total number of classes, which in word2vec is the number of words in the vocabulary, and dim is the embedding dimension. weights corresponds to ${v^{'}}_w$ in the formula, i.e. the context embeddings;

biases is a tensor of shape [num_classes], the bias term;

labels is a tensor of shape [batch_size, num_true], where num_true is the number of positive (context) classes per training example; it defaults to 1 and is fixed at 1 in word2vec. labels holds the vocabulary indices of the context words of each target word;

inputs is a tensor of shape [batch_size, dim], corresponding to $v_w$ in the formula, i.e. the target embeddings;

num_sampled, an int, is the number of classes randomly sampled as negatives for each batch (the sampled negatives are shared by all examples in the batch);

sampled_values is a 3-tuple (sampled_candidates, true_expected_count, sampled_expected_count); if sampled_values=None, the 3-tuple returned by tf.nn.log_uniform_candidate_sampler is used by default. A minimal call with the default sampler is sketched below.
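
As a minimal sketch of a direct tf.nn.nce_loss call with the default sampler (the toy sizes here are assumptions), the shapes above fit together as follows:

import tensorflow as tf

VOCAB_SIZE, EMBED_SIZE, BATCH_SIZE, NUM_SAMPLED = 1000, 64, 8, 16

# weights / biases: one row and one bias per class (per vocabulary word)
nce_weight = tf.Variable(tf.truncated_normal([VOCAB_SIZE, EMBED_SIZE], stddev=0.1))
nce_bias = tf.Variable(tf.zeros([VOCAB_SIZE]))

# labels: context-word indices, shape [batch_size, num_true]
labels = tf.placeholder(tf.int64, shape=[BATCH_SIZE, 1])
# inputs: target embeddings, shape [batch_size, dim]
inputs = tf.placeholder(tf.float32, shape=[BATCH_SIZE, EMBED_SIZE])

# sampled_values=None, so tf.nn.log_uniform_candidate_sampler is used by default
loss = tf.reduce_mean(tf.nn.nce_loss(weights=nce_weight,
                                     biases=nce_bias,
                                     labels=labels,
                                     inputs=inputs,
                                     num_sampled=NUM_SAMPLED,
                                     num_classes=VOCAB_SIZE))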

tf.nn.log_uniform_candidate_sampler

def log_uniform_candidate_sampler(true_classes, num_true, num_sampled, unique,
                                  range_max, seed=None, name=None):
  seed1, seed2 = random_seed.get_seed(seed)
  return gen_candidate_sampling_ops.log_uniform_candidate_sampler(
      true_classes, num_true, num_sampled, unique, range_max, seed=seed1,
      seed2=seed2, name=name)

Here, true_classes has shape [batch_size, num_true]; in word2vec these are the vocabulary indices of the context words, and num_true=1;

num_sampled, an int, is the number of negative samples to draw at random;

unique, a bool, controls whether sampling within a batch is done without replacement (if True, all sampled classes in a batch are distinct);

range_max, an int, is the total number of classes, i.e. the number of words in the word2vec vocabulary.

In the returned 3-tuple (sampled_candidates, true_expected_count, sampled_expected_count), sampled_candidates has shape [num_sampled] and holds the indices of the num_sampled negative samples (in word2vec, the vocabulary indices of the sampled words); true_expected_count has the same shape as true_classes and gives the expected count of each positive class under the sampling distribution; sampled_expected_count has the same shape as sampled_candidates and gives the expected count of each sampled negative under the sampling distribution.

Note that when the default tf.nn.log_uniform_candidate_sampler is used, the words in the vocabulary should be sorted by frequency in descending order (the most frequent words get the lowest indices, i.e. the earliest rows of weights and inputs), because the default sampling distribution is
$$P(class_i) = \frac{(\log(class_i + 2) - \log(class_i + 1))}{ \log(range\_max + 1)}$$
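
A quick numeric check of this distribution in plain Python (range_max=10, chosen to match the example that follows) shows that the probability decreases monotonically with the class index; with unique=False, the expected counts returned by the sampler are num_sampled * P(class), which is consistent with the example output below:

import math

def log_uniform_prob(class_id, range_max):
    # P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)
    return (math.log(class_id + 2) - math.log(class_id + 1)) / math.log(range_max + 1)

for c in range(5):
    print(c, round(log_uniform_prob(c, range_max=10), 4))
# 0 0.2891
# 1 0.1691
# 2 0.12
# ...
# With unique=False and num_sampled=4, the expected count of class 0 is
# 4 * 0.2891 ≈ 1.156, matching true_expected_count in the example output below.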

Example
import tensorflow as tf

context = tf.placeholder(tf.int64, [5, 1], name="true_classes")
#   If `unique=True`, then these are post-rejection probabilities and we
#   compute them approximately.
(sampled_candidates, true_expected_count, sampled_expected_count) = tf.nn.log_uniform_candidate_sampler(
    true_classes=context,
    num_true=1,
    num_sampled=4,
    unique=False,
    range_max=10,
    seed=1234
)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    t1, t2, t3 = sess.run([sampled_candidates,true_expected_count,sampled_expected_count],
                          feed_dict={context: [[0], [0], [1], [2], [1]]})
    print(t1)
    print(t2)
    print(t3)
    
output:
[5 1 1 0]
[[ 1.1562593 ]
 [ 1.1562593 ]
 [ 0.67636836]
 [ 0.47989097]
 [ 0.67636836]]
[ 0.25714332  0.67636836  0.67636836  1.1562593 ]

_compute_sampled_logits

Internally, tf.nn.nce_loss calls _compute_sampled_logits. The returned logits has shape [batch_size, num_true + num_sampled], i.e. [batch_size, 1 + num_sampled]; within each row, the entries correspond to ${v^{'}}_{w_O}^\top v_{w_I}$ for the true class and ${v^{'}}_{w_i}^\top v_{w_I}$ for the sampled classes (with the log expected counts subtracted, since subtract_log_q=True).
The returned labels has the same shape as logits; each row consists of num_true ones followed by num_sampled zeros.
sigmoid_cross_entropy_with_logits is then applied to logits and labels to compute the loss.
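
Schematically, the way logits and labels are assembled can be sketched in plain NumPy as follows (biases and the log expected-count correction are omitted for brevity; the point here is the shapes, not the exact values):

import numpy as np

batch_size, dim, num_sampled = 4, 8, 5

target_embed = np.random.randn(batch_size, dim)    # inputs, v_{w_I}
true_w = np.random.randn(batch_size, dim)          # weight rows of the true (context) classes
sampled_w = np.random.randn(num_sampled, dim)      # weight rows of the sampled classes

true_logits = np.sum(target_embed * true_w, axis=1, keepdims=True)  # [batch_size, 1]
sampled_logits = target_embed.dot(sampled_w.T)                       # [batch_size, num_sampled]

logits = np.concatenate([true_logits, sampled_logits], axis=1)       # [batch_size, 1 + num_sampled]
labels = np.concatenate([np.ones((batch_size, 1)),
                         np.zeros((batch_size, num_sampled))], axis=1)
print(logits.shape, labels.shape)   # (4, 6) (4, 6)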

sigmoid_cross_entropy_with_logits

Let x = logits and z = labels. The loss computed by sigmoid_cross_entropy_with_logits is
$$loss = z \cdot (-\log \sigma(x)) + (1 - z) \cdot (-\log(1 - \sigma(x)))$$
Summed over a row, this is exactly $-\log p(w_O|w_I)$ from the formula above, so maximizing $\log p(w_O|w_I)$ is equivalent to minimizing the loss.
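
A small NumPy check with made-up logits illustrates this equivalence: with a label row of num_true ones followed by num_sampled zeros, the row sum of the sigmoid cross-entropy equals $-(\log\sigma(x_{true}) + \sum_i \log\sigma(-x_i))$, i.e. the negative of the objective above:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

logits = np.array([1.2, -0.3, 0.7, -1.5])   # 1 true logit followed by 3 sampled logits
labels = np.array([1.0, 0.0, 0.0, 0.0])

cross_entropy = -(labels * np.log(sigmoid(logits))
                  + (1 - labels) * np.log(1 - sigmoid(logits)))
neg_objective = -(np.log(sigmoid(logits[0])) + np.sum(np.log(sigmoid(-logits[1:]))))

print(np.sum(cross_entropy), neg_objective)   # the two values coincide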

