The previous article gave a brief overview of word2vec; this one walks through a basic hands-on implementation.
It is largely a translation of https://adventuresinmachinelearning.com/word2vec-tutorial-tensorflow/,
with some of my own understanding added.
The basic steps are:
1. Build the vocabulary. The vocabulary is the set of all words to be processed, where each word gets a unique identifier and the number of occurrences of each word is counted. A plain running index works, as does a hash. In effect, this builds a look-up table for all the words.
```
import collections

def build_dataset(words, n_words):
    """Process raw inputs into a dataset."""
    # Reserve index 0 for unknown words; keep the n_words - 1 most common words.
    count = [['UNK', -1]]
    count.extend(collections.Counter(words).most_common(n_words - 1))
    dictionary = dict()
    for word, _ in count:
        dictionary[word] = len(dictionary)  # word -> unique integer index
    data = list()
    unk_count = 0
    for word in words:
        if word in dictionary:
            index = dictionary[word]
        else:
            index = 0  # dictionary['UNK']
            unk_count += 1
        data.append(index)
    count[0][1] = unk_count
    reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
    return data, count, dictionary, reversed_dictionary
```
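As a quick sanity check, here is a minimal sketch of calling it; the toy corpus below is made up for illustration, and the exact tie-breaking among equal counts follows `collections.Counter` insertion order:
```
words = "the cat sat on the mat the cat".split()
data, count, dictionary, reversed_dictionary = build_dataset(words, n_words=5)
print(data)   # [1, 2, 3, 4, 1, 0, 1, 2] -- 'mat' falls outside the top 4 words, so it maps to UNK (index 0)
print(count)  # [['UNK', 1], ('the', 3), ('cat', 2), ('sat', 1), ('on', 1)]
```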
2. Build batches to use as input to the network. A batch contains input words and labels; here the label is drawn at random, namely the context word that the given gram is trained to predict. For example, take the sentence "the cat sat on the mat". With a gram of 3 words you get "the cat sat", "cat sat on", and so on; with a gram of 5 you get "the cat sat on the", the input word is "sat", and the context word to predict is drawn at random from the remaining ['the', 'cat', 'on', 'the']. The context window size is simply how many words around the input word are considered.
As for inputs and labels: if in your scenario a concrete label is given for each input, you can build the batches directly. For instance, in recommendation, the DSSM and YouTube DNN models take as input "words" the series of videos a user has watched, and the label is the video the user watches next.
```
import random
import numpy as np

data_index = 0

# generate batch data
def generate_batch(data, batch_size, num_skips, skip_window):
    global data_index
    assert batch_size % num_skips == 0
    assert num_skips <= 2 * skip_window
    batch = np.ndarray(shape=(batch_size), dtype=np.int32)
    context = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
    span = 2 * skip_window + 1  # [ skip_window input_word skip_window ]
    buffer = collections.deque(maxlen=span)
    for _ in range(span):
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    for i in range(batch_size // num_skips):
        target = skip_window  # input word at the center of the buffer
        targets_to_avoid = [skip_window]
        for j in range(num_skips):
            while target in targets_to_avoid:
                target = random.randint(0, span - 1)
            targets_to_avoid.append(target)
            batch[i * num_skips + j] = buffer[skip_window]  # this is the input word
            context[i * num_skips + j, 0] = buffer[target]  # these are the context words
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    # Backtrack a little bit to avoid skipping words in the end of a batch
    data_index = (data_index + len(data) - span) % len(data)
    return batch, context
```
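A quick look at what a batch contains (a sketch; `data` and `reversed_dictionary` come from `build_dataset` above):
```
batch, context = generate_batch(data, batch_size=8, num_skips=2, skip_window=1)
for i in range(8):
    # input word index/word -> randomly sampled context word index/word
    print(batch[i], reversed_dictionary[batch[i]],
          '->', context[i, 0], reversed_dictionary[context[i, 0]])
```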
3. With the inputs in place, use TensorFlow to train the word2vec model.
Note: because this single hidden layer is fully connected, the embeddings matrix and the output weights matrix end up with the same shape, [vocabulary_size, embedding_size].
```
import math
import numpy as np
import tensorflow as tf

vocabulary_size = 10000  # size of the vocabulary built in step 1 (one-hot indices)
batch_size = 128
embedding_size = 128  # Dimension of the embedding vector.
skip_window = 1       # How many words to consider left and right.
num_skips = 2         # How many times to reuse an input to generate a context.

# A small random set of word ids, used later to inspect nearest neighbours.
valid_size = 16
valid_window = 100
valid_examples = np.random.choice(valid_window, valid_size, replace=False)

train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
train_context = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)

# Look up embeddings for inputs; embedding_size is the dimension we embed into.
embeddings = tf.Variable(
    tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
# tf.nn.embedding_lookup fetches the embedding row for each input word index.
embed = tf.nn.embedding_lookup(embeddings, train_inputs)

# Construct the variables for the softmax (output layer).
weights = tf.Variable(
    tf.truncated_normal([vocabulary_size, embedding_size],
                        stddev=1.0 / math.sqrt(embedding_size)))
biases = tf.Variable(tf.zeros([vocabulary_size]))
hidden_out = tf.matmul(embed, tf.transpose(weights)) + biases

# Finally, train with cross entropy; the labels must first be converted
# to one-hot form.
train_one_hot = tf.one_hot(train_context, vocabulary_size)
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=hidden_out,
                                            labels=train_one_hot))
# Construct the SGD optimizer using a learning rate of 1.0.
optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(cross_entropy)
```
Of course, once the model is trained it should be validated; cosine similarity between embedding vectors works well for checking the training result:
```
# Compute the cosine similarity between minibatch examples and all embeddings.
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
normalized_embeddings = embeddings / norm
```
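The snippet above only normalizes the embeddings; to actually get the similarities for the held-out `valid_dataset` words, the original tutorial adds:
```
valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, normalized_embeddings, transpose_b=True)
```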
4. Training with the method above is slow, however, because the final softmax has to compute a probability for every word in the vocabulary, i.e. all 10,000 of them. Noise contrastive estimation (NCE) avoids this by randomly sampling only a handful of candidate words (roughly 2-20) per step to evaluate.
In practice you can call TensorFlow's built-in function directly:
```
num_sampled = 64  # Number of negative examples to sample (tune as needed).

# Construct the variables for the NCE loss
nce_weights = tf.Variable(
    tf.truncated_normal([vocabulary_size, embedding_size],
                        stddev=1.0 / math.sqrt(embedding_size)))
nce_biases = tf.Variable(tf.zeros([vocabulary_size]))

nce_loss = tf.reduce_mean(
    tf.nn.nce_loss(weights=nce_weights,
                   biases=nce_biases,
                   labels=train_context,
                   inputs=embed,
                   num_sampled=num_sampled,
                   num_classes=vocabulary_size))

optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(nce_loss)
```
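To tie everything together, a minimal TF1-style training loop might look like this (a sketch under the definitions above; `num_steps` is arbitrary and error handling is omitted):
```
num_steps = 50000  # arbitrary; tune for your corpus

with tf.Session() as session:
    tf.global_variables_initializer().run()
    average_loss = 0
    for step in range(num_steps):
        batch_inputs, batch_context = generate_batch(
            data, batch_size, num_skips, skip_window)
        feed_dict = {train_inputs: batch_inputs, train_context: batch_context}
        # One SGD step on the NCE loss.
        _, loss_val = session.run([optimizer, nce_loss], feed_dict=feed_dict)
        average_loss += loss_val
        if step > 0 and step % 2000 == 0:
            print('Average loss at step', step, ':', average_loss / 2000)
            average_loss = 0
    # The trained, length-normalized word vectors.
    final_embeddings = normalized_embeddings.eval()
```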