Udacity Deep Learning Course Assignment (5)

Assignment 5 trains a word2vec language model on the Text8 corpus, producing an embedded representation (a vector) for every word in the corpus.

Mikolov's word2vec comes in two flavors, skip-gram and CBOW: the former predicts the surrounding words from a given word, while the latter predicts the middle word from its surrounding words. Mikolov's biggest contribution was the negative-sampling technique, which dramatically improves the training efficiency of the neural network model. The official assignment notebook provides a skip-gram implementation and asks us to implement CBOW.

Corpus

The corpus is Text8 (http://mattmahoney.net/dc/text8.zip). The code below downloads it, but the download may come through incomplete because of network issues, so it is safer to fetch the full file with a browser.

# imports used throughout the notebook
import collections
import math
import os
import random
import numpy as np
import tensorflow as tf
from six.moves.urllib.request import urlretrieve

url = 'http://mattmahoney.net/dc/'

def maybe_download(filename, expected_bytes):
    """Download a file if not present, and make sure it's the right size."""
    if not os.path.exists(filename):
        filename, _ = urlretrieve(url + filename, filename)
    statinfo = os.stat(filename)
    if statinfo.st_size == expected_bytes:
        print('Found and verified %s' % filename)
    else:
        print(statinfo.st_size)
        raise Exception(
          'Failed to verify ' + filename + '. Can you get to it with a browser?')
    return filename

filename = maybe_download('text8.zip', 31344016)

Notes:

After downloading text8.zip, unzip it, read the data, and split it into words: roughly 17 million tokens in total. Set the vocabulary size vocabulary_size to 50,000, i.e., keep only 50,000 unique words, and build the dictionary from the word-frequency counts:

  • collections.Counter(words).most_common(vocabulary_size-1) is remarkably handy: it directly returns the Top-K (word, frequency) pairs for the given words list. One slot is reserved for the unknown-word token UNK, so K is the vocabulary size minus one (see the short sketch after this list).
  • From the frequency list count, build the dictionary dictionary: each word is a key, and its value is the word's rank (index) in count.
  • Walk over the word list words once and record, for each word, its index in the dictionary; the result is data.
  • The final zip operation (building tuples) swaps the keys and values of the original dictionary to construct reverse_dictionary, whose keys are indices and values are the corresponding words, making it easy to look up a word by its index.
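For reference, here is a minimal sketch of how the corpus can be read and of the two tricks the bullets describe (the read_data helper and the toy word list are illustrative assumptions, not code from the notebook):

import collections
import zipfile

def read_data(filename):
    """Read the first file in the zip archive and split it into a list of words."""
    with zipfile.ZipFile(filename) as f:
        return f.read(f.namelist()[0]).decode('utf-8').split()

# words = read_data('text8.zip')   # roughly 17 million tokens

# toy demonstration of Counter.most_common and of the key/value swap via zip
toy_words = ['the', 'cat', 'sat', 'on', 'the', 'mat', 'the', 'cat']
print(collections.Counter(toy_words).most_common(2))
# [('the', 3), ('cat', 2)]  -- Top-K (word, frequency) pairs
toy_dict = {'UNK': 0, 'the': 1, 'cat': 2}
print(dict(zip(toy_dict.values(), toy_dict.keys())))
# {0: 'UNK', 1: 'the', 2: 'cat'}  -- index -> word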
vocabulary_size = 50000

def build_dataset(words):
    count = [['UNK', -1]] # first entry of count, reserved for the unknown-word token
    count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
    dictionary = dict()
    for word, _ in count:
        dictionary[word] = len(dictionary)
        # key is the word; value is the word's rank (index) in count
    data = list()
    unk_count = 0
    for word in words:
        if word in dictionary:
            index = dictionary[word]
        else:
            index = 0  # dictionary['UNK']
            unk_count = unk_count + 1  # only words missing from the dictionary count as UNK
        data.append(index)
        # data records the dictionary index of each word in words; out-of-vocabulary words map to UNK, index 0
    count[0][1] = unk_count # replace the -1 placeholder in ('UNK', -1) with the freshly counted UNK frequency
    reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
    # swap the keys and values of dictionary to build a new dict, so a word can be looked up by its index
    return data, count, dictionary, reverse_dictionary

data, count, dictionary, reverse_dictionary = build_dataset(words)
print('Most common words (+UNK)', count[:5])
print('Sample data', data[:10])
del words  # Hint to reduce memory.

skip-gram


It is worth studying the TensorFlow-based skip-gram reference implementation first; the key points are how the training data & labels are constructed and how tf.nn.embedding_lookup is used. My own comments are added alongside the original ones.

data_index = 0

def generate_batch(batch_size, num_skips, skip_window):
    """
    batch_size, 每次训练数据的batch大小,是num_skips的整数倍,这里用8
    num_skips, 目标词左和右的总词数,最大设为2倍skip_window
    skip_window, 目标词左(右)的词数,计算整个窗口span=2*skip_window+1
    """
    global data_index
    assert batch_size % num_skips == 0
    assert num_skips <= 2 * skip_window 
    # 初始化
    batch = np.ndarray(shape=(batch_size), dtype=np.int32)
    labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
    span = 2 * skip_window + 1 # [ skip_window target skip_window ]
    buffer = collections.deque(maxlen=span)
    for _ in range(span): # 初始化一个span大小的buffer
        buffer.append(data[data_index]) 
        data_index = (data_index + 1) % len(data) # 处理词序列边界,保证data_index在data范围
    for i in range(batch_size // num_skips):
        target = skip_window  # target label at the center of the buffer
        targets_to_avoid = [ skip_window ] # skip_window刚好是一个span中的中间词下标
        for j in range(num_skips): # 遍历目标词的上下文,作为labels
            while target in targets_to_avoid:
                target = random.randint(0, span - 1) # 选上下文词:随机挑选span中除了target的一个词
            targets_to_avoid.append(target) # 放入已选中列表,避免重复选
            # batch中的词和labels一一对应,相同的target对应多个上下文词label
            batch[i * num_skips + j] = buffer[skip_window] 
            labels[i * num_skips + j, 0] = buffer[target] 
        # 更新buffer,滑动窗口往后移动一个词,bufer是deque类型,追加一个词会弹出前面一个词
        buffer.append(data[data_index]) 
        data_index = (data_index + 1) % len(data)
    return batch, labels

print('data:', [reverse_dictionary[di] for di in data[:8]]) # look up the word for each index di in data

# test different (num_skips, skip_window) settings and the batch/labels they produce
for num_skips, skip_window in [(2, 1), (4, 2)]:
    data_index = 0
    batch, labels = generate_batch(batch_size=8, num_skips=num_skips, skip_window=skip_window) 
    print('\nwith num_skips = %d and skip_window = %d:' % (num_skips, skip_window))
    print('    batch:', [reverse_dictionary[bi] for bi in batch])
    print('    labels:', [reverse_dictionary[li] for li in labels.reshape(8)])

Notes:

The skip-gram model predicts the context words given the center word target. In the training batch with skip_window=1, the word to the left and the word to the right of the target are used as output labels, so one target corresponds to several labels. That is why "originated" appears twice in the batch, paired with its context words "anarchism" and "as" as labels; the same applies to "as", "a", "term", and so on.

With skip_window=2, the two words on each side of the target are used as labels: "as" is paired with the two preceding words "anarchism" and "originated" and the two following words "a" and "term", so the batch contains four copies of "as" with four different labels.

Word2vec therefore needs no manually labeled data: it relies only on context, and the input word of one sample also serves as the label of neighboring samples, which indirectly turns the task into supervised learning. A toy sketch of this pairing follows.
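Below is a minimal, dependency-free sketch of the pairing idea (the sentence and the skip_gram_pairs helper are illustrative, not part of the assignment): every word in a window appears as the input of some (input, label) pairs and as the label of others.

def skip_gram_pairs(words, skip_window=1):
    """Enumerate (center, context) pairs; deterministic, unlike generate_batch's random sampling."""
    pairs = []
    for i, center in enumerate(words):
        for j in range(max(0, i - skip_window), min(len(words), i + skip_window + 1)):
            if j != i:
                pairs.append((center, words[j]))
    return pairs

sentence = ['anarchism', 'originated', 'as', 'a', 'term']
print(skip_gram_pairs(sentence, skip_window=1))
# [('anarchism', 'originated'), ('originated', 'anarchism'), ('originated', 'as'),
#  ('as', 'originated'), ('as', 'a'), ('a', 'as'), ('a', 'term'), ('term', 'a')]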

batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent. 
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
num_sampled = 64 # Number of negative examples to sample.

graph = tf.Graph()

with graph.as_default(), tf.device('/cpu:0'):

    # Input data.
    train_dataset = tf.placeholder(tf.int32, shape=[batch_size])
    train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
    valid_dataset = tf.constant(valid_examples, dtype=tf.int32)

    # Variables.
    embeddings = tf.Variable(
        tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0)) # embeddings: vocabulary size x embedding dimension
    softmax_weights = tf.Variable(
        tf.truncated_normal([vocabulary_size, embedding_size],
                         stddev=1.0 / math.sqrt(embedding_size))) # softmax weights: vocabulary size x embedding dimension
    softmax_biases = tf.Variable(tf.zeros([vocabulary_size])) # softmax biases: vocabulary size

    # Model.
    # Look up embeddings for inputs.
    embed = tf.nn.embedding_lookup(embeddings, train_dataset) # return the rows of embeddings indexed by the ids in train_dataset, in that order
    # Compute the softmax loss, using a sample of the negative labels each time.
    loss = tf.reduce_mean(
        tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases, inputs=embed,
                               labels=train_labels, num_sampled=num_sampled, num_classes=vocabulary_size))

    # Optimizer.
    # Note: The optimizer will optimize the softmax_weights AND the embeddings.
    # This is because the embeddings are defined as a variable quantity and the
    # optimizer's `minimize` method will by default modify all variable quantities 
    # that contribute to the tensor it is passed.
    # See docs on `tf.train.Optimizer.minimize()` for more details.
    optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)

    # Compute the similarity between minibatch examples and all embeddings.
    # We use the cosine distance:
    norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
    normalized_embeddings = embeddings / norm
    valid_embeddings = tf.nn.embedding_lookup(
        normalized_embeddings, valid_dataset)
    similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))


CBOW

"""
    For CBOW, given contextual words to predict central target word,
    so use bag_window instead of num_skips & skip_window
"""
data_index = 0

def generate_batch(batch_size, bag_window):
    """
    batch_size: 同skip-grams
    bag_window: 输入待预测中心词的上(下)文词个数
    """
    global data_index

    # === modified ===
    span = 2 * bag_window + 1 # [ bag_window target bag_window ]
    batch = np.ndarray(shape=(batch_size, span - 1), dtype=np.int32)
    labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
    # ================
    buffer = collections.deque(maxlen=span)
    for _ in range(span):
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    for i in range(batch_size):
        buffer_list = list(buffer)
        labels[i, 0] = buffer_list[bag_window] # the middle word of the buffer (index bag_window) is the label to predict
        batch[i] = np.append(buffer_list[:bag_window], buffer_list[bag_window+1:])
        # update the buffer: slide the window one word forward
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    return batch, labels

print('data_index:', data[:8])
print('data:', [reverse_dictionary[di] for di in data[:8]])

for bag_window in [1, 2]:
    data_index = 0
    batch, labels = generate_batch(batch_size=8, bag_window=bag_window)
    print('\nwith bag_window = %d:' % (bag_window))
    print('    batch:', [[reverse_dictionary[w] for w in bi] for bi in batch])
    print('    labels:', [reverse_dictionary[li] for li in labels.reshape(8)])

Test output:

data_index: [5239, 3084, 12, 6, 195, 2, 3137, 46]
data: ['anarchism', 'originated', 'as', 'a', 'term', 'of', 'abuse', 'first']

with bag_window = 1:
    batch: [['anarchism', 'as'], ['originated', 'a'], ['as', 'term'], ['a', 'of'], ['term', 'abuse'], ['of', 'first'], ['abuse', 'used'], ['first', 'against']]
    labels: ['originated', 'as', 'a', 'term', 'of', 'abuse', 'first', 'used']

with bag_window = 2:
    batch: [['anarchism', 'originated', 'a', 'term'], ['originated', 'as', 'term', 'of'], ['as', 'a', 'of', 'abuse'], ['a', 'term', 'abuse', 'first'], ['term', 'of', 'first', 'used'], ['of', 'abuse', 'used', 'against'], ['abuse', 'first', 'against', 'early'], ['first', 'used', 'early', 'working']]
    labels: ['as', 'a', 'term', 'of', 'abuse', 'first', 'used', 'against']
"""
    CBOW Model
"""
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
bag_window = 2 # number of context words on each side of the target
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
num_sampled = 64 # Number of negative examples to sample.

graph = tf.Graph()

with graph.as_default(), tf.device('/cpu:0'):

    # Input data.
    train_dataset = tf.placeholder(tf.int32, shape=[batch_size, bag_window * 2]) # changed from skip-gram: each sample now holds its context-word ids
    train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
    valid_dataset = tf.constant(valid_examples, dtype=tf.int32)

    # Variables.
    embeddings = tf.Variable(
        tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0)) # embeddings: vocabulary size x embedding dimension
    softmax_weights = tf.Variable(
        tf.truncated_normal([vocabulary_size, embedding_size],
                         stddev=1.0 / math.sqrt(embedding_size))) # softmax weights: vocabulary size x embedding dimension
    softmax_biases = tf.Variable(tf.zeros([vocabulary_size])) # softmax biases: vocabulary size

    # Model.
    # Look up embeddings for inputs.
    embed = tf.nn.embedding_lookup(embeddings, train_dataset) # returns the rows of embeddings indexed by the ids in train_dataset; here embed has shape [batch_size, 2*bag_window, embedding_size]
    # Compute the softmax loss, using a sample of the negative labels each time.
    loss = tf.reduce_mean(
        tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases, inputs=tf.reduce_sum(embed, 1), 
                               labels=train_labels, num_sampled=num_sampled, num_classes=vocabulary_size))
    # changed from skip-gram: inputs=tf.reduce_sum(embed, 1) sums the context embeddings into a single vector per sample

    # Optimizer.
    # Note: The optimizer will optimize the softmax_weights AND the embeddings.
    # This is because the embeddings are defined as a variable quantity and the
    # optimizer's `minimize` method will by default modify all variable quantities 
    # that contribute to the tensor it is passed.
    # See docs on `tf.train.Optimizer.minimize()` for more details.
    optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)

    # Compute the similarity between minibatch examples and all embeddings.
    # We use the cosine distance:
    norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
    normalized_embeddings = embeddings / norm
    valid_embeddings = tf.nn.embedding_lookup(
        normalized_embeddings, valid_dataset)
    similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))

Notes:

Compared with skip-gram, the main changes are the following:

  • the generate_batch function: batches and labels are built the other way around from skip-gram, and bag_window replaces num_skips and skip_window
  • tf.Graph()
    • in train_dataset, each sample now has 2*bag_window entries (the context-word ids)
    • the loss is given inputs=tf.reduce_sum(embed, 1); reduce_sum sums the tensor along axis 1, collapsing the context embeddings into one vector per sample (a quick shape check is sketched below)
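As a quick shape sanity check, here is a standalone NumPy sketch (illustrative only; np.sum plays the role of tf.reduce_sum):

import numpy as np

batch_size, bag_window, embedding_size = 4, 2, 8
# embed, as returned by tf.nn.embedding_lookup for a [batch_size, 2*bag_window] id matrix:
embed = np.random.rand(batch_size, 2 * bag_window, embedding_size)

# summing over axis 1 collapses the 2*bag_window context vectors into one vector per sample
bag_vector = embed.sum(axis=1)
print(bag_vector.shape)  # (4, 8) -> [batch_size, embedding_size], the shape sampled_softmax_loss expects

# averaging instead of summing (embed.mean(axis=1)) is another common CBOW choice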
num_steps = 100001

with tf.Session(graph=graph) as session:
    tf.global_variables_initializer().run()
    print('Initialized')
    average_loss = 0
    for step in range(num_steps):
        batch_data, batch_labels = generate_batch(
          batch_size, bag_window) # parameters changed to bag_window
        feed_dict = {train_dataset : batch_data, train_labels : batch_labels}
        _, l = session.run([optimizer, loss], feed_dict=feed_dict)
        average_loss += l
        if step % 2000 == 0:
            if step > 0:
                average_loss = average_loss / 2000
            # The average loss is an estimate of the loss over the last 2000 batches.
            print('Average loss at step %d: %f' % (step, average_loss))
            average_loss = 0
            # note that this is expensive (~20% slowdown if computed every 500 steps)
            if step % 10000 == 0:
                sim = similarity.eval()
                for i in range(valid_size):
                    valid_word = reverse_dictionary[valid_examples[i]]
                    top_k = 8 # number of nearest neighbors
                    nearest = (-sim[i, :]).argsort()[1:top_k+1]
                    log = 'Nearest to %s:' % valid_word
                    for k in range(top_k):
                        close_word = reverse_dictionary[nearest[k]]
                        log = '%s %s,' % (log, close_word)
                    print(log)
    final_embeddings = normalized_embeddings.eval()

Training output:

Initialized
Average loss at step 0: 8.123606
Nearest to known: macroeconomics, jeju, elamites, suffolk, smooth, amram, slugger, destructor,
Nearest to no: sothoth, soter, myriad, imprinted, perform, benchmarking, timeframe, cyclopaedia,
Nearest to an: macedonians, intruders, marys, experimented, marge, probus, dachshund, precedes,
Nearest to one: neptune, characterizations, canals, disquieting, ledge, avert, reservations, chem,
Nearest to three: unilaterally, howlin, abc, rental, permian, iconoclastic, collectively, bless,
Nearest to into: unsurprisingly, americana, glycine, cohabitation, easterly, protective, natives, messengers,
Nearest to are: ceremonially, aylwin, splitter, thrown, ahern, honeybee, azov, megan,
Nearest to they: peoples, crioulo, alfaro, bathroom, scavenge, firenze, conducting, desi,
Nearest to s: vinland, adys, descendant, timbuktu, canyons, marsh, bookmakers, invincible,
Nearest to UNK: cnn, fictions, cinco, taking, erase, presto, peng, egon,
Nearest to eight: orphanage, each, leaching, licking, jerusalem, assembler, sown, bawerk,
Nearest to had: auroral, spalding, nabataean, literate, louvain, ammonia, unsettling, textiles,
Nearest to up: treasure, halal, chungcheong, priori, hydrology, tr, theseus, hormones,
Nearest to be: mohandas, slogan, mantua, plaid, viper, odrade, lull, dtmf,
Nearest to a: banner, same, enoch, icebergs, condoned, timberlake, barbecue, upheavals,
Nearest to has: futility, riverside, marquis, disobeyed, loki, absinthium, stamping, remarry,
Average loss at step 2000: 4.612296
Average loss at step 4000: 3.938143
Average loss at step 6000: 3.722718
Average loss at step 8000: 3.532473
Average loss at step 10000: 3.470324
Nearest to known: used, such, seen, regarded, well, referred, ech, asatru,
Nearest to no: yours, sinner, skewed, sothoth, edict, orthogonality, chairmen, this,
Nearest to an: a, another, unwilling, the, gunpoint, any, waldemar, astarte,
Nearest to one: nine, seven, thruster, zero, gneisenau, four, prosecutors, fisk,
Nearest to three: four, seven, five, six, eight, zero, nine, two,
Nearest to into: dalhousie, from, martyn, tarski, transduction, silvio, caeiro, connotations,
Nearest to are: were, is, include, was, have, diels, drosophila, ambiguities,
Nearest to they: he, there, she, it, transporter, hallucination, done, who,
Nearest to s: auric, astrologers, vav, stepney, anticonvulsants, enigmas, boot, bobo,
Nearest to UNK: rameses, antiaircraft, doubtful, h, strau, kindly, jacobite, crest,
Nearest to eight: nine, six, seven, four, five, three, zero, two,
Nearest to had: has, have, was, were, kunst, histone, jetty, tunic,
Nearest to up: unitas, youngest, humus, chungcheong, eventually, sunspots, marrying, rancid,
Nearest to be: refer, been, have, kronos, become, provide, tei, forearm,
Nearest to a: any, an, the, this, another, skewed, detachment, sad,
Nearest to has: had, have, was, is, canisters, doubling, integrator, messengers,
Average loss at step 12000: 3.481709
Average loss at step 14000: 3.442811
Average loss at step 16000: 3.435261
Average loss at step 18000: 3.397032
Average loss at step 20000: 3.222492
Nearest to known: referred, used, seen, such, called, regarded, asatru, considered,
Nearest to no: any, precludes, sinner, oswald, nasdaq, a, considerable, tundra,
Nearest to an: another, a, the, notoc, presbyterians, katha, breeders, walks,
Nearest to one: four, seven, hunan, six, zero, cardini, eight, ported,
Nearest to three: five, seven, two, six, four, eight, nine, zero,
Nearest to into: through, cadets, from, dalhousie, warr, reactive, finlandization, transduction,
Nearest to are: were, is, include, have, was, being, drosophila, sweetener,
Nearest to they: we, you, he, she, there, these, unbeknownst, hallucination,
Nearest to s: his, mosquitos, irregularities, her, lee, miletus, enrich, goldfinger,
Nearest to UNK: ablaze, gynt, fra, wilford, alligators, mmorpgs, d, ballad,
Nearest to eight: nine, seven, six, zero, four, five, three, two,
Nearest to had: has, have, having, was, plundered, megalithic, were, robertson,
Nearest to up: dummy, back, off, them, suu, him, comparably, rarity,
Nearest to be: refer, have, being, take, move, cause, lead, provide,
Nearest to a: another, the, racket, an, every, this, melvyn, ticino,
Nearest to has: had, have, having, is, ike, was, accusation, deem,
Average loss at step 22000: 3.361741
Average loss at step 24000: 3.294026
Average loss at step 26000: 3.257730
Average loss at step 28000: 3.288970
Average loss at step 30000: 3.232994
Nearest to known: referred, regarded, seen, used, such, available, possible, identified,
Nearest to no: any, finishing, pompeius, pcbs, howland, showers, precludes, cas,
Nearest to an: another, the, rainforest, emet, a, darts, presbyterians, gunpoint,
Nearest to one: two, eight, gneisenau, aelia, chaos, five, summit, gdansk,
Nearest to three: five, six, four, eight, two, approximately, seven, nine,
Nearest to into: through, from, along, within, transduction, alta, hijackers, alexis,
Nearest to are: were, is, was, include, have, capturing, remain, tirthankaras,
Nearest to they: we, you, there, she, these, some, he, omega,
Nearest to s: alaric, his, my, variste, stephen, mosquitos, transposed, iliad,
Nearest to UNK: lean, shatt, ria, sigismund, mozart, effeminate, normale, ablaze,
Nearest to eight: nine, six, seven, four, five, zero, three, two,
Nearest to had: has, have, having, was, could, panth, gave, kunst,
Nearest to up: back, together, brotherly, rarity, dummy, schr, them, comparably,
Nearest to be: refer, lead, become, being, provide, were, kronos, move,
Nearest to a: another, the, any, this, every, dalnet, an, finn,
Nearest to has: had, have, having, is, was, requires, makes, moloch,
Average loss at step 32000: 3.017351
Average loss at step 34000: 3.201316
Average loss at step 36000: 3.199710
Average loss at step 38000: 3.161284
Average loss at step 40000: 3.167286
Nearest to known: referred, regarded, used, seen, described, such, called, available,
Nearest to no: little, any, tamer, resuscitation, opting, oswald, pompeius, amygdalin,
Nearest to an: another, a, halmos, the, gunpoint, breeders, darts, crossings,
Nearest to one: six, bogs, cryptanalysis, edwina, erc, three, oscillators, four,
Nearest to three: two, six, eight, five, four, seven, nine, zero,
Nearest to into: through, within, from, in, around, out, hijackers, across,
Nearest to are: were, is, was, include, have, drosophila, knightly, represent,
Nearest to they: we, you, he, she, attaining, franciscan, it, cavalier,
Nearest to s: oscillators, turbofans, isc, blames, mockery, suspense, mentors, zhejiang,
Nearest to UNK: floyd, starfighter, fifo, mahoney, fees, www, formality, fainting,
Nearest to eight: seven, nine, five, six, four, zero, three, two,
Nearest to had: has, have, having, was, were, saw, gave, kunst,
Nearest to up: off, down, out, schr, together, bdp, dummy, them,
Nearest to be: being, refer, lead, remain, was, been, produce, achieve,
Nearest to a: another, any, every, the, remedied, this, each, an,
Nearest to has: had, have, having, was, is, requires, includes, makes,
Average loss at step 42000: 3.213942
Average loss at step 44000: 3.146196
Average loss at step 46000: 3.147338
Average loss at step 48000: 3.057902
Average loss at step 50000: 3.061124
Nearest to known: referred, seen, used, regarded, described, available, possible, accepted,
Nearest to no: any, another, tamer, little, resuscitation, scheduling, luxuriant, considerable,
Nearest to an: another, the, breeders, hens, wizards, kenilworth, rainforest, gunpoint,
Nearest to one: decentralized, two, cryptanalysis, stanza, gneisenau, breathtaking, bogs, wolverine,
Nearest to three: four, six, five, seven, eight, zero, two, nine,
Nearest to into: through, from, around, within, across, out, between, beyond,
Nearest to are: were, is, was, include, became, be, knightly, being,
Nearest to they: we, you, he, there, she, monophosphate, obrzeg, it,
Nearest to s: his, transposed, anise, whose, tachyon, aggrieved, papen, bobo,
Nearest to UNK: www, acetylene, grumman, wilford, le, eskimos, molar, strau,
Nearest to eight: nine, seven, six, zero, five, three, four, two,
Nearest to had: has, have, having, was, saw, would, could, were,
Nearest to up: together, off, back, schr, down, out, them, rarity,
Nearest to be: refer, lead, being, remain, produce, exist, serve, provide,
Nearest to a: any, the, another, every, his, hindering, kano, this,
Nearest to has: had, have, having, is, was, requires, ike, contains,
Average loss at step 52000: 3.096429
Average loss at step 54000: 3.096638
Average loss at step 56000: 2.915055
Average loss at step 58000: 3.027401
Average loss at step 60000: 3.054121
Nearest to known: referred, seen, regarded, described, used, considered, accepted, defined,
Nearest to no: another, any, a, diminished, little, welcomes, opting, mushrooms,
Nearest to an: another, stubbornly, the, leeward, borer, damian, hens, a,
Nearest to one: two, breathtaking, vagus, erc, fy, programmability, gneisenau, cryptanalysis,
Nearest to three: five, four, seven, six, eight, two, zero, nine,
Nearest to into: through, within, from, around, across, under, beyond, slipper,
Nearest to are: were, is, include, was, have, including, remain, contain,
Nearest to they: we, you, he, she, there, these, gems, pearl,
Nearest to s: his, her, whose, their, perdition, bobo, our, beecher,
Nearest to UNK: tehran, puritanical, darrell, auditor, gina, forbids, acetylene, dei,
Nearest to eight: nine, seven, five, six, zero, three, four, two,
Nearest to had: has, have, having, was, saw, gave, wrote, could,
Nearest to up: off, back, together, samsara, out, down, attest, rarity,
Nearest to be: remain, lead, being, refer, achieve, exist, produce, provide,
Nearest to a: another, any, the, every, no, kano, singleton, remedied,
Nearest to has: had, have, having, requires, is, provides, contains, makes,
Average loss at step 62000: 3.017259
Average loss at step 64000: 2.915836
Average loss at step 66000: 2.928059
Average loss at step 68000: 2.951125
Average loss at step 70000: 3.010576
Nearest to known: referred, seen, described, used, regarded, noted, such, considered,
Nearest to no: another, arisen, cochin, categorised, any, scolded, nothing, conjuring,
Nearest to an: another, the, gunpoint, stubbornly, borer, damian, kenilworth, hens,
Nearest to one: two, arbor, fy, breathtaking, krylov, censured, sutherland, programmability,
Nearest to three: five, two, four, six, seven, zero, eight, voight,
Nearest to into: through, within, around, beyond, from, across, comprising, slipper,
Nearest to are: were, is, include, was, exist, have, capturing, volleyball,
Nearest to they: we, you, there, he, others, observers, she, distanced,
Nearest to s: his, mosquitos, their, whose, bobo, papen, conservancy, beecher,
Nearest to UNK: extinct, lazio, gestalt, acetylene, gina, middlesex, fictionalised, regina,
Nearest to eight: nine, seven, six, five, zero, three, two, four,
Nearest to had: has, have, having, canisters, wellesley, was, wrote, eventually,
Nearest to up: off, together, down, sidewinder, alger, attest, smuts, dia,
Nearest to be: refer, remain, being, lead, occur, produce, cause, exist,
Nearest to a: another, the, every, any, singleton, pyrite, his, wields,
Nearest to has: had, have, having, requires, is, contains, was, makes,
Average loss at step 72000: 2.940216
Average loss at step 74000: 2.866467
Average loss at step 76000: 2.996680
Average loss at step 78000: 3.007168
Average loss at step 80000: 2.843236
Nearest to known: referred, seen, regarded, described, used, possible, noted, cited,
Nearest to no: any, arisen, nothing, inconsistent, jis, bandy, cochin, little,
Nearest to an: another, gunpoint, damian, exponentiation, stubbornly, the, crossings, borer,
Nearest to one: breathtaking, cryptanalysis, two, addition, gf, fy, programmability, rhythmically,
Nearest to three: five, four, seven, six, eight, zero, two, nine,
Nearest to into: through, beyond, across, within, from, out, around, via,
Nearest to are: were, is, include, was, exist, have, represent, contains,
Nearest to they: we, you, he, there, she, others, i, humans,
Nearest to s: whose, his, mosquitos, my, bobo, papen, isbn, our,
Nearest to UNK: dei, du, r, en, ladakh, darrell, disturbs, certifying,
Nearest to eight: nine, seven, six, five, zero, three, four, two,
Nearest to had: has, have, having, was, wrote, contains, gave, gives,
Nearest to up: down, off, back, together, forward, cura, out, alger,
Nearest to be: remain, refer, being, lead, represent, provide, achieve, produce,
Nearest to a: another, any, the, every, this, fissile, auctioned, kano,
Nearest to has: had, have, having, contains, requires, is, was, gives,
Average loss at step 82000: 2.940803
Average loss at step 84000: 2.903593
Average loss at step 86000: 2.921089
Average loss at step 88000: 2.943981
Average loss at step 90000: 2.830520
Nearest to known: referred, regarded, described, used, seen, such, possible, cited,
Nearest to no: little, any, nothing, jis, always, laura, reliever, diminished,
Nearest to an: the, another, gunpoint, hens, any, this, borer, every,
Nearest to one: four, two, seven, three, fy, nine, numidia, toppling,
Nearest to three: four, five, six, seven, two, eight, nine, zero,
Nearest to into: through, beyond, across, within, via, from, between, transduction,
Nearest to are: were, is, include, was, exist, have, marshallese, produce,
Nearest to they: we, you, there, he, she, cloned, humanoid, pao,
Nearest to s: whose, mosquitos, lagoons, papen, willfully, conservancy, blames, tanner,
Nearest to UNK: townes, universit, milan, befitting, ch, mackenzie, coles, lister,
Nearest to eight: seven, nine, six, five, zero, four, three, chlorides,
Nearest to had: has, have, having, was, holds, contains, gave, could,
Nearest to up: off, back, down, together, attest, out, contributions, forth,
Nearest to be: being, remain, refer, represent, lead, serve, happen, occur,
Nearest to a: another, any, the, every, this, finn, kano, pyrite,
Nearest to has: had, have, requires, contains, having, provides, is, includes,
Average loss at step 92000: 2.887460
Average loss at step 94000: 2.877652
Average loss at step 96000: 2.718273
Average loss at step 98000: 2.459811
Average loss at step 100000: 2.704287
Nearest to known: referred, regarded, described, seen, remembered, used, portrayed, cited,
Nearest to no: little, nothing, another, any, nicholson, casually, always, considerable,
Nearest to an: another, stubbornly, borer, crossings, halmos, breeders, silly, the,
Nearest to one: toppling, cryptanalysis, fy, breathtaking, gabab, ioi, gneisenau, spock,
Nearest to three: four, seven, six, five, two, eight, zero, nine,
Nearest to into: through, beyond, across, from, within, down, in, out,
Nearest to are: were, is, include, was, exist, contain, have, brokerage,
Nearest to they: we, you, he, she, there, these, cheney, erroneous,
Nearest to s: his, isbn, whose, her, subcategories, papen, rimfire, tachyon,
Nearest to UNK: mare, darrell, en, raiding, root, instigation, theodore, tuba,
Nearest to eight: six, seven, nine, five, four, three, zero, two,
Nearest to had: has, have, having, was, could, contains, holds, requires,
Nearest to up: off, together, down, out, attest, back, forth, alger,
Nearest to be: remain, being, refer, lead, occur, happen, become, achieve,
Nearest to a: another, the, any, jehoiakim, every, mascarenes, kami, something,
Nearest to has: had, have, having, contains, requires, is, includes, represents,


Finally, a summary of how the APIs involved are used.

tf.nn.embedding_lookup

tf.nn.embedding_lookup(
    params,
    ids,
    partition_strategy='mod',
    name=None,
    validate_indices=True,
    max_norm=None
)
  • Processes lookups in parallel and returns the rows of params corresponding to ids, in the order given by ids (a NumPy sketch of the lookup follows)
  • partition_strategy specifies how ids are divided among partitioned params: 'mod' assigns each id to partition p = id % len(params); the alternative is 'div'
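When params is a single (non-partitioned) matrix, the lookup is just a row gather; a NumPy sketch of the equivalent operation (illustrative only, not the TensorFlow implementation):

import numpy as np

vocabulary_size, embedding_size = 6, 3
embeddings = np.arange(vocabulary_size * embedding_size).reshape(vocabulary_size, embedding_size)

ids = np.array([4, 0, 4])            # train_dataset holds word ids like these
print(embeddings[ids])               # the rows tf.nn.embedding_lookup(embeddings, ids) would return
# rows 4, 0, 4 of embeddings, in that order; duplicate ids are simply repeated

ids_2d = np.array([[1, 2], [3, 5]])  # for CBOW, ids have shape [batch_size, 2*bag_window]
print(embeddings[ids_2d].shape)      # (2, 2, 3): the embedding axis is appended to the id shape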

tf.nn.sampled_softmax_loss

tf.nn.sampled_softmax_loss(
    weights,
    biases,
    labels,
    inputs,
    num_sampled,
    num_classes,
    num_true=1,
    sampled_values=None,
    remove_accidental_hits=True,
    partition_strategy='mod',
    name='sampled_softmax_loss'
)
  • Computes the sampled softmax training loss for problems with a large number of classes: each step only draws num_sampled negative classes instead of evaluating the full softmax over all num_classes (see the note below)
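The sampled loss is a training-time approximation; the TensorFlow documentation recommends computing the full softmax cross-entropy for evaluation or inference. A sketch of what that could look like with the variables defined in the graphs above (not part of the assignment; for CBOW, substitute tf.reduce_sum(embed, 1) for embed):

# full softmax logits over the whole vocabulary -- evaluation only, too slow for every training step
logits = tf.matmul(embed, tf.transpose(softmax_weights)) + softmax_biases
full_loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=tf.reshape(train_labels, [-1]), logits=logits))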

It is also worth studying the code that visualizes the word vectors: 400 words are taken from final_embeddings and reduced to two dimensions with sklearn's manifold-learning TSNE, giving two_d_embeddings that can be plotted in a 2D coordinate system.

from sklearn.manifold import TSNE
from matplotlib import pylab

num_points = 400

tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000, method='exact')
two_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points+1, :])

def plot(embeddings, labels):
    assert embeddings.shape[0] >= len(labels), 'More labels than embeddings'
    pylab.figure(figsize=(15,15))  # in inches
    for i, label in enumerate(labels):
        x, y = embeddings[i,:]
        pylab.scatter(x, y)
        pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',
                       ha='right', va='bottom')
    pylab.show()

words = [reverse_dictionary[i] for i in range(1, num_points+1)]
plot(two_d_embeddings, words)
