Using Tencent_AILab_ChineseEmbedding.txt

I am building a question-answering system, and I was overjoyed to see that Tencent has officially open-sourced a large-scale, high-quality Chinese word embedding dataset, Tencent_AILab_ChineseEmbedding.txt. Download page: https://ai.tencent.com/ailab/nlp/embedding.html, which also has an introduction to the dataset and a link to the paper.
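The loader below assumes the file is plain-text word2vec format: a UTF-8 encoded header line "<vocab_size> <dimension>", followed by one entry per line, i.e. the word and its 200 float values. If you want to sanity-check that on your own copy, here is a minimal sketch (the path is a placeholder):

# Minimal sketch to peek at the file layout; the path is a placeholder.
path = "Tencent_AILab_ChineseEmbedding.txt"
with open(path, "r", encoding="utf-8") as f:
    print(f.readline().strip())              # header: "<vocab_size> <dimension>"
    first = f.readline().rstrip().split(" ")
    print(first[0], len(first) - 1)          # first word and its vector length
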
I quickly wrote some code to use the embeddings in my own question-answering system. Training is still running, but the loss in the first few steps does drop faster than with random initialization. Here is the loading code for the record:

import numpy as np
import tensorflow as tf
from tqdm import tqdm


def loadEmbedding(sess, embeddingFile, word2id, embeddingSize):
    """ Initialize embeddings with pre-trained word2vec vectors
    Will modify the embedding weights of the current loaded model
    sess:          TensorFlow session
    embeddingFile: path to Tencent_AILab_ChineseEmbedding.txt
    word2id:       the word-to-index mapping built from your own dataset
    embeddingSize: embedding dimension; I set it to 200, the same as the
                   original vectors. For sizes below 200 the commented-out
                   code below should work, but I have not tested it yet.
    """
    with tf.name_scope('embedding_layer'):
        # Note: tf.get_variable ignores tf.name_scope, so the variable is
        # simply named 'embedding'.
        embedding = tf.get_variable('embedding',
                                    [len(word2id), embeddingSize])
    # New model, we load the pre-trained word2vec data and initialize embeddings
    print("Loading pre-trained word embeddings from %s " % embeddingFile)
    # The Tencent file is UTF-8 encoded; reading it as ISO-8859-1 garbles the
    # Chinese words, so they would never match the keys in word2id.
    with open(embeddingFile, "r", encoding='utf-8') as f:
        # Header line: "<vocab_size> <vector_size>"
        header = f.readline()
        vocab_size, vector_size = map(int, header.split())
        # Words missing from the Tencent vocabulary keep a small random init.
        initW = np.random.uniform(-0.25, 0.25, (len(word2id), vector_size))
        for i in tqdm(range(vocab_size)):
            line = f.readline()
            lists = line.rstrip().split(' ')
            word = lists[0]
            if word in word2id:
                vector = np.array(list(map(float, lists[1:])))
                initW[word2id[word]] = vector

    # # SVD decomposition to reduce word2vec dimensionality (untested)
    # if embeddingSize < vector_size:
    #     U, s, Vt = np.linalg.svd(initW, full_matrices=False)
    #     S = np.diag(s)
    #     initW = np.dot(U[:, :embeddingSize], S[:embeddingSize, :embeddingSize])

    # Copy the initial weights into the embedding variable
    sess.run(embedding.assign(initW))
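
For completeness, a minimal sketch of how the function can be called; word2id and the session setup here are simplified placeholders, not my actual QA model code:

# Hedged usage sketch: word2id and the session setup are placeholders.
word2id = {'<pad>': 0, '<unk>': 1, '我': 2, '你': 3}   # your own vocabulary

with tf.Session() as sess:
    loadEmbedding(sess, 'Tencent_AILab_ChineseEmbedding.txt',
                  word2id, embeddingSize=200)
    # The 'embedding' variable now holds the Tencent vectors for covered
    # words and small random values for everything else.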

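The commented-out SVD block is meant for an embeddingSize below 200 and is still untested; here is the same idea as a standalone sketch, run on a small random matrix instead of the real initW:

# Hedged sketch of the SVD-based reduction; toy sizes, not the real embeddings.
import numpy as np

vector_size, target_size = 200, 100
W = np.random.uniform(-0.25, 0.25, (5000, vector_size))   # stand-in for initW

U, s, Vt = np.linalg.svd(W, full_matrices=False)
W_small = U[:, :target_size] * s[:target_size]            # shape (5000, target_size)
print(W_small.shape)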