Training word vectors with word2vec

Training the model

from gensim.models import word2vec

sentences = word2vec.Text8Corpus(input_corpus)  # load the corpus; input_corpus is the corpus file path

# Common parameters: size = word-vector dimensionality, window = maximum context
# distance of the sliding window, min_count = minimum word frequency,
# iter = number of training epochs over the corpus.
# (In gensim >= 4.0, size was renamed to vector_size and iter to epochs.)

model = word2vec.Word2Vec(sentences, size=100, window=8, min_count=3, iter=8)

Training result

中国: 2.4051034 2.347189 2.1492286 2.36416 3.3391545 2.7728064 2.1600752 -0.97178006 3.3161209 -1.4273282 -1.460929 1.7023774 0.39640304 0.92302006 -0.44064146 -1.2462761 0.71472687 -2.0091395 -1.1625484 1.9183346 1.3589659 4.492623 -0.06471016 -0.67374414 0.5702634 -0.04561443 2.6501563 -0.6086548 -1.8949013 2.2059002 3.5559614 2.8981128 -0.37763736 -1.7835602 -1.0494096 1.8594857 -1.0929657 2.1960757 -2.266795 -3.6154387 1.9028498 1.5598435 -0.16755931 -2.4086187 4.748276 -2.827977 2.9857802 5.122005 0.7531201 2.049602 -0.5398894 -3.319249 -0.37066358 0.16588122 1.8525156 -4.531679 -1.2304896 -2.8112302 2.799388 -0.1128152 -0.9057815 1.0820556 -3.2845974 -0.34189522 2.1741004 -2.8306067 1.2236092 0.39991888 0.03834511 3.3192902 -1.5873901 -1.866539 -0.11960881 0.010244962 0.16474022 2.6132228 1.2568957 -1.685334 1.9155722 -1.5563394 -1.9100558 -2.6324818 3.0862947 -0.33642867 1.5173916 -2.5618932 -5.528164 1.9867828 0.43513966 0.24367392 -0.6689725 1.7407004 -4.5762343 0.41930607 -2.1844933 1.5136248 -0.33260316 0.58439684 -4.691953 -1.5455776

Example of using the model

from gensim.models import KeyedVectors

model = KeyedVectors.load_word2vec_format(model_path)  # model_path: vectors saved in word2vec format

print('similarity(美国,美元) = {}'.format(model.similarity("美国", "美元")))

print('similarity(不错,糟糕) = {}'.format(model.similarity("不错", "糟糕")))

most_sim = model.most_similar("美国", topn=10)

print('The top10 of 美国: {}'.format(most_sim))

Output:

similarity(美国,美元) = 0.3204698849918094

similarity(不错,糟糕) = 0.10450708907409183

The top10 of 美国: [('英国', 0.6424108147621155), ('纳扎尔巴耶夫', 0.6381773352622986), ('戈尔巴乔夫', 0.6322702169418335), ('各国', 0.6312756538391113), ('沙特', 0.6292957067489624), ('江布尔', 0.6286799907684326), ('澳', 0.6259944438934326), ('独立', 0.6243587136268616), ('乌克兰', 0.621656060218811), ('澳大利亚', 0.6174627542495728)]
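Under the hood, both `similarity` and `most_similar` rank words by cosine similarity between their vectors. A self-contained sketch of that computation, using small hypothetical vectors in place of real trained embeddings:

```python
import math

# Toy 3-dimensional vectors standing in for trained embeddings (hypothetical values).
vectors = {
    "美国": [0.9, 0.1, 0.3],
    "英国": [0.8, 0.2, 0.35],
    "美元": [0.5, 0.7, 0.1],
    "糟糕": [-0.2, 0.1, 0.9],
}

def cosine(a, b):
    # cosine similarity: dot product divided by the product of vector norms
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(word, topn=3):
    # rank every other word by cosine similarity to `word`, descending
    scores = [(w, cosine(vectors[word], v)) for w, v in vectors.items() if w != word]
    return sorted(scores, key=lambda t: t[1], reverse=True)[:topn]

print(most_similar("美国", topn=2))  # 英国 ranks first: its vector points the same way
```

This mirrors why 英国 tops the real model's `most_similar("美国")` list above: of all vocabulary words, its vector has the largest cosine similarity with 美国's.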
