gensim is a powerful natural-language-processing toolkit that ships with a large number of common models:
Basic corpus-processing tools:
# encoding=utf-8
from gensim.models import word2vec
# Text8Corpus parameters: fname (the word-segmented comment file), max_sentence_length=10000
sentences = word2vec.Text8Corpus(posfile)  # posfile: path to the segmented comments
# Train the model. Word2Vec defaults: sentences (training corpus), size=100, alpha=0.025, window=5, min_count=5, ...
model = word2vec.Word2Vec(sentences)
display(sentences, model)  # display() is available inside IPython/Jupyter
'''Test the similarity of any two words in the corpus'''
model.similarity("好", "good")
Out:
0.40164128
model.similarity("好", "不好")
Out:
-0.016563576
'''most_similar parameters (with defaults): positive=None, negative=None, topn=10'''
for i in model.most_similar(positive="安装"):
    print(i[0], i[1])
Out:
态度 0.6743848323822021
认真 0.5968624353408813
专业 0.5922791957855225
热情 0.5888502597808838
按装 0.5870505571365356
装 0.5850194692611694
配送 0.580833911895752
送货 0.5788726210594177
热心 0.5785735249519348
到货 0.5727808475494385
The txt file contains 50,000 comments that have already been word-segmented; training the model takes a single line:
model = word2vec.Word2Vec(sentences, min_count=5, size=50)
The first argument is the training corpus; the second, min_count, drops any word that occurs fewer than that many times (default 5); the third, size, is the dimensionality of the word vectors, i.e. the number of hidden-layer units in the neural network (default 100).
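As a quick sanity check on min_count (a minimal sketch; model_a and model_b are hypothetical names, training twice on the same corpus, and the vocabulary lives under model.wv.vocab in the gensim version used here), a higher threshold yields a smaller vocabulary:

model_a = word2vec.Word2Vec(sentences, min_count=5, size=50)
model_b = word2vec.Word2Vec(sentences, min_count=20, size=50)
# Words seen fewer than min_count times are dropped from the vocabulary,
# so model_b's vocabulary is a subset of model_a's.
print(len(model_a.wv.vocab), len(model_b.wv.vocab))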
model.save('word2vec_wx')
model.save exports the model to a file; here it is not exported as .bin.
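If the C-compatible .bin format is wanted, the word vectors can be exported too (a sketch; the filename is illustrative):

# Export only the vectors in the C binary format; the training state is discarded,
# so a model exported this way cannot be trained further.
model.wv.save_word2vec_format('word2vec_wx.bin', binary=True)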
A model saved this way is loaded back with gensim.models.Word2Vec.load:
import gensim
import pandas as pd

model = gensim.models.Word2Vec.load('xxx/word2vec_wx')
pd.Series(model.most_similar(u'微信', topn=360000))
The NumPy arrays stored with the model can be loaded directly with numpy.load:
import numpy
word_2x = numpy.load('xxx/word2vec_wx.wv.syn0.npy')
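The rows of that array follow the model's vocabulary order, so a word maps back to its row (a sketch assuming the model above is loaded as model; u'微信' is just an example key):

idx = model.wv.vocab[u'微信'].index  # row number of the word in syn0
vec = word_2x[idx]                   # its word vector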
There are other ways to load vectors as well:
from gensim.models.keyedvectors import KeyedVectors
word_vectors = KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False) # C text format
word_vectors = KeyedVectors.load_word2vec_format('/tmp/vectors.bin', binary=True) # C binary format
These load the C text format and the C binary format, respectively.
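A KeyedVectors object loaded this way answers the same queries as a full model, it just cannot be trained further. A brief usage sketch ('computer' assumes an English vector file):

word_vectors['computer']                       # raw vector lookup
word_vectors.most_similar('computer', topn=5)  # nearest neighbours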
model = gensim.models.Word2Vec.load('/tmp/mymodel')
model.train(more_sentences)
Note that models loaded from the C format cannot be trained further.
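In newer gensim releases the resumed-training API is stricter: the vocabulary must be updated explicitly and train needs an epoch count. A hedged sketch of that API (more_sentences is the same placeholder as above):

model.build_vocab(more_sentences, update=True)  # grow the vocabulary with the new corpus
model.train(more_sentences, total_examples=model.corpus_count, epochs=model.epochs)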
Several word-similarity tasks are supported:
nearest words with similarity scores (model.most_similar), odd-one-out (model.doesnt_match), and pairwise similarity (model.similarity).
model.most_similar(positive=['woman', 'king'], negative=['man'], topn=1)
[('queen', 0.50882536)]
model.doesnt_match("breakfast cereal dinner lunch".split())
'cereal'
model.similarity('woman', 'man')
0.73723527
2. Word vectors
A word's raw vector can be retrieved as follows:
model['computer'] # raw NumPy vector of a word
array([-0.00449447, -0.00310097, 0.02421786, ...], dtype=float32)
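model.similarity above is simply the cosine of these raw vectors, which is easy to verify by hand (a minimal sketch using NumPy):

import numpy as np

def cosine(u, v):
    # Dot product of the two vectors after length normalization.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Should match model.similarity('woman', 'man') up to float precision.
print(cosine(model['woman'], model['man']))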
The training pipeline:
import gensim, logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
import pymongo
import hashlib

db = pymongo.MongoClient('172.16.0.101').weixin.text_articles_words
md5 = lambda s: hashlib.md5(s).hexdigest()

class sentences:
    def __iter__(self):
        texts_set = set()
        for a in db.find(no_cursor_timeout=True):
            if md5(a['text'].encode('utf-8')) in texts_set:
                continue  # duplicate article: skip it
            else:
                texts_set.add(md5(a['text'].encode('utf-8')))
                yield a['words']
        print(u'%s articles processed in total' % len(texts_set))
word2vec = gensim.models.word2vec.Word2Vec(sentences(), size=256, window=10, min_count=64, sg=1, hs=1, iter=10, workers=25)
word2vec.save('word2vec_wx')
hashlib.md5 is brought in to deduplicate the articles (about 10 million articles originally, 8 million after deduplication); this step is not strictly necessary.
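One design point worth noting: sentences is a class with __iter__ rather than a one-shot generator because Word2Vec makes several passes over the corpus, one to build the vocabulary and then one per training epoch (iter=10 here), so the corpus object must be restartable on every pass.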