gensim is a powerful natural language processing toolkit that bundles many common models:
- Basic corpus processing tools
- LSI
- LDA
- HDP
- DTM
- DIM
- TF-IDF
- word2vec, paragraph2vec
.
# encoding=utf-8
from gensim.models import word2vec

# corpus: a text file of reviews that have already been word-segmented
sentences = word2vec.Text8Corpus(u'分词后的爽肤水评论.txt')
model = word2vec.Word2Vec(sentences, size=50)

# similarity between two words
y2 = model.similarity(u"好", u"还行")
print(y2)

# words most similar to a given word, with similarity scores
for word, score in model.most_similar(u"滋润"):
    print(word, score)
The txt file contains 50,000 reviews that have already been word-segmented; training the model takes a single line:
model=word2vec.Word2Vec(sentences,min_count=5,size=50)
The first argument is the training corpus; the second, min_count, drops words that occur fewer than this many times (default 5);
the third, size, is the number of hidden-layer units, i.e. the dimensionality of the word vectors (default 100).
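As a minimal sketch using the gensim 1.x-3.x API that this post follows (the corpus file name is just an example), the same call with explicit keyword arguments plus a quick check of how many words survive the min_count cutoff:
from gensim.models import word2vec

# a file of whitespace-separated, pre-segmented tokens; Text8Corpus streams it in chunks
sentences = word2vec.Text8Corpus('reviews_segmented.txt')

# min_count=5: words seen fewer than 5 times are dropped from the vocabulary
# size=50: each word is mapped to a 50-dimensional vector
model = word2vec.Word2Vec(sentences, min_count=5, size=50)

# how many words remain in the vocabulary after the min_count cutoff
print(len(model.wv.vocab))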
.
word2vec = gensim.models.word2vec.Word2Vec(sentences(), size=256, window=10, min_count=64, sg=1, hs=1, iter=10, workers=25)
word2vec.save('word2vec_wx')
word2vec.save is all it takes to export the model files (sentences here is the corpus iterator defined in the full training script below); the model is not exported as .bin here.
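If a C-compatible export is wanted, the word vectors can also be written out in the classic .bin/.txt formats; a minimal sketch, assuming the trained model is in the variable word2vec as above (file names are just examples; in gensim 1.x-3.x the vectors live under .wv):
word2vec.wv.save_word2vec_format('word2vec_wx.bin', binary=True)   # C binary format
word2vec.wv.save_word2vec_format('word2vec_wx.txt', binary=False)  # C text format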
.
Load the saved model with gensim.models.Word2Vec.load (pd below is pandas imported as pd):
model = gensim.models.Word2Vec.load('xxx/word2vec_wx')
pd.Series(model.most_similar(u'微信', topn=360000))
The NumPy arrays saved alongside the model can be loaded directly with numpy.load:
import numpy
word_2x = numpy.load('xxx/word2vec_wx.wv.syn0.npy')
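This .npy file is the raw embedding matrix itself; a quick sanity check on its shape (a sketch, assuming the model loaded above):
# one row per vocabulary word, one column per vector dimension
print(word_2x.shape)                             # e.g. (vocab_size, 256)
print(word_2x.shape[0] == len(model.wv.vocab))   # rows correspond to the vocabulary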
There are other ways to load vectors:
from gensim.models.keyedvectors import KeyedVectors
word_vectors = KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False) # C text format
word_vectors = KeyedVectors.load_word2vec_format('/tmp/vectors.bin', binary=True) # C binary format
These load the C text format and the C binary format respectively.
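Vectors loaded through KeyedVectors behave as a read-only lookup table: the similarity queries work, but the model cannot be trained further. A small sketch with the word_vectors object from above (the query words are just examples):
print(word_vectors.most_similar('woman', topn=3))   # nearest neighbours with scores
print(word_vectors.similarity('woman', 'man'))      # pairwise cosine similarity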
.
model = gensim.models.Word2Vec.load('/tmp/mymodel')
model.train(more_sentences)
Note that only a full model saved with model.save can be trained further; models generated by the original C tool (loaded via load_word2vec_format) cannot be retrained, because the vocabulary frequencies and hidden weights needed for training are not stored in that format.
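A sketch of resuming training on a fully saved model (more_sentences is a placeholder for any iterable of tokenized sentences; note that gensim >= 1.0 requires total_examples and epochs to be passed to train explicitly, and gensim 4.x renames model.iter to model.epochs):
model = gensim.models.Word2Vec.load('/tmp/mymodel')

# if more_sentences contains new words, grow the vocabulary first
model.build_vocab(more_sentences, update=True)
model.train(more_sentences,
            total_examples=model.corpus_count,
            epochs=model.iter)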
.
Several word-similarity tasks are supported:
most-similar words with similarity scores (model.most_similar), odd-one-out detection (model.doesnt_match), and pairwise similarity (model.similarity).
model.most_similar(positive=['woman', 'king'], negative=['man'], topn=1)
[('queen', 0.50882536)]
model.doesnt_match("breakfast cereal dinner lunch".split())
'cereal'
model.similarity('woman', 'man')
0.73723527
.
A word's raw vector can be obtained as follows:
model['computer'] # raw NumPy vector of a word
array([-0.00449447, -0.00310097, 0.02421786, ...], dtype=float32)
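In gensim 1.x and later the same vector is also reachable through model.wv, and looking up a word that was filtered out by min_count raises a KeyError; a small sketch (the rare word is hypothetical):
vec = model.wv['computer']   # same raw NumPy vector as model['computer']
print(vec.shape)             # (size,), e.g. (50,) if the model was trained with size=50

try:
    model.wv['some_rare_word']          # hypothetical out-of-vocabulary token
except KeyError:
    print('word not in vocabulary')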
.
The training process:
import gensim, logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

import pymongo
import hashlib

db = pymongo.MongoClient('172.16.0.101').weixin.text_articles_words
md5 = lambda s: hashlib.md5(s).hexdigest()

class sentences:
    def __iter__(self):
        texts_set = set()
        for a in db.find(no_cursor_timeout=True):
            # deduplicate articles by the md5 of their text
            if md5(a['text'].encode('utf-8')) in texts_set:
                continue
            else:
                texts_set.add(md5(a['text'].encode('utf-8')))
                yield a['words']
        print(u'最终计算了%s篇文章' % len(texts_set))  # "processed %s articles in total"

word2vec = gensim.models.word2vec.Word2Vec(sentences(), size=256, window=10, min_count=64, sg=1, hs=1, iter=10, workers=25)
word2vec.save('word2vec_wx')
hashlib.md5 is introduced here to deduplicate the articles (originally 10 million articles, about 8 million after deduplication); this step is not strictly necessary.
.
References:
- 基于python的gensim word2vec训练词向量
- Gensim Word2vec 使用教程
- Official tutorial: http://radimrehurek.com/gensim/models/word2vec.html