word2vec model training process

Reference blog: https://blog.csdn.net/vivian_ll/article/details/89914219

1. First download the raw Chinese Wikipedia dump from https://dumps.wikimedia.org/zhwiki/
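The file you want is the pages-articles dump. If you prefer to script the download, here is a minimal sketch; the dated URL below mirrors the filename used in step 2 and is an assumption (old dated dumps are eventually removed, and the newest one always lives under /zhwiki/latest/):

from urllib.request import urlretrieve

# Download the dump referenced in step 2 (adjust the date to a dump that is still hosted)
url = 'https://dumps.wikimedia.org/zhwiki/20180720/zhwiki-20180720-pages-articles.xml.bz2'
urlretrieve(url, 'zhwiki-20180720-pages-articles.xml.bz2')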

2. Use the WikiExtractor.py script to extract the article text: https://github.com/attardi/wikiextractor/blob/master/WikiExtractor.py
Copy the code from that page into a local WikiExtractor.py, then run it from cmd:

python WikiExtractor.py -b 500M -o zhwiki zhwiki-20180720-pages-articles.xml.bz2

-b sets the size of each output split; -o sets the output directory.
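The extractor writes plain-text splits (wiki_00, wiki_01, ...) into subfolders of the output directory (e.g. zhwiki/AA), with every article wrapped in <doc ...> ... </doc> tags. A quick sketch, using the output path assumed above, to count how many articles were extracted:

import glob

# Count articles by counting opening <doc> tags across all extracted splits
total = 0
for path in glob.glob('zhwiki/*/wiki_*'):
    with open(path, encoding='utf-8') as f:
        total += sum(1 for line in f if line.startswith('<doc '))
print('extracted articles:', total)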

3. Use OpenCC to convert traditional Chinese to simplified Chinese. Download the OpenCC release for your platform, copy the split files produced above into its bin directory, add the bin directory to your environment variables, and then run the following from the bin directory:

.\opencc -i wiki_00 -o zh_wiki_00 -c <your-install-dir>\opencc-1.0.4\share\opencc\t2s.json
.\opencc -i wiki_01 -o zh_wiki_01 -c <your-install-dir>\opencc-1.0.4\share\opencc\t2s.json
.\opencc -i wiki_02 -o zh_wiki_02 -c <your-install-dir>\opencc-1.0.4\share\opencc\t2s.json
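If you would rather stay in Python than deal with the Windows binary, here is a sketch using the opencc-python-reimplemented package (pip install opencc-python-reimplemented); the package name and its OpenCC('t2s') API are assumptions about your environment, not part of the original workflow:

from opencc import OpenCC

# Convert each extracted split from traditional to simplified Chinese
cc = OpenCC('t2s')
for name in ['wiki_00', 'wiki_01', 'wiki_02']:
    with open(name, encoding='utf-8') as fin, \
         open('zh_' + name, 'w', encoding='utf-8') as fout:
        for line in fin:
            fout.write(cc.convert(line))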

4. Then run the following symbol-cleanup script (adapted from the reference blog) to strip leftover wiki markers and stray punctuation:

#!/usr/bin/python
# -*- coding: utf-8 -*-
import re
import sys
import codecs

def myfun(input_file):
    # Keep only the simplified-Chinese variant inside -{...}- language-conversion markers
    p1 = re.compile(r'-\{.*?(zh-hans|zh-cn):([^;]*?)(;.*?)?\}-')
    # Drop parentheses that contain only punctuation or whitespace
    p2 = re.compile(r'[(\(][,;。?!\s]*[)\)]')
    # Replace corner brackets with curly quotation marks
    p3 = re.compile(r'[「『]')
    p4 = re.compile(r'[」』]')
    outfile = codecs.open('std_zh_wiki', 'a+', 'utf-8')
    with codecs.open(input_file, 'r', 'utf-8') as myfile:
        for line in myfile:
            line = p1.sub(r'\2', line)
            line = p2.sub(r'', line)
            line = p3.sub(r'“', line)
            line = p4.sub(r'”', line)
            outfile.write(line)
    outfile.close()

if __name__ == '__main__':
    if len(sys.argv) != 2:
        print("Usage: python script.py inputfile")
        sys.exit()
    input_file = sys.argv[1]
    myfun(input_file)
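Run the script once per converted file, e.g. python script.py zh_wiki_00; because the output file is opened in append mode ('a+'), all three inputs end up concatenated into a single std_zh_wiki file.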

5. The next step is word segmentation and stopword removal; this step takes quite a long time.

import jieba

# Build the stopword set (loaded once; a set makes membership tests fast)
def stopwordslist():
    stopwords = set(line.strip() for line in open('D:/zhwiki/AA/stopword.txt', encoding='UTF-8'))
    return stopwords

# Segment one line with jieba and drop stopwords
def seg_depart(sentence, stopwords):
    sentence_depart = jieba.cut(sentence.strip())
    # Collect the kept tokens, separated by single spaces
    outstr = ''
    for word in sentence_depart:
        if word not in stopwords:
            if word != '\t':
                outstr += word
                outstr += " "
    return outstr

# Input and output paths
filename = "D:/zhwiki/AA/std_zh_wiki"
outfilename = "D:/zhwiki/AA/cut_std_zh_wiki"
inputs = open(filename, 'r', encoding='UTF-8')
outputs = open(outfilename, 'w', encoding='UTF-8')

# Load the stopword list once instead of re-reading it for every line
stopwords = stopwordslist()

# Segment every line and write the result to the output file
for line in inputs:
    line_seg = seg_depart(line, stopwords)
    outputs.write(str(line_seg) + "\n")
    print("------------------- segmenting and removing stopwords -----------")
outputs.close()
inputs.close()
print("Segmentation and stopword removal finished!")

6. With the steps above complete, you can train the model:

from gensim.models import word2vec
import logging
import os

logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
# LineSentence expects one sentence per line with tokens separated by whitespace
sentences = word2vec.LineSentence('./cut_std_zh_wiki_00')
# 200-dimensional vectors, context window of 5, drop words seen fewer than 5 times
# (in gensim >= 4.0 the `size` argument is named `vector_size`)
model = word2vec.Word2Vec(sentences, size=200, window=5, min_count=5, workers=4)
# Make sure the output directory exists before saving
os.makedirs('./word2vecModel', exist_ok=True)
model.save('./word2vecModel/WikiCHModel')
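Once training finishes you can reload the saved model and inspect the vectors; a quick usage sketch (the query words are arbitrary examples):

from gensim.models import word2vec

# Reload the model saved above
model = word2vec.Word2Vec.load('./word2vecModel/WikiCHModel')
# Ten nearest neighbours of a word in the embedding space
print(model.wv.most_similar('足球', topn=10))
# Cosine similarity between two words
print(model.wv.similarity('足球', '篮球'))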
