Chinese and English Word Segmentation in Natural Language Processing

Programming environment:

Anaconda + Python 3.7
The complete code and data have been pushed to GitHub, feel free to fork ~ GitHub link


Statement: writing this took real effort; no copying or reprinting without authorization.


Chinese word segmentation tools
1. Jieba (primary focus): three segmentation modes plus custom-dictionary support (a short custom-dictionary sketch follows this list)
2. SnowNLP
3. THULAC
4. NLPIR: https://github.com/tsroten/pynlpir
5. NLPIR: https://blog.csdn.net/weixin_34613450/article/details/78695166
6. StanfordCoreNLP
7. HanLP (additionally requires Microsoft Visual C++ 14.0; see its installation tutorial)
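
The custom-dictionary feature mentioned in item 1 is not exercised in the test script below, so here is a minimal sketch; userdict.txt is a hypothetical file with one "word [frequency] [POS tag]" entry per line:

import jieba

jieba.load_userdict('userdict.txt')   # load a whole user dictionary (hypothetical file name)
jieba.add_word('中英文分词')           # entries can also be added one at a time
print('/ '.join(jieba.cut('自然语言处理之中英文分词')))
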
English word segmentation tools
1. NLTK:
http://www.nltk.org/index.html
https://github.com/nltk/nltk
https://www.jianshu.com/p/9d232e4a3c28

Main task:

Using the given Chinese and English texts (see Chinese.txt and English.txt), segment each with the corresponding tools listed above and briefly compare and analyze the results produced by the different tools.

1. Installing the toolkits in the Python environment:

Most of the packages install quickly and painlessly with pip. A few points deserve attention:

(1) StanfordCoreNLP installation:

First set up the Java environment: install a JDK of version 1.8 or higher and configure the Path and JAVA_HOME environment variables. It is best to restart so that the new environment variables take effect; otherwise you may hit a Java-related startup error.


Next, download the external CoreNLP package, choosing the release that matches your Python wrapper version, and unzip it. To process Chinese you additionally need the corresponding Chinese model jar, which goes into the same unzipped folder. A quick environment check is sketched below.
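
Before starting the server, a quick sanity check can save time; this is only a sketch, assuming the unzip location used later in this post:

import os
import shutil

print(shutil.which('java'))            # should resolve to the JDK 1.8+ java binary
print(os.environ.get('JAVA_HOME'))     # should not be None once the variable has taken effect
print(os.path.isdir(r'D:\anaconda\Lib\stanford-corenlp-full-2018-02-27'))   # the unzipped CoreNLP folder
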

(2) spaCy installation: you may run into an insufficient-permission error. Two workarounds:

Method 1: launch the command line as administrator.
Method 2: use nlp = spacy.load('en_core_web_sm') instead of the original nlp = spacy.load('en'); a download-and-load sketch follows.
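
A small sketch of method 2, assuming the model has not been downloaded yet; spacy.cli.download is the programmatic counterpart of running "python -m spacy download en_core_web_sm" in a shell:

import spacy
from spacy.cli import download

download('en_core_web_sm')             # one-time download; may still require an elevated shell on Windows
nlp = spacy.load('en_core_web_sm')     # load the model by its full package name instead of 'en'
print([t.text for t in nlp('This is a test sentence.')])
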

(3) Text encoding:

Some toolkits expect Unicode input, so specify the file encoding explicitly when reading text; otherwise you may run into decoding problems.
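
For example, the two test files used below can be read with an explicit encoding (assuming they are saved as UTF-8):

with open('Chinese.txt', encoding='utf-8') as f:
    document = f.read()
with open('English.txt', encoding='utf-8') as f:
    doc = f.read()
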

2. Writing the code and running the tests:

# -*- coding: utf-8 -*-
"""
Created on Tue Mar 18 09:43:47 2019
@author: Mr.relu
"""
import time
import jieba
from snownlp import SnowNLP
import thulac   
import pynlpir
from stanfordcorenlp import StanfordCoreNLP
import nltk
import spacy 
spacy_nlp = spacy.load('en_core_web_sm')

f = open('Chinese.txt', encoding='utf-8')  # read the Chinese test text as UTF-8
document = f.read()
f.close()
print(document)    

"""
Test the jieba toolkit
"""
print(">>>>>jieba tokenization start...")
start = time.process_time()

seg_list = jieba.cut(str(document), cut_all=True)

elapsed = (time.process_time() - start)
print("jieba 01 Time used:",elapsed)

print("《《jieba Full Mode》》: \n" + "/ ".join(seg_list))  # 全模式

start = time.process_time()
seg_list = jieba.cut(document, cut_all=False)
elapsed = (time.process_time() - start)
print("jieba 02 Time used:",elapsed)
print("《《jieba Default Mode》》: \n" + "/ ".join(seg_list))  # 精确模式

start = time.process_time()
seg_list = jieba.cut_for_search(document)  # search-engine mode
elapsed = (time.process_time() - start)
print("jieba 03 Time used:",elapsed)
print("《《jieba Search Model》》: \n" + "/ ".join(seg_list))

"""
Test the SnowNLP toolkit
"""
print(">>>>>SnowNLP tokenization start...")
start = time.process_time()
s = SnowNLP(document)
result = s.words                    # e.g. [u'这个', u'东西', u'真心', u'很', u'赞']
elapsed = (time.process_time() - start)
print("SnowNLP Time used:",elapsed)
print("《《SnowNLP》》: \n" + "/ ".join(result))
# Other SnowNLP features (from its documentation), kept here for reference:
# s.tags        # [(u'这个', u'r'), (u'东西', u'n'), (u'真心', u'd'), (u'很', u'd'), (u'赞', u'Vg')]
# s.sentiments  # 0.9769663402895832 -- probability that the text is positive
# s.pinyin      # [u'zhe', u'ge', u'dong', u'xi', u'zhen', u'xin', u'hen', u'zan']
# s2 = SnowNLP(u'「繁體字」「繁體中文」的叫法在臺灣亦很常見。')
# s2.han        # u'「繁体字」「繁体中文」的叫法在台湾亦很常见。'
"""
Test the thulac toolkit
"""
print(">>>>>thulac tokenization start...")
start = time.process_time()
thu1 = thulac.thulac(seg_only=True)  # segmentation only, no POS tagging
text = thu1.cut(document, text=True)  # text=True returns one space-separated string
elapsed = (time.process_time() - start)
print("thulac Time used:",elapsed)
print("《《thulac》》: \n" + "/ ".join(text))    
#thu1 = thulac.thulac(seg_only=True)  #只进行分词,不进行词性标注
#thu1.cut_f("Chinese.txt", "output.txt")  #对input.txt文件内容进行分词,输出到output.txt

"""
Test the pynlpir toolkit
"""
print(">>>>>pynlpir tokenization start...")
start = time.process_time()
pynlpir.open()
resu = pynlpir.segment(document,pos_tagging=False)

elapsed = (time.process_time() - start)
print("pynlpir Time used:",elapsed)

print("《《pynlpir》》: \n" + "/ ".join(resu)) 
"""
pynlpir.segment(s, pos_tagging=True, pos_names='parent', pos_english=True)
    s:            the sentence to segment
    pos_tagging:  whether to perform POS tagging
    pos_names:    show the parent POS category, the child category, or all
    pos_english:  show POS tags in English or in Chinese

pynlpir.get_key_words(s, max_words=50, weighted=False)
    s:            the sentence
    max_words:    maximum number of keywords to return
    weighted:     whether to return the keyword weights
"""
"""
Test the StanfordCoreNLP toolkit
"""
print(">>>>>StanfordCoreNLP tokenization start...")
start = time.process_time()
nlp = StanfordCoreNLP(r'D:\anaconda\Lib\stanford-corenlp-full-2018-02-27',lang = 'zh')
outWords = nlp.word_tokenize(document)
elapsed = (time.process_time() - start)
print("StanfordCoreNLP Time used:",elapsed)
print("《《StanfordCoreNLP》》: \n" + "/ ".join(outWords))
#print('Part of Speech:', nlp.pos_tag(document))
#print('Named Entities:', nlp.ner(document))
#print('Constituency Parsing:', nlp.parse(document))
#print('Dependency Parsing:', nlp.dependency_parse(document))
nlp.close()  # Do not forget to close! The backend server consumes a lot of memory.

"""
English tokenization with NLTK
"""
f = open('English.txt', encoding='utf-8')  # read the English test text as UTF-8
doc = f.read()
f.close()
print(doc)

print(">>>>>NLTK tokenization start...")
start = time.process_time()
tokens = nltk.word_tokenize(doc)
elapsed = (time.process_time() - start)
print("NLTK Time used:",elapsed)
print("《《NLTK》》: \n" + "/ ".join(tokens))

"""
English tokenization with spaCy
"""
print(">>>>>spacy tokenization start...")
start = time.process_time()
s_doc = spacy_nlp(doc)
elapsed = (time.process_time() - start)
print("spacy Time used:",elapsed)

token_doc = [token.text for token in s_doc]  # collect the token strings from the spaCy Doc
print("《《Spacy》》: \n" + "/ ".join(token_doc))

"""
English tokenization with StanfordCoreNLP
"""
print(">>>>>StanfordCoreNLP tokenization start...")
start = time.process_time()
nlp2 = StanfordCoreNLP(r'D:\anaconda\Lib\stanford-corenlp-full-2018-02-27')
outWords = nlp2.word_tokenize(doc)
elapsed = (time.process_time() - start)
print("StanfordCoreNLP Time used:",elapsed)
print("《《StanfordCoreNLP>>: \n" + "/ ".join(outWords))
nlp2.close()  # Do not forget to close! The backend server consumes a lot of memory.

Partial results: [screenshots of the segmentation output omitted]
