Note: This article is based on the GitChat course 《中文自然语言处理入门实战》 (A Practical Introduction to Chinese NLP). There are quite a few reposted copies of the course online; I purchased mine on GitChat.
Lesson 5: Text Visualization Techniques
Now we get to the real subject: building a word vector model, an important step in NLP. Here we use the Gensim library, which is easy to install:
pip install gensim
The bag-of-words (BOW) model
The bag-of-words model treats a text as an unordered collection of words: grammar and word order are ignored, and each word is simply counted to obtain its frequency.
It is commonly used in text classification, for example with Naive Bayes, LDA, and LSA.
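Before building a full bag of words, the core idea of counting word frequencies can be sketched in a couple of lines. This is only an illustration using Python's collections.Counter; the sample text here is my own, not from the course:

import jieba
from collections import Counter

# Count how often each word appears in a small sample text (illustrative only)
sample = "机器学习带动人工智能飞速的发展。深度学习带动人工智能飞速的发展。"
words = [w for w in jieba.lcut(sample) if w not in [",", "。", ":", ";", "?"]]
print(Counter(words))  # maps each word to its frequency, e.g. '带动' -> 2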
Building a bag of words by hand
import jieba

punctuation = [",", "。", ":", ";", "?"]  # a simple set of punctuation marks
content = [
    "机器学习带动人工智能飞速的发展。",
    "深度学习带动人工智能飞速的发展。",
    "机器学习和深度学习带动人工智能飞速的发展。"
]
# Segment each sentence into words with jieba
seg_1 = [jieba.lcut(con) for con in content]
print(seg_1)
# Remove punctuation tokens from each segmented sentence
tokenized = []
for sentence in seg_1:
    words = []
    for word in sentence:
        if word not in punctuation:
            words.append(word)
    tokenized.append(words)
print(tokenized)
# Build the vocabulary (the "bag"): all distinct non-punctuation tokens
BOW = [x for item in seg_1 for x in item if x not in punctuation]
BOW = list(set(BOW))
print(BOW)
# Turn each sentence into a 0/1 vector over the vocabulary
bag_of_word2vec = []
for sentence in tokenized:
    vec = [1 if token in sentence else 0 for token in BOW]
    bag_of_word2vec.append(vec)
print(bag_of_word2vec)
You can see that the generated bag of words (the vocabulary) looks like this; since it comes from a set, the order may differ between runs:
['的', '机器', '人工智能', '学习', '和', '飞速', '带动', '深度', '发展']
And the vector for each text looks like this:
[[1, 1, 1, 1, 0, 1, 1, 0, 1], [1, 0, 1, 1, 0, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1]]
This is essentially a one-hot representation: each position only records whether a word occurs, not how often. It is quite crude.
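As a quick illustration of that rigidity, the same vocabulary can be applied to a new sentence: anything outside the vocabulary is silently dropped. A small sketch, assuming it runs in the same session as the code above (the new sentence is my own example):

# Vectorize a new, unseen sentence against the same BOW vocabulary
new_sentence = [w for w in jieba.lcut("强化学习带动机器人飞速的发展。") if w not in punctuation]
new_vec = [1 if token in new_sentence else 0 for token in BOW]
print(new_vec)
# Any token that is not in BOW (e.g. "机器人") contributes nothing to the vector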
Building a bag of words with Gensim
import os
import jieba
from gensim import corpora

punctuation = [",", "。", ":", ";", "?"]  # a simple set of punctuation marks
content = [
    "机器学习带动人工智能飞速的发展。",
    "深度学习带动人工智能飞速的发展。",
    "机器学习和深度学习带动人工智能飞速的发展。"
]
# Same segmentation and punctuation removal as before
seg_1 = [jieba.lcut(con) for con in content]
print(seg_1)
tokenized = []
for sentence in seg_1:
    words = []
    for word in sentence:
        if word not in punctuation:
            words.append(word)
    tokenized.append(words)
print(tokenized)
# Build a Gensim dictionary: it assigns an integer id to every distinct token
dictionary = corpora.Dictionary(tokenized)
os.makedirs('./tf_logs/gensim', exist_ok=True)  # make sure the target directory exists
dictionary.save('./tf_logs/gensim/deerwester.dict')
print(dictionary.token2id)
# doc2bow turns a token list into sparse (token_id, count) pairs
corpus = [dictionary.doc2bow(sentence) for sentence in tokenized]
print(corpus)
{'人工智能': 0, '发展': 1, '学习': 2, '带动': 3, '机器': 4, '的': 5, '飞速': 6, '深度': 7, '和': 8}
[[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1), (6, 1)], [(0, 1), (1, 1), (2, 1), (3, 1), (5, 1), (6, 1), (7, 1)], [(0, 1), (1, 1), (2, 2), (3, 1), (4, 1), (5, 1), (6, 1), (7, 1), (8, 1)]]
Comparing the output with the hand-built version above, the results are essentially the same, but this approach is much closer to production quality. The fundamental problem remains, however: the biggest weakness of BOW is that it cannot express any similarity between related words.
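To make the comparison with the hand-built vectors concrete, the sparse (id, count) pairs can be expanded into a dense matrix. A minimal sketch, assuming corpus from the code above is still in scope; corpus2dense comes from gensim.matutils:

from gensim import corpora, matutils

# Reload the dictionary saved above, just to show the saved file is usable
dictionary = corpora.Dictionary.load('./tf_logs/gensim/deerwester.dict')
# Expand the sparse (token_id, count) pairs into a dense document-term matrix:
# rows are documents, columns follow dictionary.token2id, values are word counts
dense = matutils.corpus2dense(corpus, num_terms=len(dictionary)).T
print(dense)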