Reposted from: https://blog.csdn.net/xiexf189/article/details/79092629
Environment: Python 3.6.0 | Anaconda 4.3.1 (64-bit), Jupyter Notebook
First, import the word-segmentation library jieba and the text-similarity library gensim:
import jieba
from gensim import corpora, models, similarities
doc0 through doc7 below are a few very simple documents, which we will call the target documents. This article analyzes the similarity between doc_test (the test document) and these eight documents.
doc0 = "我不喜欢上海"
doc1 = "上海是一个好地方"
doc2 = "北京是一个好地方"
doc3 = "上海好吃的在哪里"
doc4 = "上海好玩的在哪里"
doc5 = "上海是好地方"
doc6 = "上海路和上海人"
doc7 = "喜欢小吃"
doc_test="我喜欢上海的小吃"
First, to simplify later operations, put the target documents into a single list, all_doc:
all_doc = []
all_doc.append(doc0)
all_doc.append(doc1)
all_doc.append(doc2)
all_doc.append(doc3)
all_doc.append(doc4)
all_doc.append(doc5)
all_doc.append(doc6)
all_doc.append(doc7)
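The eight append calls can also be written as a single list literal; the following builds the same all_doc in one line:

all_doc = [doc0, doc1, doc2, doc3, doc4, doc5, doc6, doc7]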
Now segment each target document and store the results in the list all_doc_list:
all_doc_list = []
for doc in all_doc:
    doc_list = [word for word in jieba.cut(doc)]
    all_doc_list.append(doc_list)
Print the segmented lists:
print(all_doc_list)
[['我', '不', '喜欢', '上海'],
 ['上海', '是', '一个', '好', '地方'],
 ['北京', '是', '一个', '好', '地方'],
 ['上海', '好吃', '的', '在', '哪里'],
 ['上海', '好玩', '的', '在', '哪里'],
 ['上海', '是', '好', '地方'],
 ['上海', '路', '和', '上海', '人'],
 ['喜欢', '小吃']]
Segment the test document in the same way and store the result in the list doc_test_list:
doc_test_list = [word for word in jieba.cut(doc_test)]
doc_test_list
['我', '喜欢', '上海', '的', '小吃']
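As a side note, jieba also provides lcut, which returns a list directly, so the list comprehension above is not strictly necessary; an equivalent way to write the segmentation:

doc_test_list = jieba.lcut(doc_test)   # same result as [word for word in jieba.cut(doc_test)]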
Next, use corpora.Dictionary to build the bag-of-words dictionary:
dictionary = corpora.Dictionary(all_doc_list)
The dictionary assigns a numeric id to every word:
dictionary.keys()
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]
The mapping between ids and words:
dictionary.token2id
{'一个': 4,
 '上海': 0,
 '不': 1,
 '人': 14,
 '北京': 8,
 '和': 15,
 '哪里': 9,
 '喜欢': 2,
 '在': 10,
 '地方': 5,
 '好': 6,
 '好吃': 11,
 '好玩': 13,
 '小吃': 17,
 '我': 3,
 '是': 7,
 '的': 12,
 '路': 16}
Next, use doc2bow to build the corpus:
corpus = [dictionary.doc2bow(doc) for doc in all_doc_list]
The corpus is shown below. It is a list of vectors; each element of a vector is a 2-tuple (word id, count), one per distinct word in the segmented document.
[[(0, 1), (1, 1), (2, 1), (3, 1)],
[(0, 1), (4, 1), (5, 1), (6, 1), (7, 1)],
[(4, 1), (5, 1), (6, 1), (7, 1), (8, 1)],
[(0, 1), (9, 1), (10, 1), (11, 1), (12, 1)],
[(0, 1), (9, 1), (10, 1), (12, 1), (13, 1)],
[(0, 1), (5, 1), (6, 1), (7, 1)],
[(0, 2), (14, 1), (15, 1), (16, 1)],
[(2, 1), (17, 1)]]
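If the id-based vectors are hard to read, each id can be mapped back to its word through the dictionary; a small sketch (readable_corpus is just an illustrative name):

# turn every (id, count) pair back into a (word, count) pair
readable_corpus = [[(dictionary[wid], cnt) for wid, cnt in vec] for vec in corpus]
print(readable_corpus[0])   # [('上海', 1), ('不', 1), ('喜欢', 1), ('我', 1)]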
Convert the test document to the same kind of 2-tuple vector in the same way:
doc_test_vec = dictionary.doc2bow(doc_test_list)
doc_test_vec
[(0, 1), (2, 1), (3, 1), (12, 1), (17, 1)]
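One caveat worth noting: doc2bow only counts words that already exist in the dictionary and silently ignores anything else, so unseen words in a test document are simply dropped. A quick illustration (the word '广州' here is a made-up example that appears in none of the target documents):

print(dictionary.doc2bow(['喜欢', '广州', '小吃']))   # [(2, 1), (17, 1)] — '广州' is dropped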
Build a TF-IDF model on the corpus:
tfidf = models.TfidfModel(corpus)
Get the TF-IDF weight of each word in the test document:
tfidf[doc_test_vec]
[(0, 0.08112725037593049),
(2, 0.3909393754390612),
(3, 0.5864090631585919),
(12, 0.3909393754390612),
(17, 0.5864090631585919)]
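To see where these weights come from: with its default settings, gensim's TfidfModel weights each word by its count in the document times log2(N/df), where N is the number of documents in the corpus and df is the number of documents containing the word, and then L2-normalizes the resulting vector. A minimal hand calculation for the test document (a sketch based on those defaults; the df values are read off from doc0–doc7 above):

import math

N = 8                                    # number of target documents
df = {0: 6, 2: 2, 3: 1, 12: 2, 17: 1}    # document frequencies of '上海', '喜欢', '我', '的', '小吃'
# raw weight = tf * log2(N / df); every tf is 1 in the test document
raw = {wid: math.log2(N / df[wid]) for wid in df}
norm = math.sqrt(sum(w * w for w in raw.values()))
print({wid: round(w / norm, 4) for wid, w in raw.items()})
# {0: 0.0811, 2: 0.3909, 3: 0.5864, 12: 0.3909, 17: 0.5864} — matches tfidf[doc_test_vec]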
Compute the similarity of the test document against each target document:
index = similarities.SparseMatrixSimilarity(tfidf[corpus], num_features=len(dictionary.keys()))
sim = index[tfidf[doc_test_vec]]
sim
array([ 0.54680777, 0.01055349, 0. , 0.17724207, 0.17724207,
0.01354522, 0.01279765, 0.70477605], dtype=float32)
Sort by similarity:
sorted(enumerate(sim), key=lambda item: -item[1])
[(7, 0.70477605),
(0, 0.54680777),
(3, 0.17724207),
(4, 0.17724207),
(5, 0.013545224),
(6, 0.01279765),
(1, 0.010553493),
(2, 0.0)]
From these results, the test document is most similar to doc7, followed by doc0, and its similarity to doc2 is zero. You can check this against the TF-IDF principle to see whether it matches your expectation.
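The numbers in sim are cosine similarities between TF-IDF vectors, and since those vectors are already L2-normalized, the cosine is just a dot product over the words the two documents share. As a sanity check (a sketch using the values worked out above), the 0.7048 for doc7 can be reproduced from the two shared words '喜欢' and '小吃':

# doc7 (['喜欢', '小吃']): raw TF-IDF weights log2(8/2) = 2 and log2(8/1) = 3,
# L2-normalized to roughly (0.5547, 0.8321)
# TF-IDF weights of the same two words in the test document: 0.3909 and 0.5864
print(0.5547 * 0.3909 + 0.8321 * 0.5864)   # ≈ 0.7048, matching sim[7]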
Finally, to summarize the steps of text-similarity analysis:
1. Segment all documents (target and test) with jieba.
2. Build a dictionary of all words with corpora.Dictionary.
3. Convert each segmented document into a bag-of-words vector with doc2bow; the target vectors form the corpus.
4. Train a TF-IDF model on the corpus.
5. Build a similarity index over the TF-IDF corpus with similarities.SparseMatrixSimilarity and query it with the test document's TF-IDF vector.
-----------------------------------------------------------------------------------------------------------------------------
The following is my own debugging run:
import jieba
from gensim import corpora, models, similarities
doc0 = "我不喜欢上海"
doc1 = "上海是一个好地方"
doc2 = "北京是一个好地方"
doc3 = "上海好吃的在哪里"
doc4 = "上海好玩的在哪里"
doc5 = "上海是好地方"
doc6 = "上海路和上海人"
doc7 = "喜欢小吃"
doc_test="我喜欢上海的小吃"
all_doc = []
all_doc.append(doc0)
all_doc.append(doc1)
all_doc.append(doc2)
all_doc.append(doc3)
all_doc.append(doc4)
all_doc.append(doc5)
all_doc.append(doc6)
all_doc.append(doc7)
all_doc_list = []
for doc in all_doc:
    doc_list = [word for word in jieba.cut(doc)]
    all_doc_list.append(doc_list)
print(all_doc_list)
doc_test_list = [word for word in jieba.cut(doc_test)]
print(doc_test_list)
dictionary = corpora.Dictionary(all_doc_list)
print(dictionary.keys())
print(dictionary.token2id)
corpus = [dictionary.doc2bow(doc) for doc in all_doc_list]
doc_test_vec = dictionary.doc2bow(doc_test_list)
print(doc_test_vec)
tfidf = models.TfidfModel(corpus)
print(tfidf[doc_test_vec])
index = similarities.SparseMatrixSimilarity(tfidf[corpus], num_features=len(dictionary.keys()))
print(index)
print(index.num_best)
print(len(dictionary.keys()))
sim = index[tfidf[doc_test_vec]]
print(sim)
print(sorted(enumerate(sim), key=lambda item: -item[1]))
Debug log:
D:\program\anaconda3\python.exe D:/PYTHON/untitled/exercise2.py
D:\program\anaconda3\lib\site-packages\gensim\utils.py:1197: UserWarning: detected Windows; aliasing chunkize to chunkize_serial
warnings.warn("detected Windows; aliasing chunkize to chunkize_serial")
Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\XUEFEI~1.ZHA\AppData\Local\Temp\jieba.cache
Loading model cost 0.809 seconds.
Prefix dict has been built succesfully.
[['我', '不', '喜欢', '上海'], ['上海', '是', '一个', '好', '地方'], ['北京', '是', '一个', '好', '地方'], ['上海', '好吃', '的', '在', '哪里'], ['上海', '好玩', '的', '在', '哪里'], ['上海', '是', '好', '地方'], ['上海', '路', '和', '上海', '人'], ['喜欢', '小吃']]
['我', '喜欢', '上海', '的', '小吃']
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]
{'上海': 0, '不': 1, '喜欢': 2, '我': 3, '一个': 4, '地方': 5, '好': 6, '是': 7, '北京': 8, '哪里': 9, '在': 10, '好吃': 11, '的': 12, '好玩': 13, '人': 14, '和': 15, '路': 16, '小吃': 17}
[(0, 1), (2, 1), (3, 1), (12, 1), (17, 1)]
[(0, 0.081127250375930493), (2, 0.39093937543906121), (3, 0.58640906315859187), (12, 0.39093937543906121), (17, 0.58640906315859187)]
None
18
[ 0.54680777 0.01055349 0. 0.17724207 0.17724207 0.01354522
0.01279765 0.70477605]
[(7, 0.70477605), (0, 0.54680777), (3, 0.17724207), (4, 0.17724207), (5, 0.013545224), (6, 0.01279765), (1, 0.010553493), (2, 0.0)]
Process finished with exit code 0
Analysis:
sim = index[tfidf[doc_test_vec]]
print(sim)
// Gives the similarity between the test document doc_test_vec and every existing document; the results are ordered the same way as the corpus documents.
[ 0.54680777 0.01055349 0. 0.17724207 0.17724207 0.01354522
0.01279765 0.70477605]
// These are the similarities between documents 0-7 and the test document, in order; from them we can see that the similarity between doc2 and the test document is 0.
print(sorted(enumerate(sim), key=lambda item: -item[1]))
Sort the similarities in descending order:
[(7, 0.70477605), (0, 0.54680777), (3, 0.17724207), (4, 0.17724207), (5, 0.013545224), (6, 0.01279765), (1, 0.010553493), (2, 0.0)]
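One last note on the index.num_best that printed as None in the log above: if only the top few matches are needed, num_best can be passed when building the index, and the index then returns an already-sorted list of (document index, similarity) pairs instead of the full array. A sketch of that usage, assuming the same tfidf, corpus, and dictionary as above:

# ask the index for only the 3 most similar documents
index = similarities.SparseMatrixSimilarity(tfidf[corpus],
                                            num_features=len(dictionary.keys()),
                                            num_best=3)
print(index[tfidf[doc_test_vec]])
# e.g. [(7, 0.70477605), (0, 0.54680777), (3, 0.17724207)]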