Python NLP: Document Similarity Computation (gensim.models)

Contents

  • 1. tf-idf (each document becomes a tf-idf vector)
  • 2. Raw frequency (each document becomes a term-frequency vector)
  • 3. Occurrence only (each document becomes a binary presence/absence vector)
  • 4. Word2vec model (each word becomes a vector)
  • 5. Doc2vec model (each word or sentence becomes a vector; handles out-of-vocabulary words)
  • 6. N-gram model
  • Appendix: overview of the models in gensim.models

This article explores methods for computing document similarity with the third-party Python library gensim.
Official documentation:
https://github.com/RaRe-Technologies/gensim/tree/develop/gensim/models

1. tf-idf (each document becomes a tf-idf vector)

# import the third-party libraries
import jieba
import os
import jieba.posseg as pseg
from gensim import corpora, models, similarities
import math
import pandas as pd
import matplotlib.pyplot as plt  # conventionally aliased as plt
import numpy as np
from tqdm import tqdm
import datetime
import seaborn as sns
sns.set(font='SimSun', font_scale=1.5, palette="muted", color_codes=True, style='white')  # font options: Times New Roman, SimHei
# make Chinese characters render correctly in plots
plt.rcParams['font.sans-serif']=['SimSun']
plt.rcParams['axes.unicode_minus'] = False
plt.rcParams['mathtext.fontset'] = 'cm'
# %matplotlib inline
from scipy import sparse

The full pipeline:

# 1. Load the data
df = pd.read_csv('noun_index.csv')
text = df['text_need'].tolist()
texts = [eval(i) for i in text]  # each cell holds a stringified token list
# 2. Build a dictionary from the document collection and count its features
dictionary = corpora.Dictionary(texts)
# dictionary.token2id shows the index of every token
feature_cnt = len(dictionary.token2id.keys())
# 3. Use the dictionary to turn the tokenized documents into sparse vectors, i.e. the corpus
corpus = [dictionary.doc2bow(text) for text in texts]  # list of lists; each element is a (token_id, frequency) pair
# 4. Train the tf-idf model
tfidf_model = models.TfidfModel(corpus)
# 5. Use the trained tf-idf model to compute every document's tf-idf vector
corpus_tfidf = tfidf_model[corpus]
# 6. Build the similarity index for the tf-idf corpus
index = similarities.SparseMatrixSimilarity(corpus_tfidf, num_features=feature_cnt)

① Similarity between one document in corpus and every document in corpus:

i = 0  # compare the first document with every document in corpus
doc_text_vec = corpus[i]
sim = index[tfidf_model[doc_text_vec]]  # a vector as long as the number of documents; doc 0 vs. doc 0 equals 1

The resulting sim is:

array([9.9999988e-01, 2.3108754e-08, 1.1747384e-02, ..., 1.2266420e-01,
       1.4046666e-02, 9.9481754e-02], dtype=float32)
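
To turn this score vector into a ranking of the most similar documents, sort by score. A minimal sketch (the top_n cutoff is an arbitrary illustrative choice):

# rank every document in corpus by its similarity to document i
top_n = 5  # illustrative cutoff
for doc_id, score in sorted(enumerate(sim), key=lambda x: x[1], reverse=True)[:top_n]:
    print(doc_id, score)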

② Similarity between an arbitrary string and every document in corpus:

# tokenize the test string and convert it to a bag-of-words vector
test_string='少年进步则国进步'
test_doc_list=[word for word in jieba.cut(test_string)]
test_doc_vec = dictionary.doc2bow(test_doc_list)
sim=index[tfidf_model[test_doc_vec]]

The returned sim is:

array([0.        , 0.        , 0.        , ..., 0.        , 0.01903304,
       0.        ], dtype=float32)
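
To retrieve the best match for the query string, take the highest-scoring index; a minimal sketch:

# the document in corpus most similar to the test string
best = int(np.argmax(sim))
print(best, sim[best], texts[best])  # index, score, and tokens of the best match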

2. Raw frequency (each document becomes a term-frequency vector)

The first half of the pipeline is identical to the tf-idf model's, but the frequency vectors are not transformed into tf-idf vectors.

# 0. Load the data
df = pd.read_csv('noun_index.csv')
text = df['text_need'].tolist()
texts = [eval(i) for i in text]
# 1. Build a dictionary from the document collection and count its features
dictionary = corpora.Dictionary(texts)
# dictionary.token2id shows the index of every token
feature_cnt = len(dictionary.token2id.keys())
# 2. Use the dictionary to turn the tokenized documents into sparse vectors, i.e. the corpus
corpus = [dictionary.doc2bow(text) for text in texts]  # bow: each element is a (token_id, frequency) pair
# 3. Take two of the vectors and store them as scipy sparse row vectors
from scipy import sparse
vector = sparse.dok_matrix((1, len(dictionary)), dtype=np.float32)
result = corpus[0]
for i in range(len(result)):
    vector[0, result[i][0]] = result[i][-1]

vector1 = sparse.dok_matrix((1, len(dictionary)), dtype=np.float32)
result1 = corpus[1]
for i in range(len(result1)):
    vector1[0, result1[i][0]] = result1[i][-1]
# 4. Compute the cosine similarity between the two
from sklearn.metrics.pairwise import cosine_similarity
sim = cosine_similarity(vector, vector1)

The returned sim is:

array([[0.32762548]], dtype=float32)
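
The same number can be computed without scipy/sklearn: gensim's matutils.cossim takes two bag-of-words vectors directly. A minimal cross-check:

# cosine similarity straight from the sparse bow lists
from gensim import matutils
sim_check = matutils.cossim(corpus[0], corpus[1])  # should match the sklearn result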

3. Occurrence only (each document becomes a binary presence/absence vector)

Similar to the first half of the tf-idf pipeline, but instead of frequency vectors it uses 0-1 vectors (1 if a word occurs, 0 otherwise), and these binary vectors are not transformed into tf-idf vectors.

# 0. Load the data
df = pd.read_csv('noun_index.csv')
text = df['text_need'].tolist()
texts = [eval(i) for i in text]
# 1. Build a dictionary from the document collection and count its features
dictionary = corpora.Dictionary(texts)
# dictionary.token2id shows the index of every token
feature_cnt = len(dictionary.token2id.keys())
# 2. Use the dictionary to turn each tokenized document into a list of token ids
corpus = [dictionary.doc2idx(text) for text in texts]  # idx: one dictionary id per token, duplicates included
# 3. Take two documents and store them as scipy sparse 0-1 row vectors
from scipy import sparse
vector = sparse.dok_matrix((1, len(dictionary)), dtype=np.float32)
result = corpus[0]
for i in range(len(result)):
    vector[0, result[i]] = 1

vector1 = sparse.dok_matrix((1, len(dictionary)), dtype=np.float32)
result1 = corpus[1]
for i in range(len(result1)):
    vector1[0, result1[i]] = 1
# 4. Compute the cosine similarity between the two
from sklearn.metrics.pairwise import cosine_similarity
sim = cosine_similarity(vector, vector1)

The returned sim is:

array([[0.5463583]], dtype=float32)
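
A binary vector can also be derived from doc2bow output by clamping every count to 1, then compared with gensim's matutils.cossim as before; a minimal sketch:

# clamp term frequencies to 0/1 and compare the two documents
from gensim import matutils
bow0 = [(token_id, 1) for token_id, _ in dictionary.doc2bow(texts[0])]
bow1 = [(token_id, 1) for token_id, _ in dictionary.doc2bow(texts[1])]
sim_check = matutils.cossim(bow0, bow1)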

4. Word2vec model (each word becomes a vector)

Given a collection of documents, a neural network learns a vector representation for each word (you choose the vector length).

from gensim.test.utils import common_texts
from gensim.models import Word2Vec
model = Word2Vec(sentences=common_texts, vector_size=100, window=5, min_count=1, workers=4)  # vector length set to 100
vector = model.wv['computer']  # get numpy vector of a word
vector1 = model.wv['system']  # get numpy vector of a word
np.dot(vector,vector1)/(np.linalg.norm(vector)*np.linalg.norm(vector1))
>>> 0.21617143
sim = model.wv.most_similar('computer', topn=10)  # get other similar words

The returned sim is:

[('system', 0.21617142856121063),
 ('survey', 0.044689204543828964),
 ('interface', 0.015203374437987804),
 ('time', 0.0019510634010657668),
 ('trees', -0.03284314647316933),
 ('human', -0.0742427185177803),
 ('response', -0.09317589551210403),
 ('graph', -0.09575346112251282),
 ('eps', -0.10513807088136673),
 ('user', -0.16911624372005463)]
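
The manual cosine computed above has built-in equivalents: model.wv.similarity returns the cosine between two words, and model.wv.n_similarity does the same between two sets of words:

model.wv.similarity('computer', 'system')        # same value as the dot-product formula above
model.wv.n_similarity(['computer'], ['system'])  # set-to-set variant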

5. Doc2vec model (each word or sentence becomes a vector; handles out-of-vocabulary words)

Doc2vec, also known as Paragraph Vector or Sentence Embeddings, is an extension of Word2Vec that produces vector representations of words, sentences, paragraphs, and documents.

from gensim.test.utils import common_texts
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

documents = [TaggedDocument(doc, [i]) for i, doc in enumerate(common_texts)]
model = Doc2Vec(documents, vector_size=5, window=2, min_count=1, workers=4)  # vector length set to 5
vector = model.infer_vector(["system", "response"])
vector1 =  model.infer_vector(['human', 'interface', 'computer'])
from scipy import spatial
sim = 1 - spatial.distance.cosine(vector, vector1)

The returned sim is:

0.44926005601882935
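
Besides infer_vector for unseen text, the vectors learned for the training documents themselves are available through model.dv (gensim 4.x; older versions call it model.docvecs), keyed by the tags given in TaggedDocument. A minimal sketch:

# vector of training document 0, and its nearest training documents
doc0_vec = model.dv[0]
similar_docs = model.dv.most_similar([doc0_vec], topn=3)  # list of (tag, score) pairs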

6. N-gram model

Unlike the bag-of-words model, an n-gram model accounts for the relation between a word and its neighbors. The gensim.models.phrases module builds bigram, trigram, quadgram, etc. models, extracting pairs, triples, and quadruples of words that frequently occur together in the documents.

# 1. Load the data
df = pd.read_csv('noun_index.csv')
text = df['text_need'].tolist()
texts = [eval(i) for i in text]
# 2. Convert each document's token list to its bigram form (frequent word pairs get joined, so documents shrink)
bigram = models.Phrases(texts)
texts = [bigram[line] for line in texts]
# 3. Build the dictionary and the corpus; from here on everything matches the tf-idf / frequency / occurrence pipelines above
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
# 4. Take two of the vectors and store them as scipy sparse row vectors
from scipy import sparse
vector = sparse.dok_matrix((1, len(dictionary)), dtype=np.float32)
result = corpus[0]
for i in range(len(result)):
    vector[0, result[i][0]] = result[i][-1]

vector1 = sparse.dok_matrix((1, len(dictionary)), dtype=np.float32)
result1 = corpus[1]
for i in range(len(result1)):
    vector1[0, result1[i][0]] = result1[i][-1]
# 5. Compute the cosine similarity between the two
from sklearn.metrics.pairwise import cosine_similarity
sim = cosine_similarity(vector, vector1)

The returned sim is:

array([[0.3840464]], dtype=float32)
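
What Phrases does is easiest to see on toy data: word pairs whose co-occurrence score exceeds threshold are merged into a single underscore-joined token. A small illustrative sketch (the toy sentences and parameter values are made up; exactly which pairs get merged depends on the scoring function):

from gensim.models import Phrases
# 'machine learning' appears in every toy sentence, so it should score high enough to merge
toy_texts = [['machine', 'learning', w] for w in
             ['rocks', 'rules', 'wins', 'helps', 'scales']] * 2
toy_bigram = Phrases(toy_texts, min_count=1, threshold=1)
print(toy_bigram[['machine', 'learning', 'rocks']])
# expected with these settings: ['machine_learning', 'rocks']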

Appendix: overview of the models in gensim.models

  1. atmodel — author-topic model
  2. basemodel — BaseTopicModel
  3. coherencemodel — calculates topic coherence for topic models
  4. doc2vec — an extension of word2vec
  5. doc2vec_corpusfile
  6. doc2vec_inner
  7. ensemblelda — topic modelling: finds a set of topics that represent the global structure of a corpus of documents
  8. fasttext — trains word embeddings from a corpus, with the additional ability to obtain vectors for out-of-vocabulary words
  9. fasttext_corpusfile
  10. fasttext_inner
  11. hdpmodel — online Hierarchical Dirichlet Process
  12. keyedvectors — sets of vectors keyed by lookup tokens/ints, with various similarity look-ups
  13. lda_dispatcher
  14. lda_worker
  15. ldamodel — LDA; for a faster implementation parallelized for multicore machines, see gensim.models.ldamulticore
  16. ldamulticore
  17. ldaseqmodel
  18. logentropy_model — transforms a bag-of-words corpus into log-entropy space
  19. lsi_dispatcher
  20. lsi_worker
  21. lsimodel — fast truncated SVD (Singular Value Decomposition)
  22. nmf — online Non-Negative Matrix Factorization
  23. nmf_pgd
  24. normmodel — l1 or l2 normalization, computed separately for each document in a corpus
  25. phrases
  26. poincare
  27. rpmodel — Random Projections
  28. tfidfmodel — vector-space bag-of-words models
  29. translation_matrix
  30. word2vec — the word2vec family of algorithms, including skip-gram and CBOW
  31. word2vec_corpusfile
  32. word2vec_inner
