Python Data Analysis and Visualization Examples: Bag-of-Words word2bow (28)


Series index: Python Data Analysis and Visualization Examples Directory



1. Project background:

In the previous installment we handled word segmentation with jieba, which left us with a list of Chinese tokens. Computers cannot work with raw characters, though: the text has to be converted into vectors before any analysis can happen. Broadly speaking, natural language processing is applied to topic extraction, text classification, sentiment analysis, and the like. One step at a time; today we tackle the bag-of-words model.

2. Analysis steps:

(1) Take a test document and segment it (a Chinese sketch follows this list);

(2) Build a dictionary (the bag of words);

(3) Use the dictionary to convert a test string (word2bow);

(4) Next installment: text similarity.
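The demo corpus in the source code below is English, so a simple split() stands in for segmentation; on Chinese text, jieba from the previous installment does that job. A minimal sketch, assuming made-up sample sentences and a one-word stopword list (neither is from this notebook):

import jieba
from gensim import corpora

docs_cn = ["我爱自然语言处理", "自然语言处理很有趣"]  # hypothetical sample sentences
stopwords_cn = {"很"}  # hypothetical stopword list
texts_cn = [[w for w in jieba.lcut(doc) if w not in stopwords_cn]
            for doc in docs_cn]
dict_cn = corpora.Dictionary(texts_cn)  # bag-of-words dictionary
print([dict_cn.doc2bow(t) for t in texts_cn])  # word2bow for each document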

Reference: python + gensim: jieba segmentation, bag-of-words doc2bow, TF-IDF text mining - CSDN Blog

3. Source code:

# coding: utf-8

# In[1]:

import logging
from gensim import corpora
import re
import jieba
from collections import defaultdict
from pprint import pprint  # pretty-printer
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)


# In[2]:

documents = ["Human machine interface for lab abc computer applications",
             "A survey of user opinion of computer system response time",
             "The EPS user interface management system",
             "System and human system engineering testing of EPS",
             "Relation of user perceived response time to error measurement",
             "The generation of random binary unordered trees",
             "The intersection graph of paths in trees",
             "Graph minors IV Widths of trees and well quasi ordering",
             "Graph minors A survey"]


# In[3]:


stoplist = set('for a of the and to in'.split())  # remove a few simple English stopwords; for Chinese, use jieba plus a Chinese stopword list
texts = [[word for word in document.lower().split() if word not in stoplist]
         for document in documents]
texts


# In[4]:

# Remove words that appear only once in the corpus
frequency = defaultdict(int)
for text in texts:
    for token in text:
        frequency[token] += 1
texts = [[token for token in text if frequency[token] > 1]
         for text in texts]
texts


# In[5]:

dictionary = corpora.Dictionary(texts)  # build the dictionary (token -> integer id) from the documents


# In[8]:

dictionary??  # IPython shorthand (exported as get_ipython().magic('pinfo2 dictionary')): show the object's docstring and source


# In[6]:

dictionary.token2id  # mapping from each token to its integer id (ids, not frequencies; see dfs below)


# In[7]:

dictionary.dfs  # document frequencies: {token id: number of documents the token appears in}
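Joining token2id with dfs gives the {word: document frequency} mapping that, for instance, a word-cloud library expects. A small sketch using only objects already in the notebook:

# Join token2id and dfs into {word: document frequency}
word_freq = {word: dictionary.dfs[idx] for word, idx in dictionary.token2id.items()}
print(word_freq)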


# In[9]:

dictionary.filter_tokens()  # removes the ids passed as bad_ids (or keeps only good_ids); with no arguments it removes nothing


# In[10]:

dictionary.compactify()  # reassign ids to close the gaps left by filtering
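In practice the two calls above are used together to prune the vocabulary. A hedged sketch of typical usage; the thresholds are arbitrary illustrations, and applying them here would renumber the ids used in the cells below:

# Drop tokens appearing in fewer than 2 documents or in more than half of them,
# then reassign compact ids (thresholds are illustrative only)
dictionary.filter_extremes(no_below=2, no_above=0.5)
dictionary.compactify()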


# In[12]:

dictionary.save('../../tmp/deerwester.dict')  # persist the dictionary to disk


# In[13]:

# Print each word in the dictionary together with its id and document frequency
def PrintDictionary():
    token2id = dictionary.token2id
    dfs = dictionary.dfs
    token_info = {}
    for word in token2id:
        token_info[word] = dict(
            word = word,
            id = token2id[word],
            freq = dfs[token2id[word]]
        )
    token_items = sorted(token_info.values(), key=lambda x: x['id'])
    print('The info of dictionary: ')
    pprint(token_items)
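The function above is only defined in the original cells; calling it prints every token alongside its id and document frequency:

PrintDictionary()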


# In[14]:

# Test the dictionary's doc2bow: convert a new document into a sparse
# bag-of-words vector of (token id, count) pairs
new_doc = "Human computer interaction"
new_vec = dictionary.doc2bow(new_doc.lower().split())
print(new_vec)  # "interaction" is not in the dictionary, so it is dropped
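Each id in the sparse vector can be mapped back through the dictionary to make the result human-readable; a small sketch:

# Translate each (token id, count) pair back to (token, count)
print([(dictionary[token_id], count) for token_id, count in new_vec])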


# In[15]:

# Convert every document in the corpus to its doc2bow representation
corpus = [dictionary.doc2bow(text) for text in texts]


# In[16]:

corpus  # per-document counts: a pair like (2, 1) means token id 2 occurs once in that document


# In[17]:

corpora.MmCorpus.serialize('../../tmp/deerwester.mm', corpus)  # save the corpus to disk
# Besides MmCorpus, gensim provides other serializers such as SvmLightCorpus
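For the text-similarity installment that follows, both saved artifacts can be loaded back from disk; a minimal sketch assuming the same paths as above:

# Reload the saved dictionary and corpus
loaded_dict = corpora.Dictionary.load('../../tmp/deerwester.dict')
loaded_corpus = corpora.MmCorpus('../../tmp/deerwester.mm')  # streamed from disk
print(loaded_dict)
print(list(loaded_corpus))  # materialize the streamed corpus for display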

Experienced readers can head over to another column: Python中文社区

Newcomers can browse the series index:

yeayee: Python Data Analysis and Visualization Examples Directory (zhuanlan.zhihu.com)