NLTK

Installation

pip install nltk

Installing the NLTK data packages

import nltk
nltk.download()

After running this, a download window appears. Click "all" to download everything. The Status column shows the state of each package: "not installed" means the package has not been installed, "out of date" means the installed copy is outdated, "partial" means only part of it has been downloaded, and "installed" means the download is complete. A red progress bar at the bottom shows the overall progress.

The download is very slow (reportedly it can take as long as two days!). If you only need a few packages, you can download them individually, e.g. nltk.download('punkt').


[Figure: nltk_download.png — the NLTK Downloader window]

1. Corpus - a body of text (singular; the plural is "corpora"). Example: a collection of medical journals.
2. Lexicon - words and their meanings, e.g. an English dictionary. Keep in mind that different fields have different lexicons: to a financial investor, the first meaning of "bull" is someone who is confident about the market, whereas in everyday English its first meaning is the animal. Investors, doctors, children, mechanics, and so on each have their own specialized lexicon.
3. Token - each "entity" produced by splitting text according to some rule. For example, when a sentence is split into words, each word is a token; if you split a paragraph into sentences, each sentence can also be a token.
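
The idea of tokenization can be sketched in plain Python with a regular expression. This toy function is only an illustration of the concept — NLTK's real tokenizers are far more sophisticated:

```python
import re

# A toy word tokenizer: runs of letters/digits/apostrophes/hyphens, or single
# punctuation marks, each become one token. This is only an illustration of
# what "splitting into tokens" means, not how NLTK does it internally.
def toy_tokenize(text):
    return re.findall(r"[\w'-]+|[.,!?;]", text)

print(toy_tokenize("Hello, world! It's pinkish-blue."))
# ['Hello', ',', 'world', '!', "It's", 'pinkish-blue', '.']
```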

Preprocessing means converting words into numerical values or signal patterns, i.e. turning the data into something a computer can work with. One of its main forms is filtering out useless data; in natural language processing, such useless words are called stop words.

from nltk.tokenize import sent_tokenize, word_tokenize

EXAMPLE_TEXT = "Hello Mr. Smith, how are you doing today? The weather is great, and Python is awesome. The sky is pinkish-blue. You shouldn't eat cardboard."
#split into sentences
print(sent_tokenize(EXAMPLE_TEXT))

If you see the following error while running this, install the punkt package:

Resource punkt not found.
  Please use the NLTK Downloader to obtain the resource:

  >>> import nltk
  >>> nltk.download('punkt')

The output is a list of sentences:

['Hello Mr. Smith, how are you doing today?', 'The weather is great, and Python is awesome.', 'The sky is pinkish-blue.', "You shouldn't eat cardboard."]
print(word_tokenize(EXAMPLE_TEXT))
#output (word tokens)
['Hello', 'Mr.', 'Smith', ',', 'how', 'are', 'you', 'doing', 'today', '?', 'The', 'weather', 'is', 'great', ',', 'and', 'Python', 'is', 'awesome', '.', 'The', 'sky', 'is', 'pinkish-blue', '.', 'You', 'should', "n't", 'eat', 'cardboard', '.']

1. Note that punctuation marks are treated as separate tokens.
2. Note that "shouldn't" is split into "should" and "n't".
3. "pinkish-blue" is kept as a single token.
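
To see why a real tokenizer matters, compare it with naive whitespace splitting (pure Python, no NLTK needed): str.split leaves punctuation attached to words, which is exactly what word_tokenize avoids.

```python
# Naive whitespace splitting keeps punctuation glued to the adjacent word.
text = "The weather is great, and Python is awesome."

print(text.split())
# ['The', 'weather', 'is', 'great,', 'and', 'Python', 'is', 'awesome.']
# note 'great,' and 'awesome.' still carry their punctuation
```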

Viewing the stop word list

from nltk.corpus import stopwords
set(stopwords.words('english'))
#output (truncated)
{'a', 'about','above','after', 'again','against', 'ain', 'all', 'am', 'an', 'and','any', 'are', 'aren', "aren't",...}

Removing stop words from text

from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

example_sent = "This is a sample sentence, showing off the stop words filtration."

stop_words = set(stopwords.words('english'))

word_tokens = word_tokenize(example_sent)

filtered_sentence = [w for w in word_tokens if not w in stop_words]
#Either approach works; pick one:
#filtered_sentence = []
#for w in word_tokens:
#    if w not in stop_words:
#        filtered_sentence.append(w)

print(word_tokens)
print(filtered_sentence)

Output

['This', 'is', 'a', 'sample', 'sentence', ',', 'showing', 'off', 'the', 'stop', 'words', 'filtration', '.']
['This', 'sample', 'sentence', ',', 'showing', 'stop', 'words', 'filtration', '.']
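
Notice that 'This' survives the filter: NLTK's English stop-word list is all lowercase, so the membership test is case-sensitive. Lowercasing each token before the test fixes that. The sketch below uses a small hand-picked subset of the real stop-word list, so it runs without the stopwords corpus:

```python
# A hand-picked subset of NLTK's English stop words, for illustration only.
stop_words = {'this', 'is', 'a', 'off', 'the'}

word_tokens = ['This', 'is', 'a', 'sample', 'sentence', ',', 'showing',
               'off', 'the', 'stop', 'words', 'filtration', '.']

# Lowercase each token before the membership test so 'This' is filtered too.
filtered = [w for w in word_tokens if w.lower() not in stop_words]
print(filtered)
# ['sample', 'sentence', ',', 'showing', 'stop', 'words', 'filtration', '.']
```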

Another form of preprocessing is stemming.
In English, one word is often a variant of another, e.g. happy => happiness; here "happy" is called the stem of "happiness". In information retrieval systems, a common step of term normalization is stemming, i.e. stripping the inflectional endings off English words.
We will use one of the stemming algorithms, the Porter stemmer.

from nltk.stem import PorterStemmer
from nltk.tokenize import sent_tokenize, word_tokenize

ps = PorterStemmer()

Stemming a list of words

example_words = ["python","pythoner","pythoning","pythoned","pythonly"]
for w in example_words:
    print(ps.stem(w))

Output

python
python
python
python
pythonli
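
The core idea behind stemming can be sketched as suffix stripping. The toy function below is not the Porter algorithm — the real algorithm applies ordered rule phases with length ("measure") conditions — it just removes one known suffix to show the idea:

```python
# A drastically simplified suffix-stripper, only to illustrate the idea of
# stemming. Order matters: longer suffixes are tried first.
SUFFIXES = ['ing', 'ly', 'ed', 'er', 's']

def toy_stem(word):
    for suf in SUFFIXES:
        # keep at least 3 characters of stem so 'sing' doesn't become 's'
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[:-len(suf)]
    return word

for w in ["pythoner", "pythoning", "pythoned", "pythonly"]:
    print(toy_stem(w))
# prints 'python' four times
```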

Stemming the words of a sentence

new_text = "It is important to by very pythonly while you are pythoning with python. All pythoners have pythoned poorly at least once."
words = word_tokenize(new_text)

for w in words:
    print(ps.stem(w))

Output

It
is
import
to
by
veri
pythonli
while
you
are
python
with
python
.
all
python
have
python
poorli
at
least
onc
.

NLTK part-of-speech tagging
Labeling each word in a sentence as a noun, adjective, verb, etc. is called part-of-speech (POS) tagging.

POS tag list:

CC  coordinating conjunction
CD  cardinal digit
DT  determiner
EX  existential there (like: "there is" ... think of it like "there exists")
FW  foreign word
IN  preposition/subordinating conjunction
JJ  adjective   'big'
JJR adjective, comparative  'bigger'
JJS adjective, superlative  'biggest'
LS  list marker 1)
MD  modal   could, will
NN  noun, singular 'desk'
NNS noun plural 'desks'
NNP proper noun, singular   'Harrison'
NNPS    proper noun, plural 'Americans'
PDT predeterminer   'all the kids'
POS possessive ending   parent's
PRP personal pronoun    I, he, she
PRP$    possessive pronoun  my, his, hers
RB  adverb  very, silently,
RBR adverb, comparative better
RBS adverb, superlative best
RP  particle    give up
TO  to  go 'to' the store.
UH  interjection    errrrrrrrm
VB  verb, base form take
VBD verb, past tense    took
VBG verb, gerund/present participle taking
VBN verb, past participle   taken
VBP verb, sing. present, non-3rd person    take
VBZ verb, 3rd person sing. present  takes
WDT wh-determiner   which
WP  wh-pronoun  who, what
WP$ possessive wh-pronoun   whose
WRB wh-adverb   where, when

Example: tokenize a piece of text, remove punctuation and stop words, then tag it. Note that nltk.pos_tag needs the 'averaged_perceptron_tagger' package; if it is missing, install it with nltk.download('averaged_perceptron_tagger').

import nltk
from nltk.corpus import stopwords

#tokenize
text = "Sentiment analysis is a challenging subject in machine learning.\
 People express their emotions in language that is often obscured by sarcasm,\
  ambiguity, and plays on words, all of which could be very misleading for \
  both humans and computers.".lower()
text_list = nltk.word_tokenize(text)
#remove punctuation
english_punctuations = [',', '.', ':', ';', '?', '(', ')', '[', ']', '&', '!', '*', '@', '#', '$', '%']
text_list = [word for word in text_list if word not in english_punctuations]
#remove stop words
stops = set(stopwords.words("english"))
text_list = [word for word in text_list if word not in stops]
#part-of-speech tagging
nltk.pos_tag(text_list)

Output

[('sentiment', 'NN'),
 ('analysis', 'NN'),
 ('challenging', 'VBG'),
 ('subject', 'JJ'),
 ('machine', 'NN'),
 ('learning', 'VBG'),
 ('people', 'NNS'),
 ('express', 'JJ'),
 ('emotions', 'NNS'),
 ('language', 'NN'),
 ('often', 'RB'),
 ('obscured', 'VBD'),
 ('sarcasm', 'JJ'),
 ('ambiguity', 'NN'),
 ('plays', 'NNS'),
 ('words', 'NNS'),
 ('could', 'MD'),
 ('misleading', 'VB'),
 ('humans', 'NNS'),
 ('computers', 'NNS')]
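
Penn Treebank tags share prefixes by word class (NN* are nouns, VB* verbs, JJ* adjectives, RB* adverbs), so the fine-grained output above can be collapsed into coarse classes by matching on the prefix. A small pure-Python sketch:

```python
# Collapse fine-grained Penn Treebank tags into coarse word classes by prefix.
def coarse_tag(penn_tag):
    for prefix, label in [('NN', 'noun'), ('VB', 'verb'),
                          ('JJ', 'adjective'), ('RB', 'adverb')]:
        if penn_tag.startswith(prefix):
            return label
    return 'other'

tagged = [('sentiment', 'NN'), ('challenging', 'VBG'),
          ('often', 'RB'), ('could', 'MD')]
print([(w, coarse_tag(t)) for w, t in tagged])
# [('sentiment', 'noun'), ('challenging', 'verb'), ('often', 'adverb'), ('could', 'other')]
```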

For more details, see:
https://www.jianshu.com/p/0e1d51a7549d
https://blog.csdn.net/zhuzuwei/article/details/79008816
