Statistical Language Model: A Python Implementation

Table of Contents

  • Theory in Brief
    • N-gram
    • unigram
    • bigram
    • Add-k Smoothing
  • Code & Steps
    • 1. Importing Tools
    • 2. Corpus Preprocessing
    • 3. unigram
    • 4. bigram
    • 5. Probability Computation
  • Bigram-based Text Generation
  • Appendix

Theory in Brief

A statistical language model can be used to measure how plausible a sentence is.
Let $S$ denote a sentence made up of an ordered sequence of $n$ words $w_1, w_2, w_3, \dots, w_n$. The probability $P(S)$ of the sentence is computed as follows:

N-gram

$$P(S) = P(w_1, w_2, \dots, w_n) = P(w_1)\,P(w_2 \mid w_1)\,P(w_3 \mid w_1, w_2)\cdots P(w_n \mid w_1, w_2, \dots, w_{n-1})$$

unigram

$$P(S) = P(w_1)P(w_2)\cdots P(w_n) = \prod_{i=1}^{n} P(w_i)$$
$$\log P(S) = \sum_{i=1}^{n} \log P(w_i)$$

bigram

$$P(S) = P(w_1)\,P(w_2 \mid w_1)\,P(w_3 \mid w_2)\cdots P(w_n \mid w_{n-1}) = P(w_1)\prod_{i=2}^{n} P(w_i \mid w_{i-1})$$
$$\log P(S) = \log P(w_1) + \sum_{i=2}^{n} \log P(w_i \mid w_{i-1})$$
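In practice these conditional probabilities are estimated from corpus counts, which is exactly what the training code below does; the maximum-likelihood estimate for a bigram is

$$P(w_i \mid w_{i-1}) = \frac{C(w_{i-1}, w_i)}{C(w_{i-1})}$$

where $C(\cdot)$ is a count over the training corpus. Any bigram that never occurs in the corpus gets probability zero, which is what motivates smoothing.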

Add-k Smoothing

With $k = 1$ (Add-1 smoothing) for a bigram model, where $C$ denotes a count and $V$ the vocabulary size:
$$P_{\mathrm{Add\text{-}1}}(w_i \mid w_{i-1}) = \frac{C(w_{i-1}, w_i) + 1}{C(w_{i-1}) + V}$$
For example, given the two-sentence corpus

我很帅
她很美

the vocabulary is {我, 很, 帅, 她, 美}, so $V = 5$ and $C(很) = 2$:

$$P(帅 \mid 很) = \frac{1+1}{2+5} = \frac{2}{7}$$
$$P(美 \mid 很) = \frac{1+1}{2+5} = \frac{2}{7}$$
$$P(我 \mid 很) = \frac{0+1}{2+5} = \frac{1}{7}$$
$$P(她 \mid 很) = \frac{0+1}{2+5} = \frac{1}{7}$$
$$P(很 \mid 很) = \frac{0+1}{2+5} = \frac{1}{7}$$

The smoothed probabilities over the vocabulary sum to 1.
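A minimal sketch of this calculation in Python (character-level, using only the two toy sentences above; the helper name p_add1 is an illustrative addition, not part of the article's code):

from collections import Counter

toy_corpus = ['我很帅', '她很美']

# character-level vocabulary and raw counts
vocab = sorted({ch for sent in toy_corpus for ch in sent})
V = len(vocab)  # 5
unigram_count = Counter(ch for sent in toy_corpus for ch in sent)
bigram_count = Counter((sent[i - 1], sent[i])
                       for sent in toy_corpus for i in range(1, len(sent)))

def p_add1(w, prev):
    """Add-1 smoothed bigram probability P(w | prev)."""
    return (bigram_count[(prev, w)] + 1) / (unigram_count[prev] + V)

print(p_add1('帅', '很'))                   # 2/7
print(p_add1('我', '很'))                   # 1/7
print(sum(p_add1(w, '很') for w in vocab))  # 1.0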

Code & Steps

from collections import Counter
import numpy as np


"""语料"""
corpus = '''她的菜很好 她的菜很香 她的他很好 他的菜很香 他的她很好
很香的菜 很好的她 很菜的他 她的好 菜的香 他的菜 她很好 他很菜 菜很好'''.split()


"""语料预处理"""
counter = Counter()  # 词频统计
for sentence in corpus:
    for word in sentence:
        counter[word] += 1
counter = counter.most_common()
lec = len(counter)
word2id = {counter[i][0]: i for i in range(lec)}
id2word = {i: w for w, i in word2id.items()}


"""N-gram建模训练"""
unigram = np.array([i[1] for i in counter]) / sum(i[1] for i in counter)  # MLE unigram probabilities

bigram = np.zeros((lec, lec)) + 1e-8  # additive smoothing: unseen bigrams keep a tiny probability
for sentence in corpus:
    sentence = [word2id[w] for w in sentence]
    for i in range(1, len(sentence)):
        bigram[sentence[i - 1], sentence[i]] += 1
for i in range(lec):
    bigram[i] /= bigram[i].sum()


"""句子概率"""
def prob(sentence):
    s = [word2id[w] for w in sentence]
    les = len(s)
    if les < 1:
        return 0
    p = unigram[s[0]]
    if les < 2:
        return p
    for i in range(1, les):
        p *= bigram[s[i - 1], s[i]]
    return p

print('很好的菜', prob('很好的菜'))
print('菜很好的', prob('菜很好的'))
print('菜好的很', prob('菜好的很'))


"""排列组合"""
def permutation_and_combination(ls_ori, ls_all=None):
    ls_all = ls_all or [[]]
    le = len(ls_ori)
    if le == 1:
        ls_all[-1].append(ls_ori[0])
        ls_all.append(ls_all[-1][: -2])
        return ls_all
    for i in range(le):
        ls, lsi = ls_ori[:i] + ls_ori[i + 1:], ls_ori[i]
        ls_all[-1].append(lsi)
        ls_all = permutation_and_combination(ls, ls_all)
    if ls_all[-1]:
        ls_all[-1].pop()
    else:
        ls_all.pop()
    return ls_all

print('Permutations of [1, 2, 3]:', permutation_and_combination([1, 2, 3]))
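For reference, the hand-rolled recursion above produces the same orderings as the standard library's itertools.permutations; the following one-liner is an equivalent, simpler alternative:

from itertools import permutations
print([list(p) for p in permutations([1, 2, 3])])  # same six orderings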


"""给定词组,返回最大概率组合的句子"""
def max_prob(words):
    pc = permutation_and_combination(words)  # 生成排列组合
    p, w = max((prob(s), s) for s in pc)
    return p, ''.join(w)

print(*max_prob(list('香很的菜')))
print(*max_prob(list('好很的他菜')))
print(*max_prob(list('好很的的她菜')))

1. Importing Tools

from collections import Counter
import numpy as np, pandas as pd
pdf = lambda data, index=None, columns=None: pd.DataFrame(data, index, columns)

pandas is used here for visualization (in Jupyter); matplotlib, seaborn, or similar tools would also work.

2. Corpus Preprocessing

corpus = '她很香 她很菜 她很好 他很菜 他很好 菜很好'.split()

counter = Counter()  # word-frequency counts
for sentence in corpus:
    for word in sentence:
        counter[word] += 1
counter = counter.most_common()
words = [wc[0] for wc in counter]  # vocabulary (used for visualization)
lec = len(counter)
word2id = {counter[i][0]: i for i in range(lec)}
id2word = {i: w for w, i in word2id.items()}

pdf(counter, None, ['word', 'freq'])

3. unigram

unigram = np.array([i[1] for i in counter]) / sum(i[1] for i in counter)
pdf(unigram.reshape(1, lec), ['prob'], words)
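For this toy corpus, 很 appears 6 times out of 18 characters, so its unigram probability is 6/18 = 1/3; 她, 菜 and 好 each get 3/18 = 1/6, 他 gets 2/18 = 1/9, and 香 gets 1/18.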

4. bigram

bigram = np.zeros((lec, lec)) + 1e-8  # smoothing: unseen bigrams keep a tiny nonzero probability

for sentence in corpus:
    sentence = [word2id[w] for w in sentence]
    for i in range(1, len(sentence)):
        bigram[sentence[i - 1], sentence[i]] += 1

# raw counts
pd.DataFrame(bigram, words, words, int)

(Figure 1: bigram count matrix)

# counts --> probabilities
for i in range(lec):
    bigram[i] /= bigram[i].sum()
pdf(bigram, words, words)

(Figure 2: bigram probability matrix)
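As a sanity check: 很 is followed by 好 three times, 菜 twice and 香 once in the corpus, so after normalization its row puts roughly 1/2, 1/3 and 1/6 on those columns (the smoothed zero entries carry only negligible mass).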

5. Probability Computation

def prob(sentence):
    s = [word2id[w] for w in sentence]
    les = len(s)
    if les < 1:
        return 0
    p = unigram[s[0]]
    if les < 2:
        return p
    for i in range(1, les):
        p *= bigram[s[i - 1], s[i]]
    return p

print(prob('菜很香'), 1 / 6 / 6)

$$P(菜很香) = P(菜)\,P(很 \mid 菜)\,P(香 \mid 很) = \frac{3}{18} \times 1 \times \frac{1}{6} = \frac{1}{36}$$
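Multiplying many probabilities underflows quickly for longer sentences, so a log-space variant of prob may be preferable. A minimal sketch reusing the unigram and bigram arrays defined above (log_prob is an illustrative addition, not part of the original code):

def log_prob(sentence):
    """Log-probability of a sentence under the unigram + bigram model above."""
    s = [word2id[w] for w in sentence]
    if not s:
        return float('-inf')
    lp = np.log(unigram[s[0]])
    for i in range(1, len(s)):
        lp += np.log(bigram[s[i - 1], s[i]])
    return lp

print(log_prob('菜很香'), np.log(prob('菜很香')))  # the two values should match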

Bigram-based Text Generation

https://github.com/AryeYellow/NLP/blob/master/TextGeneration/tg_trigram_and_cnn.ipynb

from collections import Counter
from random import choice
from jieba import lcut

# read the corpus and tokenize each line with jieba
with open('corpus.txt', encoding='utf-8') as f:
    corpus = [lcut(line) for line in f.read().strip().split()]

# word-frequency counts
counter = Counter(word for words in corpus for word in words)

# N-gram model training
bigram = {w: Counter() for w in counter.keys()}
for words in corpus:
    for i in range(1, len(words)):
        bigram[words[i - 1]][words[i]] += 1
for k, v in bigram.items():
    total2 = sum(v.values())
    v = {w: c / total2 for w, c in v.items()}
    bigram[k] = v

# Text generation
n = 5  # "openness": sample the next word from the top-n candidates
while True:
    first = input('First word: ').strip()
    if first not in counter:
        first = choice(list(counter))
    # rank candidate continuations by bigram probability (highest first) and pick one of the top n
    next_words = sorted(bigram[first], key=lambda w: bigram[first][w], reverse=True)[:n]
    next_word = choice(next_words) if next_words else ''
    sentence = first + next_word
    while next_word and bigram[next_word]:  # stop when the current word has no recorded successor
        next_word = choice(sorted(bigram[next_word], key=lambda w: bigram[next_word][w], reverse=True)[:n])
        sentence += next_word
    print(sentence)
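The openness parameter n controls the trade-off between fluency and variety: with n = 1 generation always follows the most frequent continuation and is effectively deterministic, while larger values of n let choice pick among more (and less likely) candidates, producing more varied but noisier sentences.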

Appendix

| en          | cn     |
| ----------- | ------ |
| grammar     | 语法   |
| unique      | 唯一的 |
| binary      | 二元的 |
| permutation | 排列   |
| combination | 组合   |

Related: statistical language models applied to part-of-speech tagging, with a Python implementation.
