Improved Word Representation Learning with Sememes

Topic: Word Representation
Dataset: Sogou-T, HowNet

  • 1,889 distinct sememes
  • 2.4 senses per word on average
  • 1.6 sememes per sense on average
  • 42.2% of words have multiple senses

Methodology:
Sememe-Encoded Word Representation Learning (SE-WRL)
This framework regards each word sense as a combination of its sememes. It iteratively performs word sense disambiguation according to the contexts, and learns representations of sememes, senses, and words by extending the Skip-gram model of word2vec (Mikolov et al., 2013).
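Since all three models below extend Skip-gram, it helps to recall its objective: maximize the log probability of each context word given the center word. A minimal sketch with a full softmax (hypothetical toy data; the variable names are illustrative, not from the paper):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def skipgram_log_prob(center_id, context_id, W_in, W_out):
    """Skip-gram: log p(context | center) via a softmax over the vocabulary.
    SE-WRL replaces these plain word embeddings with sememe-based ones."""
    scores = W_out @ W_in[center_id]
    return np.log(softmax(scores)[context_id])

rng = np.random.default_rng(0)
W_in = rng.normal(size=(10, 4))   # input (word) embeddings: vocab 10, dim 4
W_out = rng.normal(size=(10, 4))  # output (context) embeddings
lp = skipgram_log_prob(3, 7, W_in, W_out)
```

In practice the softmax is approximated with negative sampling; the full softmax here is only to keep the sketch short.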

[Figure: word, sense, sememe]

A). Simple Sememe Aggregation Model
For each word, SSA considers all sememes in all senses of the word together, and represents the target word using the average of all its sememe embeddings.
The simple sememe aggregation model performs better on low-frequency words: in the conventional Skip-gram model, low-frequency words are not trained adequately, whereas in SSA a low-frequency word is decomposed into its sememes, whose embeddings are trained well through the many other words that share them.
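The SSA averaging above can be sketched as follows (hypothetical toy data; `word2sememes` and the dimensions are illustrative assumptions, not the authors' code):

```python
import numpy as np

def ssa_word_embedding(word, word2sememes, sememe_emb):
    """SSA: represent a word as the average of the embeddings of all
    sememes appearing in any of its senses."""
    ids = word2sememes[word]
    return np.mean(sememe_emb[ids], axis=0)

# toy example: 5 sememes with dim-4 embeddings
rng = np.random.default_rng(0)
sememe_emb = rng.normal(size=(5, 4))
word2sememes = {"apple": [0, 2, 3]}  # sememes pooled from all senses of "apple"
vec = ssa_word_embedding("apple", word2sememes, sememe_emb)
```

Because sememes are shared across words, updating `sememe_emb` for one word also improves the representation of every other word containing those sememes.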

B). Sememe Attention over Context Model
The SSA model replaces the target word embedding with the aggregated sememe embeddings, encoding sememe information into word representation learning. However, each word in the SSA model still has only one representation across different contexts, which cannot handle the polysemy of most words. Intuitively, we should construct distinct embeddings for a target word according to its specific context, with the help of the word sense annotations in HowNet.


Each context word receives an attention weight, computed from the relatedness between the target word w and each sense vector, where a sense vector is the average of its constituent sememe embeddings.

[Figure: Sememe Attention over Context Model]
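This context-side attention can be sketched as follows (a minimal numpy sketch with hypothetical data; not the authors' implementation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sac_context_embedding(target_vec, sense_sememe_embs):
    """Sememe Attention over Context: represent a context word as an
    attention-weighted sum of its sense vectors. Each sense vector is
    the average of its sememe embeddings; the attention weight is the
    softmax-normalized dot product with the target word vector."""
    sense_vecs = np.stack([s.mean(axis=0) for s in sense_sememe_embs])
    att = softmax(sense_vecs @ target_vec)  # relevance of each sense to the target
    return att @ sense_vecs

# toy example: a context word with two senses (2 and 3 sememes, dim 4)
rng = np.random.default_rng(1)
target_vec = rng.normal(size=4)
senses = [rng.normal(size=(2, 4)), rng.normal(size=(3, 4))]
ctx_vec = sac_context_embedding(target_vec, senses)
```

The soft attention means no hard sense choice is made; senses irrelevant to the target word simply receive small weights.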

C). Sememe Attention over Target Model
The same attention process can also be applied to select appropriate senses for the target word, using the context words as the attention query.


In the Context model, only a single target word is used to learn the sense weights of the context words;
in the Target model, multiple context words jointly learn the sense weights of the target word.
The Target model can therefore perform better word sense disambiguation and produce more accurate sense representations.

[Figure: Sememe Attention over Target Model]
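The target-side attention can be sketched symmetrically (again a hypothetical numpy sketch; averaging the context embeddings into one query follows the description above):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sat_target_embedding(context_vecs, sense_sememe_embs):
    """Sememe Attention over Target: average all context word embeddings
    into one query vector, then attend over the target word's senses.
    Each sense vector is again the average of its sememe embeddings."""
    query = np.mean(context_vecs, axis=0)   # joint context representation
    sense_vecs = np.stack([s.mean(axis=0) for s in sense_sememe_embs])
    att = softmax(sense_vecs @ query)       # multiple context words vote jointly
    return att @ sense_vecs

# toy example: 4 context words and a target word with 3 senses (dim 4)
rng = np.random.default_rng(2)
context_vecs = rng.normal(size=(4, 4))
senses = [rng.normal(size=(k, 4)) for k in (1, 2, 3)]
tgt_vec = sat_target_embedding(context_vecs, senses)
```

Compared with the Context model, the query here pools evidence from several context words, which is why disambiguation of the target word's sense is more reliable.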
