Paper notes: Co-GAT: A Co-Interactive Graph Attention Network for Joint Dialog Act Recognition and Sentiment Classification

Co-GAT: A Co-Interactive Graph Attention Network for Joint Dialog Act Recognition and Sentiment Classification

Motivation

  • Dialog act and sentiment indicate a speaker's explicit and implicit intentions, respectively; sentiment classification (SC) detects the sentiment of each utterance, which helps capture the speaker's implicit intention.
    [Figure 1]
  • The paper argues that two kinds of information matter: contextual information and mutual interaction information. Previous methods either consider only one of them or model them separately in a pipeline.
    [Figure 2]

Related Work

  • 1. Figure (a) above, COLING 2018: Multi-task dialog act and sentiment recognition on Mastodon
    • "We manually annotate both dialogues and sentiments on this corpus, and train a multi-task hierarchical recurrent network" (joint learning)
    • The joint model can implicitly extract the shared mutual interaction information, but fails to effectively capture the contextual information

[Figure 3]

  • 2. Figure (b) above, a PR journal paper that only considers contextual information: Integrated neural network model for identifying speech acts, predicators, and sentiments of dialogue utterances
    • It explicitly leverages the previous act information to guide the current DA prediction
    • But it ignores the mutual interaction information

An example from the dataset:
[Figure 4]

[Figure 5]

  • 3. DCR-Net, AAAI 2020
    • It first captures the contextual information, then applies a relation layer to consider the mutual interaction information
    • Pipeline approach: the two kinds of information are modeled independently

[Figure 6]

Contributions

  • The first attempt to simultaneously incorporate contextual information and mutual interaction information
  • Proposes a co-interactive graph attention network with both cross-task connections and cross-utterance connections

Method

[Figure 7]

  • Speaker-Level Encoder: a GNN aggregates information over utterances from the same speaker; the edge weight between two utterances is 1 if they share a speaker, 0 otherwise (see the first sketch after this list).
  • Stacked Co-Interactive Graph Layer: 2N nodes and 2N × 2N edges (see the second sketch after this list).
    • Cross-utterance information
    • Cross-task information
      [Figure 8]
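Below is a minimal sketch (not the authors' released code) of the speaker-level graph: a binary adjacency links utterances from the same speaker, and a single-head graph-attention step aggregates same-speaker context. The layer name `SimpleGATLayer`, the hidden size, and the toy inputs are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def speaker_adjacency(speakers):
    """speakers: list of speaker ids per utterance -> (N, N) 0/1 adjacency."""
    spk = torch.tensor(speakers)
    return (spk.unsqueeze(0) == spk.unsqueeze(1)).float()

class SimpleGATLayer(torch.nn.Module):
    """One attention head over a binary adjacency matrix (GAT-style update)."""
    def __init__(self, dim):
        super().__init__()
        self.W = torch.nn.Linear(dim, dim, bias=False)
        self.a = torch.nn.Linear(2 * dim, 1, bias=False)

    def forward(self, h, adj):
        # h: (N, dim) utterance representations, adj: (N, N) with 1 where an edge exists
        Wh = self.W(h)
        N = Wh.size(0)
        # attention logits from pairwise concatenation [Wh_i ; Wh_j]
        pairs = torch.cat(
            [Wh.unsqueeze(1).expand(N, N, -1), Wh.unsqueeze(0).expand(N, N, -1)],
            dim=-1,
        )
        e = F.leaky_relu(self.a(pairs)).squeeze(-1)   # (N, N)
        e = e.masked_fill(adj == 0, float("-inf"))    # attend only along graph edges
        alpha = torch.softmax(e, dim=-1)              # row-wise attention weights
        return F.relu(alpha @ Wh)                     # aggregated same-speaker context

# usage: 4 utterances from speakers A, B, A, B with 16-dim encodings
h = torch.randn(4, 16)
adj = speaker_adjacency([0, 1, 0, 1])
out = SimpleGATLayer(16)(h, adj)                      # (4, 16)
```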
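And a sketch of how the co-interactive graph described above could be assembled, assuming exactly what the note states (2N nodes, 2N × 2N edges): N dialog-act nodes are stacked with N sentiment nodes, with cross-utterance edges inside each task and cross-task edges between the two tasks. The helper `co_interactive_adjacency` is hypothetical, not from the paper.

```python
import torch

def co_interactive_adjacency(n_utts):
    """(2N, 2N) 0/1 adjacency: nodes 0..N-1 are dialog-act nodes,
    nodes N..2N-1 are the sentiment nodes for the same utterances."""
    N = n_utts
    adj = torch.zeros(2 * N, 2 * N)
    adj[:N, :N] = 1   # cross-utterance edges among dialog-act nodes
    adj[N:, N:] = 1   # cross-utterance edges among sentiment nodes
    adj[:N, N:] = 1   # cross-task edges: dialog act -> sentiment
    adj[N:, :N] = 1   # cross-task edges: sentiment -> dialog act
    return adj

# usage: stack the two task-specific utterance representations into 2N nodes
N, dim = 4, 16
h_da, h_sc = torch.randn(N, dim), torch.randn(N, dim)
nodes = torch.cat([h_da, h_sc], dim=0)    # (2N, dim)
adj = co_interactive_adjacency(N)         # fully connected, matching the note's 2N x 2N edges
# one stacked layer could then reuse SimpleGATLayer from the previous sketch:
# out = SimpleGATLayer(dim)(nodes, adj)
```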

Experiments

[Figure 9]
[Figure 10]

[Figure 11]

[Figure 12]

Questions

  • Fundamentally, the approach still relies on certain regularities in the data.
  • The cross-task graph is still fully connected: an edge weight is either 1 or 0. How could first-order neighbors be modeled more effectively?
