【SIGIR-AP 2023】A Comparative Study of Training Objectives for Clarification Facet Generation

Preface

This post introduces our first paper, which was accepted at SIGIR-AP 2023.

  • This work focuses on facet generation for clarifying user intents. We study two existing generation approaches (seq-pred, seq-min-perm), point out their drawbacks, and propose three new approaches (set-pred, seq-avg-perm, seq-set-pred). We then analyze the pros and cons of all five approaches, providing guidance for clarification facet generation.
  • The five generation approaches are, in essence, five combinations of training objective and inference strategy; a minimal sketch of the training-loss side appears after this list.
  • Paper link: A Comparative Study of Training Objectives for Clarification Facet Generation
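
To make the difference between the sequential objectives concrete, here is a minimal PyTorch-style sketch, assuming a hypothetical `model.seq_nll(src, tgt)` that returns the negative log-likelihood of generating `tgt` from `src` with a seq2seq model such as BART. The helper name and the `" | "` facet separator are illustrative assumptions, not the paper's actual code:

```python
import itertools
import torch

def permutation_losses(model, query, facets, sep=" | "):
    """Hypothetical helper: sequence NLL for each ordering of the facets.

    `model.seq_nll(src, tgt)` is assumed to return the NLL (a scalar
    tensor) of generating `tgt` from `src`; it is not a real library call.
    """
    losses = []
    for perm in itertools.permutations(facets):
        target = sep.join(perm)  # ground-truth facets in this ordering
        losses.append(model.seq_nll(query, target))
    return torch.stack(losses)

def training_loss(model, query, facets, objective="seq-pred"):
    if objective == "seq-pred":
        # Default ground-truth order only: simple, but not permutation-invariant.
        return model.seq_nll(query, " | ".join(facets))
    losses = permutation_losses(model, query, facets)
    if objective == "seq-min-perm":
        # Update with the single lowest-loss ordering (permutation-invariant).
        return losses.min()
    if objective == "seq-avg-perm":
        # Average over all orderings (permutation-invariant, but costly).
        return losses.mean()
    raise ValueError(f"unknown objective: {objective}")
```

Note that enumerating all orderings costs O(n!) forward passes per example, which is the practical price of permutation invariance for seq-min-perm and seq-avg-perm.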

Abstract:

Due to the ambiguity and vagueness of a user query, it is essential to identify the query facets for the clarification of user intents. Existing work on query facet generation has achieved compelling performance by sequentially predicting the next facet given previously generated facets based on pre-trained language generation models such as BART. Given a query, there are mainly two types of training objectives to guide the facet generation models. One is to generate the default sequence of ground-truth facets, and the other is to enumerate all the permutations of ground-truth facets and use the sequence that has the minimum loss for model updates. The second is permutation-invariant while the first is not. In this paper, we aim to conduct a systematic comparative study of various types of training objectives, with different properties of not only whether it is permutation-invariant but also whether it conducts sequential prediction and whether it can control the count of output facets. To this end, we propose another three training objectives of different aforementioned properties. For comprehensive comparisons, besides the commonly used evaluation that measures the matching with ground-truth facets, we also introduce two diversity metrics to measure the diversity of the generated facets. Based on an open-domain query facet dataset, i.e., MIMICS, we conduct extensive analyses and show the pros and cons of each method, which could shed light on model training for clarification facet generation.
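
The abstract mentions two diversity metrics but does not define them in this excerpt. As a stand-in illustration only (not necessarily the paper's metrics), a simple lexical diversity score over a set of generated facets could be the mean pairwise Jaccard distance between their token sets:

```python
def pairwise_diversity(facets):
    """Mean pairwise Jaccard distance between facet token sets.

    Illustrative only: the paper's two diversity metrics are not
    defined in this excerpt, so this stands in as one common choice.
    Assumes non-empty, whitespace-tokenizable facet strings.
    """
    sets = [set(f.lower().split()) for f in facets]
    if len(sets) < 2:
        return 0.0
    dists = [
        1.0 - len(a & b) / len(a | b)
        for i, a in enumerate(sets)
        for b in sets[i + 1:]
    ]
    return sum(dists) / len(dists)

# Example: lexically distinct facets score higher than near-duplicates.
print(pairwise_diversity(["battery life", "battery size", "screen resolution"]))
```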
