
Learning to Retrieve Engaging Follow-Up Queries (2302.10978v1)

Published 21 Feb 2023 in cs.CL, cs.AI, cs.IR, and cs.LG

Abstract: Open-domain conversational agents can answer a broad range of targeted queries. However, the sequential nature of interaction with these systems makes knowledge exploration a lengthy task that burdens the user with asking a chain of well-phrased questions. In this paper, we present a retrieval-based system and associated dataset for predicting the next questions that the user might have. Such a system can proactively assist users in knowledge exploration, leading to a more engaging dialog. The retrieval system is trained on a dataset which contains ~14K multi-turn information-seeking conversations, each with a valid follow-up question and a set of invalid candidates. The invalid candidates are generated to simulate various syntactic and semantic confounders such as paraphrases, partial entity match, irrelevant entity, and ASR errors. We use confounder-specific techniques to simulate these negative examples on the OR-QuAC dataset and develop a dataset called the Follow-up Query Bank (FQ-Bank). Then, we train ranking models on FQ-Bank and present results comparing supervised and unsupervised approaches. The results suggest that we can retrieve the valid follow-ups by ranking them above the confounders, but further knowledge grounding can improve ranking performance.
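To make the ranking setup concrete, here is a minimal sketch of scoring follow-up candidates (one valid, several confounders) against a dialog context. This is an illustrative unsupervised token-overlap baseline, not the paper's actual retrieval model, and all example strings and function names are hypothetical:

```python
import re

def tokenize(text):
    """Lowercase and split into a set of word tokens."""
    return set(re.findall(r"\w+", text.lower()))

def rank_followups(context, candidates):
    """Rank candidate follow-up queries by Jaccard overlap with the context.

    A stand-in for the paper's trained ranking models: a valid follow-up
    should score above syntactic/semantic confounders.
    """
    ctx = tokenize(context)

    def score(candidate):
        cand = tokenize(candidate)
        union = ctx | cand
        return len(ctx & cand) / len(union) if union else 0.0

    return sorted(candidates, key=score, reverse=True)

# Hypothetical example mirroring the confounder types described above.
context = "When was the Eiffel Tower built and who designed it?"
candidates = [
    "How tall is the Eiffel Tower?",   # valid follow-up
    "How tall is the Eiffel power?",   # ASR-error confounder
    "What is the capital of France?",  # irrelevant-entity confounder
]
ranking = rank_followups(context, candidates)
```

Even this crude baseline separates the valid follow-up from an ASR-corrupted copy, but the paper's point is that harder confounders (e.g. fluent paraphrases or partial entity matches) require trained rankers and knowledge grounding.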

Authors (7)
  1. Christopher Richardson (8 papers)
  2. Sudipta Kar (19 papers)
  3. Anjishnu Kumar (5 papers)
  4. Anand Ramachandran (5 papers)
  5. Omar Zia Khan (3 papers)
  6. Zeynab Raeesy (6 papers)
  7. Abhinav Sethy (14 papers)
Citations (2)
