Reinforcement Learning for Conversational Question Answering over Knowledge Graph (2401.08460v1)

Published 16 Jan 2024 in cs.AI and cs.CL

Abstract: Conversational question answering (ConvQA) over law knowledge bases (KBs) involves answering multi-turn natural language questions about law, with answers drawn from the law knowledge base. Although many methods have been proposed, existing law KB ConvQA models assume that the input question is clear and perfectly reflects the user's intention. In the real world, however, input questions are noisy and inexplicit, which makes it hard for a model to find the correct answer in the law knowledge base. In this paper, we address this problem with reinforcement learning. The reinforcement learning agent automatically learns how to find the answer based on the input question and the conversation history, even when the input question is inexplicit. We evaluate the proposed method on several real-world datasets, and the results show the effectiveness of the proposed model.
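
The abstract describes the approach only at a high level: an agent conditioned on the question and the conversation history learns, via reinforcement learning, where to look in the knowledge base. The sketch below is not the authors' implementation; it is a minimal, hypothetical REINFORCE-style formulation in which the agent chooses a knowledge-graph relation to follow from a topic entity and is rewarded only when it lands on the gold answer. The toy law KG, the vocabulary, the `Policy` network, and the 0/1 terminal reward are all illustrative assumptions.

```python
# Hedged sketch (not the paper's code): a REINFORCE agent that picks a KG relation
# to follow, conditioned on the question/history text and the current entity.
import torch
import torch.nn as nn
from torch.distributions import Categorical

# Toy law KG (illustrative only): entity -> {relation: neighbor entity}.
RELATIONS = ["penalty_for", "defined_in", "regulates"]
KG = {
    "contract_breach": {"penalty_for": "damages", "defined_in": "civil_code"},
    "damages":         {"regulates": "compensation_rules"},
}
ENTITIES = ["contract_breach", "damages", "civil_code", "compensation_rules"]
ENT_ID = {e: i for i, e in enumerate(ENTITIES)}

class Policy(nn.Module):
    """Scores candidate relations given (question + history) tokens and the current entity."""
    def __init__(self, vocab_size, n_entities, n_relations, dim=32):
        super().__init__()
        self.txt = nn.EmbeddingBag(vocab_size, dim)   # bag-of-words text encoder
        self.ent = nn.Embedding(n_entities, dim)      # toy entity embeddings
        self.out = nn.Linear(2 * dim, n_relations)

    def forward(self, token_ids, entity_id):
        q = self.txt(token_ids.unsqueeze(0))                    # (1, dim)
        e = self.ent(entity_id).unsqueeze(0)                    # (1, dim)
        return self.out(torch.cat([q, e], dim=-1)).squeeze(0)   # logits over relations

# One question paired with a start (topic) entity and a gold answer entity.
text = "what is the penalty for breach of contract"
VOCAB = {w: i for i, w in enumerate(text.split())}
question = torch.tensor([VOCAB[w] for w in text.split()])
start, gold = "contract_breach", "damages"

policy = Policy(len(VOCAB), len(ENTITIES), len(RELATIONS))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

for step in range(200):                               # single-hop training episodes
    logits = policy(question, torch.tensor(ENT_ID[start]))
    dist = Categorical(logits=logits)
    action = dist.sample()                            # sample a relation to follow
    rel = RELATIONS[action.item()]
    next_ent = KG.get(start, {}).get(rel)             # traverse the edge, if it exists
    reward = 1.0 if next_ent == gold else 0.0         # terminal reward: reached the answer?
    loss = -dist.log_prob(action) * reward            # REINFORCE objective
    opt.zero_grad(); loss.backward(); opt.step()
```

In a full system one would presumably encode the whole conversation history with a stronger text encoder, allow multi-hop traversal, and add a reward baseline or shaping to reduce variance; those choices are not specified on this page.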
