FusionMind -- Improving question and answering with external context fusion (2401.00388v1)

Published 31 Dec 2023 in cs.CL

Abstract: Answering questions using pre-trained LMs and knowledge graphs (KGs) presents challenges in identifying relevant knowledge and performing joint reasoning. We compared LMs fine-tuned for the task with the previously published QAGNN method on the question answering (QA) objective, and further measured the impact of additional factual context on QAGNN performance. The QAGNN method employs an LM to encode the QA context and estimate KG node importance, and it updates the question-choice entity representations using graph neural networks (GNNs). We further experimented with enhancing the QA context encoding by incorporating relevant knowledge facts for the question stem. The models are trained on the OpenBookQA dataset, which contains ~6,000 4-way multiple-choice questions and is widely used as a benchmark for QA tasks. Through our experiments, we found that incorporating knowledge-fact context led to a significant improvement in performance, whereas adding knowledge graphs to the LMs yielded only a modest gain. This suggests that integrating contextual knowledge facts may be more impactful for improving question answering performance than adding knowledge graphs alone.
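As a rough illustration of the pipeline the abstract describes, here is a minimal sketch (not the authors' implementation), assuming RoBERTa as the LM encoder; the question, answer choice, retrieved facts, and KG node strings are hypothetical placeholders. It shows (1) fusing retrieved knowledge facts into the QA-context encoding by prepending them to the question stem and answer choice, and (2) a QAGNN-style relevance score for KG nodes computed from LM embeddings.

```python
# Sketch of knowledge-fact fusion and QAGNN-style node scoring (illustrative only).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")

def encode(text: str) -> torch.Tensor:
    """Return the <s>-token embedding of `text` from the LM encoder."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden[:, 0]                               # (1, 768) sentence vector

# (1) Knowledge-fact fusion: prepend retrieved facts to the question stem and
# answer choice before encoding, so the QA context carries the external facts.
question = "Which object is most likely to conduct electricity?"
choice = "a copper wire"
facts = ["copper is a metal", "metals conduct electricity"]   # hypothetical retrieval output
qa_context = " ".join(facts) + " " + question + " " + choice
z_qa = encode(qa_context)                                     # fused QA-context embedding

# (2) QAGNN-style node relevance: score each candidate KG node (here, ConceptNet-like
# concept strings) by similarity of its text embedding to the QA-context embedding.
kg_nodes = ["copper", "electricity", "rubber", "conduct"]
node_embs = torch.cat([encode(n) for n in kg_nodes], dim=0)   # (num_nodes, 768)
relevance = torch.softmax(node_embs @ z_qa.squeeze(0), dim=0) # importance weights

for node, score in zip(kg_nodes, relevance.tolist()):
    print(f"{node:12s} relevance: {score:.3f}")
```

In the actual QAGNN method, these relevance estimates condition a GNN that jointly updates the QA-context node and KG entity representations before answer scoring; the sketch above stops at the scoring step.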

References (17)
  1. “EDM3: Event Detection as Multi-task Text Generation”, 2023. arXiv:2305.16357.
  2. “Instruction Tuned Models are Quick Learners”, 2023. arXiv:2306.05539.
  3. “TarGEN: Targeted Data Generation with Large Language Models”, 2023. arXiv:2310.17876.
  4. “Context-NER: Contextual Phrase Generation at Scale”, 2021. https://api.semanticscholar.org/CorpusID:265039305.
  5. “KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning”, 2019. arXiv:1909.02151.
  6. “RoBERTa: A Robustly Optimized BERT Pretraining Approach”, 2019. arXiv:1907.11692.
  7. “Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering”, 2018. arXiv:1809.02789.
  8. “LongBoX: Evaluating Transformers on Long-Sequence Clinical Tasks”, 2023. arXiv:2311.09564.
  9. “Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer”, 2019. arXiv:1910.10683.
  10. “SQuAD: 100,000+ Questions for Machine Comprehension of Text”, 2016. arXiv:1606.05250.
  11. “InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis”, 2023. arXiv:2302.08624.
  12. Robyn Speer, Joshua Chin and Catherine Havasi. “ConceptNet 5.5: An Open Multilingual Graph of General Knowledge”, 2018. arXiv:1612.03975.
  13. “Improving Natural Language Inference Using External Knowledge in the Science Questions Domain”, 2018. arXiv:1809.05724.
  14. Zhen Wang. “Modern Question Answering Datasets and Benchmarks: A Survey”, 2022. arXiv:2206.15030.
  15. “Fusing Context Into Knowledge Graph for Commonsense Reasoning”, 2020. arXiv:2012.04808.
  16. “QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering”, 2021. arXiv:2104.06378.
  17. “Retrieving and Reading: A Comprehensive Survey on Open-domain Question Answering”, 2021. arXiv:2101.00774.
