
Zero-shot Causal Graph Extrapolation from Text via LLMs (2312.14670v1)

Published 22 Dec 2023 in cs.AI

Abstract: We evaluate the ability of LLMs to infer causal relations from natural language. Compared to traditional natural language processing and deep learning techniques, LLMs show competitive performance in a benchmark of pairwise relations without needing (explicit) training samples. This motivates us to extend our approach to extrapolating causal graphs through iterated pairwise queries. We perform a preliminary analysis on a benchmark of biomedical abstracts with ground-truth causal graphs validated by experts. The results are promising and support the adoption of LLMs for such a crucial step in causal inference, especially in medical domains, where the amount of scientific text to analyse might be huge, and the causal statements are often implicit.
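The extrapolation procedure the abstract describes — building a causal graph by iterating zero-shot pairwise causal queries over entity pairs — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `mock_llm_pairwise` is a hypothetical stand-in for the actual LLM prompt-and-parse step, which the abstract does not specify, and the entity names are invented for the example.

```python
from itertools import combinations

def mock_llm_pairwise(a, b):
    """Hypothetical stand-in for a zero-shot LLM query such as:
    'Does {a} cause {b}, does {b} cause {a}, or neither?'
    A real implementation would call an LLM and parse its answer."""
    known = {("smoking", "cancer"), ("cancer", "fatigue")}  # toy ground truth
    if (a, b) in known:
        return "->"
    if (b, a) in known:
        return "<-"
    return "none"

def extrapolate_graph(entities, query=mock_llm_pairwise):
    """Assemble a directed causal graph from iterated pairwise queries."""
    edges = set()
    for a, b in combinations(entities, 2):
        verdict = query(a, b)
        if verdict == "->":
            edges.add((a, b))
        elif verdict == "<-":
            edges.add((b, a))
    return edges

edges = extrapolate_graph(["smoking", "cancer", "fatigue"])
```

Note that this brute-force loop issues O(n^2) queries for n entities, which is the cost implied by "iterated pairwise queries"; any real deployment over large biomedical corpora would likely need to restrict the candidate pairs first.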
