
Learning Neural Causal Models with Active Interventions (2109.02429v2)

Published 6 Sep 2021 in stat.ML and cs.LG

Abstract: Discovering causal structures from data is a challenging inference problem of fundamental importance in all areas of science. The appealing properties of neural networks have recently led to a surge of interest in differentiable neural network-based methods for learning causal structures from data. So far, differentiable causal discovery has focused on static datasets of observational or fixed interventional origin. In this work, we introduce an active intervention targeting (AIT) method which enables quick identification of the underlying causal structure of the data-generating process. Our method significantly reduces the required number of interactions compared with random intervention targeting and is applicable to both discrete and continuous optimization formulations of learning the underlying directed acyclic graph (DAG) from data. We examine the proposed method across multiple frameworks in a wide range of settings and demonstrate superior performance on multiple benchmarks from simulated to real-world data.
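
The abstract's core idea, actively choosing the intervention target rather than picking one at random, can be illustrated with a minimal sketch: score each candidate target by how much the current candidate structures disagree about the intervention's effect, then intervene where disagreement is largest. The sketch below assumes an ensemble of candidate DAG adjacency matrices and a simulator of interventional outcomes; the names `ait_select_target` and `predict_samples` are hypothetical and the variance-based score is a simple proxy in the spirit of AIT, not the paper's exact objective.

```python
import numpy as np

def ait_select_target(candidate_dags, predict_samples, num_nodes,
                      intervention_value=2.0):
    """Pick the node whose intervention the candidate graphs disagree on most.

    candidate_dags:   list of (num_nodes x num_nodes) adjacency matrices
                      sampled from the current belief over structures.
    predict_samples:  callable(dag, target, value) returning an
                      (n_samples x num_nodes) array of outcomes simulated
                      under the intervention do(X_target = value).
    """
    scores = np.zeros(num_nodes)
    for target in range(num_nodes):
        # Mean predicted outcome under do(X_target = value) for each candidate.
        preds = np.stack([
            predict_samples(dag, target, intervention_value).mean(axis=0)
            for dag in candidate_dags
        ])  # shape: (num_candidates, num_nodes)
        # Disagreement heuristic: total variance of predictions across candidates.
        scores[target] = preds.var(axis=0).sum()
    # The highest-scoring node is the most informative target under this proxy.
    return int(np.argmax(scores))
```

In an active-learning loop, the selected target would be intervened on, the resulting data used to update the belief over DAGs, and the scoring repeated; this is what lets active targeting need fewer interactions than random targeting.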

Authors (10)
  1. Nino Scherrer (16 papers)
  2. Olexa Bilaniuk (10 papers)
  3. Yashas Annadani (17 papers)
  4. Anirudh Goyal (93 papers)
  5. Patrick Schwab (27 papers)
  6. Bernhard Schölkopf (412 papers)
  7. Michael C. Mozer (38 papers)
  8. Yoshua Bengio (601 papers)
  9. Stefan Bauer (102 papers)
  10. Nan Rosemary Ke (40 papers)
Citations (38)
