
Towards a Learning Theory of Cause-Effect Inference

(1502.02398)
Published Feb 9, 2015 in stat.ML, math.PR, math.ST, and stat.TH

Abstract

We pose causal inference as the problem of learning to classify probability distributions. In particular, we assume access to a collection $\{(S_i, l_i)\}_{i=1}^{n}$, where each $S_i$ is a sample drawn from the probability distribution of $X_i \times Y_i$, and $l_i$ is a binary label indicating whether "$X_i \to Y_i$" or "$X_i \leftarrow Y_i$". Given these data, we build a causal inference rule in two steps. First, we featurize each $S_i$ using the kernel mean embedding associated with some characteristic kernel. Second, we train a binary classifier on such embeddings to distinguish between causal directions. We present generalization bounds showing the statistical consistency and learning rates of the proposed approach, and provide a simple implementation that achieves state-of-the-art cause-effect inference. Furthermore, we extend our ideas to infer causal relationships between more than two variables.
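As a rough illustration of the two-step procedure described in the abstract, the sketch below featurizes each cause-effect sample with an approximate kernel mean embedding (random Fourier features for a Gaussian kernel, one common choice of characteristic kernel) and trains a binary classifier on the resulting embeddings. The synthetic data generator, the feature count `D`, the bandwidth `gamma`, and the use of logistic regression are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch (not the authors' code): approximate kernel mean embeddings
# via random Fourier features for an RBF kernel, then classify causal direction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
D, gamma = 200, 1.0          # number of random features, RBF bandwidth (illustrative)

# Random Fourier feature parameters for k(z, z') = exp(-gamma * ||z - z'||^2);
# each point of a sample is a 2-D pair (x, y).
W = rng.normal(scale=np.sqrt(2 * gamma), size=(2, D))
b = rng.uniform(0, 2 * np.pi, size=D)

def mean_embedding(S):
    """Empirical kernel mean embedding of a sample S of shape (m, 2)."""
    phi = np.sqrt(2.0 / D) * np.cos(S @ W + b)   # random features per data point
    return phi.mean(axis=0)                      # average features -> embedding

def synth_pair(m=500):
    """Generate one synthetic sample and its label (1: X -> Y, 0: X <- Y)."""
    cause = rng.normal(size=m)
    effect = np.tanh(cause) + 0.1 * rng.normal(size=m)   # nonlinear mechanism + noise
    if rng.random() < 0.5:
        return np.column_stack([cause, effect]), 1       # data stored as (X, Y)
    return np.column_stack([effect, cause]), 0           # swapped: Y causes X

pairs = [synth_pair() for _ in range(400)]
X = np.array([mean_embedding(S) for S, _ in pairs])
y = np.array([label for _, label in pairs])

clf = LogisticRegression(max_iter=1000).fit(X[:300], y[:300])
print("held-out accuracy:", clf.score(X[300:], y[300:]))
```

In practice, the classifier's decision on the embedding of a new sample serves as the cause-effect inference rule; the generalization bounds in the paper concern this distribution-classification setup rather than the specific featurization or classifier chosen here.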

