A Conditional Independence Test in the Presence of Discretization (2404.17644v6)

Published 26 Apr 2024 in stat.ML, cs.AI, and cs.LG

Abstract: Testing conditional independence has many applications, such as Bayesian network learning and causal discovery. Various test methods have been proposed, but existing methods generally fail when only discretized observations are available. Specifically, suppose $X_1$, $\tilde{X}_2$ and $X_3$ are observed variables, where $\tilde{X}_2$ is a discretization of the latent variable $X_2$. Applying existing test methods to the observations of $X_1$, $\tilde{X}_2$ and $X_3$ can lead to a false conclusion about the underlying conditional independence of the variables $X_1$, $X_2$ and $X_3$. Motivated by this, we propose a conditional independence test specifically designed to accommodate the presence of such discretization. To achieve this, we design bridge equations to recover the parameter that reflects the statistical information of the underlying latent continuous variables. An appropriate test statistic and its asymptotic distribution under the null hypothesis of conditional independence are also derived. Both theoretical results and empirical validation are provided, demonstrating the effectiveness of our test method.
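
As a concrete illustration of the failure mode the abstract describes (not the paper's proposed test), the following minimal sketch simulates a linear-Gaussian chain $X_1 \to X_2 \to X_3$, in which $X_1$ is conditionally independent of $X_3$ given $X_2$, and then applies a standard Fisher-Z partial-correlation test twice: once conditioning on the latent $X_2$ and once conditioning on a binarized observation $\tilde{X}_2$. The data-generating coefficients, the median-split discretization, and the choice of Fisher-Z as the "existing method" are all illustrative assumptions.

```python
# Sketch of the discretization problem described in the abstract.
# Assumptions (not from the paper): linear-Gaussian chain X1 -> X2 -> X3,
# median-split binarization of X2, and a Fisher-Z partial-correlation CI test.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000

# Latent chain X1 -> X2 -> X3, so X1 is independent of X3 given X2.
X1 = rng.normal(size=n)
X2 = 0.8 * X1 + rng.normal(size=n)
X3 = 0.8 * X2 + rng.normal(size=n)

# Only a discretization of X2 is observed (here: a binary median split).
X2_tilde = (X2 > np.median(X2)).astype(float)

def fisher_z_test(x, y, z, n):
    """Fisher-Z test of the partial correlation of x and y given a single z."""
    r = np.corrcoef(np.column_stack([x, y, z]), rowvar=False)
    r_xy, r_xz, r_yz = r[0, 1], r[0, 2], r[1, 2]
    pr = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))
    zval = 0.5 * np.log((1 + pr) / (1 - pr))
    stat = np.sqrt(n - 1 - 3) * abs(zval)  # one conditioning variable
    return 2 * (1 - norm.cdf(stat))

# Conditioning on the latent X2: partial correlation is zero in population,
# so the test rejects only at its nominal rate.
print("p-value given latent X2      :", fisher_z_test(X1, X3, X2, n))

# Conditioning on the discretized X2: residual dependence remains, so the test
# tends to (falsely) reject the underlying conditional independence as n grows.
print("p-value given discretized X2 :", fisher_z_test(X1, X3, X2_tilde, n))
```

In runs of this sketch the second p-value shrinks toward zero as the sample size grows, while the first does not, which is the kind of false conclusion about the latent conditional independence that motivates the paper's test.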
