A Guide to Feature Importance Methods for Scientific Inference (2404.12862v2)
Abstract: While ML models are increasingly used for their high predictive power, their use for understanding the data-generating process (DGP) remains limited. Understanding the DGP requires insight into feature-target associations, which many ML models cannot directly provide because of their opaque internal mechanisms. Feature importance (FI) methods offer such insight under certain conditions. Since the results of different FI methods have different interpretations, selecting the right FI method for a concrete use case is crucial and still requires expert knowledge. This paper serves as a comprehensive guide to the different interpretations of global FI methods. Through an extensive review of FI methods and new proofs regarding their interpretation, we facilitate a thorough understanding of these methods and formulate concrete recommendations for scientific inference. We conclude by discussing options for FI uncertainty estimation and by pointing to directions for future research aimed at full statistical inference from black-box ML models.
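To make the notion of a global FI method concrete, the sketch below implements permutation feature importance (PFI), one widely used method of the kind this guide reviews: it measures how much a model's test loss increases when a feature's values are shuffled, breaking that feature's association with the target. The synthetic data, model, and metric are illustrative assumptions, not code from the paper.

```python
# Minimal sketch of permutation feature importance (PFI), a global FI method.
# All modeling choices below (data, model, metric) are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic regression task: 5 features, 3 of which are informative.
X, y = make_regression(n_samples=500, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
baseline_loss = mean_squared_error(y_test, model.predict(X_test))

rng = np.random.default_rng(0)
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    # Permute feature j to break its association with the target.
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    perm_loss = mean_squared_error(y_test, model.predict(X_perm))
    # PFI = increase in loss; larger values indicate more important features.
    print(f"feature {j}: PFI = {perm_loss - baseline_loss:.3f}")
```

In practice, one would average over several permutations (e.g., via sklearn.inspection.permutation_importance with its n_repeats parameter) to reduce the variance of the estimate; the paper's discussion of FI uncertainty estimation concerns exactly this kind of sampling variability.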