SynthTree: Co-supervised Local Model Synthesis for Explainable Prediction (2406.10962v1)
Abstract: Explainable machine learning (XML) has emerged as a major challenge in AI. Although black-box models such as Deep Neural Networks and Gradient Boosting often exhibit exceptional predictive accuracy, their lack of interpretability is a notable drawback, particularly in domains requiring transparency and trust. This paper tackles this core AI problem by proposing a novel method to enhance explainability with minimal accuracy loss, using a Mixture of Linear Models (MLM) estimated under the co-supervision of black-box models. We develop methods for estimating the MLM by leveraging AI techniques, exploring two approaches for partitioning the input space: agglomerative clustering and decision trees. The agglomerative clustering approach provides greater flexibility in model construction, while the decision tree approach further enhances explainability, yielding a decision tree model with linear or logistic regression models at its leaf nodes. Comparative analyses with widely-used and state-of-the-art predictive models demonstrate the effectiveness of our proposed methods. Experimental results show that statistical models can significantly enhance the explainability of AI, thereby broadening their potential for real-world applications. Our findings highlight the critical role that statistical methodologies can play in advancing explainable AI.
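To make the co-supervised MLM idea concrete, the sketch below illustrates the decision-tree variant under stated assumptions: a gradient-boosting teacher supervises a shallow partitioning tree, and a linear model is fit in each leaf on a blend of observed labels and teacher predictions. This is a minimal illustration using scikit-learn, not the authors' SynthTree implementation; the mixing weight `alpha` and the specific co-supervision scheme are assumptions for demonstration only.

```python
# Minimal sketch of a co-supervised Mixture of Linear Models (MLM).
# Assumptions: scikit-learn is available; the co-supervision is approximated
# by (a) training the partitioning tree on black-box predictions and
# (b) blending labels with black-box outputs via a hypothetical weight alpha.
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = make_friedman1(n_samples=2000, noise=0.5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1. Fit the black-box teacher.
black_box = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# 2. Partition the input space with a shallow, interpretable decision tree
#    trained on the teacher's predictions (the co-supervision signal).
partition_tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=50,
                                        random_state=0)
partition_tree.fit(X_tr, black_box.predict(X_tr))

# 3. Fit one linear model per leaf on a blend of true labels and
#    black-box predictions (alpha is an assumed mixing weight).
alpha = 0.5
targets = alpha * y_tr + (1 - alpha) * black_box.predict(X_tr)
leaf_ids = partition_tree.apply(X_tr)
leaf_models = {}
for leaf in np.unique(leaf_ids):
    mask = leaf_ids == leaf
    leaf_models[leaf] = LinearRegression().fit(X_tr[mask], targets[mask])

# 4. Predict by routing each point to the linear model of its leaf.
def mlm_predict(X_new):
    leaves = partition_tree.apply(X_new)
    preds = np.empty(len(X_new))
    for leaf, model in leaf_models.items():
        mask = leaves == leaf
        if mask.any():
            preds[mask] = model.predict(X_new[mask])
    return preds

ss_res = np.sum((y_te - mlm_predict(X_te)) ** 2)
ss_tot = np.sum((y_te - y_te.mean()) ** 2)
print("black-box R^2:", black_box.score(X_te, y_te))
print("MLM (tree + linear leaves) R^2:", 1 - ss_res / ss_tot)
```

The resulting predictor remains explainable: the shallow tree gives a small set of human-readable regions, and each region's behavior is described by an ordinary linear regression. The agglomerative-clustering variant described in the abstract would replace the tree-based partition with clusters of the input space, trading some of this readability for flexibility.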