
Adapting tree-based multiple imputation methods for multi-level data? A simulation study (2401.14161v2)

Published 25 Jan 2024 in stat.AP and stat.ML

Abstract: When data have a hierarchical structure, such as students nested within classrooms, ignoring dependencies between observations can compromise the validity of imputation procedures. Standard tree-based imputation methods implicitly assume independence between observations, limiting their applicability in multilevel data settings. Although Multivariate Imputation by Chained Equations (MICE) is widely used for hierarchical data, it has limitations, including sensitivity to model specification and computational complexity. Alternative tree-based approaches have shown promise for individual-level data but remain largely unexplored for hierarchical contexts. In this simulation study, we systematically evaluate the performance of novel tree-based methods, Chained Random Forests and Extreme Gradient Boosting (mixgb), explicitly adapted for multilevel data by incorporating dummy variables indicating cluster membership. We compare these tree-based methods and their adapted versions with traditional MICE imputation in terms of coefficient estimation bias, type I error rates, and statistical power, under different cluster sizes, missingness mechanisms, and missingness rates, using both random intercept and random slope data generation models. The results show that MICE provides robust and accurate inference for level-2 variables, especially at low missingness rates. However, the adapted boosting approach (mixgb with cluster dummies) consistently outperforms other methods for level-1 variables at higher missingness rates (30%, 50%). For level-2 variables, while MICE retains better power at moderate missingness (30%), adapted boosting becomes superior at high missingness (50%), regardless of the missingness mechanism or cluster size. These findings highlight the potential of appropriately adapted tree-based imputation methods as effective alternatives to conventional MICE in multilevel data analyses.
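The core adaptation the abstract describes, appending cluster-membership dummy variables before running a tree-based chained imputation, can be sketched in Python. This is a minimal illustration, not the paper's implementation: the authors use mixgb and chained random forests in R, whereas the sketch below substitutes scikit-learn's `IterativeImputer` with a random-forest estimator as a comparable chained tree-based imputer. The data-generating model (a random intercept per cluster) and the 30% MCAR missingness rate mirror conditions mentioned in the abstract.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Simulate multilevel data: 10 clusters with a random intercept each.
n_clusters, per_cluster = 10, 30
cluster = np.repeat(np.arange(n_clusters), per_cluster)
u = rng.normal(0, 1, n_clusters)[cluster]            # cluster random intercepts
x = rng.normal(0, 1, cluster.size)                   # level-1 covariate
y = 0.5 * x + u + rng.normal(0, 1, cluster.size)     # outcome

df = pd.DataFrame({"cluster": cluster, "x": x, "y": y})

# Impose 30% MCAR missingness on the level-1 covariate.
mask = rng.random(len(df)) < 0.3
df.loc[mask, "x"] = np.nan

# The adaptation: encode cluster membership as dummy variables so the
# tree-based imputer can learn cluster-specific structure.
dummies = pd.get_dummies(df["cluster"], prefix="c", drop_first=True)
X = pd.concat([df[["x", "y"]], dummies], axis=1)

# Chained imputation with a random-forest estimator (stand-in for the
# paper's chained random forests / mixgb).
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=0),
    max_iter=5,
    random_state=0,
)
X_imp = pd.DataFrame(imputer.fit_transform(X), columns=X.columns)
df["x_imputed"] = X_imp["x"]
print(f"imputed {mask.sum()} of {len(df)} values")
```

In a full simulation study, this single imputation step would be repeated across missingness mechanisms, rates, and cluster sizes, with downstream model coefficients compared against the complete-data estimates to measure bias, type I error, and power.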
