An Explainable Machine Learning Model for Early Detection of Parkinson's Disease using LIME on DaTscan Imagery (2008.00238v1)

Published 1 Aug 2020 in cs.CV, cs.LG, and eess.IV

Abstract: Parkinson's disease (PD) is a degenerative and progressive neurological condition. Early diagnosis can improve treatment for patients and is performed through dopaminergic imaging techniques like the SPECT DaTscan. In this study, we propose a machine learning model that accurately classifies any given DaTscan as having Parkinson's disease or not, in addition to providing a plausible reason for the prediction. This kind of reasoning is provided through visual indicators generated using Local Interpretable Model-agnostic Explanations (LIME). DaTscans were drawn from the Parkinson's Progression Markers Initiative database and used to train a CNN (VGG16) via transfer learning, yielding an accuracy of 95.2%, a sensitivity of 97.5%, and a specificity of 90.9%. Because model interpretability is of paramount importance, especially in the healthcare field, this study utilises LIME explanations to distinguish PD from non-PD, using visual superpixels on the DaTscans. It can be concluded that the proposed system, with its combined interpretability and accuracy, may effectively aid medical workers in the early diagnosis of Parkinson's Disease.

Citations (170)

Summary

  • The paper proposes an explainable machine learning model leveraging a CNN (VGG16 with transfer learning) and LIME on DaTscan images for early Parkinson's Disease detection.
  • The model achieved 95.2% accuracy, 97.5% sensitivity, and 90.9% specificity in classifying PD from DaTscan images using the PPMI dataset.
  • The implementation of LIME provides visual explanations in DaTscan images, enhancing trust and clinical utility by showing which regions contribute to the classification decision.

An Explainable Machine Learning Model for Early Detection of Parkinson's Disease using LIME on DaTscan Imagery

This paper presents a machine learning approach to the early detection of Parkinson's Disease (PD) through the analysis of SPECT DaTscan images, using Local Interpretable Model-agnostic Explanations (LIME) to make the predictions explainable. The model is a Convolutional Neural Network (CNN) based on VGG16, with transfer learning used to obtain robust classification performance from a modest dataset. Given the degenerative nature of PD, early diagnosis is crucial for effective management and intervention; this research therefore focuses on diagnostic accuracy while maintaining high interpretability, a critical requirement in healthcare applications.
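
For concreteness, the following is a minimal sketch of the kind of VGG16 transfer-learning setup the paper describes, assuming Keras/TensorFlow; the input shape, head size, and training hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of a VGG16 transfer-learning classifier for binary PD / non-PD
# classification. Input shape and head layers are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Load VGG16 pretrained on ImageNet, dropping its classification head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze convolutional features for transfer learning

# Attach a small binary head: P(PD) from a single sigmoid unit.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```

Freezing the pretrained convolutional base and training only a small binary head is the standard way to adapt an ImageNet model to a dataset of a few hundred medical images.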

The dataset is drawn from the Parkinson's Progression Markers Initiative (PPMI) and consists of 642 DaTscan images separated into PD and non-PD classes. The model achieves 95.2% accuracy in classifying the scans, with a sensitivity of 97.5% and a specificity of 90.9%, demonstrating its potential utility in clinical settings. The paper also adjusts the class decision threshold to optimize prediction metrics, balancing false-positive against false-negative rates (see the sketch below).
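
The threshold adjustment can be illustrated with a short sketch: sweep candidate cutoffs over held-out predictions and report sensitivity and specificity at each. The arrays `y_true` and `y_prob` below are hypothetical stand-ins for ground-truth labels and model probabilities, not the paper's data.

```python
# Sweep the decision threshold and report sensitivity/specificity at each.
# y_true / y_prob are hypothetical illustrative values.
import numpy as np
from sklearn.metrics import confusion_matrix

def sens_spec(y_true, y_prob, threshold):
    y_pred = (y_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return sensitivity, specificity

y_true = np.array([0, 1, 1, 0, 1, 1, 0, 1])
y_prob = np.array([0.1, 0.8, 0.6, 0.3, 0.9, 0.4, 0.2, 0.7])
for t in (0.3, 0.5, 0.7):
    sens, spec = sens_spec(y_true, y_prob, t)
    print(f"threshold={t:.1f}  sensitivity={sens:.2f}  specificity={spec:.2f}")
```

In practice the threshold would be chosen on a validation set; raising it trades sensitivity for specificity, and vice versa.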

With LIME in place for interpretability, the model provides visual indicators that highlight the regions of a DaTscan contributing most to its classification decision. This underpins the model's practicality, offering medical practitioners insight into the automated classification process and reinforcing trust in AI-assisted diagnostic tools.
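
A minimal sketch of generating such superpixel explanations with the open-source `lime` package follows; here `model` is the trained classifier from the earlier sketch and `scan` is a hypothetical preprocessed DaTscan slice of shape (224, 224, 3). The wrapper expands the single sigmoid output into two class-probability columns, which is what LIME's image explainer expects.

```python
# Sketch of LIME superpixel explanations for one DaTscan slice.
# `model` and `scan` are assumed from the earlier transfer-learning sketch.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_fn(images):
    # LIME expects per-class probabilities: [P(non-PD), P(PD)].
    p = model.predict(images)
    return np.hstack([1 - p, p])

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    scan.astype("double"), predict_fn, top_labels=2, num_samples=1000
)

# Overlay the superpixels that most support the predicted class.
image, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True, num_features=5, hide_rest=False,
)
overlay = mark_boundaries(image, mask)
```

The resulting overlay marks the superpixels that pushed the model toward its prediction, which is the form of visual evidence the paper presents to clinicians.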

The implications of this research are manifold, contributing to both theoretical developments in explainable AI and practical advancements in the domain of medical imaging diagnostics. While the model provides a reliable method for early PD detection, further validation in live clinical environments remains essential. The research underscores the importance of interpretability in AI applications, advocating for systems that prioritize transparent decision-making processes, particularly when handling sensitive medical data.

Future efforts should aim to expand dataset size and demographic diversity, mitigate possible class imbalance, and explore alternative deep learning architectures for enhanced performance. Refining hyperparameters and using richer inputs (full 3D scans instead of individual slices) may further improve the model. Additionally, incorporating unsupervised learning methods could deepen the understanding of complex image patterns and contribute to improved diagnostic capabilities.

In conclusion, this paper presents a compelling case for the integration of explainable AI in healthcare, combining strong classification performance with the interpretability needed for confident clinical use. It represents a meaningful step toward more sophisticated diagnostic tools, with substantial potential to advance the treatment and management of Parkinson's Disease.