
An explainability framework for cortical surface-based deep learning (2203.08312v1)

Published 15 Mar 2022 in q-bio.NC, cs.CV, and q-bio.QM

Abstract: The emergence of explainability methods has enabled a better comprehension of how deep neural networks operate through concepts that are easily understood and implemented by the end user. While most explainability methods have been designed for traditional deep learning, some have been further developed for geometric deep learning, in which data are predominantly represented as graphs. These representations are regularly derived from medical imaging data, particularly in the field of neuroimaging, in which graphs are used to represent brain structural and functional wiring patterns (brain connectomes) and cortical surface models are used to represent the anatomical structure of the brain. Although explainability techniques have been developed for identifying important vertices (brain areas) and features for graph classification, these methods are still lacking for more complex tasks, such as surface-based modality transfer (or vertex-wise regression). Here, we address the need for surface-based explainability approaches by developing a framework for cortical surface-based deep learning, providing a transparent system for modality transfer tasks. First, we adapted a perturbation-based approach for use with surface data. Then, we applied our perturbation-based method to investigate the key features and vertices used by a geometric deep learning model developed to predict brain function from anatomy directly on a cortical surface model. We show that our explainability framework is not only able to identify important features and their spatial location but that it is also reliable and valid.
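The perturbation-based idea described in the abstract — perturbing features at individual vertices of a cortical surface and measuring how the model's vertex-wise prediction changes — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `model` callable, the Gaussian-noise perturbation, and the mean-absolute-change importance score are all assumptions for demonstration.

```python
import numpy as np

def vertex_importance(model, features, rng=None):
    """Estimate per-vertex importance for a vertex-wise regression model
    by perturbing each vertex's features and measuring the change in the
    model's output (one simple perturbation scheme among many).

    model    : callable mapping a (V, F) feature array to a (V,) prediction
    features : (V, F) array of anatomical features, one row per vertex
    """
    rng = np.random.default_rng(rng)
    baseline = model(features)              # (V,) predicted functional map
    importance = np.zeros(len(features))
    for v in range(len(features)):
        perturbed = features.copy()
        # Replace this vertex's features with random noise; the change in
        # the output reflects how much the model relied on this vertex.
        perturbed[v] = rng.normal(size=features.shape[1])
        delta = model(perturbed) - baseline
        importance[v] = np.abs(delta).mean()  # mean absolute output change
    return importance

# Toy usage: a linear "model" on a tiny 5-vertex mesh with 3 features
# (purely hypothetical stand-ins for a geometric deep learning model).
rng = np.random.default_rng(0)
W = rng.normal(size=3)
toy_model = lambda X: X @ W
X = rng.normal(size=(5, 3))
scores = vertex_importance(toy_model, X, rng=1)
print(scores.shape)
```

A real surface-based model would additionally exploit the mesh connectivity (e.g., via graph convolutions), and perturbations could be applied per feature channel to localize importance both spatially and across input modalities.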

Citations (2)
