
KLDivNet: An unsupervised neural network for multi-modality image registration (1908.08767v2)

Published 23 Aug 2019 in cs.CV

Abstract: Multi-modality image registration is one of the most fundamental processes in medical image analysis. Recently, convolutional neural networks (CNNs) have shown significant potential in deformable registration. However, the lack of voxel-wise ground truth makes it challenging to train CNNs for accurate registration. In this work, we propose a cross-modality similarity metric based on the KL-divergence of image variables, and implement an efficient estimation method using a CNN. This estimation network, referred to as KLDivNet, can be trained without supervision. We then embed the KLDivNet into a registration network to achieve unsupervised deformable registration for multi-modality images. We employed three datasets, i.e., AAL Brain, LiTS Liver and Hospital Liver, with both intra- and inter-modality image registration tasks for validation. Results showed that our similarity metric was effective, and the proposed registration network delivered superior performance compared to state-of-the-art methods.
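In the paper, KLDivNet learns to estimate this KL-divergence-based similarity with a CNN. As a rough, non-neural illustration of the underlying idea only, the sketch below compares two images via the KL divergence of their intensity histograms; the function names, binning scheme, and histogram-based estimator are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """KL(p || q) for two discrete distributions (eps avoids log(0))."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def histogram_kl_similarity(moving, fixed, bins=32):
    """Negative KL between intensity histograms: higher means more similar.

    This is a crude stand-in for the learned estimator in the paper.
    """
    lo = min(moving.min(), fixed.min())
    hi = max(moving.max(), fixed.max())
    p, _ = np.histogram(moving, bins=bins, range=(lo, hi))
    q, _ = np.histogram(fixed, bins=bins, range=(lo, hi))
    return -kl_divergence(p, q)

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(64, 64))
sim_self = histogram_kl_similarity(a, a)         # identical images
sim_other = histogram_kl_similarity(a, a + 2.0)  # intensity-shifted copy
```

In a registration network this similarity would serve as the (negated) loss driving the deformation field; the paper replaces the fixed histogram estimator with a trained CNN so the metric can capture cross-modality intensity relationships.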

Citations (1)
