Abstract

Self-supervised learning has proven to be an effective way to learn representations in domains where annotated labels are scarce, such as medical imaging. Contrastive learning is a widely adopted framework for this purpose and has been applied across many scenarios. This paper seeks to advance our understanding of the contrastive learning framework by exploring a novel perspective: employing multi-organ datasets to pre-train models tailored to a specific organ-related target task. More specifically, our target task is breast tumour segmentation in ultrasound images. The pre-training datasets include ultrasound images of other organs, such as the lungs and heart, as well as large datasets of natural images. Our results show that conventional contrastive learning pre-training improves performance over supervised baselines. Furthermore, our pre-trained models achieve comparable performance when fine-tuned with only half of the available labelled data. Our findings also show the advantages of pre-training on diverse organ data for improving performance on the downstream task.
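
The abstract refers to "conventional contrastive learning" pre-training; in this literature the phrase usually denotes a SimCLR-style instance-discrimination objective. As a point of reference only, below is a minimal PyTorch sketch of that NT-Xent loss; the batch size, projection dimension, and temperature are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of the NT-Xent (SimCLR-style) contrastive loss.
# Illustrative only: the paper does not specify its exact loss here,
# and the temperature / dimensions below are assumptions.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    # z1, z2: (N, D) projections of two augmented views of the same
    # N unlabelled images (e.g. crops/flips of multi-organ ultrasound).
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / temperature                       # cosine similarity logits
    sim.fill_diagonal_(float("-inf"))                   # exclude self-pairs
    # The positive for row i is the other view of the same image: i <-> i+N.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Usage sketch: pre-train an encoder with this loss on unlabelled
# multi-organ ultrasound, then fine-tune it (plus a segmentation head)
# on the labelled breast tumour images.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)       # stand-in projections
print(nt_xent_loss(z1, z2).item())
```

In this setup the two views come from random augmentations of the same unlabelled frame regardless of organ, which is what lets lung, heart, and natural-image data contribute to the representation before any breast labels are used.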
