
LAVA: Label-efficient Visual Learning and Adaptation (2210.10317v1)

Published 19 Oct 2022 in cs.CV

Abstract: We present LAVA, a simple yet effective method for multi-domain visual transfer learning with limited data. LAVA builds on a few recent innovations to enable adaptation to partially labelled datasets with class and domain shifts. First, LAVA learns self-supervised visual representations on the source dataset and grounds them using class label semantics to overcome the transfer-collapse problems associated with supervised pretraining. Second, LAVA maximises the gains from unlabelled target data via a novel method that uses multi-crop augmentations to obtain highly robust pseudo-labels. By combining these ingredients, LAVA achieves a new state-of-the-art on the ImageNet semi-supervised protocol, as well as on 7 out of 10 datasets in multi-domain few-shot learning on Meta-Dataset. Code and models are made available.
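The multi-crop pseudo-labelling idea from the abstract can be sketched in a minimal form: classify several augmented crops of the same unlabelled image, average the resulting class distributions, and only keep the pseudo-label when the averaged prediction is confident. This is an illustrative reconstruction, not the paper's actual implementation; the function name, threshold, and aggregation rule are assumptions for the sketch.

```python
import math

def multicrop_pseudo_label(crop_logits, threshold=0.8):
    """Aggregate per-crop class logits into one robust pseudo-label.

    crop_logits: list of per-crop logit lists, one entry per augmented
    crop of the same unlabelled image (hypothetical interface).
    Returns (label, confident): the argmax of the averaged softmax
    distribution, and whether its probability clears the threshold.
    """
    # Softmax each crop's logits independently (max-subtracted for stability).
    dists = []
    for logits in crop_logits:
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        s = sum(exps)
        dists.append([e / s for e in exps])
    # Average the per-crop distributions to damp augmentation noise.
    n_classes = len(crop_logits[0])
    mean = [sum(d[c] for d in dists) / len(dists) for c in range(n_classes)]
    label = max(range(n_classes), key=mean.__getitem__)
    # Only accept the pseudo-label when the crops agree confidently.
    return label, mean[label] >= threshold
```

If the crops disagree (e.g. two crops voting for different classes), the averaged distribution is flat, the confidence check fails, and the image is left unlabelled rather than given a noisy pseudo-label.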

Authors (6)
  1. Islam Nassar (4 papers)
  2. Munawar Hayat (73 papers)
  3. Ehsan Abbasnejad (59 papers)
  4. Hamid Rezatofighi (61 papers)
  5. Mehrtash Harandi (108 papers)
  6. Gholamreza Haffari (141 papers)
Citations (1)
