
Self-supervised learning of visual features through embedding images into text topic spaces (1705.08631v1)

Published 24 May 2017 in cs.CV

Abstract: End-to-end training of current deep architectures from scratch on a new computer vision problem would require an ImageNet-scale dataset, which is not always available. In this paper we present a method that takes advantage of freely available multi-modal content to train computer vision algorithms without human supervision. We put forward the idea of performing self-supervised learning of visual features by mining a large-scale corpus of multi-modal (text and image) documents. We show that discriminative visual features can be learnt efficiently by training a CNN to predict the semantic context in which a particular image is most likely to appear as an illustration. For this we leverage the hidden semantic structure discovered in the text corpus by a well-known topic modeling technique. Our experiments demonstrate state-of-the-art performance in image classification, object detection, and multi-modal retrieval compared to recent self-supervised and naturally-supervised approaches.
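
The abstract sketches a two-stage recipe: discover latent topics in the text side of a multi-modal corpus, then train a CNN to predict, from the image alone, the topic distribution of the document it illustrates. The following Python sketch illustrates that idea under stated assumptions: it uses scikit-learn's LatentDirichletAllocation as the topic model and a small PyTorch CNN trained with a soft cross-entropy loss against the LDA topic distributions. The toy corpus, backbone architecture, topic count, and loss choice are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch (not the paper's exact pipeline):
# (1) fit LDA on the text of a multi-modal corpus,
# (2) train a CNN to predict each document's topic distribution from its image.

import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

NUM_TOPICS = 40  # assumed topic count, for illustration only

# --- Step 1: topic modeling on the text side of the corpus ---
documents = [  # toy stand-in for the article texts paired with images
    "the match ended with a late goal in the second half",
    "the central bank raised interest rates amid rising inflation",
    "the new smartphone ships with a faster camera sensor",
]
vectorizer = CountVectorizer(max_features=10_000, stop_words="english")
bow = vectorizer.fit_transform(documents)
lda = LatentDirichletAllocation(n_components=NUM_TOPICS, random_state=0)
# Rows are per-document topic distributions; these become the CNN's targets.
topic_targets = lda.fit_transform(bow)  # shape: (num_docs, NUM_TOPICS)

# --- Step 2: CNN that predicts the topic distribution from the image ---
class TopicPredictor(nn.Module):
    def __init__(self, num_topics: int):
        super().__init__()
        # Any convolutional backbone works; a tiny one keeps the sketch short.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_topics)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.head(h)  # unnormalized topic logits

model = TopicPredictor(NUM_TOPICS)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)

def training_step(images: torch.Tensor, targets: torch.Tensor) -> float:
    """images: (B, 3, H, W); targets: (B, NUM_TOPICS) LDA distributions."""
    logits = model(images)
    # Soft cross-entropy against the topic distribution: the CNN learns to
    # predict the semantic context in which the image is likely to appear.
    loss = -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the targets come entirely from the text corpus, no human labels are needed; the learned convolutional features can then be transferred to downstream tasks such as classification, detection, or retrieval.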

Authors (5)
  1. Lluis Gomez (42 papers)
  2. Yash Patel (41 papers)
  3. Marçal Rusiñol (20 papers)
  4. Dimosthenis Karatzas (80 papers)
  5. C. V. Jawahar (110 papers)
Citations (122)
