LAVA: Language Audio Vision Alignment for Contrastive Video Pre-Training (2207.08024v1)

Published 16 Jul 2022 in cs.CV

Abstract: Generating representations of video data is of key importance in advancing the field of machine perception. Most current techniques rely on hand-annotated data, which can be difficult to work with, expensive to generate, and hard to scale. In this work, we propose a novel learning approach based on contrastive learning, LAVA, which is capable of learning joint language, audio, and video representations in a self-supervised manner. We pre-train LAVA on the Kinetics 700 dataset using transformer encoders to learn representations for each modality. We then demonstrate that LAVA performs competitively with the current state-of-the-art self-supervised and weakly-supervised pretraining techniques on UCF-101 and HMDB-51 video action recognition while using a fraction of the unlabeled data.
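The abstract describes aligning language, audio, and video embeddings with a contrastive objective. The following is a minimal sketch of an InfoNCE-style pairwise contrastive loss across three modalities; it is illustrative only (the paper's actual loss, names, and hyperparameters such as the temperature are assumptions, not taken from the LAVA code).

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchors, positives, temperature=0.07):
    # InfoNCE: for each anchor, the same-index embedding in `positives`
    # is the positive; all other indices in the batch are negatives.
    loss = 0.0
    for i, a in enumerate(anchors):
        logits = [cosine(a, p) / temperature for p in positives]
        log_denom = math.log(sum(math.exp(l) for l in logits))
        loss += -(logits[i] - log_denom)  # -log softmax at the true index
    return loss / len(anchors)

def multimodal_contrastive_loss(video, audio, text):
    # Align every modality pair in both directions, as a joint
    # language-audio-vision objective might (illustrative structure).
    pairs = [(video, audio), (video, text), (audio, text)]
    total = sum(info_nce(x, y) + info_nce(y, x) for x, y in pairs)
    return total / (2 * len(pairs))
```

In this sketch, embeddings whose matched (same-index) pairs point in the same direction yield a lower loss than mismatched ones, which is the property a contrastive pre-training objective exploits.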

Authors (4)
  1. Sumanth Gurram (1 paper)
  2. Andy Fang (1 paper)
  3. David Chan (24 papers)
  4. John Canny (44 papers)
Citations (1)