Audio-Visual Instance Discrimination

Multimodal Video Analysis Workshop at ECCV

Abstract

We present a self-supervised approach to learning audio-visual representations from video. Our method uses contrastive learning for cross-modal discrimination of video from audio and vice versa. We show that optimizing for cross-modal discrimination, rather than within-modal discrimination, is important for learning good representations from video and audio. With this simple but powerful insight, our method achieves state-of-the-art results when fine-tuned on action recognition tasks.
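For intuition, the sketch below shows one common way a cross-modal contrastive objective of this kind can be written: each clip's video embedding is trained to pick out its own audio embedding among the other clips, and vice versa. This is an illustrative, in-batch InfoNCE formulation in PyTorch; the function name `cross_modal_nce`, the `temperature` value, and the use of in-batch negatives are assumptions made for the example, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def cross_modal_nce(video_emb: torch.Tensor,
                    audio_emb: torch.Tensor,
                    temperature: float = 0.07) -> torch.Tensor:
    """Symmetric cross-modal contrastive (InfoNCE) loss.

    video_emb, audio_emb: (N, D) embeddings of the same N clips,
    produced by separate video and audio encoders. Each video is
    trained to discriminate its own audio from the other audio
    samples in the batch, and vice versa (illustrative sketch only).
    """
    # L2-normalize so dot products are cosine similarities.
    v = F.normalize(video_emb, dim=1)
    a = F.normalize(audio_emb, dim=1)

    # (N, N) similarity matrix; the diagonal holds the positive pairs.
    logits = v @ a.t() / temperature
    targets = torch.arange(v.size(0), device=v.device)

    # Video-to-audio and audio-to-video discrimination terms.
    loss_v2a = F.cross_entropy(logits, targets)
    loss_a2v = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_v2a + loss_a2v)
```

Optimizing this cross-modal term, rather than discriminating instances within a single modality, is the key design choice highlighted by the abstract.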

Featured Publications