SoundingActions: Learning How Actions Sound from Narrated Egocentric Videos

Publication
In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

We propose a novel self-supervised embedding to learn how actions sound from narrated in-the-wild egocentric videos. Whereas existing methods rely on curated data with known audio-visual correspondence, our multimodal contrastive-consensus coding (MC3) embedding reinforces the associations between audio, language, and vision when all modality pairs agree, while diminishing those associations when any one pair does not. We show our approach can successfully discover how the long tail of human actions sound from egocentric video, outperforming an array of recent multimodal embedding techniques on two datasets (Ego4D and EPIC-Sounds) and multiple cross-modal tasks.
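To make the consensus idea concrete, here is a minimal illustrative sketch, not the paper's actual MC3 implementation: it assumes L2-normalized audio, language, and vision embeddings, and the function names, the per-sample minimum-agreement consensus weight, and the pairwise InfoNCE losses are all hypothetical choices used only to illustrate reinforcing associations when all modality pairs agree and down-weighting them when any pair does not.

```python
# Illustrative sketch only (hypothetical names; not the authors' code).
import torch
import torch.nn.functional as F

def info_nce(x, y, temperature=0.07):
    """Per-sample InfoNCE loss between two batches of paired, normalized embeddings."""
    logits = x @ y.t() / temperature                     # (B, B) similarity matrix
    targets = torch.arange(x.size(0), device=x.device)   # matched pairs lie on the diagonal
    return F.cross_entropy(logits, targets, reduction="none")

def consensus_weighted_loss(a, l, v, temperature=0.07):
    """Scale each sample's cross-modal contrastive loss by how well all modality pairs agree."""
    # Pairwise agreement in [0, 1] from cosine similarity of matched audio/language/vision pairs.
    agree_al = (F.cosine_similarity(a, l) + 1) / 2
    agree_av = (F.cosine_similarity(a, v) + 1) / 2
    agree_lv = (F.cosine_similarity(l, v) + 1) / 2
    # Consensus is high only when every pair agrees; one weak pair pulls the weight down.
    consensus = torch.minimum(torch.minimum(agree_al, agree_av), agree_lv)
    # Sum of pairwise contrastive losses, weighted by the (detached) consensus score.
    pair_losses = (info_nce(a, l, temperature)
                   + info_nce(a, v, temperature)
                   + info_nce(l, v, temperature))
    return (consensus.detach() * pair_losses).mean()
```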

Rohit Girdhar
Research Scientist

My current research focuses on understanding and generating multimodal data, using minimal human supervision.