paper-conference

The effectiveness of MAE pre-pretraining for billion-scale pretraining
Scaling up MAE pre-pretraining, followed by weakly supervised pretraining, leads to strong representations.
Omnivore: A Single Model for Many Visual Modalities
A single model for images, video, and single-view 3D.