How well do contrastively trained models transfer?

M. Moein Shariatnia, Rahim Entezari, Mitchell Wortsman, Olga Saukh, Ludwig Schmidt

Publication: Conference contribution › Paper › peer-reviewed

Abstract

There are two prevailing methods for pre-training on large datasets to learn transferable representations: 1) supervised pre-training on large but weakly-labeled datasets; 2) contrastive training on image-only data and on image-text pairs. While supervised pre-training learns good representations that can be transferred to a wide range of tasks, contrastively trained models such as CLIP have demonstrated unprecedented zero-shot transfer. In this work, we compare the transferability of the two aforementioned methods to multiple downstream tasks. The pre-training distributions we consider include YFCC, Conceptual Captions, and ImageNet-21K, while pre-training objectives range from supervised to SimCLR, CLIP, and SLIP. We observe that different pre-training methods with the same training source transfer similarly given their ImageNet accuracy.
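Since the abstract centers on the zero-shot transfer of contrastively trained models such as CLIP, the sketch below illustrates what such a zero-shot evaluation typically looks like. It is not the authors' code: it uses the open_clip library, and the checkpoint name, class labels, and image path are placeholders chosen for illustration.

# Illustrative sketch (not the paper's implementation): zero-shot transfer
# with a CLIP-style model via open_clip. Checkpoint, labels, and image
# path below are placeholders.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="openai"  # placeholder checkpoint
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

class_names = ["cat", "dog"]  # placeholder downstream-task labels
text = tokenizer([f"a photo of a {c}" for c in class_names])

with torch.no_grad():
    # Embed the class prompts and the query image in the shared space.
    text_features = model.encode_text(text)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # placeholder image
    image_features = model.encode_image(image)
    image_features /= image_features.norm(dim=-1, keepdim=True)

    # Cosine similarity between the image and each class prompt;
    # the highest-scoring prompt is the zero-shot prediction.
    logits = 100.0 * image_features @ text_features.T
    prediction = class_names[logits.argmax(dim=-1).item()]

print(prediction)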
Original language: English
Number of pages: 8
Publication status: Published - 23 July 2022
Event: ICML 2022 Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward - Baltimore, United States
Duration: 23 July 2022 - 23 July 2022
https://pretraining.github.io/

