Abstract
There are two prevailing methods for pre-training on large datasets to learn transferable representations: 1) supervised pre-training on large but weakly-labeled datasets; 2) contrastive training on image-only data or on image-text pairs. While supervised pre-training learns good representations that can be transferred to a wide range of tasks, contrastively trained models such as CLIP have demonstrated unprecedented zero-shot transfer. In this work we compare the transferability of these two approaches to multiple downstream tasks. The pre-training distributions we consider include YFCC, Conceptual Captions, and ImageNet-21K, while pre-training objectives range from supervised learning to SimCLR, CLIP, and SLIP. We observe that different pre-training methods with the same training source transfer similarly given their ImageNet accuracy.
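Transferability of this kind is commonly measured with protocols such as linear probing on frozen features. The sketch below illustrates that general idea only; it is not the paper's protocol. It assumes a torchvision ResNet-50 backbone and a hypothetical 10-class downstream task, and all names and hyperparameters are illustrative.

```python
# Minimal sketch of a linear-probe transfer evaluation: freeze a pre-trained
# backbone and train only a linear head on a downstream classification task.
# Assumptions: torchvision ResNet-50 backbone, hypothetical 10-class task.
import torch
import torch.nn as nn
from torchvision import models

# Load a pre-trained backbone and freeze it; only the linear head is trained.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = nn.Identity()          # expose 2048-d features
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

num_classes = 10                     # hypothetical downstream task
head = nn.Linear(2048, num_classes)
optimizer = torch.optim.SGD(head.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One linear-probe update on a batch from the downstream dataset."""
    with torch.no_grad():
        feats = backbone(images)     # frozen backbone features
    logits = head(feats)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The downstream accuracy of the trained head is then compared across backbones (e.g. supervised, SimCLR, CLIP, SLIP checkpoints) to gauge how well each pre-training method transfers.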
Original language | English
---|---
Number of pages | 8
Publication status | Published - 23 July 2022
Event | ICML 2022 Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward - Baltimore, United States. Duration: 23 July 2022 → 23 July 2022. https://pretraining.github.io/
Workshop
Workshop | ICML 2022 Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward
---|---
Country/Territory | United States
City | Baltimore
Period | 23/07/22 → 23/07/22
Internet address | https://pretraining.github.io/