Studying the impact of magnitude pruning on contrastive learning methods

Francesco Corti*, Rahim Entezari*, Davide Bacciu, Sara Hooker, Olga Saukh

*Corresponding author for this work

Publication: Conference contribution › Poster › peer-review

Abstract

We study the impact of different pruning techniques on the representation learned by deep neural networks trained with contrastive loss functions. Our work finds that at high sparsity levels, models trained with contrastive learning misclassify a higher number of examples than models trained with traditional cross-entropy loss. To understand this pronounced difference, we use metrics such as the number of PIEs (Hooker et al., 2019), the Q-Score (Kalibhat et al., 2022), and the PD score (Baldock et al., 2021) to measure the impact of pruning on the quality of the learned representation. Our analysis suggests that the schedule at which pruning is introduced matters. We find that the negative impact of sparsity on the quality of the learned representation is highest when pruning is introduced early in the training phase.
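For readers unfamiliar with the pruning technique named in the title, the sketch below shows global unstructured magnitude pruning in PyTorch. This is not the authors' code: the toy encoder and the 90% sparsity target are illustrative assumptions, and `torch.nn.utils.prune` is used only to demonstrate the general mechanism of zeroing out the smallest-magnitude weights.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy two-layer encoder standing in for a contrastively trained
# backbone (hypothetical; the paper's actual models are not listed here).
model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 64),
)

# Global magnitude pruning: zero out the 90% of weights with the
# smallest absolute value, pooled across all listed layers.
parameters_to_prune = [
    (m, "weight") for m in model.modules() if isinstance(m, nn.Linear)
]
prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.9,  # illustrative sparsity level, not the paper's setting
)

# Verify the achieved per-layer sparsity.
for module, name in parameters_to_prune:
    weight = getattr(module, name)
    print(f"{module}: {(weight == 0).float().mean().item():.1%} zeros")
```

Applying such a step once corresponds to pruning at a single point in training; per the abstract, where that point falls in the schedule matters, with early pruning degrading the learned representation most.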
Original language: English
Number of pages: 9
Publication status: Published - 14 Jun 2022
Event: Sparsity in Neural Networks - Advancing Understanding and Practice: SNN Workshop 2022 - Virtual
Duration: 13 Jul 2022 → 13 Jul 2022
https://www.sparseneural.net/home

Workshop

Workshop: Sparsity in Neural Networks - Advancing Understanding and Practice
Short title: SNN
Location: Virtual
Period: 13/07/22 → 13/07/22
Internet address: https://www.sparseneural.net/home
