Studying the impact of magnitude pruning on contrastive learning methods

Francesco Corti*, Rahim Entezari*, Davide Bacciu, Sara Hooker, Olga Saukh

*Corresponding author for this work

Research output: Contribution to conference › Poster › peer-review

Abstract

We study the impact of different pruning techniques on the representations learned by deep neural networks trained with contrastive loss functions. Our work finds that at high sparsity levels, contrastive learning results in a higher number of misclassified examples relative to models trained with traditional cross-entropy loss. To understand this pronounced difference, we use metrics such as the number of PIEs (Hooker et al., 2019), Q-Score (Kalibhat et al., 2022) and PD-Score (Baldock et al., 2021) to measure the impact of pruning on the quality of the learned representation. Our analysis suggests that the schedule on which pruning is applied matters: the negative impact of sparsity on the quality of the learned representation is highest when pruning is introduced early in the training phase.
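For readers unfamiliar with magnitude pruning, the sketch below shows one common way it is applied: globally removing the smallest-magnitude weights from a trained network using PyTorch's pruning utilities. The encoder architecture, layer selection, and 90% sparsity level are illustrative assumptions for this sketch, not the exact configuration studied in the poster.

    import torch.nn as nn
    import torch.nn.utils.prune as prune

    # Hypothetical encoder standing in for a contrastively trained backbone.
    encoder = nn.Sequential(
        nn.Linear(512, 256),
        nn.ReLU(),
        nn.Linear(256, 128),
    )

    # Collect (module, parameter_name) pairs for every prunable weight tensor.
    parameters_to_prune = [
        (m, "weight") for m in encoder.modules() if isinstance(m, nn.Linear)
    ]

    # Zero out the 90% of weights with the smallest absolute value,
    # ranked globally across all selected layers.
    prune.global_unstructured(
        parameters_to_prune,
        pruning_method=prune.L1Unstructured,
        amount=0.9,
    )

    # Fold the binary masks into the weight tensors to make pruning permanent.
    for module, name in parameters_to_prune:
        prune.remove(module, name)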
Original language: English
Number of pages: 9
Publication status: Published - 14 Jun 2022
Event: Sparsity in Neural Networks - Advancing Understanding and Practice: SNN Workshop 2022 - Virtual
Duration: 13 Jul 2022 → 13 Jul 2022
https://www.sparseneural.net/home

Workshop

Workshop: Sparsity in Neural Networks - Advancing Understanding and Practice
Abbreviated title: SNN
City: Virtual
Period: 13/07/22 → 13/07/22
Internet address: https://www.sparseneural.net/home
