Abstract
We study the impact of different pruning techniques on the representations learned by deep neural networks trained with contrastive loss functions. We find that at high sparsity levels, contrastive learning yields a larger number of misclassified examples than models trained with traditional cross-entropy loss. To understand this pronounced difference, we use metrics such as the number of PIEs (Hooker et al., 2019), Q-Score (Kalibhat et al., 2022), and PD-Score (Baldock et al., 2021) to measure the impact of pruning on the quality of the learned representation. Our analysis suggests that the schedule of the pruning method matters: the negative impact of sparsity on the quality of the learned representation is largest when pruning is introduced early in training.
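As a minimal sketch of the magnitude pruning referenced in the title, the snippet below applies unstructured L1 (magnitude) pruning with PyTorch's `torch.nn.utils.prune`. The encoder architecture and the 90% sparsity level are illustrative assumptions for the example, not the paper's exact experimental setup.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Illustrative encoder; the paper's actual architectures may differ.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 128),
)

# Magnitude (L1) pruning: zero out the smallest-magnitude weights.
# High sparsity (e.g., 90%) is the regime where the abstract reports the
# largest gap between contrastive and cross-entropy training.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.9)

# Make the pruning permanent (removes the reparametrization hooks).
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")

# Report the fraction of weights that are now exactly zero.
linears = [m for m in model.modules() if isinstance(m, nn.Linear)]
total = sum(m.weight.numel() for m in linears)
zeros = sum((m.weight == 0).sum().item() for m in linears)
print(f"Sparsity: {zeros / total:.2%}")
```

Whether this masking step runs early or late in training is exactly the scheduling choice the abstract identifies as decisive for representation quality.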
Original language | English |
---|---|
Number of pages | 9 |
Publication status | Published - 14 June 2022 |
Event | Sparsity in Neural Networks - Advancing Understanding and Practice: SNN Workshop 2022 - Virtual. Duration: 13 July 2022 → 13 July 2022. https://www.sparseneural.net/home |
Workshop
Workshop | Sparsity in Neural Networks - Advancing Understanding and Practice |
---|---|
Abbreviated title | SNN |
Location | Virtual |
Period | 13/07/22 → 13/07/22 |
Internet address | https://www.sparseneural.net/home |
Projects
- 1 Ongoing
- FWF - DENISE - Doctoral School for Dependable Electronic-Based Systems
Mütze, A. (Participant (Co-Investigator)), Saukh, O. (Participant (Co-Investigator)), Römer, K. U. (Participant (Co-Investigator)), Boano, C. A. (Participant (Co-Investigator)), Corti, F. (Participant (Co-Investigator)), Schuß, M. (Participant (Co-Investigator)), Mohamed Hydher, M. H. (Participant (Co-Investigator)) & Dawara, A. A. (Participant (Co-Investigator))
1/05/22 → 30/04/26
Project: Research project