This paper examines the impact of static sparsity on the robustness of a trained network to weight perturbations, data corruption, and adversarial examples. We show that, up to a certain sparsity level, achieved by increasing network width and depth while keeping the network capacity fixed, sparsified networks consistently match and often outperform their initially dense counterparts. At very high sparsity, robustness and accuracy decline together due to loose connectivity between network layers. Our findings show that the rapid robustness drop under network compression observed in the literature is caused by reduced network capacity rather than by sparsity itself.
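As a rough illustration of the capacity-matched setup the abstract describes, the sketch below (a hypothetical PyTorch example, not the paper's code; the helper names `width_for_sparsity` and `apply_static_mask` are our own) widens a layer so that after static magnitude pruning its nonzero-parameter count roughly matches the original dense layer:

```python
# Minimal sketch (assumes PyTorch is installed): static magnitude pruning
# while keeping the number of *nonzero* parameters fixed by widening the layer.
# Helper names are illustrative assumptions, not from the paper.
import torch
import torch.nn as nn

def width_for_sparsity(base_width: int, sparsity: float) -> int:
    """Widen a square layer so that, after pruning a fraction `sparsity`
    of its weights, the nonzero count roughly matches the dense base layer:
    width^2 * (1 - sparsity) ~= base_width^2."""
    return round(base_width / (1.0 - sparsity) ** 0.5)

def apply_static_mask(linear: nn.Linear, sparsity: float) -> torch.Tensor:
    """Zero out the smallest-magnitude weights and return the binary mask.
    The mask is static: it stays fixed, so reapply it after optimizer steps."""
    w = linear.weight.data
    k = int(sparsity * w.numel())  # number of weights to prune
    threshold = w.abs().flatten().kthvalue(k).values if k > 0 else -1.0
    mask = (w.abs() > threshold).float()
    linear.weight.data.mul_(mask)
    return mask

base_width, sparsity = 128, 0.9
wide = width_for_sparsity(base_width, sparsity)  # ~405 for 90% sparsity
layer = nn.Linear(wide, wide)
mask = apply_static_mask(layer, sparsity)
print(f"width {wide}, nonzeros {int(mask.sum())} vs dense {base_width ** 2}")
```

Under this construction, sparsity can be increased while the nonzero-parameter budget stays constant, which is the regime in which the paper compares sparse and dense networks.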
Publication status: Published - 8 Jul 2021
Event: Sparsity in Neural Networks - Advancing Understanding and Practice (SNN Workshop 2021), Virtual, 8 Jul 2021 to 9 Jul 2021