Adversarially robust spiking neural networks through conversion

Ozan Özdenizci, Robert Legenstein

Publication: Working paper › Preprint

Abstract

Spiking neural networks (SNNs) provide an energy-efficient alternative to a variety of artificial neural network (ANN) based AI applications. As progress in neuromorphic computing with SNNs expands their use in applications, the problem of adversarial robustness of SNNs becomes more pronounced. In contrast to the widely explored end-to-end adversarial training based solutions, we address the limited progress in scalable robust SNN training methods by proposing an adversarially robust ANN-to-SNN conversion algorithm. Our method provides an efficient approach to embrace various computationally demanding robust learning objectives that have been proposed for ANNs. During a post-conversion robust finetuning phase, our method adversarially optimizes both layer-wise firing thresholds and synaptic connectivity weights of the SNN to maintain the robustness gains transferred from the pre-trained ANN. We perform experimental evaluations in numerous adaptive adversarial settings that account for the spike-based operation dynamics of SNNs, and show that our approach yields a scalable state-of-the-art solution for adversarially robust deep SNNs with low latency.
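
The abstract describes the approach only at a high level. As a rough illustration of the two stages it mentions (copying weights from a pre-trained ANN into an SNN, then adversarially finetuning both layer-wise firing thresholds and synaptic weights), a minimal PyTorch-style sketch follows. It assumes a fully connected, rate-coded SNN with surrogate-gradient training, one scalar threshold per layer, and a standard PGD inner attack; all names (SpikeFn, LIFLayer, ConvertedSNN, convert_ann_to_snn, pgd_attack, robust_finetune_step) are hypothetical and not taken from the paper or its released code.

```python
# Conceptual sketch only -- not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpikeFn(torch.autograd.Function):
    """Heaviside spike nonlinearity with a rectangular surrogate gradient."""

    @staticmethod
    def forward(ctx, v, threshold):
        ctx.save_for_backward(v, threshold)
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_out):
        v, threshold = ctx.saved_tensors
        surrogate = ((v - threshold).abs() < 0.5).float()
        # Gradients w.r.t. membrane potential and the scalar threshold.
        return grad_out * surrogate, -(grad_out * surrogate).sum()


class LIFLayer(nn.Module):
    """Linear synapses + leaky integrate-and-fire neurons with a trainable
    layer-wise firing threshold (scalar per layer, an assumption here)."""

    def __init__(self, in_features, out_features, tau=0.5):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.threshold = nn.Parameter(torch.tensor(1.0))
        self.tau = tau

    def forward(self, x_t, v):
        v = self.tau * v + self.fc(x_t)          # membrane update
        spikes = SpikeFn.apply(v, self.threshold)
        v = v - spikes * self.threshold          # soft reset
        return spikes, v


class ConvertedSNN(nn.Module):
    """Low-latency rate-coded SNN whose weights are copied from an ANN."""

    def __init__(self, sizes, timesteps=8):
        super().__init__()
        self.layers = nn.ModuleList(
            LIFLayer(a, b) for a, b in zip(sizes[:-1], sizes[1:]))
        self.timesteps = timesteps

    def forward(self, x):
        states = [torch.zeros(x.size(0), layer.fc.out_features, device=x.device)
                  for layer in self.layers]
        out = 0.0
        for _ in range(self.timesteps):          # static input at every timestep
            h = x
            for i, layer in enumerate(self.layers):
                h, states[i] = layer(h, states[i])
            out = out + h
        return out / self.timesteps              # average spike rate as logits


def convert_ann_to_snn(ann_linear_layers, snn):
    """Copy pre-trained (robustly trained) ANN weights into the SNN."""
    with torch.no_grad():
        for ann_fc, snn_layer in zip(ann_linear_layers, snn.layers):
            snn_layer.fc.weight.copy_(ann_fc.weight)
            snn_layer.fc.bias.copy_(ann_fc.bias)


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=5):
    """L_inf PGD; gradients flow through the surrogate spike function."""
    x_adv = (x.detach() + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv.detach() + alpha * grad.sign()).clamp(x - eps, x + eps).clamp(0, 1)
    return x_adv.detach()


def robust_finetune_step(snn, optimizer, x, y):
    """One post-conversion step: the optimizer holds both synaptic weights and
    the layer-wise thresholds, so both are adversarially optimized."""
    x_adv = pgd_attack(snn, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(snn(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, a typical workflow would call convert_ann_to_snn once with the linear layers of a robustly pre-trained ANN, build an optimizer over snn.parameters() (which includes the per-layer thresholds), and then call robust_finetune_step per minibatch.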
Original language: English
Publisher: arXiv
Number of pages: 20
Publication status: Published - 2023
