Escaping Adversarial Attacks with Egyptian Mirrors

Publication: Conference contribution › Paper › peer review

Abstract

Adversarial robustness has received significant attention over the past years due to its critical practical role. Complementary to the existing literature on adversarial training, we explore weight-space ensembles of independently trained models. We propose a defense against adversarial examples which takes advantage of the latest empirical findings on linear mode connectivity of overparameterized models modulo permutation invariance. The Egyptian Mirrors defense escapes adversarial attacks by moving along linear paths between pairwise aligned, functionally diverse models, while frequently and arbitrarily changing the ensembling direction. We evaluate the proposed defense using adversarial examples generated by FGSM and PGD attacks and show improvements of up to 8% and 33% in test accuracy on 2-layer MLP and VGG11 architectures trained on the GTSRB and CIFAR10 datasets, respectively.
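The core mechanism described in the abstract — interpolating in weight space between two permutation-aligned models and frequently re-sampling the interpolation point — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the toy weight dictionaries, and the uniform re-sampling of the coefficient are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def interpolate_weights(theta_a, theta_b, alpha):
    """Return a point on the linear path (1 - alpha) * theta_a + alpha * theta_b
    between two models' parameters, assumed already permutation-aligned."""
    return {k: (1.0 - alpha) * theta_a[k] + alpha * theta_b[k]
            for k in theta_a}

def egyptian_mirrors_step(theta_a, theta_b):
    # Hypothetical sketch: before serving a query, sample a fresh
    # interpolation coefficient so the effective model keeps moving
    # along the path, degrading an attacker's gradient estimates.
    alpha = rng.uniform(0.0, 1.0)
    return interpolate_weights(theta_a, theta_b, alpha)

# Toy parameter dictionaries standing in for two aligned networks.
theta_a = {"w": np.zeros((2, 2)), "b": np.zeros(2)}
theta_b = {"w": np.ones((2, 2)), "b": np.ones(2)}

theta = egyptian_mirrors_step(theta_a, theta_b)
```

With the toy all-zeros and all-ones weights, every entry of the interpolated model equals the sampled coefficient, which makes the linear-path behavior easy to inspect.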
Original language: English
Number of pages: 6
Publication status: Published - 6 Oct 2023
Event: 2nd ACM Workshop on Data Privacy and Federated Learning Technologies for Mobile Edge Network: MobiCom 2023 - Madrid, Spain
Duration: 2 Oct 2023 – 6 Oct 2023
https://fededge2023.github.io

Workshop

Workshop: 2nd ACM Workshop on Data Privacy and Federated Learning Technologies for Mobile Edge Network
Short title: FedEdge
Country/Territory: Spain
City: Madrid
Period: 2/10/23 – 6/10/23
Internet address: https://fededge2023.github.io

Fields of Expertise

  • Information, Communication & Computing
