Escaping Adversarial Attacks with Egyptian Mirrors

Research output: Contribution to conference › Paper › peer-review

Abstract

Adversarial robustness has received significant attention over the past years due to its critical practical role. Complementary to the existing literature on adversarial training, we explore weight-space ensembles of independently trained models. We propose a defense against adversarial examples which takes advantage of the latest empirical findings on linear mode connectivity of overparameterized models modulo permutation invariance. The Egyptian Mirrors defense escapes adversarial attacks by moving along linear paths between pairwise aligned, functionally diverse models, while frequently and arbitrarily changing the ensembling direction. We evaluate the proposed defense using adversarial examples generated by FGSM and PGD attacks and show improvements of up to 8% and 33% test accuracy on 2-layer MLP and VGG11 architectures trained on the GTSRB and CIFAR10 datasets, respectively.
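The core mechanism described in the abstract, moving along a linear path between two pairwise aligned models and re-sampling the ensembling direction at every query, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the toy 2-layer MLP, the random weights standing in for independently trained (and already permutation-aligned) models, and the uniform sampling of the interpolation coefficient are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)


def interpolate_weights(weights_a, weights_b, alpha):
    """Linearly interpolate two aligned weight sets: (1 - alpha) * A + alpha * B."""
    return [(1.0 - alpha) * wa + alpha * wb for wa, wb in zip(weights_a, weights_b)]


def mirrored_predict(x, weights_a, weights_b, rng):
    """Forward pass of a tiny 2-layer MLP whose weights are re-sampled along
    the A-B line on every call, so an attacker never queries a fixed model."""
    alpha = rng.uniform(0.0, 1.0)   # fresh, arbitrary ensembling direction per query
    w1, w2 = interpolate_weights(weights_a, weights_b, alpha)
    h = np.maximum(x @ w1, 0.0)     # ReLU hidden layer
    return h @ w2                   # class logits

# Toy stand-ins for two independently trained, permutation-aligned models
# (identical shapes are required for weight-space interpolation).
wa = [rng.normal(size=(4, 8)), rng.normal(size=(8, 3))]
wb = [rng.normal(size=(4, 8)), rng.normal(size=(8, 3))]

x = rng.normal(size=(1, 4))
logits = mirrored_predict(x, wa, wb, rng)  # shape (1, 3)
```

The defense relies on linear mode connectivity: after permutation alignment, every point on the segment between the two weight vectors is itself a low-loss model, so randomizing the interpolation coefficient yields a moving target without sacrificing clean accuracy.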
Original language: English
Number of pages: 6
Publication status: Published - 6 Oct 2023
Event: 2nd ACM Workshop on Data Privacy and Federated Learning Technologies for Mobile Edge Network, MobiCom 2023 - Madrid, Spain
Duration: 2 Oct 2023 - 6 Oct 2023
https://fededge2023.github.io

Workshop

Workshop: 2nd ACM Workshop on Data Privacy and Federated Learning Technologies for Mobile Edge Network
Abbreviated title: FedEdge
Country/Territory: Spain
City: Madrid
Period: 2/10/23 - 6/10/23
Internet address: https://fededge2023.github.io

Fields of Expertise

  • Information, Communication & Computing

