Securing Federated Learning Against Active Reconstruction Attacks

Tre Jeter, Truc Nguyen, Raed Alharbi, Jung Taek Seo, My Thai

Research output: Contribution to journal › Article › peer-review

Abstract

Federated Learning (FL) has amassed notable attention for its ability to preserve user privacy while retaining model training efficiency. Due to this potential, FL has been integrated into many domains, such as healthcare, finance, law, and industrial engineering, where data cannot be easily exchanged because of sensitive information and strict privacy laws. However, current research has indicated that FL protocols are easily compromised by active data reconstruction attacks mounted by actively dishonest servers. By maliciously modifying the global model parameters, an actively dishonest server can obtain a direct copy of users' private data via gradient inversion. This class of attacks is highly underexplored and remains a major challenge due to its strong threat model. In this paper, we propose OASIS, a scalable and modality-agnostic defense based on data augmentation that counteracts active data reconstruction attacks while preserving model performance. To generalize our defense, we uncover the intuition behind gradient inversion that enables these attacks and theoretically establish the conditions under which the defense can be considered robust regardless of attack design. From this, we formulate our defense with data augmentation and illustrate its ability to undermine the attack principle. We evaluate OASIS on five real-world datasets, two image-based (ImageNet and CIFAR100) and three text-based (Wikitext, Stack Overflow, and Shakespeare), which span diverse use cases such as vision tasks and language modeling. Comprehensive evaluations on these datasets exhibit the efficacy of OASIS and highlight its feasibility as a solution.
Original language: American English
Number of pages: 28
Journal: ACM Transactions on Internet Technology
DOIs
State: Published - 2025

NREL Publication Number

  • NREL/JA-2C00-94913

Keywords

  • deep neural networks
  • dishonest servers
  • federated learning
  • privacy
  • reconstruction attacks
