Abstract
Federated Learning (FL) has garnered significant attention for its potential to protect user privacy while enhancing model training efficiency. Consequently, FL has found use in various domains, from healthcare to industrial engineering, especially where data cannot be easily exchanged due to sensitive information or privacy laws. However, recent research has demonstrated that FL protocols can be easily compromised by active reconstruction attacks executed by dishonest servers. These attacks involve the malicious modification of global model parameters, allowing the server to obtain a verbatim copy of users' private data by inverting their gradient updates. Countering this class of attacks remains a crucial challenge due to the strong threat model. In this paper, we propose a defense mechanism, named OASIS, based on image augmentation that effectively counteracts active reconstruction attacks while preserving model performance. We first uncover the core principle of gradient inversion that enables these attacks and theoretically identify the main conditions under which the defense is robust regardless of the attack strategy. We then construct our defense around image augmentation, showing that it can undermine the attack principle. Comprehensive evaluations demonstrate the efficacy of the defense mechanism, highlighting its feasibility as a practical solution.
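To make the attack-and-defense intuition above concrete, the sketch below illustrates the general idea in PyTorch: a client augments its images before computing the gradient update it shares with the server, so a gradient-inverting server can at best reconstruct the augmented view rather than the original private sample. This is a minimal illustration only, not the paper's OASIS implementation; the toy model, the `augment` pipeline, and the `local_update` helper are all hypothetical choices.

```python
import torch
import torch.nn as nn
import torchvision.transforms as T

torch.manual_seed(0)

# Toy classifier standing in for the shared global model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

# Hypothetical augmentation pipeline; the transforms OASIS actually uses may differ.
augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomResizedCrop(32, scale=(0.6, 1.0)),
])

def local_update(images, labels):
    """One client step: augment first, then compute the gradient that is
    shared with the (potentially dishonest) server. Inverting this gradient
    recovers the augmented view, not the raw private image."""
    augmented = augment(images)
    loss = loss_fn(model(augmented), labels)
    return torch.autograd.grad(loss, model.parameters())

# Random tensors standing in for one client's private batch.
x = torch.rand(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
grads = local_update(x, y)
print([g.shape for g in grads])  # one gradient tensor per model parameter
```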
| Original language | American English |
|---|---|
| Pages | 1004-1015 |
| Number of pages | 12 |
| DOIs | |
| State | Published - 2024 |
| Event | IEEE International Conference on Distributed Computing Systems, Jersey City, New Jersey, USA. Duration: 16 Jul 2024 → 19 Jul 2024 |
Conference
| Conference | IEEE International Conference on Distributed Computing Systems |
|---|---|
| City | Jersey City, New Jersey, USA |
| Period | 16/07/24 → 19/07/24 |
NREL Publication Number
- NREL/CP-2C00-88824
Keywords
- deep neural networks
- dishonest servers
- federated learning
- privacy
- reconstruction attack