Theoretically Unmasking Inference Attacks Against LDP-Protected Clients in Federated Vision Models

Research output: Contribution to conference (Paper)

Abstract

Federated Learning (FL) enables collaborative learning among clients via a coordinating server without direct data sharing, and is therefore often perceived as privacy-preserving. However, recent studies on Membership Inference Attacks (MIAs) have challenged this notion, showing high success rates against unprotected training data. While local differential privacy (LDP) is widely regarded as a gold standard for privacy protection in data analysis, most studies on MIAs either neglect LDP or fail to provide theoretical guarantees for attack success against LDP-protected data. To address this gap, we derive theoretical lower bounds on the success rates of low-polynomial-time MIAs that exploit vulnerabilities in fully connected or self-attention layers, regardless of the LDP mechanism used. We establish that even when data are protected by LDP, privacy risks persist, depending on the privacy budget. Practical evaluations on models such as ResNet and Vision Transformer confirm considerable privacy risks, revealing that the noise required to mitigate these attacks significantly degrades model utility.
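The privacy-utility trade-off the abstract describes can be illustrated with a minimal sketch of the standard epsilon-LDP Laplace mechanism (a generic illustration, not the specific mechanism analyzed in the paper): each client perturbs its value locally before sharing it, and a tighter privacy budget epsilon forces a larger noise scale, which is exactly the utility cost noted above.

```python
import math
import random

def laplace_ldp(value: float, sensitivity: float, epsilon: float) -> float:
    """Perturb a scalar with Laplace noise calibrated to sensitivity/epsilon,
    the classic epsilon-LDP Laplace mechanism. Smaller epsilon (stronger
    privacy) yields a larger noise scale and hence lower utility."""
    scale = sensitivity / epsilon
    # log(U1/U2) of two iid Uniform(0,1) draws is Laplace(0, 1) distributed:
    # -log(U) is Exp(1), and the difference of two iid Exp(1) is Laplace(0, 1).
    noise = scale * math.log(random.random() / random.random())
    return value + noise
```

For example, with sensitivity 1, epsilon = 0.1 produces noise with scale 10 (heavy perturbation), while epsilon = 10 produces scale 0.1 (light perturbation); the paper's point is that even the heavily perturbed regime leaves a quantifiable lower bound on MIA success.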
Original language: American English
Number of pages: 31
State: Published - 2025
Event: International Conference on Machine Learning - Vancouver Convention Center
Duration: 13 Jul 2025 - 19 Jul 2025

Conference

Conference: International Conference on Machine Learning
City: Vancouver Convention Center
Period: 13/07/25 - 19/07/25

NREL Publication Number

  • NREL/CP-2C00-94904

Keywords

  • differential privacy
  • federated learning
  • membership inference attacks
  • trustworthy AI
