Questionable Fairness in Federated Learning

Raed Alharbi, Truc Nguyen, Nooshin Yousefzadeh, Ahmed Aljohani

Research output: Contribution to conference › Paper

Abstract

Federated learning (FL) has emerged as a powerful framework for training deep learning models across numerous distributed clients, in which a central server distributes and aggregates model updates without accessing clients' raw data, thereby preserving privacy. However, this decentralized approach introduces significant fairness challenges. Existing research shows that non-i.i.d. data distributions among clients can lead to biased global models, particularly affecting decisions based on sensitive attributes. This paper rigorously investigates these fairness issues by analyzing how disparities in client data distributions affect model fairness. We conduct experiments with various models and datasets to evaluate and mitigate biases, and we propose novel data partitioning strategies that more accurately reflect real-world client data distributions. Our findings show that FL is highly vulnerable to fairness problems because of the non-i.i.d. nature of client data. The code for the experiments is available at: https://github.com/raed19/Questionable-Fairness-in-Federated-Learning.
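
The abstract does not spell out the proposed partitioning strategies. As background only, a widely used way to simulate non-i.i.d. (label-skewed) client data in FL experiments is Dirichlet-based partitioning; the Python sketch below assumes a generic NumPy setting, and the function name dirichlet_partition and its parameters are illustrative rather than taken from the paper.

    import numpy as np

    def dirichlet_partition(labels, num_clients, alpha=0.5, seed=0):
        """Split sample indices across clients with label-skewed (non-i.i.d.) proportions.

        Illustrative sketch, not the paper's method. Smaller alpha -> stronger
        per-client label skew; large alpha approaches an i.i.d. split.
        """
        rng = np.random.default_rng(seed)
        num_classes = int(labels.max()) + 1
        client_indices = [[] for _ in range(num_clients)]
        for c in range(num_classes):
            idx_c = np.where(labels == c)[0]
            rng.shuffle(idx_c)
            # Draw this class's per-client share from a Dirichlet prior.
            proportions = rng.dirichlet(alpha * np.ones(num_clients))
            cuts = (np.cumsum(proportions) * len(idx_c)).astype(int)[:-1]
            for client_id, shard in enumerate(np.split(idx_c, cuts)):
                client_indices[client_id].extend(shard.tolist())
        return [np.array(ix) for ix in client_indices]

    # Example: 10,000 samples, 10 classes, 20 clients, alpha = 0.1 (strong skew).
    labels = np.random.randint(0, 10, size=10_000)
    splits = dirichlet_partition(labels, num_clients=20, alpha=0.1)

With alpha around 0.1, most clients are dominated by a few classes, which is the kind of skew the abstract identifies as a driver of biased global models.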
Original language: American English
Pages: 31-36
Number of pages: 6
DOIs
State: Published - 2025
Event: 8th International Conference on Data Science and Machine Learning (CDMA2025) - Riyadh, KSA
Duration: 16 Feb 2025 - 17 Feb 2025

Conference

Conference: 8th International Conference on Data Science and Machine Learning (CDMA2025)
City: Riyadh, KSA
Period: 16/02/25 - 17/02/25

NREL Publication Number

  • NREL/CP-2C00-93183

Keywords

  • fairness
  • federated learning
  • machine learning
