Abstract
Federated learning (FL) has emerged as a powerful framework for training deep learning models across numerous distributed clients: a central server distributes the model and aggregates client updates without ever accessing clients' raw data, thereby preserving privacy. However, this decentralized approach introduces significant fairness challenges. Existing research shows that non-i.i.d. data distributions among clients can lead to biased global models, particularly in decisions involving sensitive attributes. This paper rigorously investigates these fairness issues by analyzing how disparities in client data distributions affect the fairness of the global model. We conduct experiments across several models and datasets to evaluate and mitigate bias, and we propose novel data partitioning strategies that more accurately reflect real-world client data distributions. Our findings show that FL is highly vulnerable to fairness problems because of the non-i.i.d. property of client data. The code for our experiments is available at: https://github.com/raed19/Questionable-Fairness-in-Federated-Learning.
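The abstract does not specify the paper's partitioning strategies, but a common way to simulate the non-i.i.d. client distributions it describes is Dirichlet label partitioning. The sketch below is a generic illustration of that technique, not the authors' method; the function name `dirichlet_partition` and all parameter values are illustrative assumptions.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients with per-class proportions
    drawn from a Dirichlet(alpha) distribution. Smaller alpha produces
    more skewed (more strongly non-i.i.d.) client datasets."""
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Fraction of class c that each client receives.
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client_id, shard in enumerate(np.split(idx, cuts)):
            client_indices[client_id].extend(shard.tolist())
    return [np.asarray(ix) for ix in client_indices]

# Illustrative usage: 10,000 samples over 10 classes, 20 clients, strong skew.
labels = np.random.default_rng(1).integers(0, 10, size=10_000)
partitions = dirichlet_partition(labels, num_clients=20, alpha=0.1)
print(sorted(len(p) for p in partitions))  # highly uneven client sizes
```

Lowering `alpha` concentrates each class on a few clients, the skew regime in which fairness degradation in the aggregated global model is typically most pronounced.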
Original language | American English |
---|---|
Pages | 31-36 |
Number of pages | 6 |
DOIs | |
State | Published - 2025 |
Event | 8th International Conference on Data Science and Machine Learning (CDMA2025) - Riyadh, KSA; Duration: 16 Feb 2025 → 17 Feb 2025 |
Conference
Conference | 8th International Conference on Data Science and Machine Learning (CDMA2025) |
---|---|
City | Riyadh, KSA |
Period | 16/02/25 → 17/02/25 |
NREL Publication Number
- NREL/CP-2C00-93183
Keywords
- fairness
- federated learning
- machine learning