Abstract
Federated machine learning (FL) is gaining significant popularity for developing cybersecurity solutions in power grids because of its ability to support decentralized data handling at local devices, its privacy preservation, and its low bandwidth requirements. However, evolving adversarial machine learning (AML) threats raise significant concerns for the cybersecurity of FL architectures. The FL-based split neural network (SplitNN) achieves high performance through the decentralized training of local neural network models while preserving data privacy across multiple entities. In this paper, we propose a methodology for evaluating the performance of a vertical FL-based anomaly detector against different types of AML attacks, including denial-of-service attacks, adversarial data injection attacks, and replay attacks on the trained local models deployed in the grid network. For a case study, we consider the modified IEEE 13-bus system, and we develop SplitNN-based binary and multiclass classification models to detect, locate, and identify different types of data integrity attacks on volt-watt control, using two pooling mechanisms: maximum pooling and average pooling. Our experimental results, evaluated through standard performance metrics, reveal that the severity of these AML attacks varies with the integrated pooling mechanism, the type of classification model, and the nature of the cyberattack. Further, the AML attacks negatively impact the per-sample prediction time of the pretrained SplitNN during online testing.
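For readers unfamiliar with the split learning setup described in the abstract, the sketch below illustrates at a high level how a SplitNN divides a classifier between a local (client) model and a central (server) model, with the pooling layer held on the client side. This is a minimal, illustrative PyTorch sketch under assumed layer sizes, tensor shapes, and module names (ClientModel, ServerModel); it is not the architecture or configuration used in the paper.

```python
# Minimal SplitNN-style sketch, assuming PyTorch. Layer sizes, names, and the
# choice of features are illustrative only and not taken from the paper.
import torch
import torch.nn as nn

class ClientModel(nn.Module):
    """Lower layers kept at the local entity; only activations are shared."""
    def __init__(self, in_channels=1, pooling="max"):
        super().__init__()
        self.conv = nn.Conv1d(in_channels, 16, kernel_size=3, padding=1)
        # The paper compares two pooling mechanisms; both are sketched here.
        self.pool = nn.MaxPool1d(2) if pooling == "max" else nn.AvgPool1d(2)

    def forward(self, x):
        return self.pool(torch.relu(self.conv(x)))

class ServerModel(nn.Module):
    """Upper layers held by the coordinating entity; produces class scores."""
    def __init__(self, in_features=16 * 8, num_classes=2):
        super().__init__()
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(in_features, num_classes))

    def forward(self, smashed):
        return self.head(smashed)

# Forward pass: raw measurements stay local; only the intermediate ("smashed")
# activations cross the network boundary, which is the interface that AML
# attacks such as replay or adversarial injection can target.
client, server = ClientModel(pooling="max"), ServerModel()
x = torch.randn(4, 1, 16)   # 4 samples, 16 measurements each (illustrative)
smashed = client(x)          # shape (4, 16, 8) after pooling
logits = server(smashed)     # binary classification scores
```

Swapping `pooling="max"` for `pooling="avg"` in this sketch corresponds to the two pooling mechanisms compared in the abstract.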
| Original language | American English |
| --- | --- |
| Number of pages | 11 |
| State | Published - 2024 |
| Event | Resilience Week 2024, Austin, Texas; Duration: 3 Dec 2024 → 5 Dec 2024 |
Conference
| Conference | Resilience Week 2024 |
| --- | --- |
| City | Austin, Texas |
| Period | 3/12/24 → 5/12/24 |
NREL Publication Number
- NREL/CP-5T00-89951
Keywords
- adversarial threats
- cybersecurity
- federated machine learning
- power grid
- split neural network