Abstract
Modernization of energy systems has led to increased interactions among multiple critical infrastructures and diverse stakeholders, making the challenge of operational decision making more complex and, at times, beyond the cognitive capabilities of human operators. State-of-the-art machine learning and deep learning approaches show promise for supporting users with complex decision-making challenges, such as those occurring in our rapidly transforming cyber-physical energy systems. However, successful adoption of data-driven decision support technology for critical infrastructure will depend on the ability of these technologies to be trustworthy and contextually interpretable. In this paper, we investigate the feasibility of implementing explainable artificial intelligence (XAI) for interpretable detection of cyberattacks in the energy system. Leveraging a proof-of-concept simulation use case of detecting a data falsification attack on a photovoltaic system using the XGBoost algorithm, we demonstrate how Local Interpretable Model-Agnostic Explanations (LIME), one XAI approach, can help provide contextual and actionable interpretation of cyberattack detection.
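To illustrate the pipeline the abstract describes, below is a minimal Python sketch of the general pattern: an XGBoost classifier trained to flag falsified PV measurements, with a LIME tabular explainer attributing a single detection to its input features. The feature names, synthetic data, and falsification model (inflating reported DC current relative to irradiance) are illustrative assumptions, not the paper's actual dataset or attack model.

```python
# Hypothetical sketch of XGBoost detection + LIME explanation for a PV
# data falsification attack. Features and data are synthetic stand-ins.
import numpy as np
from xgboost import XGBClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["dc_voltage", "dc_current", "irradiance", "module_temp"]
loc, scale = [400.0, 8.0, 800.0, 45.0], [10.0, 0.5, 50.0, 3.0]

# Benign telemetry: measurements roughly consistent with irradiance.
X_benign = rng.normal(loc=loc, scale=scale, size=(500, 4))
# Falsified telemetry: DC current inflated relative to irradiance,
# a simple stand-in for a data falsification attack.
X_attack = rng.normal(loc=loc, scale=scale, size=(250, 4))
X_attack[:, 1] *= rng.uniform(1.4, 1.8, size=250)

X = np.vstack([X_benign, X_attack])
y = np.concatenate([np.zeros(500, dtype=int), np.ones(250, dtype=int)])

model = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
model.fit(X, y)

# LIME fits a local surrogate around one prediction and reports each
# feature's contribution toward the "attack" class.
explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["benign", "attack"], mode="classification")
exp = explainer.explain_instance(
    X_attack[0], model.predict_proba, num_features=4)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")
```

In a run like this, the explanation would typically weight `dc_current` most heavily, which is the kind of contextual, per-detection attribution the paper argues operators need to act on an alarm.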
| Original language | American English |
| --- | --- |
| Number of pages | 5 |
| DOIs | |
| State | Published - 2024 |
| Event | International Conference on Computing, Networking and Communications (ICNC 2024) - Big Island, Hawaii, USA. Duration: 19 Feb 2024 → 22 Feb 2024 |
Conference
| Conference | International Conference on Computing, Networking and Communications (ICNC 2024) |
| --- | --- |
| City | Big Island, Hawaii, USA |
| Period | 19/02/24 → 22/02/24 |
NREL Publication Number
- NREL/CP-5T00-90743
Keywords
- computational modeling
- critical infrastructure
- data models
- decision making
- deep learning
- explainable AI
- photovoltaic systems