Abstract
Background/Objectives: The resilience of safety-critical systems is gaining importance as cyber and physical threats to critical infrastructure rise. Traditional static resilience metrics may not capture dynamic system states, leading to inaccurate assessments and ineffective responses to cyber threats. This work aims to develop a data-driven, adaptive method for learning a resilience metric. Methods: We propose a data-driven approach that uses inverse reinforcement learning (IRL) to learn a single, adaptive resilience metric by inferring a reward function from expert control actions. Unlike previous approaches based on static weights or fuzzy logic, this work applies adversarial inverse reinforcement learning (AIRL), training a generator and a discriminator in parallel to learn the reward structure and derive an optimal policy. Results: The proposed approach is evaluated on multiple scenarios: optimal communication network rerouting, power distribution network reconfiguration, and cyber-physical restoration of critical loads on the IEEE 123-bus system. Conclusions: The adaptive, learned resilience metric enables faster critical load restoration compared with conventional RL approaches.
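The generator/discriminator training loop the abstract describes can be illustrated with a minimal toy sketch of the AIRL idea. This is not the paper's implementation: the environment (a 5-state chain), the tabular reward `theta`, the myopic softmax generator, and all hyperparameters are illustrative assumptions. The discriminator uses the AIRL form D(s,a) = exp(f(s,a)) / (exp(f(s,a)) + pi(a|s)), and the two players are updated in alternation, as in the adversarial setup the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2   # toy 1-D chain; actions: 0 = left, 1 = right

# Expert demonstrations (hypothetical): the expert always moves right.
expert = [(s, 1) for s in range(n_states) for _ in range(20)]

# Learned reward f(s, a) = theta[s, a], the quantity AIRL recovers.
theta = np.zeros((n_states, n_actions))

def policy(theta):
    """Generator: entropy-regularized (softmax) policy over the learned reward."""
    z = np.exp(theta - theta.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

lr = 0.1
for step in range(500):
    pi = policy(theta)
    # Generator rollouts: sample actions from the current policy in each state.
    gen = [(s, rng.choice(n_actions, p=pi[s])) for s in range(n_states) for _ in range(20)]
    # Discriminator update, AIRL form: D = exp(f) / (exp(f) + pi(a|s)).
    for s, a in expert:          # push D toward 1 on expert data
        D = np.exp(theta[s, a]) / (np.exp(theta[s, a]) + pi[s, a])
        theta[s, a] += lr * (1.0 - D)
    for s, a in gen:             # push D toward 0 on generated data
        D = np.exp(theta[s, a]) / (np.exp(theta[s, a]) + pi[s, a])
        theta[s, a] -= lr * D

pi = policy(theta)
print(pi[:, 1])  # probability of the expert's action in each state
```

After training, the learned reward favors the expert's action, so the recovered policy imitates the demonstrations; in the paper this reward plays the role of the adaptive resilience metric.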
| Original language | American English |
|---|---|
| Number of pages | 28 |
| Journal | AI (Switzerland) |
| Volume | 6 |
| Issue number | 5 |
| DOIs | |
| State | Published - 2025 |
NREL Publication Number
- NREL/JA-5R00-86269
Keywords
- cyber-physical systems
- inverse reinforcement learning
- reconfiguration
- reinforcement learning
- resilience