ARM-IRL: Adaptive Resilience Metric Quantification Using Inverse Reinforcement Learning: Article No. 103

Research output: Contribution to journal › Article › peer-review

Abstract

Background/Objectives: The resilience of safety-critical systems is gaining importance due to the rise in cyber and physical threats, especially within critical infrastructure. Traditional static resilience metrics may not capture dynamic system states, leading to inaccurate assessments and ineffective responses to cyber threats. This work aims to develop a data-driven, adaptive method for resilience metric learning. Methods: We propose a data-driven approach using inverse reinforcement learning (IRL) to learn a single, adaptive resilience metric. The method infers a reward function from expert control actions. Unlike previous approaches that use static weights or fuzzy logic, this work applies adversarial inverse reinforcement learning (AIRL), training a generator and a discriminator in parallel to learn the reward structure and derive an optimal policy. Results: The proposed approach is evaluated on multiple scenarios: optimal communication network rerouting, power distribution network reconfiguration, and cyber-physical restoration of critical loads using the IEEE 123-bus system. Conclusions: The adaptive, learned resilience metric enables faster critical-load restoration compared with conventional RL approaches.
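The AIRL reward recovery mentioned in the abstract can be sketched as follows. This is a minimal, generic illustration of the standard AIRL discriminator structure, not the paper's actual implementation: `f_value` (the learned reward-shaping network's output) and `policy_prob` (the generator policy's action probability) are hypothetical placeholders for quantities that would be produced by trained networks.

```python
import numpy as np

def airl_discriminator(f_value: float, policy_prob: float) -> float:
    """Standard AIRL discriminator:
    D(s, a) = exp(f(s, a)) / (exp(f(s, a)) + pi(a|s)),
    where f is the learned reward network and pi is the generator policy.
    """
    return np.exp(f_value) / (np.exp(f_value) + policy_prob)

def airl_reward(f_value: float, policy_prob: float) -> float:
    """Reward signal recovered from the discriminator:
    r = log D - log(1 - D), which simplifies to f - log pi(a|s).
    This is the adaptive reward the generator policy is trained on.
    """
    d = airl_discriminator(f_value, policy_prob)
    return np.log(d) - np.log(1.0 - d)
```

In training, the discriminator is updated to separate expert transitions from generator transitions while the generator policy is optimized against the recovered reward, which is how the adaptive resilience metric is learned from expert control actions.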
Original language: American English
Number of pages: 28
Journal: AI (Switzerland)
Volume: 6
Issue number: 5
DOIs
State: Published - 2025

NREL Publication Number

  • NREL/JA-5R00-86269

Keywords

  • cyber-physical systems
  • inverse reinforcement learning
  • reconfiguration
  • reinforcement learning
  • resilience

