Abstract
Swift and reliable critical load restoration (CLR) can help make a distribution system resilient to extreme events. To achieve this optimally while limiting the online computational burden, some studies leverage model-free reinforcement learning (RL) to train control policies. Despite the advantages of RL algorithms, these approaches suffer from two issues: 1) the lack of a proper mechanism for constraint enforcement, and 2) poor sample efficiency. In this paper, a primal-dual differentiable programming (PDDP) method is therefore developed to guide training toward a constraint-satisfying policy. Additionally, the model-based nature of the proposed method aims to improve sample efficiency. An experiment on a CLR problem demonstrates that PDDP can effectively train a control policy that both achieves desirable performance and satisfies the required constraints.
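At the core of a primal-dual approach, the constraints are folded into a Lagrangian: gradient descent updates the policy parameters (primal step) while gradient ascent updates the multipliers (dual step), and differentiable programming supplies gradients through a model-based rollout. The sketch below is hypothetical and illustrative only: the toy policy, dynamics, tracking reward, and power-limit constraint are stand-ins, not the paper's formulation.

```python
# Hypothetical sketch of a primal-dual update for constrained, model-based
# policy training (all functions below are illustrative, not the paper's).
import jax
import jax.numpy as jnp

def rollout_objectives(theta, x0, horizon=10):
    """Roll a toy differentiable model forward; return (cost, violation)."""
    def policy(theta, x):
        return jnp.tanh(theta @ x)                # toy linear-tanh policy
    def dynamics(x, u):
        return 0.9 * x + 0.1 * u                  # toy differentiable model
    x, ret, viol = x0, 0.0, 0.0
    for _ in range(horizon):
        u = policy(theta, x)
        x = dynamics(x, u)
        ret += -jnp.sum((x - 1.0) ** 2)           # reward: track a load target
        viol += jnp.maximum(jnp.sum(u ** 2) - 0.5, 0.0)  # e.g. a power limit
    return -ret, viol                             # minimize negative return

def primal_dual_step(theta, lam, x0, lr_primal=1e-2, lr_dual=1e-1):
    def lagrangian(th):
        cost, viol = rollout_objectives(th, x0)
        return cost + lam * viol                  # L = cost + lam * violation
    theta = theta - lr_primal * jax.grad(lagrangian)(theta)  # primal descent
    _, viol = rollout_objectives(theta, x0)
    lam = jnp.maximum(lam + lr_dual * viol, 0.0)  # dual ascent, lam >= 0
    return theta, lam

theta, lam = jnp.zeros((2, 2)), 0.0
x0 = jnp.array([0.2, -0.3])
for _ in range(200):
    theta, lam = primal_dual_step(theta, lam, x0)
```

Because the rollout is end-to-end differentiable, `jax.grad` propagates constraint sensitivities through the dynamics model rather than estimating them from sampled trajectories, which is consistent with the model-based sample-efficiency argument in the abstract.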
| Original language | American English |
| --- | --- |
| Number of pages | 8 |
| State | Published - 2023 |
| Event | 2023 IEEE Power & Energy Society General Meeting, Orlando, Florida, 16 Jul 2023 → 20 Jul 2023 |
Conference
| Conference | 2023 IEEE Power & Energy Society General Meeting |
| --- | --- |
| City | Orlando, Florida |
| Period | 16/07/23 → 20/07/23 |
NREL Publication Number
- NREL/CP-2C00-84635
Keywords
- differentiable programming
- grid resilience
- load restoration
- primal-dual method
- reinforcement learning