Abstract
This paper focuses on the critical load restoration problem in distribution systems following major outages. To provide fast online response and optimal sequential decision-making support, a reinforcement learning (RL) based approach is proposed to optimize the restoration. Due to the complexities stemming from the large policy search space, renewable uncertainty, and nonlinearity in a complex grid control problem, directly applying RL algorithms to train a satisfactory policy requires extensive tuning. To address this challenge, this paper leverages the curriculum learning (CL) technique to design a training curriculum involving a simpler stepping-stone problem that guides the RL agent to learn to solve the original hard problem in a progressive and more efficient manner. We demonstrate that, compared with direct learning, CL facilitates controller training and achieves better performance. In the experiments, to study realistic scenarios where the renewable forecasts used for decision-making are generally imperfect, the trained RL controllers are compared with two model predictive controllers (MPCs) using renewable forecasts with different error levels, to observe how these controllers hedge against the uncertainty. Results show that RL controllers are less susceptible to forecast errors than the baseline MPCs and can provide a more reliable restoration process.
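The two-stage curriculum described above can be illustrated with a minimal sketch: train first on a simpler stepping-stone version of the restoration task, then warm-start training on the harder original task from the resulting policy. The environment, reward, and hyperparameters below are illustrative assumptions for exposition only, not the paper's actual implementation.

```python
# Minimal sketch of a two-stage curriculum for RL-based load restoration.
# All names and numbers here are hypothetical placeholders.
import random


class RestorationEnv:
    """Toy stand-in for a critical-load-restoration environment."""

    def __init__(self, forecast_error=0.0, horizon=24):
        self.forecast_error = forecast_error  # renewable forecast noise level
        self.horizon = horizon                # number of restoration steps

    def reset(self):
        self.t = 0
        return 0.0                            # placeholder state

    def step(self, action):
        self.t += 1
        noise = random.gauss(0.0, self.forecast_error)
        reward = -abs(action - noise)         # placeholder reward
        done = self.t >= self.horizon
        return 0.0, reward, done


def train(env, policy_param, episodes):
    """Placeholder policy-update loop standing in for an actual RL algorithm."""
    for _ in range(episodes):
        _, done = env.reset(), False
        while not done:
            action = policy_param             # trivially parameterized policy
            _, reward, done = env.step(action)
            policy_param += 0.01 * reward     # toy "gradient" step
    return policy_param


# Stage 1: stepping-stone problem (e.g., no renewable forecast uncertainty).
param = train(RestorationEnv(forecast_error=0.0), policy_param=0.0, episodes=50)

# Stage 2: original hard problem, warm-started from the stage-1 policy.
param = train(RestorationEnv(forecast_error=0.3), policy_param=param, episodes=200)
```

The key design choice conveyed by the abstract is the warm start: the stage-2 training begins from the stage-1 policy rather than from scratch, which is what distinguishes curriculum learning from direct learning in this setting.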
Original language | American English
---|---
Number of pages | 11
Journal | IEEE Transactions on Smart Grid
DOIs |
State | Published - 2022
Bibliographical note
See NREL/JA-2C00-84156 for paper as published in IEEE Transactions on Power Systems
NREL Publication Number
- NREL/JA-2C00-81125
Keywords
- critical load restoration
- curriculum learning
- distribution system
- grid resilience
- reinforcement learning