Restoring Distribution System Under Renewable Uncertainty Using Reinforcement Learning

Xiangyu Zhang, Abinet Eseye, Bernard Knueven, Wesley Jones

Research output: Contribution to conference › Paper › peer-review

8 Scopus Citations

Abstract

Distributed energy resources (DERs) in distribution systems, including renewable generation, micro-turbines, and energy storage, can be used to restore critical loads following extreme events and thereby increase grid resiliency. However, properly coordinating multiple DERs over a multi-step restoration process under renewable uncertainty and limited fuel availability is a complicated sequential optimal control problem. Owing to its capability to handle system non-linearity and uncertainty, reinforcement learning (RL) stands out as a potentially powerful candidate for solving complex sequential control problems. Moreover, the offline training of RL provides excellent action readiness during online operation, making it suitable for problems such as load restoration, where timely, correct, and coordinated actions are needed. In this study, prioritized load restoration for a distribution system is examined on a simplified single-bus model: with imperfect renewable generation forecasts, the performance of an RL controller is compared with that of a deterministic model predictive control (MPC) baseline. Our experimental results show that the RL controller is able to learn from experience, adapt to imperfect forecast information, and provide a more reliable restoration process than the baseline controller.
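To make the sequential control problem described in the abstract concrete, the following is a minimal sketch of a single-bus load restoration environment in the style the paper describes. All names, parameter values, the flat renewable forecast, and the greedy priority-weighted dispatch rule are illustrative assumptions, not the paper's actual formulation: the state is (time step, remaining fuel, battery state of charge), the action is micro-turbine output plus battery charge/discharge, and the reward is priority-weighted restored load under an uncertain renewable realization.

```python
import random


class SingleBusRestorationEnv:
    """Toy single-bus prioritized load restoration environment.

    Illustrative only: all numbers (loads, priorities, fuel budget,
    forecast, uncertainty band) are assumptions, not values from the paper.
    """

    def __init__(self, horizon=6, fuel=60.0, soc=20.0, seed=0):
        self.horizon = horizon      # number of control steps
        self.fuel_init = fuel       # micro-turbine fuel budget (kWh-equivalent)
        self.soc_init = soc         # initial battery state of charge (kWh)
        self.rng = random.Random(seed)
        # prioritized loads as (demand in kW, priority weight)
        self.loads = [(10.0, 3.0), (15.0, 2.0), (20.0, 1.0)]
        self.reset()

    def reset(self):
        self.t = 0
        self.fuel = self.fuel_init
        self.soc = self.soc_init
        return self._obs()

    def _renewable(self):
        # deterministic forecast of 12 kW; realization has +/-30% error,
        # standing in for the paper's "imperfect renewable forecast"
        forecast = 12.0
        return max(0.0, forecast * (1.0 + self.rng.uniform(-0.3, 0.3)))

    def _obs(self):
        return (self.t, self.fuel, self.soc)

    def step(self, mt_power, batt_power):
        """mt_power: micro-turbine output (kW); batt_power: +discharge / -charge (kW)."""
        mt_power = max(0.0, min(mt_power, self.fuel))       # fuel limits generation
        batt_power = min(batt_power, self.soc)              # discharge limited by SoC
        supply = self._renewable() + mt_power + batt_power
        # serve loads greedily in priority order; reward is weighted restored load
        reward, remaining = 0.0, supply
        for demand, weight in self.loads:
            served = min(demand, remaining)
            reward += weight * served
            remaining -= served
        self.fuel -= mt_power
        self.soc = max(0.0, self.soc - batt_power)
        self.t += 1
        done = self.t >= self.horizon
        return self._obs(), reward, done
```

An RL controller would be trained offline on rollouts of such an environment, while the deterministic MPC baseline would instead optimize dispatch against the point forecast at each step; the comparison in the paper hinges on how each handles the gap between forecast and realization.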

Conference

Conference: 2020 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids, SmartGridComm 2020
Country/Territory: United States
City: Tempe
Period: 11/11/20 – 13/11/20

Bibliographical note

See NREL/CP-2C00-77116 for preprint

NREL Publication Number

  • NREL/CP-2C00-79160

Keywords

  • grid resiliency
  • load restoration
  • micro grid
  • reinforcement learning
