Deep Reinforcement Learning for Microgrid Cost Optimization Considering Load Flexibility

Yansong Pei, Yiyun Yao, Junbo Zhao, Fei Ding, Jiyu Wang

Research output: Contribution to conference › Paper

Abstract

This paper proposes a novel Soft Actor-Critic (SAC) based Deep Reinforcement Learning (DRL) method for optimizing the cost of microgrid operation by leveraging load flexibility. The proposed SAC-DRL method coordinates the control of distributed energy resources (DERs) and flexible loads, addressing the practical energy billing formulations used by power distribution utilities. Key contributions include an innovative reward function that mitigates sparse-reward challenges and a mixed control strategy for discrete and continuous variables, ensuring a radial network topology and minimizing power loss. We evaluate the proposed method on a model of a real microgrid located in Southern California, U.S. The SAC-DRL model is tested to demonstrate its efficacy in reducing grid dependence, optimizing resource use, and minimizing costs. The results highlight the potential of DRL in modern energy systems, offering a sustainable and economically efficient solution for energy management in microgrids.
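As a rough illustration of the approach the abstract describes, the sketch below shows a SAC-style actor that jointly emits continuous DER setpoints (via a tanh-squashed Gaussian) and a discrete switching decision (via a categorical head), plus a densely shaped reward combining energy cost, a demand-charge penalty, and a small shaping bonus to counter sparse rewards. This is a minimal sketch under assumed interfaces, not the authors' implementation; all dimensions, tariff terms, and names (MixedActor, shaped_reward) are hypothetical.

    # Illustrative sketch only; not the paper's implementation.
    # All network sizes, action dimensions, and tariff terms are assumptions.
    import torch
    import torch.nn as nn
    from torch.distributions import Normal, Categorical

    class MixedActor(nn.Module):
        """SAC-style policy head for mixed discrete/continuous microgrid control."""
        def __init__(self, obs_dim=16, n_der=4, n_switch_options=3, hidden=128):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Linear(obs_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.mu = nn.Linear(hidden, n_der)        # mean of DER setpoints
            self.log_std = nn.Linear(hidden, n_der)   # log-std, clamped below
            self.switch_logits = nn.Linear(hidden, n_switch_options)

        def forward(self, obs):
            h = self.trunk(obs)
            cont_dist = Normal(self.mu(h), self.log_std(h).clamp(-5.0, 2.0).exp())
            u = cont_dist.rsample()                   # reparameterized sample
            a_cont = torch.tanh(u)                    # squash setpoints to [-1, 1]
            disc_dist = Categorical(logits=self.switch_logits(h))
            a_disc = disc_dist.sample()
            # Continuous log-prob with tanh change-of-variables correction,
            # plus the discrete action's log-prob.
            log_prob = (cont_dist.log_prob(u)
                        - torch.log(1.0 - a_cont.pow(2) + 1e-6)).sum(-1)
            log_prob = log_prob + disc_dist.log_prob(a_disc)
            return a_cont, a_disc, log_prob

    def shaped_reward(grid_import_kw, energy_price, demand_peak_kw,
                      prev_peak_kw, demand_charge, progress_bonus=0.1):
        """Dense reward: negative energy cost, a demand-charge penalty only when
        a new billing-period peak is set, and a small shaping bonus for reducing
        grid dependence (one way to mitigate sparse rewards)."""
        energy_cost = energy_price * max(grid_import_kw, 0.0)
        peak_penalty = demand_charge * max(demand_peak_kw - prev_peak_kw, 0.0)
        shaping = progress_bonus * max(prev_peak_kw - grid_import_kw, 0.0)
        return -(energy_cost + peak_penalty) + shaping

The separate Gaussian and categorical heads let a single SAC update train both action types, since the joint log-probability factorizes into a sum of the two heads' log-probabilities.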
Original language: American English
Number of pages: 5
DOIs
State: Published - 2024
Event: 2024 IEEE Power & Energy Society General Meeting - Seattle, Washington
Duration: 21 Jul 2024 - 25 Jul 2024

Conference

Conference: 2024 IEEE Power & Energy Society General Meeting
City: Seattle, Washington
Period: 21/07/24 - 25/07/24

NREL Publication Number

  • NREL/CP-5D00-92053

Keywords

  • deep reinforcement learning
  • microgrid
  • peak load management
  • voltage regulation
