Abstract
This paper proposes a novel Soft Actor-Critic (SAC) based Deep Reinforcement Learning (DRL) method for optimizing microgrid operating costs by leveraging load flexibility. The proposed SAC-DRL method coordinates the control of distributed energy resources (DERs) and flexible loads while accounting for the practical energy billing structures used by power distribution utilities. Key contributions include an innovative reward function that mitigates sparse-reward challenges and a mixed control strategy for discrete and continuous variables, ensuring radial network topology and minimizing power loss. We evaluate the proposed method on a model of a real microgrid located in Southern California, U.S. The results demonstrate the method's efficacy in reducing grid dependence, optimizing resource use, and minimizing costs, highlighting the potential of DRL to provide a sustainable and economically efficient solution for energy management in modern microgrids.
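As a rough, self-contained illustration of the kind of control problem the abstract describes, the sketch below trains an off-the-shelf SAC agent (from stable-baselines3) on a toy single-battery microgrid environment with a synthetic load profile and a time-of-use tariff. Every detail here — the `ToyMicrogridEnv` class, its prices, battery parameters, and dynamics — is an illustrative assumption and does not reproduce the paper's reward design, mixed discrete/continuous control strategy, or the Southern California microgrid model.

```python
# Hypothetical sketch only: toy battery-dispatch environment + SAC training.
# All parameters (tariff, battery size, load/solar profiles) are assumptions.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC


class ToyMicrogridEnv(gym.Env):
    """One-day battery dispatch: minimize the cost of grid imports under a
    time-of-use tariff. Observation = (hour, load, solar, state of charge)."""

    def __init__(self):
        super().__init__()
        # Continuous action in [-1, 1], scaled to battery power (+ = discharge).
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        self.observation_space = spaces.Box(0.0, np.inf, shape=(4,), dtype=np.float32)
        self.batt_kwh = 50.0   # assumed battery energy capacity
        self.batt_kw = 25.0    # assumed max charge/discharge power
        # Assumed TOU tariff ($/kWh): off-peak, 5-hour evening peak, off-peak.
        self.price = np.r_[np.full(16, 0.15), np.full(5, 0.45), np.full(3, 0.15)]

    def _obs(self):
        return np.array([self.t, self.load[self.t], self.solar[self.t], self.soc],
                        dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.soc = 0.5 * self.batt_kwh
        self.load = 10.0 + 5.0 * self.np_random.random(24)  # kW, synthetic
        self.solar = np.clip(8.0 * np.sin(np.pi * (np.arange(24) - 6) / 12), 0, None)
        return self._obs(), {}

    def step(self, action):
        p_batt = float(action[0]) * self.batt_kw
        # Respect state-of-charge limits (1-hour time steps).
        p_batt = float(np.clip(p_batt, self.soc - self.batt_kwh, self.soc))
        self.soc -= p_batt
        grid = max(self.load[self.t] - self.solar[self.t] - p_batt, 0.0)  # kW imported
        reward = -float(self.price[self.t] * grid)  # negative cost => minimize cost
        self.t += 1
        done = self.t >= 24
        obs = np.zeros(4, dtype=np.float32) if done else self._obs()
        return obs, reward, done, False, {}


env = ToyMicrogridEnv()
model = SAC("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=20_000)  # short run, for illustration only
```

A faithful implementation of the paper's approach would additionally handle discrete actions (e.g., switch states for enforcing radial topology) alongside the continuous DER setpoints, per the mixed control strategy the abstract describes.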
Original language | American English |
---|---|
Number of pages | 5 |
DOIs | |
State | Published - 2024 |
Event | 2024 IEEE Power & Energy Society General Meeting, Seattle, Washington; Duration: 21 Jul 2024 → 25 Jul 2024 |
Conference
Conference | 2024 IEEE Power & Energy Society General Meeting |
---|---|
City | Seattle, Washington |
Period | 21/07/24 → 25/07/24 |
NREL Publication Number
- NREL/CP-5D00-92053
Keywords
- deep reinforcement learning
- microgrid
- peak load management
- voltage regulation