Abstract
This paper proposes an attention-enabled multi-agent deep reinforcement learning (MADRL) framework for decentralized Volt-VAR control of active distribution networks. Using unsupervised clustering, the distribution system is decomposed into several sub-networks according to the voltage and reactive power sensitivity relationships. The distributed control problem is then modeled as a Markov game and solved by an improved MADRL algorithm, in which each sub-network is modeled as an adaptive agent. An attention mechanism is developed to help each agent focus on the specific information most relevant to its reward. All agents are trained centrally offline to learn the optimal coordinated Volt-VAR control strategy, and they execute in a decentralized manner to make online decisions using only local information. Compared with other distributed control approaches, the proposed method can effectively handle uncertainties, achieve fast decision making, and significantly reduce communication requirements. Comparisons with model-based and other data-driven methods on the IEEE 33-bus and 123-bus systems demonstrate the benefits of the proposed approach.
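The attention mechanism described in the abstract lets each agent weight other agents' information by its relevance to the reward. Below is a minimal NumPy sketch of the scaled dot-product attention such a critic might use; all names, dimensions, and the random inputs are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_aggregate(query, keys, values, d_k):
    # scaled dot-product attention: agent i's query scored
    # against the other agents' keys
    scores = keys @ query / np.sqrt(d_k)
    weights = softmax(scores)            # importance of each other agent
    context = weights @ values           # reward-relevant context vector
    return context, weights

rng = np.random.default_rng(0)
d = 8                                    # embedding dimension (assumed)
n_others = 3                             # number of other sub-network agents
q = rng.standard_normal(d)               # query from agent i's encoded observation
K = rng.standard_normal((n_others, d))   # keys from other agents' encodings
V = rng.standard_normal((n_others, d))   # values from other agents' encodings

context, w = attention_aggregate(q, K, V, d)
```

In a centralized-training setup, the learned keys, queries, and values would come from per-agent encoder networks, and `context` would feed the agent's critic so that each agent attends selectively to the sub-networks that most affect its voltage regulation reward.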
| Original language | American English |
|---|---|
| Article number | 9347807 |
| Pages (from-to) | 1582-1592 |
| Number of pages | 11 |
| Journal | IEEE Transactions on Sustainable Energy |
| Volume | 12 |
| Issue number | 3 |
| DOIs | |
| State | Published - Jul 2021 |
Bibliographical note
Publisher Copyright: © 2010-2012 IEEE.
NREL Publication Number
- NREL/JA-5D00-80594
Keywords
- distribution network
- PV inverters
- distribution system optimization
- multi-agent deep reinforcement learning
- network partition
- voltage regulation