Abstract
With grid modernization, smart inverters are increasingly used to execute advanced controls for distribution network reliability. However, this also enlarges the cyber-attack surface. This paper focuses on defense approaches that restore the system to normal operating conditions in the presence of cyber-attacks. A unique deep reinforcement learning (DRL) method is developed to minimize voltage violations and reduce power losses on affected feeders. The defense problem is reformulated as a Markov decision process for dynamically controlling DERs while minimizing load shedding. This is achieved via an improved soft actor-critic (SAC)-based DRL algorithm, which governs DER set points and load-shedding actions in both discrete and continuous modes through automatic entropy tuning and a Gaussian policy. Numerical results on the modified IEEE 123-node system, compared with other control approaches such as Volt-VAR (VV), Volt-Watt (VW), and model predictive control (MPC), show that the proposed method can eliminate voltage violations and provide feasible control actions that fully mitigate the cyber-threats.
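To make the control setup concrete, the following is a minimal sketch (not the authors' implementation) of how a feeder defense problem of this kind can be cast as a Markov decision process and trained with soft actor-critic using automatic entropy tuning. The `FeederEnv` environment, its dimensions, voltage band, and toy dynamics are all illustrative assumptions; it assumes the `gymnasium` and `stable-baselines3` packages.

```python
# Hypothetical sketch: continuous DER set points as SAC actions, voltages as
# observations, and a reward that penalizes violations of a +/-0.05 p.u. band.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC


class FeederEnv(gym.Env):
    """Toy feeder model: actions are per-DER set points in [-1, 1]."""

    def __init__(self, n_der=4, n_nodes=8, horizon=50):
        super().__init__()
        self.action_space = spaces.Box(-1.0, 1.0, shape=(n_der,), dtype=np.float32)
        self.observation_space = spaces.Box(0.8, 1.2, shape=(n_nodes,), dtype=np.float32)
        self.n_nodes = n_nodes
        self.horizon = horizon

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.v = self.np_random.uniform(0.93, 1.07, self.n_nodes).astype(np.float32)
        return self.v, {}

    def step(self, action):
        # Placeholder sensitivity: DER injections shift node voltages uniformly.
        self.t += 1
        self.v = np.clip(self.v + 0.01 * float(action.mean()), 0.8, 1.2).astype(np.float32)
        violation = np.maximum(np.abs(self.v - 1.0) - 0.05, 0.0).sum()
        reward = -float(violation)                 # penalize out-of-band voltages
        truncated = self.t >= self.horizon         # fixed-length episodes
        return self.v, reward, False, truncated, {}


# SAC with automatic entropy tuning (ent_coef="auto") and a Gaussian policy.
model = SAC("MlpPolicy", FeederEnv(), ent_coef="auto", verbose=0)
model.learn(total_timesteps=5_000)
```

The paper's improved SAC additionally handles discrete load-shedding decisions alongside continuous DER set points; the sketch above keeps everything continuous for brevity.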
| Original language | American English |
|---|---|
| Pages (from-to) | 4077-4089 |
| Number of pages | 13 |
| Journal | IEEE Transactions on Smart Grid |
| Volume | 15 |
| Issue number | 4 |
| DOIs | |
| State | Published - 2024 |
NREL Publication Number
- NREL/JA-5D00-88726
Keywords
- active distribution systems
- cyber attack
- deep reinforcement learning
- renewable generation