Abstract
We propose a novel data-driven method to accelerate the convergence of the Alternating Direction Method of Multipliers (ADMM) for solving distributed DC optimal power flow (DC-OPF) where lines are shared between independent network partitions. Using previous observations of ADMM trajectories for a given system under varying load, the method trains a recurrent neural network (RNN) to predict the converged values of the dual and consensus variables. Given a new realization of system load, a small number of initial ADMM iterations is taken as input to infer the converged values, which are directly injected into the iteration. We empirically demonstrate that the online injection of these values into the ADMM iteration accelerates convergence by a significant factor for partitioned 14-, 118-, and 2848-bus test systems under differing load scenarios. The proposed method has several advantages: it maintains the security of private decision variables inherent in consensus ADMM; inference is fast, so it may be used in online settings; RNN-generated predictions can dramatically improve time to convergence but, by construction, can never result in infeasible ADMM subproblems; and it can be easily integrated into existing software implementations. While we focus on the ADMM formulation of distributed DC-OPF in this letter, the ideas presented extend naturally to other distributed optimization problems.
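The injection step described above can be illustrated on a toy consensus ADMM problem. The sketch below is an assumption-laden simplification: it minimizes a sum of scalar quadratics subject to a consensus constraint, and the RNN's prediction of the converged consensus and dual variables is stood in for by the analytically known fixed point. The function name `consensus_admm` and its arguments are illustrative, not from the letter.

```python
import numpy as np

def consensus_admm(a, rho=1.0, tol=1e-8, max_iter=500, inject=None, inject_at=None):
    """Toy consensus ADMM for min_x sum_i (x_i - a_i)^2  s.t.  x_i = z for all i.

    `inject` is an optional (z, u) pair standing in for a learned prediction of
    the converged consensus/dual variables; it is written into the iterate at
    iteration `inject_at`, mimicking the online injection step.
    Returns the consensus value and the number of iterations used.
    """
    n = len(a)
    z, u = 0.0, np.zeros(n)
    for k in range(max_iter):
        if inject is not None and k == inject_at:
            z, u = inject[0], np.asarray(inject[1], dtype=float).copy()
        x = (2 * a + rho * (z - u)) / (2 + rho)  # local x-updates (closed form)
        z_new = np.mean(x + u)                   # consensus (z) update
        u = u + x - z_new                        # dual update
        if abs(z_new - z) < tol and np.max(np.abs(x - z_new)) < tol:
            return z_new, k + 1
        z = z_new
    return z, max_iter

a = np.array([1.0, 2.0, 6.0])
z_star = a.mean()              # known optimum of the toy problem
u_star = 2 * (a - z_star)      # dual fixed point for rho = 1

z_cold, it_cold = consensus_admm(a)
z_warm, it_warm = consensus_admm(a, inject=(z_star, u_star), inject_at=3)
```

Because the injected pair is (here) the exact fixed point, the warm-started run terminates almost immediately after injection, while the cold run must contract toward it at ADMM's usual linear rate; an imperfect RNN prediction would land the iterate near, rather than at, the fixed point, leaving feasibility of the subproblems unaffected.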
Original language | American English
---|---
Article number | 9295312
Pages (from-to) | 1-6
Number of pages | 6
Journal | IEEE Control Systems Letters
Volume | 6
DOIs |
State | Published - 2022
Bibliographical note
Publisher Copyright: © 2017 IEEE.
NREL Publication Number
- NREL/JA-2C00-78731
Keywords
- alternating direction method of multipliers
- data-driven optimization
- DC optimal power flow
- machine learning
- recurrent neural network