Running Primal-Dual Gradient Method for Time-Varying Nonconvex Problems

Yujie Tang, Emiliano Dall'Anese, Andrey Bernstein, Steven Low

Research output: Contribution to journal › Article › peer-review

1 Scopus Citation

Abstract

This paper focuses on a time-varying constrained nonconvex optimization problem, and considers the synthesis and analysis of online regularized primal-dual gradient methods to track a Karush-Kuhn-Tucker (KKT) trajectory. The proposed regularized primal-dual gradient method is implemented in a running fashion, in the sense that the underlying optimization problem changes during the execution of the algorithm. In order to study its performance, we first derive its continuous-time limit as a system of differential inclusions. We then study sufficient conditions for tracking a KKT trajectory, and derive asymptotic bounds for the tracking error as a function of the time-variability of a KKT trajectory. Further, we provide a set of sufficient conditions for the KKT trajectories not to bifurcate or merge, and investigate the optimal choice of the parameters of the algorithm. Illustrative numerical results for a time-varying nonconvex problem are provided.
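As a rough illustration of the kind of method the abstract describes (and not the paper's exact algorithm or analysis), the following sketch runs a regularized primal-dual gradient update on a toy time-varying problem min_x f_t(x) s.t. g_t(x) ≤ 0. The instance data (f_t, g_t, the drift rate, the step size alpha, and the dual-regularization weight eps) are all hypothetical choices made for this example; the dual regularization term -(eps/2)||λ||² is what makes this a "regularized" primal-dual method.

```python
import numpy as np

# Toy time-varying problem (hypothetical, for illustration only):
#   f_t(x) = (x - a_t)^2 with a slowly drifting target a_t = sin(0.01 t),
#   constraint g_t(x) = x - 0.5 <= 0.
# Running regularized primal-dual gradient updates on the regularized
# Lagrangian L(x, lam) = f_t(x) + lam * g_t(x) - (eps/2) * lam**2:
#   x   <- x - alpha * grad_x L      (primal gradient descent)
#   lam <- [lam + alpha * grad_lam L]_+   (projected dual ascent)

alpha = 0.1   # step size (illustrative choice)
eps = 0.1     # dual regularization weight (illustrative choice)
x, lam = 0.0, 0.0

for t in range(2000):
    a_t = np.sin(0.01 * t)      # problem data drifts at each iteration
    grad_f = 2.0 * (x - a_t)    # gradient of f_t at the current x
    g = x - 0.5                 # constraint value g_t(x)
    # One primal step, then one projected dual step; the problem
    # changes again before the iterates converge ("running" fashion).
    x = x - alpha * (grad_f + lam)          # d/dx of lam * g is 1 here
    lam = max(0.0, lam + alpha * (g - eps * lam))

print(x, lam)
```

Because the drift per iteration is small relative to the contraction induced by the gradient steps, the iterates settle near the (regularized) KKT point of the instantaneous problem and track it with a bounded lag, which is the qualitative behavior the tracking-error bounds in the abstract refer to.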
Original language: American English
Pages (from-to): 1970-1990
Number of pages: 21
Journal: SIAM Journal on Control and Optimization
Volume: 60
Issue number: 4
DOIs
State: Published - 2022

NREL Publication Number

  • NREL/JA-5D00-83850

Keywords

  • differential inclusion
  • gradient methods
  • nonconvex optimization
  • primal-dual dynamics
  • time-varying optimization
  • tracking

