Quasi-Stochastic Approximation and Off-Policy Reinforcement Learning: Preprint

Andrey Bernstein, Yue Chen, Emiliano Dall'Anese, Prashant Mehta, Sean Meyn, Marcello Colombino

Research output: Contribution to conference › Paper

Abstract

The Robbins-Monro stochastic approximation algorithm is a foundation of many algorithmic frameworks for reinforcement learning and is often an efficient approach to solving (or approximating the solution to) complex optimal control problems. In many cases, however, practitioners are unable to apply these techniques because of inherently high variance. This paper aims to provide a general foundation for 'quasi-stochastic approximation,' in which all of the processes under consideration are deterministic, much like quasi-Monte Carlo for variance reduction in simulation. The variance reduction can be substantial, subject to tuning of pertinent parameters in the algorithm. This paper introduces a new coupling argument to establish the optimal rate of convergence provided the gain is sufficiently large. These results are established for linear models and also tested in non-ideal settings. A major application of these general results is a new class of reinforcement learning algorithms for deterministic state space models. In this setting, the main contribution is a class of algorithms for approximating the value function for a given policy, using a different policy designed to introduce exploration.
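The abstract describes quasi-stochastic approximation as a Robbins-Monro-style recursion driven by deterministic probing signals (e.g., sinusoids) in place of random noise, with a gain parameter that must be sufficiently large. The sketch below is a minimal illustration of that idea on a scalar linear root-finding problem, not the paper's algorithm; the model constants, the gain `g`, and the probing frequency are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (assumptions, not the paper's algorithm):
# solve f_bar(theta) = A * (theta - theta_star) = 0 by stochastic approximation,
# comparing deterministic sinusoidal probing (quasi-SA) with i.i.d. random probing.

A = -1.0            # stable linear mean field
theta_star = 2.0    # root the iterates should approach
T = 10_000          # number of iterations
g = 5.0             # gain; the abstract notes the gain must be sufficiently large

def f(theta, xi):
    """Observed vector field: mean field plus zero-mean probing term."""
    return A * (theta - theta_star) + xi

theta_qsa = 0.0     # iterate driven by deterministic probing
theta_sa = 0.0      # iterate driven by random probing
rng = np.random.default_rng(0)

for t in range(1, T + 1):
    a_t = g / (g + t)                        # vanishing step size with gain g
    xi_det = np.sin(2 * np.pi * 0.01 * t)    # deterministic sinusoidal probing
    xi_rnd = rng.standard_normal()           # i.i.d. Gaussian probing
    theta_qsa += a_t * f(theta_qsa, xi_det)
    theta_sa += a_t * f(theta_sa, xi_rnd)

print(f"quasi-SA: {theta_qsa:.4f}  classical SA: {theta_sa:.4f}  target: {theta_star}")
```

In this toy setting the sinusoidal probing averages out far more regularly than random noise, which is the intuition behind the variance reduction claimed in the abstract; the actual rates and coupling argument are developed in the paper for linear models.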
Original language: American English
Number of pages: 11
State: Published - 2019
Event: 2019 IEEE Conference on Decision and Control (IEEE CDC) - Nice, France
Duration: 11 Dec 2019 - 13 Dec 2019

Conference

Conference: 2019 IEEE Conference on Decision and Control (IEEE CDC)
City: Nice, France
Period: 11/12/19 - 13/12/19

NREL Publication Number

  • NREL/CP-5D00-73518

Keywords

  • deterministic
  • off-policy reinforcement learning
  • quasi-stochastic approximation
  • rate of convergence
  • state space model
