Non-Stationary Policy Learning for Multi-Timescale Multi-Agent Reinforcement Learning

Patrick Emami, Xiangyu Zhang, David Biagioni, Ahmed Zamzam

Research output: Contribution to conference › Paper

Abstract

In multi-timescale multi-agent reinforcement learning (MARL), agents interact across different timescales. In general, policies for time-dependent behaviors, such as those induced by multiple timescales, are non-stationary. Learning non-stationary policies is challenging and typically requires sophisticated or inefficient algorithms. Motivated by the prevalence of this control problem in real-world complex systems, we introduce a simple framework for learning non-stationary policies for multi-timescale MARL. Our approach uses available information about agent timescales to define and learn periodic multi-agent policies. Specifically, we show theoretically that the effects of non-stationarity introduced by multiple timescales can be captured by a periodic multi-agent policy. To learn such policies, we propose a policy gradient algorithm that parameterizes the actor and critic with phase-functioned neural networks, which provide an inductive bias for periodicity. The framework's ability to effectively learn multi-timescale policies is validated on a gridworld environment and a building energy management environment.
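The abstract's key architectural idea is to make the actor and critic parameters a smooth, periodic function of a phase derived from the agents' timescales, via phase-functioned neural networks (Holden et al., 2017). The sketch below is a minimal, illustrative PyTorch layer in that spirit; it is not the authors' released code, and the class name `PhaseFunctionedLinear`, the choice of four Catmull-Rom control points, the initialization, and the phase convention (phase = (t mod period) / period per agent) are all assumptions made for illustration.

```python
# Minimal sketch of a phase-functioned linear layer: the layer's weights are a
# smooth periodic (Catmull-Rom) blend of a few control weight matrices, indexed
# by a phase in [0, 1). Names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


class PhaseFunctionedLinear(nn.Module):
    """Linear layer whose weights vary periodically with a phase in [0, 1)."""

    def __init__(self, in_dim: int, out_dim: int, n_control: int = 4):
        super().__init__()
        # One weight matrix and bias vector per spline control point.
        self.weights = nn.Parameter(0.1 * torch.randn(n_control, out_dim, in_dim))
        self.biases = nn.Parameter(torch.zeros(n_control, out_dim))
        self.n_control = n_control

    def forward(self, x: torch.Tensor, phase: torch.Tensor) -> torch.Tensor:
        # Map each sample's phase onto the control-point grid and pick the four
        # neighboring control points (cyclically, so the blend is periodic).
        p = (phase % 1.0) * self.n_control
        k1 = p.long() % self.n_control
        k0 = (k1 - 1) % self.n_control
        k2 = (k1 + 1) % self.n_control
        k3 = (k1 + 2) % self.n_control
        t = (p - p.floor()).view(-1, 1, 1)

        def blend(params):
            # Catmull-Rom cubic interpolation between control parameters.
            c0, c1, c2, c3 = params[k0], params[k1], params[k2], params[k3]
            return (
                c1
                + 0.5 * t * (c2 - c0)
                + t**2 * (c0 - 2.5 * c1 + 2.0 * c2 - 0.5 * c3)
                + t**3 * (1.5 * (c1 - c2) + 0.5 * (c3 - c0))
            )

        W = blend(self.weights)                            # (batch, out_dim, in_dim)
        b = blend(self.biases.unsqueeze(-1)).squeeze(-1)   # (batch, out_dim)
        return torch.einsum("boi,bi->bo", W, x) + b


# Illustrative usage: an actor head conditioned on the phase of an agent's own
# timescale, e.g. phase = (t % period_i) / period_i for an agent acting every
# period_i environment steps (an assumed convention, not from the paper).
if __name__ == "__main__":
    actor_head = PhaseFunctionedLinear(in_dim=8, out_dim=4)
    obs = torch.randn(32, 8)
    phase = torch.rand(32)          # one phase value per sample
    logits = actor_head(obs, phase)
    print(logits.shape)             # torch.Size([32, 4])
```

Because the blended weights are a smooth periodic function of the phase, the same parameter set can express behavior that repeats with the agents' timescale structure, which is the inductive bias for periodicity the abstract refers to.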
Original language: American English
Number of pages: 7
State: Published - 2024
Event: 2023 62nd IEEE Conference on Decision and Control (CDC) - Singapore
Duration: 13 Dec 2023 - 15 Dec 2023

Conference

Conference: 2023 62nd IEEE Conference on Decision and Control (CDC)
City: Singapore
Period: 13/12/23 - 15/12/23

NREL Publication Number

  • NREL/CP-2C00-83437

Keywords

  • control
  • multi-agent
  • multi-timescale
  • reinforcement learning
