Abstract
Because the buildings sector accounts for over 70% of total U.S. electricity consumption, it offers substantial untapped demand-side resources for tackling critical grid-side problems and improving the overall efficiency of the energy system. To help make buildings grid-interactive, this paper proposes a global-local policy search method for training a reinforcement learning (RL) based controller that optimizes building operation during both normal hours and demand response (DR) events. Experiments on a simulated five-zone commercial building demonstrate that adding a local fine-tuning stage to the evolution strategy policy training process reduces control costs by a further 7.55% in unseen testing scenarios. A baseline comparison also indicates that the learned RL controller outperforms a pragmatic linear model predictive controller (MPC) while not requiring intensive online computation.
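The two-stage scheme named in the abstract pairs a global evolution strategy (ES) search over policy parameters with a subsequent local fine-tuning stage. The sketch below is a minimal illustration of that general idea only; the `evaluate` objective, the hyperparameters, and the hill-climbing fine-tuning rule are assumptions for illustration and are not taken from the paper, where the return would come from rolling out the building controller in the simulated five-zone building.

```python
import numpy as np

def evaluate(theta, env_seed=0):
    """Placeholder episodic return for a policy with parameters theta.

    Hypothetical stand-in objective (negative squared distance to a fixed
    target); in the paper's setting this would be the negative control cost
    of a building-simulation rollout.
    """
    rng = np.random.default_rng(env_seed)
    target = rng.normal(size=theta.shape)
    return -np.sum((theta - target) ** 2)

def es_global_search(theta, iters=200, pop=32, sigma=0.1, lr=0.05):
    """Global stage: simple evolution-strategy policy search.

    Estimates a search gradient from antithetic Gaussian perturbations and
    takes an ascent step on the estimated return.
    """
    for _ in range(iters):
        eps = np.random.randn(pop, theta.size)
        diffs = np.array([
            evaluate(theta + sigma * e) - evaluate(theta - sigma * e)
            for e in eps
        ])
        grad = (diffs[:, None] * eps).mean(axis=0) / (2 * sigma)
        theta = theta + lr * grad
    return theta

def local_fine_tune(theta, iters=100, sigma=0.02):
    """Local stage: small-radius random search around the ES solution.

    Accepts a perturbed candidate only if it improves the return, refining
    the globally found policy without large, destabilizing updates.
    """
    best_return = evaluate(theta)
    for _ in range(iters):
        candidate = theta + sigma * np.random.randn(theta.size)
        r = evaluate(candidate)
        if r > best_return:
            theta, best_return = candidate, r
    return theta

if __name__ == "__main__":
    theta0 = np.zeros(16)  # flattened policy parameters (illustrative size)
    theta_global = es_global_search(theta0)
    theta_final = local_fine_tune(theta_global)
    print("return after global stage:", evaluate(theta_global))
    print("return after local fine-tuning:", evaluate(theta_final))
```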
Original language | American English |
---|---|
Number of pages | 8 |
State | Published - 2023 |
Event | Eleventh International Conference on Learning Representations (ICLR2023), Kigali, Rwanda, 1 May 2023 → 5 May 2023 |
Conference
Conference | Eleventh International Conference on Learning Representations (ICLR2023) |
---|---|
City | Kigali, Rwanda |
Period | 1/05/23 → 5/05/23 |
NREL Publication Number
- NREL/CP-2C00-85975
Keywords
- demand response
- grid-interactive building control
- reinforcement learning