Abstract
Buildings, as major energy consumers, can provide great untapped demand response (DR) resources for grid services. However, their participation remains low in real life. One major impediment to popularizing DR in buildings is the lack of cost-effective automation systems that can be widely adopted. Existing optimization-based smart building control algorithms suffer from high costs of both building-specific modeling and on-demand computing resources. To tackle these issues, this paper proposes a cost-effective edge-cloud integrated solution using reinforcement learning (RL). Besides RL's ability to solve sequential optimal decision-making problems, its adaptability to easy-to-obtain building models and its offline learning feature are likely to reduce the controller's implementation cost. Using a surrogate building model learned automatically from building operation data, an RL agent learns an optimal control policy on cloud infrastructure, and the policy is then distributed to edge devices for execution. Simulation results demonstrate the control efficacy and learning efficiency in buildings of different sizes. A preliminary cost analysis of a 4-zone commercial building shows that the annual cost of optimal policy training is only 2.25% of the DR incentive received. The results of this study suggest a possible approach with a higher return on investment for buildings to participate in DR programs.
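For readers unfamiliar with the workflow the abstract describes, the sketch below illustrates the three stages in miniature: a surrogate building model fit from logged operation data, cloud-side RL training against that surrogate, and export of the resulting policy as a lookup table cheap enough for an edge device. This is not the authors' implementation; the dynamics, reward terms, and all names here are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's method) of the
# edge-cloud RL workflow: (1) learn a surrogate building model from
# operation data, (2) train a policy in the "cloud", (3) deploy it.
import numpy as np

rng = np.random.default_rng(0)

# --- (1) Surrogate model: fit one-step zone-temperature dynamics by
# least squares on logged (temperature, HVAC power) pairs.
T_log = rng.uniform(18.0, 26.0, 500)           # logged zone temps (degC)
u_log = rng.uniform(0.0, 1.0, 500)             # logged HVAC power (0-1)
T_next_log = 0.9 * T_log - 2.0 * u_log + 2.5 + rng.normal(0, 0.05, 500)
X = np.column_stack([T_log, u_log, np.ones_like(T_log)])
a, b, c = np.linalg.lstsq(X, T_next_log, rcond=None)[0]

def surrogate(T, u):
    """Learned one-step temperature model: T' = a*T + b*u + c."""
    return a * T + b * u + c

# --- (2) Cloud-side training: tabular Q-learning over discretized
# temperatures and on/off actions, penalizing energy use plus
# discomfort (deviation from an assumed 22 degC setpoint).
temps = np.linspace(18.0, 26.0, 33)            # discretized states
actions = np.array([0.0, 1.0])                 # HVAC off / on
Q = np.zeros((len(temps), len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def state_index(T):
    return int(np.clip(np.searchsorted(temps, T), 0, len(temps) - 1))

for episode in range(300):
    T = rng.uniform(18.0, 26.0)
    for _ in range(48):                        # 48 control steps/episode
        s = state_index(T)
        ai = rng.integers(len(actions)) if rng.random() < eps \
            else int(Q[s].argmax())
        T_next = surrogate(T, actions[ai])
        reward = -0.5 * actions[ai] - abs(T_next - 22.0)  # energy + comfort
        Q[s, ai] += alpha * (reward + gamma * Q[state_index(T_next)].max()
                             - Q[s, ai])
        T = T_next

# --- (3) Edge deployment: the trained policy reduces to a lookup
# table, cheap to evaluate on a low-cost edge controller.
edge_policy = Q.argmax(axis=1)                 # action index per state
print("HVAC command at 24.5 degC:", actions[edge_policy[state_index(24.5)]])
```

The division of labor is the point of the sketch: the expensive steps (model fitting, Q-learning) run once on cloud resources, while the edge device only evaluates a precomputed table at control time.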
| Original language | American English |
| --- | --- |
| Article number | 9161266 |
| Pages (from-to) | 420-431 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Smart Grid |
| Volume | 12 |
| Issue number | 1 |
| DOIs | |
| State | Published - Jan 2021 |
Bibliographical note
Publisher Copyright: © 2020 IEEE.
NREL Publication Number
- NREL/JA-2C00-76186
Keywords
- air-conditioning
- cloud computing
- demand response
- reinforcement learning
- smart building