Abstract
To harness the vast amount of untapped resources on the demand side, smart home technology plays a vital role in solving the "last mile" problem of the smart grid. Reinforcement learning (RL), which has demonstrated outstanding performance in many sequential decision-making problems, is a strong candidate for smart home control. For instance, many studies have begun investigating the load scheduling problem under dynamic pricing schemes. Building on these efforts, this study aims to provide an affordable solution that encourages a higher smart home adoption rate. Specifically, we investigate combining transfer learning (TL) with RL to reduce the training cost of an optimal RL control policy. Given an optimal policy for a benchmark home, TL can jump-start the RL training of a policy for a new home that has different appliances and user preferences. Simulation results show that, by leveraging TL, RL training converges faster and requires much less computing time for new homes that are similar to the benchmark home. Overall, this study proposes a cost-effective approach for training RL control policies for homes at scale, which ultimately reduces the controller's implementation costs, increases the adoption rate of RL controllers, and makes more homes grid-interactive.
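The paper itself does not include code; the following is a minimal sketch of the jump-start idea described in the abstract, under assumptions of my own: a small PyTorch policy network (`HomePolicy`), a shared state representation across homes, and a re-initialized output head because the new home's appliance (action) set differs. The names, dimensions, and network structure are illustrative, not the authors' implementation.

```python
# Hypothetical sketch of TL jump-start for a new home's RL policy.
# Assumptions: homes share a state representation (prices, appliance states,
# user preferences); only the action head differs across homes.
import copy

import torch
import torch.nn as nn


class HomePolicy(nn.Module):
    """Small feed-forward policy mapping a home's state to appliance actions."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, action_dim)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.head(self.shared(state))


def jump_start(benchmark: HomePolicy, state_dim: int, action_dim: int) -> HomePolicy:
    """Initialize a new home's policy from the benchmark home's trained policy.

    Shared layers are copied over (transferring what was learned on the
    benchmark home); the output head is left randomly initialized because
    the new home may control a different set of appliances.
    """
    new_policy = HomePolicy(state_dim, action_dim)
    new_policy.shared.load_state_dict(copy.deepcopy(benchmark.shared.state_dict()))
    return new_policy


if __name__ == "__main__":
    # Usage sketch: the benchmark policy would first be trained with any RL
    # algorithm on the benchmark home, then transferred and fine-tuned.
    benchmark_policy = HomePolicy(state_dim=16, action_dim=4)
    # ... train benchmark_policy on the benchmark home ...
    new_home_policy = jump_start(benchmark_policy, state_dim=16, action_dim=6)
    # ... fine-tune new_home_policy on the new home; per the abstract, this
    # warm start is what lets training converge faster than from scratch ...
```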
| Original language | American English |
| --- | --- |
| Number of pages | 8 |
| State | Published - 2020 |
| Event | First International Workshop on Reinforcement Learning for Energy Management in Buildings and Cities (RLEM), Duration: 17 Nov 2020 → 17 Nov 2020 |
Conference

| Conference | First International Workshop on Reinforcement Learning for Energy Management in Buildings and Cities (RLEM) |
| --- | --- |
| Period | 17/11/20 → 17/11/20 |
NREL Publication Number
- NREL/CP-2C00-77933
Keywords
- home energy management
- reinforcement learning
- smart home
- transfer learning