Abstract
Solid oxide electrolysis cell (SOEC) hydrogen production technology can range in scale from small, appliance-sized equipment to large, central production facilities that can be tied directly to renewable or other non-greenhouse-gas-emitting forms of electricity production, making it an ideal resource for demand response (DR). The SOEC hydrogen production system is a complex integrated system encompassing fluid, electrical, electrochemical, and thermal dynamics, all of which involve nonlinearity and nonconvexity. Proper control of the SOEC hydrogen production system is crucial to enable its participation in DR programs. To overcome the difficulty of designing an explicit control law for such a nonlinear system with nonconvex optimization features in DR applications, deep reinforcement learning (DRL) is explored to achieve optimal control of the SOEC system for DR participation. Specifically, a twin delayed deep deterministic policy gradient (TD3) control framework is applied to achieve optimal response performance during DR events by weighing power tracking error against hydrogen production efficiency in a suitably designed reward function. Two grid-connected case studies tracking different DR commands were investigated: in the first, the operating conditions reach the operating boundaries; in the second, they remain within them. The results show that the proposed DRL-based control for the SOEC can track the DR signal in a timely manner while maintaining high energy efficiency.
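The abstract does not give the reward function explicitly; the sketch below is one plausible way to combine the two stated objectives, power tracking error and hydrogen production efficiency, in Python. The function name `dr_reward`, the weights `w_track` and `w_eff`, and the normalization are illustrative assumptions, not the paper's formulation; only the hydrogen lower heating value (about 33.3 kWh/kg) is a standard constant.

```python
def dr_reward(p_actual_kw, p_command_kw, h2_rate_kg_per_h,
              w_track=1.0, w_eff=0.1, lhv_h2_kwh_per_kg=33.3):
    """Illustrative DR reward for an SOEC controller.

    Combines a power-tracking penalty with an efficiency bonus.
    The weights, normalization, and efficiency definition are
    assumptions for illustration, not the paper's formulation.
    """
    # Squared tracking error, normalized by the DR command magnitude
    track_err = ((p_actual_kw - p_command_kw) / max(abs(p_command_kw), 1e-6)) ** 2
    # Hydrogen production efficiency on an LHV basis:
    # chemical energy out (kW) / electric power in (kW)
    efficiency = (h2_rate_kg_per_h * lhv_h2_kwh_per_kg) / max(p_actual_kw, 1e-6)
    return -w_track * track_err + w_eff * efficiency


# Example: tracking a 100 kW DR command while producing 2.5 kg/h of H2
# (roughly 85% LHV efficiency), with a small 2 kW tracking error.
r = dr_reward(p_actual_kw=98.0, p_command_kw=100.0, h2_rate_kg_per_h=2.5)
```

In a TD3 setting, a scalar reward of this form would be returned at each environment step; the relative magnitudes of `w_track` and `w_eff` set the trade-off between fast DR signal tracking and high-efficiency operation.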
| Original language | American English |
|---|---|
| Pages (from-to) | 724-741 |
| Number of pages | 18 |
| Journal | IEEE Transactions on Energy Conversion |
| Volume | 40 |
| Issue number | 2 |
| DOIs | |
| State | Published - 2025 |
NREL Publication Number
- NREL/JA-5D00-96758
Keywords
- deep reinforcement learning
- demand response
- optimal control
- solid oxide electrolysis cell