Abstract:
Thermal management of fuel cells is crucial for maintaining efficient operation, as improper temperature regulation can significantly reduce fuel cell output performance and lifespan. With growing global energy demand and the pursuit of sustainable energy, fuel cells play a key role in achieving renewable energy goals, particularly in the context of the United Nations Sustainable Development Goal 7 (SDG-7). Ensuring the high efficiency and stability of fuel cells is therefore essential for driving the transition to clean energy. However, existing thermal management strategies are primarily based on steady-state models and traditional control methods, which are inherently limited by slow response times, low control accuracy, and insufficient utilization of parasitic cooling power. These methods struggle to address the nonlinear dynamic characteristics and multivariable coupling of fuel cell systems. This study addresses these challenges by proposing an innovative, dynamic-model-based control strategy that integrates adaptive learning and exploration mechanisms into the twin delayed deep deterministic policy gradient algorithm (ALEDE-TD3). The proposed method aims to minimize parasitic cooling power while overcoming the limitations of traditional approaches. By introducing dynamic exploration and learning rate decay mechanisms, the strategy significantly enhances the system's response speed and control stability, successfully mitigating the response delays caused by multivariable coupling. Simulation results show that the ALEDE-TD3 strategy outperforms the TD3, DDPG, and PID algorithms across all key performance indicators, reducing temperature overshoot by 63.1%, 87.4%, and 88.9%, respectively, and average settling time by 46.7%, 56.7%, and 59.1%, while consistently maintaining the lowest parasitic cooling power.
This approach not only improves the overall performance of the thermal management system but also holds significant implications for advancing renewable energy goals, demonstrating the immense potential of reinforcement learning in fuel cell control.
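The "dynamic exploration and learning rate decay mechanisms" mentioned in the abstract can be illustrated with a minimal sketch. Note that the paper's actual schedules and hyperparameters are not given in this record; the geometric-decay form, the function names, and all constants below are assumptions for illustration only.

```python
def decayed_exploration_noise(episode, sigma_init=0.2, sigma_min=0.02, decay=0.995):
    """Std. dev. of the Gaussian action noise shrinks each training episode,
    floored at sigma_min so some exploration always remains (assumed schedule)."""
    return max(sigma_min, sigma_init * decay ** episode)

def decayed_learning_rate(episode, lr_init=1e-3, lr_min=1e-5, decay=0.99):
    """Actor/critic learning rate decays geometrically toward lr_min,
    stabilising the policy late in training (assumed schedule)."""
    return max(lr_min, lr_init * decay ** episode)

# Early episodes: large noise encourages exploration of coolant-control actions.
# Late episodes: small noise and learning rate reduce overshoot and settle faster.
```

In a TD3-style agent, the first schedule would scale the noise added to the actor's action at each step, and the second would be fed to the optimizers of the actor and twin critics; the claimed benefit in the abstract is faster settling and lower overshoot from this gradual shift toward exploitation.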
Source:
International Journal of Hydrogen Energy
ISSN: 0360-3199
Year: 2025
Volume: 146
Impact Factor: 8.100 (JCR@2023)
ESI Highly Cited Papers on the List: 0