
author:

Quan, Shengwei [1] | He, Hongwen [2] | Wei, Zhongbao [3] | Chen, Jinzhou [4] | Zhang, Zhendong [5] | Wang, Ya-Xiong [6]

Indexed by:

EI Scopus SCIE

Abstract:

Deep reinforcement learning (DRL) has been widely used in the field of automotive energy management. However, DRL is computationally inefficient and insufficiently robust, making it difficult to apply to practical systems. In this article, a customized energy management strategy based on a deep reinforcement learning-model predictive control (DRL-MPC) self-regulation framework is proposed for fuel cell electric vehicles. The soft actor-critic (SAC) algorithm is used to train the energy management strategy offline, minimizing comprehensive system consumption and lifetime degradation. The trained SAC policy outputs a sequence of fuel cell actions over the prediction horizon, which serves as the initial value for the nonlinear MPC solution. Within the MPC framework, the nonlinear optimization problem is solved iteratively to refine the action sequence produced by the SAC policy. In addition, a dataset of the vehicle's routine operation is collected to customize an update package that further improves energy management performance. The DRL-MPC can optimize the SAC policy action at the state boundary to reduce system lifetime degradation. The proposed strategy also shows better optimization robustness than the SAC strategy under different vehicle loads. Moreover, after applying the update package, the total cost is reduced by 5.93% compared with the SAC strategy, demonstrating better optimization under comprehensive conditions with different vehicle loads.
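The warm-start interplay described in the abstract — a trained policy supplies the initial action sequence, which a nonlinear program then refines over the prediction horizon — can be illustrated with a minimal sketch. Everything here is an assumption for illustration: `sac_policy` is a stand-in for the trained SAC network, and the cost combines demand tracking with a smoothness penalty as a crude proxy for degradation; the paper's actual models, costs, and constraints are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the trained SAC policy: maps a state
# [power demand, SOC] to a fuel-cell power action. The real policy is a
# neural network; a simple proportional rule illustrates the interface.
def sac_policy(state):
    demand = state[0]
    return np.clip(0.8 * demand, 0.0, 60.0)  # kW, assumed actuator limits

def rollout_initial_sequence(state, horizon):
    """Roll the policy forward to build the MPC initial guess."""
    seq, s = [], np.array(state, dtype=float)
    for _ in range(horizon):
        a = sac_policy(s)
        seq.append(a)
        # Toy SOC-like dynamics, purely illustrative.
        s = np.array([s[0], s[1] + 0.01 * (a - s[0])])
    return np.array(seq)

def mpc_refine(state, horizon=10):
    """Refine the SAC-initialized action sequence via nonlinear optimization."""
    u0 = rollout_initial_sequence(state, horizon)
    demand = state[0]

    def cost(u):
        # Penalize unmet demand plus rapid power changes (degradation proxy).
        tracking = np.sum((u - demand) ** 2)
        smoothness = np.sum(np.diff(u) ** 2)
        return tracking + 10.0 * smoothness

    res = minimize(cost, u0, bounds=[(0.0, 60.0)] * horizon)
    return res.x

refined = mpc_refine(state=[30.0, 0.6])  # 30 kW demand, SOC 0.6
```

The design point is the initialization: starting the nonlinear solver from the SAC rollout rather than zeros or a constant gives it a near-feasible, near-optimal seed, which is what makes iterating the optimization at each step tractable in real time.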

Keyword:

Batteries; Costs; Customized energy management; Degradation; Energy management; fuel cell and battery degradation; fuel cell electric vehicle; Fuel cells; model predictive control; Optimization; reinforcement learning; State of charge

Community:

  • [ 1 ] [Quan, Shengwei]Beijing Inst Technol, Natl Key Lab Adv Vehicle Integrat & Control, Beijing 100081, Peoples R China
  • [ 2 ] [He, Hongwen]Beijing Inst Technol, Natl Key Lab Adv Vehicle Integrat & Control, Beijing 100081, Peoples R China
  • [ 3 ] [Wei, Zhongbao]Beijing Inst Technol, Natl Key Lab Adv Vehicle Integrat & Control, Beijing 100081, Peoples R China
  • [ 4 ] [Chen, Jinzhou]Beijing Inst Technol, Natl Key Lab Adv Vehicle Integrat & Control, Beijing 100081, Peoples R China
  • [ 5 ] [Zhang, Zhendong]Beijing Inst Technol, Natl Key Lab Adv Vehicle Integrat & Control, Beijing 100081, Peoples R China
  • [ 6 ] [Wang, Ya-Xiong]Fuzhou Univ, Sch Mech Engn & Automat, Fuzhou 350108, Peoples R China

Reprint Address:

  • [He, Hongwen]Beijing Inst Technol, Natl Key Lab Adv Vehicle Integrat & Control, Beijing 100081, Peoples R China

Source :

IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS

ISSN: 1551-3203

Year: 2024

Issue: 12

Volume: 20

Page: 13776-13785

Impact Factor: 11.700 (JCR@2023)

Cited Count:

WoS CC Cited Count: 1

SCOPUS Cited Count: 1

ESI Highly Cited Papers on the List: 0

