Abstract:
This paper develops a model-free approach to the event-triggered optimal consensus problem of multiple Euler-Lagrange systems (MELSs) via reinforcement learning (RL). First, an augmented system is constructed by defining a pre-compensator, which removes the dependence on the system dynamics. Second, the Hamilton-Jacobi-Bellman (HJB) equations are used to derive the model-free event-triggered optimal controller. Third, a policy iteration (PI) algorithm based on RL is presented and shown to converge to the optimal policy. The value function of each agent is then approximated by a neural network to implement the PI algorithm, and the network weights are updated by gradient descent only at a series of discrete event-triggered instants. A specific event-triggering condition is proposed under which the closed-loop augmented system is guaranteed to be uniformly ultimately bounded (UUB), and Zeno behavior is excluded. Finally, the validity of the approach is verified by a simulation example.
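As context for the abstract, the policy-iteration recursion it refers to can be sketched in its standard continuous-time form. This is a generic illustration only, assuming an input-affine augmented system \(\dot{x} = f(x) + g(x)u\) and a quadratic running cost \(r(x,u) = x^{\top} Q x + u^{\top} R u\); the paper's model-free, event-triggered variant avoids explicit knowledge of \(f\) and \(g\):

\[
0 = x^{\top} Q x + u_i(x)^{\top} R\, u_i(x) + \nabla V_{i+1}(x)^{\top} \bigl( f(x) + g(x)\, u_i(x) \bigr) \quad \text{(policy evaluation)},
\]
\[
u_{i+1}(x) = -\tfrac{1}{2}\, R^{-1} g(x)^{\top} \nabla V_{i+1}(x) \quad \text{(policy improvement)}.
\]

In the event-triggered realization described above, the neural network approximating each agent's value function is updated by gradient descent only at the triggering instants rather than continuously.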
Source: IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING
ISSN: 2327-4697
Year: 2021
Issue: 1
Volume: 8
Page: 246-258
Impact Factor (JCR 2021): 5.033
Impact Factor (JCR 2023): 6.700
ESI Discipline: ENGINEERING
ESI HC Threshold: 105
JCR Journal Grade: 1
CAS Journal Grade: 1
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 23
ESI Highly Cited Papers on the List: 0