Abstract:
Reinforcement learning, as an efficient method for decision making under uncertainty in power systems, is widely used in multi-stage stochastic power dispatch and dynamic optimization. However, the low generalization ability and limited practicality of traditional reinforcement learning algorithms restrict their online application. A dispatch strategy learned offline can only adapt to specific scenarios, and its performance degrades significantly when the samples change drastically or the network topology varies. To fill these gaps, a novel contextual meta graph reinforcement learning (Meta-GRL) method and a more general contextual Markov decision process (CMDP) model are proposed. The proposed Meta-GRL adopts the CMDP scheme and a graph representation, extracts and encodes the differentiated scene context, and can be extended to various scene changes. An upper meta-learner with embedded context is proposed to realize scene recognition, while the lower base-learner is guided to learn a generalized, context-specified policy. Test results on the IEEE 39-bus system and in an open environment show that Meta-GRL achieves more than 90% optimization performance and entire-period applicability while saving computing resources. © 2024 The Authors
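The two-level structure described in the abstract (an upper meta-learner that recognizes the scene context, and a lower base-learner that applies a context-specified dispatch policy) can be sketched as follows. This is a minimal illustrative sketch only; all names (`SceneContext`, `encode_context`, `policy_bank`, `dispatch`) and the bucketing logic are assumptions for exposition, not the paper's actual graph-based architecture or API.

```python
# Hypothetical sketch of the Meta-GRL idea from the abstract:
# upper meta-learner -> scene recognition (context encoding)
# lower base-learner -> context-specified dispatch policy.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class SceneContext:
    load_level: float   # normalized system load (assumed feature)
    topology_id: int    # index of the current network topology (assumed feature)


def encode_context(scene: SceneContext) -> str:
    """Upper meta-learner (stub): map a scene to a discrete context key."""
    load_bucket = "high" if scene.load_level > 0.5 else "low"
    return f"{load_bucket}-topo{scene.topology_id}"


def make_base_policy(scale: float) -> Callable[[List[float]], List[float]]:
    """Lower base-learner (stub): a context-specific dispatch rule that
    scales generator set-points proportionally."""
    return lambda state: [scale * s for s in state]


# Context -> policy table, standing in for the learned context-conditioned policy.
policy_bank: Dict[str, Callable[[List[float]], List[float]]] = {
    "low-topo0": make_base_policy(0.8),
    "high-topo0": make_base_policy(1.2),
}


def dispatch(scene: SceneContext, state: List[float]) -> List[float]:
    """Recognize the scene, then apply the matching context-specified policy."""
    key = encode_context(scene)
    # Unseen contexts fall back to an identity policy in this toy sketch.
    policy = policy_bank.get(key, make_base_policy(1.0))
    return policy(state)
```

For example, `dispatch(SceneContext(0.7, 0), [1.0, 2.0])` routes the high-load scene to the `"high-topo0"` policy and scales the set-points by 1.2. The real method learns both levels jointly over graph-structured states rather than using a fixed lookup table.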
Source:
International Journal of Electrical Power and Energy Systems
ISSN: 0142-0615
Year: 2024
Volume: 162
Impact Factor: 5.000 (JCR@2023)
ESI Highly Cited Papers on the List: 0