Abstract:
Mobile edge computing (MEC) provides high-quality network service to mobile users. In a MEC network, the placement of edge servers affects not only the service experience of users but also the energy consumption of the MEC system. In this paper, we study the edge server placement problem in dynamic MEC scenarios, aiming to minimize the energy consumption of the MEC system in real time so that energy is saved at all times. We propose a reinforcement learning (RL) based approach called ESDR. To make RL suitable for the proposed problem and enable the agent to explore MEC scenarios more effectively, we introduce Double Q-learning and delayed policy updates to improve the original RL framework. Experiments are conducted on a real-world dataset. The results show that ESDR saves more energy with fewer edge servers than other approaches in dynamic MEC scenarios. © 2023 IEEE.
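The abstract names Double Q-learning and delayed policy updates as the two modifications made to the RL framework, but gives no implementation details. The sketch below is a minimal, hypothetical illustration of those two ideas in tabular form; the state/action sizes, hyperparameters, and the stand-in transition/reward are illustrative assumptions, not the authors' ESDR code.

```python
# Hypothetical sketch: tabular Double Q-learning with delayed policy
# updates. All sizes and hyperparameters below are assumptions.
import numpy as np

n_states, n_actions = 16, 4          # assumed toy placement problem size
alpha, gamma, epsilon = 0.1, 0.95, 0.1
policy_delay = 5                     # refresh the behavior policy every 5 steps

rng = np.random.default_rng(0)
Q_a = np.zeros((n_states, n_actions))
Q_b = np.zeros((n_states, n_actions))
policy_Q = Q_a + Q_b                 # snapshot used for action selection

def select_action(s):
    # epsilon-greedy on the (delayed) policy snapshot
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(policy_Q[s]))

def double_q_update(s, a, r, s_next):
    # Randomly pick which table to update; the greedy action comes from
    # one table and its value from the other, which reduces the
    # overestimation bias of plain Q-learning.
    if rng.random() < 0.5:
        a_star = int(np.argmax(Q_a[s_next]))
        Q_a[s, a] += alpha * (r + gamma * Q_b[s_next, a_star] - Q_a[s, a])
    else:
        b_star = int(np.argmax(Q_b[s_next]))
        Q_b[s, a] += alpha * (r + gamma * Q_a[s_next, b_star] - Q_b[s, a])

s = 0
for step in range(1, 10_001):
    a = select_action(s)
    # Stand-in transition and reward; a real agent would query a MEC
    # simulator and use, e.g., negative energy consumption as reward.
    s_next = int(rng.integers(n_states))
    r = -rng.random()
    double_q_update(s, a, r, s_next)
    if step % policy_delay == 0:     # delayed policy update
        policy_Q = Q_a + Q_b
    s = s_next
```

Delaying the policy snapshot means the value tables are updated more often than the policy that acts on them, which tends to stabilize exploration; this mirrors the delayed-update idea the abstract mentions, though the paper's exact mechanism may differ.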
Year: 2023
Page: 183-187
Language: English