Abstract:
Mobile Edge Computing (MEC) distributes resources such as computing, storage, and bandwidth to the network edge close to users, enabling low-latency services for in-vehicle users and thus promising a more efficient and safer driving environment. However, due to the dynamic number of vehicles and the variability of their resource requirements, quickly obtaining effective task-offloading decisions in large-scale vehicular scenarios is a significant challenge. Existing studies generally adopt centralized decision-making, which incurs long decision times and high computational overhead and therefore cannot achieve good offloading decisions at large scale. To address these problems, we propose a Multi-agent Collaborative Method for vehicular task offloading using Federated Deep Reinforcement Learning, called MCM-FDRL. First, each vehicle, acting as an agent, independently makes offloading decisions based on local information. Next, the offloading-decision model of each vehicle is obtained through federated reinforcement learning training. At runtime, an effective vehicle offloading plan is gradually developed through multi-agent collaboration. Experiments on two real-world datasets show that MCM-FDRL has good adaptability and scalability. Moreover, compared to state-of-the-art methods, MCM-FDRL reduces the task's average response time by 9.75%-64.90%.
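The abstract describes per-vehicle agents that train local offloading-decision models, which are then combined through federated reinforcement learning. A minimal sketch of such a FedAvg-style aggregation step is shown below; all names, model shapes, and the plain weight-averaging scheme are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch: each vehicle (agent) holds a local copy of the
# offloading-decision model's weights; a coordinator averages the
# weights layer-by-layer (FedAvg-style) to form the global model.
import numpy as np

def local_update(weights, local_grad, lr=0.01):
    """One local training step on a vehicle's own experience
    (gradient descent on hypothetical per-agent gradients)."""
    return [w - lr * g for w, g in zip(weights, local_grad)]

def federated_average(agent_weights):
    """Average each layer's weights across all agents."""
    return [np.mean(np.stack(layer_stack), axis=0)
            for layer_stack in zip(*agent_weights)]

# Toy demo: 3 agents, each with a single-layer "model".
agents = [[np.full((2, 2), float(i))] for i in range(3)]
global_model = federated_average(agents)
print(global_model[0])  # every entry is the mean of 0, 1, 2 -> 1.0
```

In a full system each agent would run a DRL update (e.g. on a Q-network) between aggregation rounds; only the averaging step is sketched here.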
Source:
IEEE Transactions on Mobile Computing
ISSN: 1536-1233
Year: 2025
Impact Factor: 7.700 (JCR@2023)