Abstract:
Vehicular Edge Computing (VEC) is a feasible solution for autonomous driving, as it can offload latency-sensitive and computation-intensive tasks from vehicle terminals to roadside units (RSUs) for real-time processing. Due to the high computational resources required by workflow applications (e.g., autonomous driving), the small vehicle-to-RSU communication range, and the scarce resources available to each vehicle, it might be difficult to accomplish complex workflow applications through the single-hop offloading paradigm in the Internet of Vehicles (IoV). In this paper, we propose a reinforcement learning (RL) based multi-hop computation offloading scheme for workflow applications to reduce their execution latency in VEC networks, which considers the data dependency in workflow applications as well as the trust relationship and communication interference among RSUs. Firstly, a preprocessing method is employed to merge the cut-edges in a workflow to reduce the offloading scale of tasks and compress the encoding dimension of offloading solutions. Then, a Deep Q-network algorithm using Phase-optimal State update (DQPS) is proposed to update the offloading policy distribution, in which the deviation of the phase-optimal state from the current state is estimated as the reward of RL to promote the algorithm's convergence and adapt to dynamic VEC networks. Simulation results show that DQPS has the best performance compared to other benchmark schemes. Moreover, the latency of identical applications can be reduced by 2.5%-25.3% through our offloading scheme with a multi-hop model compared to that with the single-hop paradigm in IoV. © 1975-2011 IEEE.
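The cut-edge merging step mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual preprocessing: it treats the workflow DAG as an undirected graph, finds its cut-edges (bridges) with Tarjan's algorithm, and unions their endpoints so that fewer super-tasks remain to encode in an offloading solution. The function names `find_bridges` and `merge_cut_edges` are hypothetical.

```python
from collections import defaultdict

def find_bridges(n, edges):
    """Tarjan's bridge-finding on the undirected view of the task graph.
    A bridge (cut-edge) is an edge whose removal disconnects the graph."""
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    disc = [0] * n          # discovery times (0 = unvisited)
    low = [0] * n           # lowest discovery time reachable
    timer = [1]
    bridges = []

    def dfs(u, parent_edge):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v, ei in adj[u]:
            if ei == parent_edge:       # don't go back along the tree edge
                continue
            if disc[v]:                 # back edge: update low-link
                low[u] = min(low[u], disc[v])
            else:
                dfs(v, ei)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:    # subtree cannot bypass (u, v)
                    bridges.append(edges[ei])

    for s in range(n):
        if not disc[s]:
            dfs(s, -1)
    return bridges

def merge_cut_edges(n, edges):
    """Union the endpoints of every cut-edge so the merged super-tasks
    become the units to offload; returns a task -> group-id mapping."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in find_bridges(n, edges):
        parent[find(u)] = find(v)
    return [find(t) for t in range(n)]
```

For example, a workflow with a chain 0→1→2 followed by a diamond 2→{3,4}→5 has cut-edges (0,1) and (1,2); merging them collapses six tasks into four super-tasks, shrinking the offloading search space. Whether merged tasks preserve a valid execution order is an additional constraint the paper's preprocessing would have to handle.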
Source:
IEEE Transactions on Consumer Electronics
ISSN: 0098-3063
Year: 2024
4.300 (JCR@2023)