Search Results

Query:

Scholar name: 陈哲毅

Total: 3 pages of results.
面向边缘车联网系统的智能服务迁移方法 (An Intelligent Service Migration Method for Edge-enabled Internet-of-Vehicles Systems)
Journal article | 2025, 37(2), 379-391 | 系统仿真学报 (Journal of System Simulation)

Abstract :

To address the degradation of Quality of Service (QoS) during vehicle movement, this paper proposes SeMiR (service migration via convex-optimization-enabled deep reinforcement learning). The optimization problem is decomposed into two sub-problems that are solved separately. For the service-migration sub-problem, a migration method based on improved deep reinforcement learning explores the optimal migration policy; for the resource-allocation sub-problem, a convex-optimization method derives the optimal resource allocation of each MEC server under a given migration decision, further improving migration performance. Experimental results show that, compared with benchmark methods, SeMiR effectively improves vehicle QoS and exhibits superior performance across various scenarios.
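
The convex resource-allocation sub-problem described in this abstract admits a closed form in simple settings. The sketch below is an illustrative assumption, not the paper's actual formulation: the name `allocate_cpu` and the delay model `sum(w_i / f_i)` are hypothetical, but minimizing that delay under a CPU budget does yield a square-root allocation by the KKT conditions.

```python
import math

def allocate_cpu(workloads, capacity):
    """Minimize sum(w_i / f_i) subject to sum(f_i) = capacity.
    The KKT conditions give f_i proportional to sqrt(w_i)."""
    norm = sum(math.sqrt(w) for w in workloads)
    return [capacity * math.sqrt(w) / norm for w in workloads]

alloc = allocate_cpu([1.0, 4.0], capacity=3.0)  # -> [1.0, 2.0]
```

For workloads [1, 4] and a budget of 3, the optimum splits 1:2 rather than evenly, which is what makes the convex step cheaper than searching over allocations.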

Keyword :

convex optimization; service migration; deep reinforcement learning; mobile edge computing; resource allocation; Internet of Vehicles

Cite:


GB/T 7714 黄思进, 文佳, 陈哲毅. 面向边缘车联网系统的智能服务迁移方法 [J]. 系统仿真学报, 2025, 37(2): 379-391.

面向大规模IoT系统的多无人机部署与协作卸载 (Multi-UAV Deployment and Collaborative Offloading for Large-scale IoT Systems)
Journal article | 2025, 37(1), 25-39 | 系统仿真学报 (Journal of System Simulation)

Abstract :

In large-scale Internet-of-Things (IoT) systems, UAV-enabled mobile edge computing (MEC) can relieve the performance limitations of end IoT devices. However, uneven device distribution and inefficient problem solving make efficient computation offloading in large-scale IoT systems highly challenging. Existing solutions usually cannot adapt to dynamic multi-UAV scenarios, leading to inefficient resource utilization and excessive response delays. To address these challenges, this paper proposes MUCO (multi-UAV deployment and collaborative offloading) for large-scale IoT systems. A UAV deployment scheme based on constrained K-Means clustering improves service coverage while keeping coverage balanced. A collaborative offloading strategy based on multi-agent reinforcement learning (MARL) splits offloading requests from IoT devices and executes them in a distributed manner, achieving efficient collaborative offloading. Extensive simulations verify the effectiveness of MUCO: compared with benchmark methods, it improves UAV deployment performance by about 23.82% and 28.13% on average in different scenarios, while achieving lower delay and energy consumption.
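
One simple way to realize the coverage balance that the constrained clustering above aims for is a capacity-bounded assignment step. This is a hedged sketch, not the paper's algorithm: the name `balanced_assign` and the per-UAV capacity model are illustrative assumptions.

```python
def balanced_assign(devices, uavs, capacity):
    """Assign each IoT device to the nearest UAV that still has spare
    capacity; serving devices in best-distance order keeps coverage
    balanced instead of overloading a single UAV."""
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    load = [0] * len(uavs)
    plan = {}
    order = sorted(range(len(devices)),
                   key=lambda i: min(d2(devices[i], u) for u in uavs))
    for i in order:
        for j in sorted(range(len(uavs)), key=lambda j: d2(devices[i], uavs[j])):
            if load[j] < capacity:
                plan[i] = j
                load[j] += 1
                break
    return plan

# Two device clusters, two UAVs, at most two devices per UAV.
plan = balanced_assign([(0, 0), (0, 1), (10, 0), (10, 1)],
                       [(0, 0), (10, 0)], capacity=2)
```

A full constrained K-Means would also re-fit the UAV positions to the assigned clusters; this shows only the balanced assignment half.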

Keyword :

K-Means clustering; multi-agent reinforcement learning; UAV deployment; Internet of Things; mobile edge computing; computation offloading

Cite:


GB/T 7714 黄智钦, 卢恬英, 陈哲毅. 面向大规模IoT系统的多无人机部署与协作卸载 [J]. 系统仿真学报, 2025, 37(1): 25-39.

Joint Computation Offloading and Resource Allocation in Multi-edge Smart Communities with Personalized Federated Deep Reinforcement Learning Scopus
Journal article | 2024, 23(12), 1-16 | IEEE Transactions on Mobile Computing
SCOPUS Cited Count: 3

Abstract :

Through deploying computing resources at the network edge, Mobile Edge Computing (MEC) alleviates the contradiction between the high requirements of intelligent mobile applications and the limited capacities of mobile End Devices (EDs) in smart communities. However, existing solutions of computation offloading and resource allocation commonly rely on prior knowledge or centralized decision-making, which cannot adapt to dynamic MEC environments with changeable system states and personalized user demands, resulting in degraded Quality-of-Service (QoS) and excessive system overheads. To address this important challenge, we propose a novel Personalized Federated deep Reinforcement learning based computation Offloading and resource Allocation method (PFR-OA). This innovative PFR-OA considers the personalized demands in smart communities when generating proper policies of computation offloading and resource allocation. To relieve the negative impact of local updates on global model convergence, we design a new proximal term to improve the manner of only optimizing local Q-value loss functions in classic reinforcement learning. Moreover, we develop a new partial-greedy based participant selection mechanism to reduce the complexity of federated aggregation while endowing sufficient exploration. Using real-world system settings and a testbed, extensive experiments demonstrate the effectiveness of the PFR-OA. Compared to benchmark methods, the PFR-OA achieves better trade-offs between delay and energy consumption and higher task execution success rates under different scenarios.
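
The proximal term described above can be sketched in the spirit of FedProx: the usual squared TD error plus a penalty that pulls local weights toward the global model. The function name and the flattened-weight representation are assumptions for illustration, not the paper's exact loss.

```python
def proximal_q_loss(td_errors, local_w, global_w, mu):
    """Mean squared TD error plus a proximal penalty (mu/2)*||w - w_g||^2
    that discourages local weights from drifting away from the global
    model during federated training."""
    q_loss = sum(e * e for e in td_errors) / len(td_errors)
    prox = 0.5 * mu * sum((l - g) ** 2 for l, g in zip(local_w, global_w))
    return q_loss + prox

loss = proximal_q_loss(td_errors=[1.0, 1.0], local_w=[1.0, 0.0],
                       global_w=[0.0, 0.0], mu=2.0)  # -> 2.0
```

Setting `mu=0` recovers the classic local-only Q-value loss that the abstract says is being improved upon.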

Keyword :

computation offloading; deep reinforcement learning; mobile edge computing; personalized federated learning; resource allocation

Cite:


GB/T 7714 Chen, Z., Xiong, B., Chen, X., Min, G., Li, J. Joint Computation Offloading and Resource Allocation in Multi-edge Smart Communities with Personalized Federated Deep Reinforcement Learning [J]. IEEE Transactions on Mobile Computing, 2024, 23(12): 1-16.

MEC环境中面向5G网络切片的计算卸载方法 (A Computation Offloading Method for 5G Network Slicing in MEC Environments)
Journal article | 2024, 45(9), 2285-2293 | 小型微型计算机系统 (Journal of Chinese Computer Systems)

Abstract :

The emergence of 5G network slicing and computation offloading promises to let mobile edge computing (MEC) systems reduce service delay while improving resource utilization, thereby better meeting diverse user demands. However, the dynamics of MEC system states and the variability of user demands make it highly challenging to combine network slicing with computation offloading effectively. Existing solutions usually rely on static network resource partitioning or prior system knowledge, cannot adapt to dynamic MEC environments, and thus incur excessive service delay and unreasonable resource provisioning. To address these challenges, this paper proposes CONS (Computation Offloading towards Network Slicing) for MEC environments. First, based on analysis of historical user requests, a gated recurrent neural network accurately predicts the number of user requests in future time slots, and network slices are adjusted dynamically according to user resource demands. Then, based on the slice partitioning results, a twin-delayed deep reinforcement learning model decides computation offloading and resource allocation; by mitigating Q-value overestimation and high variance, it effectively approximates the optimal policy in dynamic MEC environments. Extensive simulations on a real user-traffic dataset verify the feasibility and effectiveness of CONS: compared with five benchmark methods, it effectively increases the service provider's revenue and performs better across different scenarios.
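
The twin-critic target that counters Q-value overestimation (the "twin-delayed" idea this abstract refers to) fits in a few lines. The function name is illustrative; only the min-of-two-critics bootstrap is the standard technique.

```python
def clipped_double_q_target(reward, gamma, q1_next, q2_next, done=False):
    """Bellman target with twin critics: bootstrapping from
    min(Q1', Q2') counters the overestimation bias that a single
    critic suffers from."""
    bootstrap = 0.0 if done else gamma * min(q1_next, q2_next)
    return reward + bootstrap

target = clipped_double_q_target(reward=1.0, gamma=0.9,
                                 q1_next=2.0, q2_next=3.0)  # -> 2.8
```

When the two critics disagree, the pessimistic estimate (2.0 rather than 3.0) is used, which keeps the learned values from drifting upward.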

Keyword :

deep reinforcement learning; mobile edge computing; network slicing; computation offloading; resource allocation

Cite:


GB/T 7714 张俊杰, 王鹏飞, 陈哲毅, 于正欣, 苗旺. MEC环境中面向5G网络切片的计算卸载方法 [J]. 小型微型计算机系统, 2024, 45(9): 2285-2293.

Traffic-Aware Lightweight Hierarchical Offloading Toward Adaptive Slicing-Enabled SAGIN SCIE
Journal article | 2024, 42(12), 3536-3550 | IEEE Journal on Selected Areas in Communications

Abstract :

The emerging Space-Air-Ground Integrated Networks (SAGIN) empower Mobile Edge Computing (MEC) with wider communication coverage and more flexible network access. However, the fluctuating user traffic and constrained computing architecture seriously hinder the Quality-of-Service (QoS) and resource utilization in SAGIN. Existing solutions generally depend on prior knowledge or adopt static resource provisioning, lacking adaptability and resulting in serious system overheads. To address these important challenges, we propose THOAS, a novel Traffic-aware lightweight Hierarchical Offloading framework towards Adaptive Slicing-enabled SAGIN. First, we innovatively separate SAGIN into Communication Access Platforms (CAPs) and Computation Offloading Platforms (COPs). Next, we design a new self-attention-based prediction method to accurately capture the traffic changes on each platform, enabling adaptive slice resource adjustments. Finally, we develop an improved deep reinforcement learning method based on proximal clipping with dynamic confidence intervals to reach optimal offloading. Notably, we employ knowledge distillation to compress offloading policies into lightweight networks, enhancing their adaptability in resource-limited SAGIN. Using real-world datasets of user traffic, extensive experiments are conducted. The results show that the THOAS can accurately predict traffic and make adaptive resource adjustments and offloading decisions, which outperforms other benchmark methods on multiple metrics under various scenarios.
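
The self-attention-based prediction step above weighs every position in a traffic window against every other. A minimal pure-Python sketch of single-head scaled dot-product self-attention, with Q = K = V = X for simplicity (the paper's architecture is surely richer; this only shows the core operation):

```python
import math

def self_attention(X):
    """Single-head scaled dot-product self-attention over a window of
    traffic feature vectors X (list of equal-length lists)."""
    d = len(X[0])

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    out = []
    for q in X:
        scores = [dot(q, k) / math.sqrt(d) for k in X]
        peak = max(scores)
        exps = [math.exp(s - peak) for s in scores]  # stable softmax
        total = sum(exps)
        weights = [e / total for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, X))
                    for j in range(d)])
    return out

attended = self_attention([[1.0, 0.0], [0.0, 1.0]])
```

Each output row is a convex combination of the input rows, with each position attending most strongly to itself here.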

Keyword :

adaptation models; computational modeling; computation offloading; deep reinforcement learning; delays; fluctuations; heuristic algorithms; inference algorithms; model compression; quality of service; resource management; satellites; slice resource allocation; space-air-ground integrated networks

Cite:


GB/T 7714 Chen, Zheyi, Zhang, Junjie, Min, Geyong, Ning, Zhaolong, Li, Jie. Traffic-Aware Lightweight Hierarchical Offloading Toward Adaptive Slicing-Enabled SAGIN [J]. IEEE Journal on Selected Areas in Communications, 2024, 42(12): 3536-3550.

Version : Scopus and EI records of the same article (2024, 42(12), 3536-3550).
SpreadFGL: Edge-Client Collaborative Federated Graph Learning with Adaptive Neighbor Generation CPCI-S
Conference paper | 2024, 1141-1150 | IEEE INFOCOM 2024 - IEEE Conference on Computer Communications

Abstract :

Federated Graph Learning (FGL) has garnered widespread attention by enabling collaborative training on multiple clients for semi-supervised classification tasks. However, most existing FGL studies do not adequately consider the inter-client topology information that is missing in real-world scenarios, causing insufficient feature aggregation of multi-hop neighbor clients during model training. Moreover, classic FGL commonly adopts FedAvg but neglects the high training costs as the number of clients grows, overloading a single edge server. To address these important challenges, we propose a novel FGL framework, named SpreadFGL, to promote the information flow in edge-client collaboration and extract more generalized potential relationships between clients. In SpreadFGL, an adaptive graph imputation generator incorporated with a versatile assessor is first designed to exploit the potential links between subgraphs, without sharing raw data. Next, a new negative sampling mechanism is developed to make SpreadFGL concentrate on more refined information in downstream tasks. To facilitate load balancing at the edge layer, SpreadFGL follows a distributed training manner that enables fast model convergence. Using a real-world testbed and benchmark graph datasets, extensive experiments demonstrate the effectiveness of the proposed SpreadFGL. The results show that SpreadFGL achieves higher accuracy and faster convergence against state-of-the-art algorithms.
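
A negative sampling mechanism for link-level objectives, in its simplest generic form, draws node pairs that are not existing edges to contrast against positive links. This sketch is a generic illustration under assumed names, not SpreadFGL's actual mechanism:

```python
import random

def sample_negatives(edges, num_nodes, k, seed=0):
    """Draw k distinct node pairs that are NOT existing links, to be
    contrasted against positive links in a downstream loss.  Assumes
    the graph is sparse enough that k non-edges exist."""
    rng = random.Random(seed)
    seen = {frozenset(e) for e in edges}
    negatives = []
    while len(negatives) < k:
        u, v = rng.randrange(num_nodes), rng.randrange(num_nodes)
        if u != v and frozenset((u, v)) not in seen:
            negatives.append((u, v))
            seen.add(frozenset((u, v)))
    return negatives

negs = sample_negatives([(0, 1)], num_nodes=4, k=3)
```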

Keyword :

edge intelligence; federated graph learning; neighbor generation; semi-supervised learning

Cite:


GB/T 7714 Zhong, Luying, Pi, Yueyang, Chen, Zheyi, Yu, Zhengxin, Miao, Wang, Chen, Xing, et al. SpreadFGL: Edge-Client Collaborative Federated Graph Learning with Adaptive Neighbor Generation [C]. IEEE INFOCOM 2024 - IEEE Conference on Computer Communications, 2024: 1141-1150.

Version : EI (conference paper) and Scopus (Proceedings - IEEE INFOCOM) records of the same paper (2024, 1141-1150).
基于深度自回归循环神经网络的边缘负载预测 (Edge Load Prediction Based on Deep Auto-regressive Recurrent Neural Networks) CSCD PKU
Journal article | 2024, 45(2), 359-366 | 小型微型计算机系统 (Journal of Chinese Computer Systems)

Abstract :

To better support edge-computing service providers in provisioning and allocating resources in advance, load prediction is regarded as an important enabling technique in edge computing. Traditional load-prediction methods work well on loads with clear trends or regularity, but they cannot accurately predict the highly variable loads of edge environments. Moreover, these methods usually fit a model to an independent time series and produce single-point load estimates, whereas in real edge-computing scenarios the probability distribution of future loads is more useful than a point estimate. To address these problems, this paper proposes ELP-DAR (Edge Load Prediction with Deep Auto-regressive Recurrent networks). ELP-DAR trains a deep auto-regressive recurrent network on edge-load time series, integrating LSTM into a sequence-to-sequence (S2S) framework so as to directly predict all parameters of the load's probability distribution at the next time point. It can thus efficiently extract important representations of edge loads, learn complex load patterns, and achieve accurate probability-distribution predictions for highly variable edge loads. Extensive simulations on a real edge-load dataset verify and analyze the effectiveness of ELP-DAR: compared with benchmark methods, it achieves higher prediction accuracy and performs better across different prediction horizons.
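
When a network outputs distribution parameters instead of a point forecast, the natural training objective is the negative log-likelihood of the observed loads under those parameters. A minimal sketch, assuming a per-step Gaussian (the function name and the choice of Gaussian are illustrative; DeepAR-style models often use other likelihoods):

```python
import math

def gaussian_nll(loads, mu, sigma):
    """Average negative log-likelihood of observed loads under the
    predicted per-step Gaussian parameters (mu, sigma)."""
    nll = 0.0
    for y, m, s in zip(loads, mu, sigma):
        nll += 0.5 * math.log(2 * math.pi * s * s) + (y - m) ** 2 / (2 * s * s)
    return nll / len(loads)
```

Minimizing this rewards both an accurate mean and a well-calibrated spread, which is exactly what a point-value loss cannot express.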

Keyword :

recurrent neural network; probability distribution; deep auto-regression; load prediction; edge computing

Cite:


GB/T 7714 陈礼贤, 梁杰, 黄一帆, 陈哲毅, 于正欣, 陈星. 基于深度自回归循环神经网络的边缘负载预测 [J]. 小型微型计算机系统, 2024, 45(2): 359-366.

多约束边环境下计算卸载与资源分配联合优化 (Joint Optimization of Computation Offloading and Resource Allocation in Multi-constraint Edge Environments) CSCD PKU
Journal article | 2024, 45(2), 405-412 | 小型微型计算机系统 (Journal of Chinese Computer Systems)

Abstract :

Mobile edge computing (MEC) deploys computing and storage resources at the network edge, letting users offload tasks from mobile devices to nearby edge servers for a low-latency, highly reliable service experience. However, due to dynamic system states and changing user demands, computation offloading and resource allocation in MEC face great challenges. Existing solutions usually rely on prior system knowledge and cannot adapt to dynamic MEC environments under multiple constraints, incurring excessive delay and energy consumption. To address these challenges, this paper proposes JOA-RL (Joint computation Offloading and resource Allocation with deep Reinforcement Learning). For multi-user sequential tasks, JOA-RL generates suitable offloading and resource-allocation plans according to computing resources and network conditions, raising the task success rate while reducing delay and energy consumption. It also incorporates a task-priority preprocessing mechanism that assigns priorities according to task data volume and device performance. Extensive simulations verify the feasibility and effectiveness of JOA-RL: compared with benchmark methods, it strikes a better balance between delay and energy consumption under task deadline and device battery constraints, and achieves a higher task execution success rate.
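
The priority-preprocessing idea above can be illustrated with a simple urgency ratio. The field names and the data-volume-over-speed heuristic are assumptions for illustration, not the paper's actual rule:

```python
def prioritize(tasks):
    """Order tasks by data volume relative to the owning device's
    compute speed: heavy tasks on weak devices run first."""
    return sorted(tasks, key=lambda t: t["data_mb"] / t["device_speed"],
                  reverse=True)

queue = prioritize([
    {"id": "a", "data_mb": 10.0, "device_speed": 10.0},  # ratio 1
    {"id": "b", "data_mb": 10.0, "device_speed": 1.0},   # ratio 10
    {"id": "c", "data_mb": 1.0, "device_speed": 1.0},    # ratio 1
])
```

Python's stable sort keeps the original order among tasks with equal ratios, so ties are broken by arrival order.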

Keyword :

multi-constraint optimization; deep reinforcement learning; mobile edge computing; computation offloading; resource allocation

Cite:


GB/T 7714 熊兵, 张俊杰, 黄思进, 陈哲毅, 于正欣, 陈星. 多约束边环境下计算卸载与资源分配联合优化 [J]. 小型微型计算机系统, 2024, 45(2): 405-412.

Real-Time Offloading for Dependent and Parallel Tasks in Cloud-Edge Environments Using Deep Reinforcement Learning SCIE
Journal article | 2024, 35(3), 391-404 | IEEE Transactions on Parallel and Distributed Systems
WoS CC Cited Count: 9

Abstract :

As an effective technique to relieve the problem of resource constraints on mobile devices (MDs), computation offloading utilizes powerful cloud and edge resources to process the computation-intensive tasks of mobile applications uploaded from MDs. In cloud-edge computing, the resources (e.g., cloud and edge servers) that can be accessed by mobile applications may change dynamically. Meanwhile, the parallel tasks in mobile applications may lead to a huge solution space of offloading decisions. Therefore, it is challenging to determine proper offloading plans in response to such high dynamics and complexity in cloud-edge environments. Existing studies often preset the priority of parallel tasks to simplify the solution space of offloading decisions, and thus proper offloading plans cannot be found in many cases. To address this challenge, we propose a novel real-time and Dependency-aware task Offloading method with Deep Q-networks (DODQ) in cloud-edge computing. In DODQ, mobile applications are first modeled as Directed Acyclic Graphs (DAGs). Next, Deep Q-Networks (DQN) are customized to train the decision-making model of task offloading, aiming to quickly complete the decision-making process and generate new offloading plans when the environments change; this considers the parallelism of tasks without presetting task priorities when scheduling. Simulation results show that DODQ adapts well to different environments and efficiently makes offloading decisions. Moreover, DODQ outperforms the state-of-the-art methods and quickly reaches optimal/near-optimal performance.
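
Modeling the application as a DAG lets the scheduler recompute which tasks are currently runnable instead of presetting a priority order. A minimal sketch of that ready-set computation (the names `ready_tasks` and `dag` are illustrative):

```python
def ready_tasks(deps, done):
    """Return the tasks whose predecessors have all finished; the
    scheduler re-derives this set at every decision step, so no task
    priority has to be fixed in advance."""
    return sorted(t for t, pre in deps.items()
                  if t not in done and all(p in done for p in pre))

# A diamond-shaped application DAG: 1 -> {2, 3} -> 4.
dag = {1: [], 2: [1], 3: [1], 4: [2, 3]}
```

After task 1 finishes, tasks 2 and 3 become schedulable in parallel, which is exactly the parallelism a fixed priority list would obscure.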

Keyword :

cloud computing; cloud-edge computing; computational modeling; deep reinforcement learning; dependent and parallel tasks; heuristic algorithms; mobile applications; real-time offloading; real-time systems; servers; task analysis

Cite:


GB/T 7714 Chen, Xing, Hu, Shengxi, Yu, Chujia, Chen, Zheyi, Min, Geyong. Real-Time Offloading for Dependent and Parallel Tasks in Cloud-Edge Environments Using Deep Reinforcement Learning [J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(3): 391-404.

Version : EI and Scopus records of the same article (2024, 35(3), 391-404).
针对差异化设备的任务卸载方法 (A Task Offloading Method for Differentiated Devices)
Journal article | 2024, 45(8), 1816-1824 | 小型微型计算机系统 (Journal of Chinese Computer Systems)

Abstract :

In edge computing, some computation-intensive tasks are offloaded to edge servers to relieve the limited computing power and storage capacity of mobile devices. However, because devices differ in computing capability, a single offloading policy cannot fit all devices, while training a separate policy for every device cannot meet latency requirements. To address this problem, this paper proposes a task offloading method for differentiated devices based on federated deep reinforcement learning. Using the offloading experience of devices already in the environment, it combines deep Q-networks with a federated learning framework to build a global model; a personal model is then obtained by fine-tuning the global model with a small amount of experience from a new device. Extensive experiments across multiple scenarios compare the proposed method with an ideal scheme and with the Naive, global-model, and rule-based baselines. The results verify its effectiveness for the differentiated-device offloading problem: it obtains offloading plans close to the ideal scheme while incurring only short delays.
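
The global-model construction above rests on federated averaging. A minimal sketch of the FedAvg aggregation step over flattened weight vectors (the representation and names are assumptions; the fine-tuning step on a new device is not shown):

```python
def fedavg(client_weights, sample_counts):
    """FedAvg aggregation: average client weight vectors, weighted by
    how much experience each client contributed."""
    total = sum(sample_counts)
    dim = len(client_weights[0])
    return [sum(w[j] * n for w, n in zip(client_weights, sample_counts)) / total
            for j in range(dim)]

# Client 2 contributed 3x the experience, so it pulls the average harder.
global_model = fedavg([[0.0, 0.0], [2.0, 4.0]], sample_counts=[1, 3])  # -> [1.5, 3.0]
```

A new device would start from `global_model` and run a few local gradient steps on its own experience to obtain its personal model.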

Keyword :

task offloading; dependency awareness; deep reinforcement learning; federated learning; edge computing

Cite:


GB/T 7714 余楚佳, 胡晟熙, 林欣郁, 陈哲毅, 陈星. 针对差异化设备的任务卸载方法 [J]. 小型微型计算机系统, 2024, 45(8): 1816-1824.

Address: FZU Library (No.2 Xuyuan Road, Fuzhou, Fujian, PRC; Post Code: 350116) | Contact: 0591-22865326
Copyright: FZU Library | Technical Support: Beijing Aegean Software Co., Ltd. | 闽ICP备05005463号-1