Abstract:
Deep neural networks (DNNs) have become increasingly popular in industrial Internet of Things scenarios. Owing to their high computational demands, DNN-based applications are hard to run directly on intelligent end devices with limited resources. Computation offloading offers a feasible solution by moving some computation-intensive tasks to the cloud or edge. Supporting such a capability is not easy for two reasons: adaptability (offloading should occur dynamically among computation nodes) and effectiveness (it must be determined which parts are worth offloading). This article proposes a novel approach, called DNNOff. For a given DNN-based application, DNNOff first rewrites the source code to implement a special program structure supporting on-demand offloading and, at runtime, automatically determines the offloading scheme. We evaluated DNNOff on a real-world intelligent application with three DNN models. Our results show that, compared with other approaches, DNNOff reduces response time by 12.4%-66.6% on average.
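The partition decision described in the abstract (which layers to keep on the device and which to offload) can be sketched as a simple latency-minimization search over cut points. This is an illustrative sketch only, not DNNOff's actual algorithm; the function name and all per-layer timing numbers are hypothetical:

```python
# Illustrative sketch (not DNNOff's algorithm): pick an offloading point
# for a chain-structured DNN. Layers before the cut run on the device;
# the intermediate activation is uploaded, and the remaining layers run
# on the edge/cloud server.

def best_partition(local_ms, remote_ms, upload_ms):
    """Return (partition index, latency) minimizing end-to-end time.

    A partition k means layers [0, k) run locally and layers [k, n) run
    remotely; upload_ms[k] is the cost of shipping the activation fed
    into layer k (upload_ms[0] = raw input upload, upload_ms[n] = 0,
    i.e. fully local execution).
    """
    n = len(local_ms)
    best_k, best_t = 0, float("inf")
    for k in range(n + 1):
        t = sum(local_ms[:k]) + upload_ms[k] + sum(remote_ms[k:])
        if t < best_t:
            best_k, best_t = k, t
    return best_k, best_t

# Hypothetical profile of a 4-layer model (milliseconds).
local = [30.0, 40.0, 25.0, 20.0]      # per-layer time on the device
remote = [5.0, 6.0, 4.0, 3.0]         # per-layer time on the edge server
upload = [50.0, 12.0, 8.0, 6.0, 0.0]  # activation transfer cost per cut

k, t = best_partition(local, remote, upload)
print(k, t)  # cutting after layer 0 (k=1) minimizes total latency here
```

In this toy profile the first layer's large raw-input upload cost makes a partial offload (run layer 0 locally, ship its smaller activation) cheaper than either fully local or fully remote execution, which is the kind of trade-off an on-demand offloading scheme must weigh at runtime.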
Source:
IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS
ISSN: 1551-3203
Year: 2022
Issue: 4
Volume: 18
Page: 2820-2829
12.3 (JCR@2022)
11.700 (JCR@2023)
ESI Discipline: ENGINEERING
ESI HC Threshold: 66
JCR Journal Grade: 1
CAS Journal Grade: 1
Cited Count:
WoS CC Cited Count: 104
SCOPUS Cited Count: 114
ESI Highly Cited Papers on the List: 15