An adaptive DNN inference acceleration framework with end–edge–cloud collaborative computing

Liu, Guozhi and Dai, Fei and Xu, Xiaolong and Fu, Xiaodong and Dou, Wanchun and Kumar, Neeraj and Bilal, Muhammad (2023) An adaptive DNN inference acceleration framework with end–edge–cloud collaborative computing. Future Generation Computer Systems, 140. pp. 422-435. ISSN 0167-739X

Full text not available from this repository.

Abstract

Intelligent applications based on Deep Neural Networks (DNNs) have been intensively deployed on mobile devices. Unfortunately, resource-constrained mobile devices cannot meet the stringent latency requirements of these applications due to the large amount of computation they require. Existing cloud-assisted and edge-assisted DNN inference approaches can reduce end-to-end inference latency by offloading DNN computations to the cloud server or to edge servers, but they suffer either from unpredictable communication latency caused by massive data transmission over long wide-area links, or from performance degradation caused by limited computation resources. In this paper, we propose an adaptive DNN inference acceleration framework that accelerates DNN inference by fully utilizing end–edge–cloud collaborative computing. First, a latency prediction model is built to estimate the layer-wise execution latency of a DNN on different heterogeneous computing platforms; it uses neural networks to learn non-linear features related to inference latency. Second, a computation partitioning algorithm is designed to identify two optimal partitioning points, which adaptively divide DNN computations among end devices, edge servers, and the cloud server to minimize DNN inference latency. Finally, we conduct extensive experiments on three widely adopted DNNs. The experimental results show that our latency prediction models improve prediction accuracy by about 72.31% on average compared with four baseline approaches, and our computation partitioning approach reduces end-to-end latency by about 20.81% on average against six baseline approaches under three wireless networks.
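The abstract names two components: a learned layer-wise latency predictor and a two-point computation partitioning search. The two Python sketches below illustrate what such components could look like; they are not the paper's implementation, and all feature choices, function names, and numbers in them are hypothetical.

First, a minimal latency predictor. The abstract says the model uses neural networks to learn non-linear features related to inference latency; a simple stand-in is a small scikit-learn MLP regressor over a hypothetical per-layer feature vector (layer-type flags, FLOPs, input/output sizes), with one such model trained per target platform.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical per-layer features measured on one platform:
    # [is_conv, is_fc, is_pool, FLOPs, input_elems, output_elems]
    X = np.array([[1, 0, 0, 1.2e8, 1.5e5, 8.0e4],
                  [0, 1, 0, 4.1e6, 4.1e3, 1.0e3],
                  [0, 0, 1, 3.0e5, 8.0e4, 2.0e4]])  # toy values
    y = np.array([5.2, 0.4, 0.1])                   # measured latency (ms)

    predictor = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(64, 64),
                                           max_iter=5000, random_state=0))
    predictor.fit(X, y)              # train one such model per platform
    print(predictor.predict(X[:1]))  # predicted latency for the first layer

Second, the two-point partitioning. Given predicted per-layer latencies on each platform and the data volume crossing each candidate cut, a brute-force search over all (i, j) cut pairs finds the split with the smallest estimated end-to-end latency. The paper's actual algorithm is not described in the abstract; this exhaustive O(n^2) enumeration is only a baseline sketch, assuming data is relayed end -> edge -> cloud.

    def best_two_point_partition(t_end, t_edge, t_cloud, cut_bytes,
                                 bw_end_edge, bw_edge_cloud):
        """Layers [0, i) run on the end device, [i, j) on the edge server,
        and [j, n) on the cloud. cut_bytes[k] is the number of bytes that
        must cross a cut placed just before layer k (cut_bytes[0] is the
        raw input size); bandwidths are in bytes per second."""
        n = len(t_end)
        best_lat, best_cuts = float("inf"), (n, n)
        for i in range(n + 1):           # end/edge cut position
            for j in range(i, n + 1):    # edge/cloud cut position
                lat = sum(t_end[:i]) + sum(t_edge[i:j]) + sum(t_cloud[j:])
                if i < n:                # some layers run beyond the device
                    lat += cut_bytes[i] / bw_end_edge
                if j < n:                # some layers run on the cloud
                    lat += cut_bytes[j] / bw_edge_cloud
                if lat < best_lat:
                    best_lat, best_cuts = lat, (i, j)
        return best_cuts, best_lat

    # Toy four-layer example with illustrative numbers only (seconds, bytes).
    t_end = [0.030, 0.050, 0.080, 0.040]
    t_edge = [0.010, 0.015, 0.025, 0.012]
    t_cloud = [0.004, 0.006, 0.010, 0.005]
    cut_bytes = [600e3, 300e3, 150e3, 80e3]   # bytes crossing each cut
    (i, j), lat = best_two_point_partition(t_end, t_edge, t_cloud, cut_bytes,
                                           5e6, 20e6)
    print(f"device: layers [0,{i}), edge: [{i},{j}), cloud: rest; "
          f"~{lat * 1000:.1f} ms")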

Item Type:
Journal Article
Journal or Publication Title:
Future Generation Computer Systems
Subjects:
Deep Neural Networks; DNN Computation Partitioning; DNN Inference Acceleration; End–Edge–Cloud Collaboration; Latency Prediction Model; Software; Hardware and Architecture; Computer Networks and Communications
ID Code:
204908
Deposited By:
Deposited On:
25 Sep 2023 15:30
Refereed?:
Yes
Published?:
Published
Last Modified:
25 Sep 2023 15:30