Conference paper Open Access

Dynamic Split Computing for Efficient Deep Edge Intelligence

Arian Bakhtiarnia; Nemanja Milosevic; Qi Zhang; Dragana Bajovic; Alexandros Iosifidis

Deploying deep neural networks (DNNs) on IoT and mobile devices is challenging due to their limited computational resources. Consequently, demanding tasks are often offloaded entirely to edge servers, which can accelerate inference but also incurs communication cost and raises privacy concerns. In addition, this approach leaves the computational capacity of end devices unused. Split computing is a paradigm in which a DNN is split into two sections: the first section is executed on the end device, and its output is transmitted to the edge server, where the second section is executed. Here, we introduce dynamic split computing, where the optimal split location is selected dynamically based on the state of the communication channel. By exploiting natural bottlenecks that already exist in modern DNN architectures, dynamic split computing avoids retraining and hyperparameter optimization and has no negative impact on the final accuracy of the DNN. Through extensive experiments, we show that dynamic split computing achieves faster inference in edge computing environments where the data rate and server load vary over time.
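To make the idea concrete, the sketch below illustrates one way a dynamic split point could be chosen: the outputs of a network's residual stages serve as natural bottleneck candidates, and the split that minimizes estimated device compute + transmission + server compute latency is selected for the current data rate. This is only an illustration under assumptions (a torchvision ResNet-50, float32 activations without compression, and hypothetical per-split timing profiles), not the authors' implementation.

```python
# Minimal sketch of dynamic split point selection, assuming a torchvision
# ResNet-50 and illustrative timing/data-rate numbers; not the paper's code.
import torch
from torchvision.models import resnet50

model = resnet50(weights=None).eval()

# Treat the outputs of the residual stages as natural bottlenecks, i.e.
# candidate split points where the activation tensor is relatively compact.
stages = torch.nn.Sequential(
    model.conv1, model.bn1, model.relu, model.maxpool,
    model.layer1, model.layer2, model.layer3, model.layer4,
)
CANDIDATE_SPLITS = [5, 6, 7, 8]  # prefix lengths ending after layer1..layer4

def transmitted_bits(split: int, x: torch.Tensor) -> int:
    """Bits the end device would send to the edge server for this split."""
    with torch.no_grad():
        h = stages[:split](x)
    return h.numel() * 32  # float32 activations, no compression assumed

def best_split(x, data_rate_bps, device_time_s, server_time_s):
    """Pick the split minimizing device compute + transmission + server compute.

    `device_time_s` and `server_time_s` map each candidate split to assumed
    (profiled) execution times of the head on the device and the tail on the
    server, respectively.
    """
    def total_latency(split: int) -> float:
        return (device_time_s[split]
                + transmitted_bits(split, x) / data_rate_bps
                + server_time_s[split])
    return min(CANDIDATE_SPLITS, key=total_latency)

if __name__ == "__main__":
    x = torch.randn(1, 3, 224, 224)
    # Hypothetical profiling results and a 20 Mbit/s uplink.
    device_times = {5: 0.010, 6: 0.025, 7: 0.045, 8: 0.070}
    server_times = {5: 0.030, 6: 0.020, 7: 0.010, 8: 0.005}
    print("chosen split:", best_split(x, 20e6, device_times, server_times))
```

As the data rate drops, the transmission term dominates and the selection shifts toward later splits with smaller activation tensors; when the channel is fast, earlier splits that offload more work to the server win. Re-evaluating this choice as channel conditions change is what makes the split "dynamic".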


This work received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 957337, and from the Danish Council for Independent Research under Grant No. 9131-00119B.
Files: Dynamic_Split_Computing.pdf (970.6 kB)
