Published July 12, 2021 | Version v1
Preprint | Open Access

On the Resource Consumption of Distributed ML

Description

The convergence of Machine Learning (ML) with the edge computing paradigm has paved the way for distributing processing-heavy ML tasks to the network's extremes. As the details of edge deployments remain an open issue, distributed ML schemes tend to be network-agnostic; thus, their effect on the underlying network's resource consumption is largely ignored. In our work, assuming a network tree structure of varying size and edge computing characteristics, we introduce an analytical system model based on credible real-world measurements to capture the end-to-end resource consumption of ML schemes. In this context, we employ an edge-based (EL) and a federated (FL) ML scheme and compare their bandwidth needs and energy footprint in depth against a cloud-based (CL) baseline approach. Our numerical evaluation suggests that EL is at least 25% more bandwidth-efficient than CL and FL when employed a few levels higher in the edge network, while halving the network's energy costs.
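To give a feel for why the placement of the learning task in the tree drives bandwidth consumption, the following is a minimal back-of-the-envelope sketch, not the paper's analytical model: it counts per-round traffic on a hypothetical k-ary tree, with illustrative payload sizes and function names (`cl_traffic`, `fl_traffic`, `el_traffic`) invented for this example.

```python
# Toy per-round traffic comparison for cloud-based (CL), federated (FL),
# and edge-based (EL) learning on a k-ary tree network.
# All parameters below are illustrative assumptions, not measurements
# or results from the paper.

def tree_link_traversals(depth: int, fanout: int, stop_level: int) -> int:
    """Total link hops for leaf-originated traffic aggregated at
    `stop_level` (0 = cloud/root, `depth` = leaves)."""
    leaves = fanout ** depth
    return leaves * (depth - stop_level)

def cl_traffic(raw_bytes: float, depth: int, fanout: int) -> float:
    # CL: every leaf ships its raw samples all the way to the cloud.
    return raw_bytes * tree_link_traversals(depth, fanout, 0)

def fl_traffic(model_bytes: float, depth: int, fanout: int) -> float:
    # FL: each leaf uploads a model update and receives the aggregated
    # global model back, i.e. two traversals of the full path per round.
    return 2 * model_bytes * tree_link_traversals(depth, fanout, 0)

def el_traffic(raw_bytes: float, depth: int, fanout: int,
               edge_level: int) -> float:
    # EL: raw data travels only up to the edge node hosting the learning
    # task; no per-round exchange with the cloud.
    return raw_bytes * tree_link_traversals(depth, fanout, edge_level)

if __name__ == "__main__":
    DEPTH, FANOUT = 4, 3        # hypothetical tree: 3^4 = 81 leaves
    RAW, MODEL = 5e6, 20e6      # bytes per leaf per round (illustrative)
    print(f"CL: {cl_traffic(RAW, DEPTH, FANOUT) / 1e9:.2f} GB/round")
    print(f"FL: {fl_traffic(MODEL, DEPTH, FANOUT) / 1e9:.2f} GB/round")
    for lvl in (1, 2, 3):
        gb = el_traffic(RAW, DEPTH, FANOUT, lvl) / 1e9
        print(f"EL at level {lvl}: {gb:.2f} GB/round")
```

Under these assumed numbers, moving the EL aggregation point even one level up from the leaves cuts the hop count, and hence the traffic, proportionally; the paper's model additionally grounds such trade-offs in real-world measurements and extends them to energy costs.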

Files (633.2 kB)

Distributed_Learning_Analytical.pdf (633.2 kB)
md5:217f616057d4857e9f8096081b9209a3

Additional details

Funding

5G-IANA – 5G Intelligent Automotive Network Applications (Grant agreement No. 101016427)
European Commission