Journal article Open Access

Federated Learning at the Network Edge: When Not All Nodes are Created Equal

Francesco Malandrino; Carla Fabiana Chiasserini

Under the federated learning paradigm, a set of nodes cooperatively trains a machine learning model with the help of a centralized server. The server is also tasked with assigning a weight to the information received from each node, and often with dropping too-slow nodes from the learning process. Both decisions have a major impact on the resulting learning performance and can interfere with each other in counter-intuitive ways. In this paper, we focus on edge networking scenarios and investigate existing and novel approaches to such model-weighting and node-dropping decisions. Leveraging a set of real-world experiments, we find that popular, straightforward decision-making approaches may yield poor performance, and that considering the quality of data in addition to its quantity can substantially improve learning.
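The two server-side decisions described in the abstract can be illustrated with a minimal sketch. This is not the authors' method, only a FedAvg-style weighted aggregation where, as an assumption for illustration, each node's weight reflects both its data quantity and a hypothetical per-node data-quality score, plus a simple deadline-based straggler-dropping rule:

```python
import numpy as np

def aggregate(updates, n_samples, quality=None):
    """Server-side weighted combination of client parameter vectors.

    updates:   list of np.ndarray, one parameter vector per node
    n_samples: list of int, local dataset sizes (data quantity)
    quality:   optional list of floats in (0, 1]; hypothetical
               data-quality scores, not defined in the abstract
    """
    w = np.asarray(n_samples, dtype=float)
    if quality is not None:
        w = w * np.asarray(quality, dtype=float)  # quantity x quality
    w = w / w.sum()  # normalize weights to sum to 1
    return sum(wi * ui for wi, ui in zip(w, updates))

def keep_within_deadline(latencies, deadline):
    """Indices of nodes whose update arrived within the round deadline;
    the rest are dropped as too slow."""
    return [i for i, t in enumerate(latencies) if t <= deadline]
```

For example, with two nodes holding 3 and 1 samples, `aggregate` weights their updates 0.75 and 0.25; the paper's point is that such straightforward quantity-only weighting and hard deadline dropping can interact poorly, which motivates including data quality in the weights.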

Files (244.7 kB)
                  All versions   This version
Views                      207            207
Downloads                   77             77
Data volume            18.8 MB        18.8 MB
Unique views               172            172
Unique downloads            72             72
