Journal article Open Access
Francesco Malandrino; Carla Fabiana Chiasserini
Under the federated learning paradigm, a set of nodes cooperatively trains a machine learning model with the help of a centralized server. Such a server is also tasked with assigning a weight to the information received from each node, and often with dropping nodes that are too slow from the learning process. Both decisions have a major impact on the resulting learning performance and can interfere with each other in counter-intuitive ways. In this paper, we focus on edge networking scenarios and investigate existing and novel approaches to such model-weighting and node-dropping decisions. Leveraging a set of real-world experiments, we find that popular, straightforward decision-making approaches may yield poor performance, and that considering the quality of data in addition to its quantity can substantially improve learning.
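The two server-side decisions the abstract refers to can be sketched as follows. This is a minimal, illustrative example, not the paper's actual method: it assumes a FedAvg-style scheme in which each surviving node's update is weighted by its number of training samples, and nodes whose reported latency exceeds a deadline are dropped. All names (`federated_round`, the `latency` and `num_samples` fields) are hypothetical.

```python
def aggregate(updates, weights):
    """Weighted average of node updates (each update a list of floats)."""
    total = sum(weights)
    dim = len(updates[0])
    return [sum(w * u[i] for u, w in zip(updates, weights)) / total
            for i in range(dim)]

def federated_round(nodes, deadline):
    """One round: drop nodes slower than the deadline (node-dropping),
    then weight the survivors by their sample counts (model-weighting)."""
    survivors = [n for n in nodes if n["latency"] <= deadline]
    updates = [n["update"] for n in survivors]
    weights = [n["num_samples"] for n in survivors]
    return aggregate(updates, weights)

# Example: the slow third node is dropped; the other two are
# averaged with weights proportional to their sample counts.
nodes = [
    {"update": [1.0, 2.0], "num_samples": 10, "latency": 0.5},
    {"update": [3.0, 4.0], "num_samples": 30, "latency": 0.8},
    {"update": [9.0, 9.0], "num_samples": 50, "latency": 5.0},  # too slow
]
result = federated_round(nodes, deadline=1.0)  # → [2.5, 3.5]
```

Note how weighting purely by sample count (quantity) ignores data quality, and how a tight deadline silently discards the slow node's data entirely: the two knobs interact, which is the tension the paper examines.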
Name | Size
---|---
cameraready.pdf (md5:a3ccdcfe8e5fa9089e424e51613d5098) | 244.7 kB