Federated Learning at the Network Edge: When Not All Nodes are Created Equal
Description
Under the federated learning paradigm, a set of nodes can cooperatively train a machine learning model with the help of a centralized server. Such a server is also tasked with assigning a weight to the information received from each node, and often with dropping too-slow nodes from the learning process. Both decisions have a major impact on the resulting learning performance, and can interfere with each other in counter-intuitive ways. In this paper, we focus on edge networking scenarios and investigate existing and novel approaches to such model-weighting and node-dropping decisions. Leveraging a set of real-world experiments, we find that popular, straightforward decision-making approaches may yield poor performance, and that considering the quality of data in addition to its quantity can substantially improve learning.
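The two server-side decisions described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the paper's actual method): nodes slower than a deadline are dropped, and the surviving updates are averaged with weights that combine data quantity with a simple quality proxy (inverse local validation loss). All names and the specific weighting rule are assumptions for illustration only.

```python
import numpy as np

def aggregate(updates, n_samples, losses, rtts, deadline=1.0):
    """Hypothetical server-side aggregation step.

    updates   : list of model-parameter vectors received from nodes
    n_samples : local dataset sizes (data quantity)
    losses    : local validation losses (a proxy for data quality)
    rtts      : per-node response times; nodes above `deadline` are dropped
    """
    # Node-dropping decision: keep only nodes that met the deadline.
    kept = [i for i, t in enumerate(rtts) if t <= deadline]
    # Model-weighting decision: weight each kept node by quantity
    # and (inverse) loss, so nodes with more and better data count more.
    w = np.array([n_samples[i] / (1e-9 + losses[i]) for i in kept])
    w /= w.sum()
    # Weighted average of the surviving updates.
    return sum(wi * updates[i] for wi, i in zip(w, kept))
```

With equal sample counts and losses, the rule reduces to plain averaging over the non-dropped nodes; the interplay the abstract highlights arises because dropping a node also changes the relative weights of all the others.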
Files
cameraready.pdf (244.7 kB)
md5:a3ccdcfe8e5fa9089e424e51613d5098