Published September 2024 | Version v1

A deep cut into Split Federated Self-supervised Learning

Description

Collaborative self-supervised learning has recently become feasible in highly distributed environments by dividing the network layers between client devices and a central server. However, state-of-the-art methods, such as MocoSFL, are optimized for network division at the initial layers, which decreases the protection of the client data and increases communication overhead. In this paper, we demonstrate that the splitting depth is crucial for maintaining privacy and communication efficiency in distributed training. We also show that MocoSFL suffers from catastrophic quality deterioration when the communication overhead is minimized. As a remedy, we introduce Momentum-Aligned contrastive Split Federated Learning (MonAcoSFL), which keeps the online and momentum client models aligned during the training procedure. Consequently, we achieve state-of-the-art accuracy while significantly reducing the communication overhead, making MonAcoSFL more practical in real-world scenarios.
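The core idea of momentum alignment can be illustrated with a minimal sketch. The snippet below is a schematic, not the paper's implementation: it assumes the client-side layers below the split point can be represented as a flat weight list, and uses a hypothetical `ema_align` helper to show how a momentum client model could track its online counterpart via an exponential moving average with coefficient `tau`.

```python
import copy

def ema_align(momentum_weights, online_weights, tau=0.99):
    # Exponential-moving-average step: the momentum client model tracks
    # the online client model, keeping the two sub-networks aligned.
    return [tau * m + (1.0 - tau) * o
            for m, o in zip(momentum_weights, online_weights)]

# Toy flat weight vector for the client-side layers below a shallow split.
online = [0.5, -1.2, 0.3]
momentum = copy.deepcopy(online)

# Simulate a local training step drifting the online weights, then re-align.
online = [w + 0.1 for w in online]
momentum = ema_align(momentum, online, tau=0.9)
```

With `tau` close to 1, the momentum weights change slowly and smoothly, which is the usual rationale for a momentum encoder in contrastive methods such as MoCo.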

Files (2.7 MB)

A deep cut into Split Federated Self-supervised Learning.pdf

Additional details

Funding

ELIAS – European Lighthouse of AI for Sustainability 101120237
European Commission

Dates

Accepted
2024-09