PFSL
Authors/Creators
1. IIT Bhilai
2. Google
Description
The traditional federated-learning framework requires each client to re-train its model in every iteration, making it infeasible for resource-constrained mobile devices to train deep-learning (DL) models. Split learning (SL) offers an alternative: a centralized server takes over the activation computation and back-propagation for a subset of the model's layers. However, SL suffers from slow convergence and lower accuracy.
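To make the offloading concrete, below is a minimal, self-contained sketch of one split-learning training step in PyTorch. It illustrates the general SL idea only and is not code from this repository; the layer shapes and the function name `split_training_step` are invented for the example.

```python
import torch
import torch.nn as nn

# Hypothetical illustration of one split-learning step: the client runs only
# the first few layers; the server runs the remaining layers and returns the
# gradient at the cut layer so the client can finish back-propagation.

client_layers = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
server_layers = nn.Sequential(nn.Flatten(), nn.Linear(16 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

def split_training_step(x, y):
    # Client side: forward pass through its shallow sub-model.
    activations = client_layers(x)

    # "Send" the detached activations to the server (same process here).
    server_in = activations.detach().requires_grad_(True)

    # Server side: finish the forward pass and back-propagate its layers.
    loss = loss_fn(server_layers(server_in), y)
    loss.backward()

    # The server "returns" the cut-layer gradient; the client completes
    # back-propagation through its own layers.
    activations.backward(server_in.grad)
    return loss.item()

loss = split_training_step(torch.randn(8, 3, 32, 32),
                           torch.randint(0, 10, (8,)))
```

Because only the cut-layer activations and their gradients cross the client-server boundary, the client's raw data never leaves the device.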
In this paper, we implement PFSL, a new framework of distributed split learning where a large number of thin clients perform transfer learning in parallel, starting with a pre-trained DL model without sharing their data or labels with a central server. We implement a lightweight step of personalization of client models to provide high performance for their respective data distributions. Furthermore, we evaluate performance fairness amongst clients under a work fairness constraint for various scenarios of non-i.i.d. data distributions and unequal sample sizes. Our accuracy far exceeds that of current SL algorithms and is very close to that of centralized learning on several real-life benchmarks. It has a very low computation cost and promises to deliver the full benefits of DL to extremely thin, resource-constrained clients.
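One way to picture the lightweight personalization step is to freeze the shared (generalized) part of the model after joint training and briefly fine-tune each client's small local portion on its own data. The sketch below is a hypothetical illustration under that reading; `shared_backbone`, `personalize`, and the dummy `client_loaders` are made up for the example and do not come from the PFSL code.

```python
import torch
import torch.nn as nn

# Hypothetical personalization sketch: the common backbone is frozen and each
# client fine-tunes only its own small head on its local data distribution.

shared_backbone = nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # pre-trained, shared
for p in shared_backbone.parameters():
    p.requires_grad = False  # generalized part stays fixed

def personalize(client_head, client_loader, epochs=1):
    opt = torch.optim.SGD(client_head.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in client_loader:
            opt.zero_grad()
            loss = loss_fn(client_head(shared_backbone(x)), y)
            loss.backward()  # gradients flow only into the local head
            opt.step()
    return client_head

# Dummy per-client data standing in for real, non-i.i.d. local datasets.
client_loaders = [
    [(torch.randn(8, 64), torch.randint(0, 10, (8,))) for _ in range(4)]
    for _ in range(3)
]

# Each client keeps its own head, personalized to its data distribution.
heads = [personalize(nn.Linear(32, 10), loader) for loader in client_loaders]
```

Keeping the fine-tuned portion small is what makes the step cheap enough for thin clients while still adapting each model to its local distribution.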
Files
| Name | Size | MD5 |
|---|---|---|
| PFSL-main (1).zip | 166.0 kB | e1aa32547d25318fcead7c18ade6cbb4 |