FLASH: A Framework for Federated Learning with Attribute Selection and Hyperparameter Optimization
Description
Federated Learning (FL) has emerged as a promising paradigm for training machine learning (ML) models on decentralized data while preserving user privacy. However, key ML optimization techniques, such as Feature Selection (FS) and Hyperparameter Optimization (HPO), are often overlooked in existing FL implementations. In this paper, we introduce FLASH, a novel framework that integrates established ML optimization methods into FL workflows. FLASH aims to reduce input data noise, enhance model accuracy, and lower model complexity. The framework enables conventional FS algorithms to work in a federated setting while maintaining the privacy of each client's data. Additionally, it supports federated HPO through collaborative parameter exploration and evaluation across clients. Experimental results demonstrate that incorporating model optimization and data noise reduction into FL can significantly improve model performance, while greatly reducing model parameter size. The FLASH source code will be publicly available upon acceptance of this paper.
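The description above says FLASH lets conventional FS algorithms operate in a federated setting without exposing client data. The record itself contains no code, but a minimal Python sketch of one common pattern for this, assuming a score-then-aggregate protocol in which only per-feature relevance scores (never raw data) leave each client, is shown below. The function names, the mutual-information scorer, and the size-weighted averaging rule are illustrative assumptions, not FLASH's actual method.

```python
# Hypothetical sketch of federated feature selection: each client scores
# features on its private data and the server aggregates the scores to
# pick a shared top-k feature subset. Not taken from the FLASH paper.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def client_feature_scores(X, y):
    """Compute per-feature relevance locally; raw (X, y) never leaves the client."""
    return mutual_info_classif(X, y, random_state=0)

def server_select_features(client_scores, client_sizes, k):
    """Weight each client's score vector by its sample count, keep the top-k features."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    global_scores = np.average(np.vstack(client_scores), axis=0, weights=weights)
    return np.argsort(global_scores)[::-1][:k]

# Toy run with three synthetic "clients" (data and flow are illustrative only).
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(200, 10)), rng.integers(0, 2, 200)) for _ in range(3)]
scores = [client_feature_scores(X, y) for X, y in clients]
sizes = [len(y) for _, y in clients]
print("selected features:", server_select_features(scores, sizes, k=4))
```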
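The abstract's "collaborative parameter exploration and evaluation across clients" could likewise be realized as a broadcast-evaluate-aggregate loop: the server proposes candidate configurations, clients score them on local validation data, and only the scores are pooled. The sketch below assumes a fixed candidate grid and local cross-validation; the search space, model, and aggregation rule are hypothetical and not drawn from the paper.

```python
# Hypothetical sketch of federated HPO: server broadcasts candidates, clients
# evaluate them privately, server aggregates scores and picks a winner.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

CANDIDATES = [{"C": c} for c in (0.01, 0.1, 1.0, 10.0)]  # example search space

def client_evaluate(X, y, candidates):
    """Score every candidate config via local cross-validation; only scores are shared."""
    return [cross_val_score(LogisticRegression(**p, max_iter=500), X, y, cv=3).mean()
            for p in candidates]

def server_pick(all_scores, client_sizes):
    """Average client scores, weighted by data volume, and return the best config."""
    w = np.asarray(client_sizes, dtype=float)
    w /= w.sum()
    mean_scores = np.average(np.vstack(all_scores), axis=0, weights=w)
    return CANDIDATES[int(np.argmax(mean_scores))]

# Toy run with synthetic clients; in a real deployment each call runs on-device.
rng = np.random.default_rng(1)
clients = [(rng.normal(size=(150, 5)), rng.integers(0, 2, 150)) for _ in range(3)]
scores = [client_evaluate(X, y, CANDIDATES) for X, y in clients]
print("chosen config:", server_pick(scores, [len(y) for _, y in clients]))
```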
Files

| Name | Size | MD5 |
|---|---|---|
| FLASH_preprint.pdf | 452.4 kB | 6f3e11c3ca46f1caf9856c2dac49ef39 |