Published April 20, 2021 | Version v1
Preprint · Open Access

Federated Word2Vec: Leveraging Federated Learning to Encourage Collaborative Representation Learning

  • 1. KTH Royal Institute of Technology
  • 2. RISE Research Institutes of Sweden

Description

Large-scale contextual representation models have significantly advanced NLP in recent years, capturing the semantics of text to a degree not seen before. However, they must process large amounts of data to achieve high-quality results, and joining and accessing such data from multiple sources can be extremely challenging for privacy and regulatory reasons. Federated Learning can overcome these limitations by training models in a distributed fashion, taking advantage of the hardware of the devices that generate the data. We show the viability of training NLP models, specifically Word2Vec, with the Federated Learning protocol. In particular, we focus on a scenario in which a small number of organizations each hold a relatively large corpus. The results show that neither the quality of the results nor the convergence time of Federated Word2Vec deteriorates compared with centralised Word2Vec.
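The aggregation step described above can be sketched with a minimal federated-averaging (FedAvg) round over Word2Vec embedding matrices. This is an illustrative sketch, not the paper's implementation: the corpus sizes, matrix dimensions, and the stand-in for local skip-gram training are all assumptions.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of client parameters (FedAvg-style aggregation)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Hypothetical setup: 3 organisations, each holding a local copy of the
# Word2Vec embedding matrix (vocab_size x dim). All values are illustrative.
rng = np.random.default_rng(0)
vocab_size, dim = 100, 16
global_emb = rng.normal(size=(vocab_size, dim))

corpus_sizes = [50_000, 80_000, 120_000]  # tokens per organisation (made up)
local_embs = []
for n in corpus_sizes:
    # Each client starts from the global model and applies a local update
    # (a stand-in for one epoch of skip-gram training on its own corpus).
    local_update = rng.normal(scale=0.01, size=(vocab_size, dim))
    local_embs.append(global_emb + local_update)

# The server aggregates the local models, weighted by corpus size,
# producing the next global embedding matrix.
global_emb = federated_average(local_embs, corpus_sizes)
print(global_emb.shape)  # (100, 16)
```

Weighting by corpus size means the organisation with the largest corpus contributes most to the aggregated embeddings, matching the paper's scenario of a few clients with relatively large corpora.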

Files

FL_W2V_paper_edited.pdf (314.8 kB, md5:10326c553e24c018f6e253b50e9f9202)

Additional details

Funding

European Commission
RAIS: Real-time Analytics for the Internet of Sports (Grant 813162)