Thesis Open Access

Deep Learning at Scale with Nearest Neighbours Communications

Viviani, Paolo

Thesis supervisor(s)

Aldinucci, Marco

As deep learning techniques become increasingly popular, there is a growing need to move these applications from the data scientist’s Jupyter notebook to efficient and reliable enterprise solutions. Moreover, distributed training of deep learning models will increasingly take place outside the well-known borders of cloud and HPC infrastructure, moving to edge and mobile platforms. Current techniques for distributed deep learning have drawbacks in both these scenarios, limiting their long-term applicability.

After a critical review of the established techniques for data-parallel training from both a distributed computing and a deep learning perspective, a novel approach based on nearest-neighbour communications is presented to overcome some of the issues of mainstream approaches, such as global communication patterns. Moreover, to validate the proposed strategy, the Flexible Asynchronous Scalable Training (FAST) framework is introduced, which allows the nearest-neighbour communication approach to be applied to a deep learning framework of choice.
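To illustrate the communication pattern in question, the following is a minimal sketch of nearest-neighbour model averaging on a ring topology; the function name and the simple three-way averaging rule are illustrative assumptions, not the FAST implementation itself. Each worker exchanges parameters only with its two ring neighbours rather than participating in a global all-reduce, yet repeated local exchanges still drive all replicas toward consensus.

```python
# Hypothetical sketch: nearest-neighbour parameter averaging on a ring.
# Workers are simulated as plain Python lists of parameters; no MPI or
# deep learning framework is required for the illustration.

def ring_neighbour_average(weights):
    """One communication round: each worker replaces its parameters
    with the average of its own and its two ring neighbours' values.
    This is a local operation; no global collective is involved."""
    n = len(weights)
    return [
        [(weights[(i - 1) % n][k] + weights[i][k] + weights[(i + 1) % n][k]) / 3.0
         for k in range(len(weights[i]))]
        for i in range(n)
    ]

# Four workers holding divergent scalar "models"; repeated rounds of
# purely local exchanges converge toward the global mean (6.0 here).
w = [[0.0], [4.0], [8.0], [12.0]]
for _ in range(20):
    w = ring_neighbour_average(w)
```

The appeal of this pattern is that per-round communication cost is constant in the number of workers, at the price of slower information propagation across the ring compared with a global all-reduce.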

Finally, a relevant use case is deployed on a medium-scale infrastructure to demonstrate both the framework and the methodology presented. Training convergence and scalability results are presented and discussed against a baseline obtained with state-of-the-art distributed training tools provided by a well-known deep learning framework.
