Published November 13, 2021 | Version Preprint
Conference paper · Open Access

Parallel/Distributed Intelligent Hyperparameters Search for Generative Artificial Neural Networks

  • 1. Universidad de la República
  • 2. University of Malaga

Description

This article presents a parallel/distributed methodology for the intelligent search of hyperparameter configurations for generative artificial neural networks (GANs). Finding the configuration that best fits a GAN to a specific problem is challenging because GANs train two deep neural networks simultaneously; as a result, GANs generally have more configuration parameters than other deep learning methods. The proposed system applies the iterated racing approach, taking advantage of parallel/distributed computing to use the configuration resources efficiently. The main results of the experimental evaluation on the MNIST dataset show that the parallel system efficiently uses the GPU, achieving a high level of parallelism and reducing the computational wall-clock time by 78%, while providing results comparable to those of the sequential hyperparameter search.
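The core idea of racing-based configuration can be illustrated with a simplified sketch. This is not the authors' implementation: the stand-in objective, the parameter names, and the fixed "drop the worst half" elimination rule are all illustrative assumptions (the actual irace approach discards candidates using statistical tests), and the thread pool here stands in for the paper's parallel/distributed evaluation of GAN trainings.

```python
import random
from concurrent.futures import ThreadPoolExecutor


def evaluate(config, seed):
    """Stand-in objective (lower is better). In the real system this
    would be a full GAN training run scored on the target dataset.
    Here: distance of the learning rate from a hypothetical optimum,
    plus seeded noise to mimic run-to-run variability."""
    rng = random.Random(seed * 1000 + int(config["lr"] * 1e6))
    return abs(config["lr"] - 3e-4) * 1e3 + rng.uniform(0.0, 0.1)


def race(candidates, n_stages=4, survivor_frac=0.5, workers=4):
    """Simplified racing loop: evaluate all surviving candidates in
    parallel on one problem instance per stage, then eliminate the
    worst fraction by mean score (irace uses a statistical test
    instead of this fixed cut)."""
    scores = {i: [] for i in range(len(candidates))}
    alive = list(range(len(candidates)))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for stage in range(n_stages):
            # Parallel evaluation of the surviving configurations.
            results = list(pool.map(
                lambda i, s=stage: evaluate(candidates[i], s), alive))
            for i, score in zip(alive, results):
                scores[i].append(score)
            # Rank survivors by mean score and keep the best fraction.
            alive.sort(key=lambda i: sum(scores[i]) / len(scores[i]))
            alive = alive[:max(1, int(len(alive) * survivor_frac))]
    return candidates[alive[0]]


if __name__ == "__main__":
    rng = random.Random(0)
    # Hypothetical GAN hyperparameter space: log-uniform learning
    # rate and a categorical batch size.
    cands = [{"lr": 10 ** rng.uniform(-5, -2),
              "batch": rng.choice([32, 64, 128])} for _ in range(16)]
    print(race(cands))
```

The elimination schedule concentrates the evaluation budget on promising configurations: a candidate that loses early races is never trained again, which is what makes the parallel resource usage efficient.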

Notes

This is the pre-print version of the paper presented during the Second Workshop on Machine Learning on HPC Systems 2021. ISC High Performance 2021. Cite this paper as: Esteban M., Toutouh J., Nesmachnow S. (2021) Parallel/Distributed Intelligent Hyperparameters Search for Generative Artificial Neural Networks. In: Jagode H., Anzt H., Ltaief H., Luszczek P. (eds) High Performance Computing. ISC High Performance 2021. Lecture Notes in Computer Science, vol 12761. Springer, Cham.

Files

_MLHPCS_2021__IRACE_GANs (5).pdf (766.2 kB, md5:4e5f8e38d2a8fca88c7f0dfd7e10efa4)

Additional details

Funding

European Commission
TAILOR - Foundations of Trustworthy AI - Integrating Reasoning, Learning and Optimization 952215