Parallel/Distributed Intelligent Hyperparameters Search for Generative Artificial Neural Networks
Authors/Creators
- 1. Universidad de la República
- 2. University of Málaga
Description
This article presents a parallel/distributed methodology for the intelligent search of the hyperparameter configuration of generative artificial neural networks (GANs). Finding the configuration that best fits a GAN to a specific problem is challenging because GANs simultaneously train two deep neural networks; as a result, GANs generally have more configuration parameters than other deep learning methods. The proposed system applies the iterated racing approach, taking advantage of parallel/distributed computing to use the available resources efficiently during configuration. The main results of the experimental evaluation performed on the MNIST dataset showed that the parallel system is able to use the GPU efficiently, achieving a high level of parallelism and reducing the wall-clock computation time by 78%, while providing results competitive with those of the sequential hyperparameter search.
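The iterated racing idea described above can be sketched in a few lines: sample candidate configurations, evaluate the surviving candidates in parallel on successive problem instances, and eliminate the worst performers each round. The sketch below is a minimal, hypothetical illustration only — the configuration space (learning rate, batch size), the toy loss standing in for a real GAN training run, and all function names are assumptions, not the paper's actual implementation (which uses irace and GPU-based training).

```python
import random
from concurrent.futures import ThreadPoolExecutor

def evaluate(config, instance_seed):
    """Toy stand-in for one expensive GAN training run: a noisy quadratic
    loss over a hypothetical (learning rate, batch size) configuration."""
    lr, batch = config
    noise = random.Random(instance_seed).gauss(0, 0.01)
    return (lr - 0.01) ** 2 + ((batch - 64) ** 2) / 1e4 + noise

def race(n_configs=16, rounds=4, survivor_frac=0.5, seed=0):
    """Simplified iterated-racing loop with parallel candidate evaluation."""
    rng = random.Random(seed)
    configs = [(rng.uniform(1e-4, 1e-1), rng.choice([16, 32, 64, 128]))
               for _ in range(n_configs)]
    totals = {c: 0.0 for c in configs}
    with ThreadPoolExecutor() as pool:
        for rnd in range(rounds):
            # Evaluate all surviving configurations in parallel on a new instance.
            scores = pool.map(lambda c, r=rnd: evaluate(c, r), configs)
            for c, s in zip(configs, scores):
                totals[c] += s
            # Elimination step of the race: keep only the best-performing half.
            configs.sort(key=lambda c: totals[c])
            configs = configs[: max(1, int(len(configs) * survivor_frac))]
    return configs[0]
```

In a real deployment, each `evaluate` call would train a GAN on the GPU, and the elimination step would use a statistical test (as irace does) rather than a fixed survivor fraction.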
Files
- _MLHPCS_2021__IRACE_GANs (5).pdf (766.2 kB, md5:4e5f8e38d2a8fca88c7f0dfd7e10efa4)