Conference paper Open Access

Few-Shot Bayesian Optimization with Deep Kernel Surrogates

Martin Wistuba; Josif Grabocka

Hyperparameter optimization (HPO) is a central pillar in the automation of machine learning solutions and is mainly performed via Bayesian optimization, where a parametric surrogate is learned to approximate the black-box response function (e.g. validation error). Unfortunately, evaluating the response function is computationally intensive. As a remedy, earlier work emphasizes the need for transfer-learning surrogates, which learn to optimize hyperparameters for an algorithm from other tasks. In contrast to previous work, we propose to rethink HPO as a few-shot learning problem in which we train a shared deep surrogate model to quickly adapt (with few response evaluations) to the response function of a new task. We propose the use of a deep kernel network for a Gaussian process surrogate that is meta-learned in an end-to-end fashion in order to jointly approximate the response functions of a collection of training datasets. As a result, the novel few-shot optimization of our deep kernel surrogate leads to new state-of-the-art results for HPO compared to several recent methods on diverse meta-datasets.
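To make the idea concrete, here is a minimal sketch (not the authors' released code) of a deep kernel Gaussian process surrogate in PyTorch: a shared MLP maps hyperparameter configurations into a latent space, an RBF kernel on those learned features defines the GP, and meta-training maximizes the GP marginal likelihood jointly across a collection of tasks. The class and function names (DeepKernelGP, meta_train), network sizes, and hyperparameters are illustrative assumptions, not the paper's exact architecture.

import math
import torch
import torch.nn as nn

class DeepKernelGP(nn.Module):
    """Sketch of a GP surrogate with a meta-learned deep kernel."""

    def __init__(self, input_dim, latent_dim=32):
        super().__init__()
        # Feature extractor shared across all tasks (the "deep kernel").
        self.phi = nn.Sequential(
            nn.Linear(input_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        self.log_lengthscale = nn.Parameter(torch.zeros(1))
        self.log_noise = nn.Parameter(torch.tensor(-2.0))

    def kernel(self, X1, X2):
        # RBF kernel computed on learned features phi(x).
        Z1, Z2 = self.phi(X1), self.phi(X2)
        d2 = torch.cdist(Z1, Z2).pow(2)
        return torch.exp(-0.5 * d2 / self.log_lengthscale.exp().pow(2))

    def nll(self, X, y):
        # Negative log marginal likelihood of one task's observations (X, y).
        n = X.shape[0]
        K = self.kernel(X, X) + self.log_noise.exp() * torch.eye(n)
        L = torch.linalg.cholesky(K)
        alpha = torch.cholesky_solve(y.unsqueeze(1), L)
        return (0.5 * (y.unsqueeze(1).T @ alpha).squeeze()
                + L.diagonal().log().sum()
                + 0.5 * n * math.log(2 * math.pi))

    def posterior(self, X_train, y_train, X_test):
        # GP predictive mean/variance for candidate configurations.
        n = X_train.shape[0]
        K = self.kernel(X_train, X_train) + self.log_noise.exp() * torch.eye(n)
        L = torch.linalg.cholesky(K)
        K_s = self.kernel(X_train, X_test)
        alpha = torch.cholesky_solve(y_train.unsqueeze(1), L)
        mean = (K_s.T @ alpha).squeeze(1)
        v = torch.cholesky_solve(K_s, L)  # K^{-1} K_s
        var = self.kernel(X_test, X_test).diagonal() - (K_s * v).sum(0)
        return mean, var.clamp_min(1e-8)

def meta_train(model, tasks, epochs=100, lr=1e-3):
    # Meta-training: one shared surrogate fit jointly on a collection of
    # tasks, each task being (hyperparameter configs, validation errors).
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for X, y in tasks:
            opt.zero_grad()
            loss = model.nll(X, y)
            loss.backward()
            opt.step()

On a new task, the meta-learned surrogate would be conditioned on the few observed evaluations via posterior(), and the next configuration chosen by maximizing an acquisition function such as expected improvement; the paper's per-task adaptation details are omitted in this sketch.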

Published as a conference paper at ICLR 2021
Files (584.1 kB)
ICLR_Few_Shot_Bayesian_Optimization_with_Deep_Kernel_Surrogates.pdf (584.1 kB)
md5: ceb83aa4b64de638607f479f3fad1e5d
Statistics (all versions / this version):
Views: 21 / 21
Downloads: 16 / 16
Data volume: 9.3 MB / 9.3 MB
Unique views: 18 / 18
Unique downloads: 16 / 16
