deephyper/deephyper: Changelog - DeepHyper 0.2.5
Creators
- 1. Argonne Leadership Computing Facility
- 2. Argonne National Laboratory
- 3. Northwestern University
- 4. Argonne National Lab
- 5. William & Mary
Description
General
Full API documentation
The DeepHyper API is now fully documented at DeepHyper API.
TensorFlow Probability as a new dependency
TensorFlow Probability is now part of DeepHyper's default set of dependencies.
Automatic submission with Ray at ALCF
It is now possible to submit DeepHyper jobs at the ALCF directly with deephyper ray-submit ...
This feature is only available on ThetaGPU for now but can be extended to other systems by following this script.
- New installation documentation is available at Installation ThetaGPU (ALCF)
- A new user guide is available at Running on ThetaGPU (ALCF) explaining how to run DeepHyper manually and automatically on ThetaGPU.
Access to the auto-sklearn features was moved to deephyper.sklearn, and new documentation for this feature is available at User guide: AutoSklearn.
The deephyper-analytics command was modified and enhanced with new features. To see the full updated documentation, follow DeepHyper Analytics Tools.
The topk command is now available to get quick feedback from the results of an experiment:
$ deephyper-analytics topk combo_8gpu_8_agebo/infos/results.csv -k 2
'0':
  arch_seq: '[229, 0, 22, 1, 1, 53, 29, 1, 119, 1, 0, 116, 123, 1, 273, 0, 1, 388]'
  batch_size: 59
  elapsed_sec: 10259.2741303444
  learning_rate: 0.0001614947
  loss: log_cosh
  objective: 0.9236862659
  optimizer: adam
  patience_EarlyStopping: 22
  patience_ReduceLROnPlateau: 10
'1':
  arch_seq: '[229, 0, 22, 0, 1, 235, 29, 1, 313, 1, 0, 116, 123, 1, 37, 0, 1, 388]'
  batch_size: 51
  elapsed_sec: 8818.2674164772
  learning_rate: 0.0001265946
  loss: mae
  objective: 0.9231553674
  optimizer: nadam
  patience_EarlyStopping: 23
  patience_ReduceLROnPlateau: 14
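For reference, the same top-k view can be reproduced by hand with pandas. This is a minimal sketch, assuming only that results.csv contains the objective column shown above; the file path is taken from the example command:

import pandas as pd

# Hand-rolled equivalent of `deephyper-analytics topk ... -k 2`:
# sort the search results by objective and keep the two best configurations.
df = pd.read_csv("combo_8gpu_8_agebo/infos/results.csv")
top2 = df.sort_values("objective", ascending=False).head(2)
print(top2.to_string(index=False))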
Neural architecture search
New documentation for the problem definition
New documentation for the neural architecture search problem setup can be found here.
It is now possible to define auto-tuned hyperparameters in addition to the architecture in a NAS Problem (see the sketch below).
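The release notes do not include a snippet for this feature, so the following is only a minimal sketch of a joint NAS + HPO problem definition. It assumes the NaProblem API of this release and, in particular, that tunable hyperparameters can be declared with Problem.add_hyperparameter using range/choice encodings; load_data and create_search_space are placeholders for user-defined functions:

from deephyper.problem import NaProblem
from myexp.load_data import load_data               # placeholder module
from myexp.search_space import create_search_space  # placeholder module

Problem = NaProblem(seed=42)

Problem.load_data(load_data)
Problem.search_space(create_search_space, num_layers=5)

# Assumption: tunable hyperparameters are declared next to fixed ones,
# with a (low, high[, "log-uniform"]) tuple for a range and a list for a choice.
Problem.hyperparameters(
    batch_size=Problem.add_hyperparameter((16, 256, "log-uniform"), "batch_size"),
    learning_rate=Problem.add_hyperparameter((1e-4, 1e-2, "log-uniform"), "learning_rate"),
    optimizer=Problem.add_hyperparameter(["adam", "nadam", "sgd"], "optimizer"),
    num_epochs=20,
)

Problem.loss("categorical_crossentropy")
Problem.metrics(["acc"])
Problem.objective("val_acc")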
New Algorithms for Joint Hyperparameter and Neural Architecture Search
Three new algorithms are available to run a joint hyperparameter and neural architecture search. Hyperparameter optimization is abbreviated HPO and neural architecture search NAS.
- agebo (Aging Evolution for NAS with Bayesian Optimization for HPO)
- ambsmixed (an extension of Asynchronous Model-Based Search for HPO + NAS)
- regevomixed (an extension of regularized evolution for HPO + NAS)
A new run function to use data-parallelism during neural architecture search is available (link to code).
To use this function, pass it to the --run argument of the command line, for example:
deephyper nas agebo ... --run deephyper.nas.run.tf_distributed.run ... --num-cpus-per-task 2 --num-gpus-per-task 2 --evaluator ray --address auto ...
This function makes new hyperparameters available in Problem.hyperparameters(...):
...
Problem.hyperparameters(
    ...
    lsr_batch_size=True,
    lsr_learning_rate=True,
    warmup_lr=True,
    warmup_epochs=5,
    ...
)
...
Optimization of the input pipeline for training
The data-ingestion pipeline was optimized to reduce overhead on GPU instances:
self.dataset_train = (
    self.dataset_train.cache()
    .shuffle(self.train_size, reshuffle_each_iteration=True)
    .batch(self.batch_size)
    .prefetch(tf.data.AUTOTUNE)
    .repeat(self.num_epochs)
)
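For completeness, the same ordering (cache, shuffle, batch, prefetch, repeat) can be tried on a self-contained dummy dataset. This sketch is not taken from the release and only illustrates the pipeline shape; note that tf.data.AUTOTUNE requires TensorFlow 2.4 or newer, older versions expose it as tf.data.experimental.AUTOTUNE:

import tensorflow as tf

# Dummy data standing in for the training set.
x = tf.random.normal((1000, 8))
y = tf.random.uniform((1000,), maxval=2, dtype=tf.int32)

dataset_train = (
    tf.data.Dataset.from_tensor_slices((x, y))
    .cache()                                       # keep samples in memory after the first pass
    .shuffle(1000, reshuffle_each_iteration=True)  # reshuffle on every epoch
    .batch(64)
    .prefetch(tf.data.AUTOTUNE)                    # overlap data preparation with training
    .repeat(10)                                    # number of epochs
)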
Easier model generation from Neural Architecture Search results
A new method, Problem.get_keras_model(arch_seq), is now available on the Problem object to easily build a Keras model instance from an arch_seq (a list encoding a neural network).
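Putting the pieces together, here is a hedged sketch of rebuilding the best model found by a search from its results.csv; the problem module path (myexp.problem) is a placeholder, and only Problem.get_keras_model(arch_seq) comes from this release:

import ast
import pandas as pd

from myexp.problem import Problem  # placeholder: your NAS problem definition

# Pick the row with the best objective from the search results.
results = pd.read_csv("results.csv")
best = results.sort_values("objective", ascending=False).iloc[0]

# arch_seq is stored as a string such as '[229, 0, 22, ...]' in results.csv.
arch_seq = ast.literal_eval(best["arch_seq"])

# Build a Keras model from the architecture encoding.
model = Problem.get_keras_model(arch_seq)
model.summary()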
Files
deephyper/deephyper-0.2.5.zip (2.0 MB, md5:7d6ea8da807dcf792f3973e6495d37ea)
Additional details
Related works
- Is supplement to: https://github.com/deephyper/deephyper/tree/0.2.5