
Published August 25, 2025 | Version 0.11.0

deephyper/deephyper: 0.11.0


Description

What's Changed

  • feat (search): solution selection columns are not performed by default by @Deathn0t in https://github.com/deephyper/deephyper/pull/335
  • fix (cbo): fit_surrogate can now process dataframe with only failures as input, refactoring max_total_failures by @Deathn0t in https://github.com/deephyper/deephyper/pull/337
  • test (callback): add test for CSVLoggerCallback by @bretteiffert in https://github.com/deephyper/deephyper/pull/344
  • chore (logging): replace context.yaml with logging by @wigging in https://github.com/deephyper/deephyper/pull/343
  • fix (tests): use tmp_path for params test by @wigging in https://github.com/deephyper/deephyper/pull/345
  • fix (numpyro): temporary workaround with numpyro and jax 0.7.0 by @Deathn0t in https://github.com/deephyper/deephyper/pull/347
  • docs (examples): add example for hpo with stopper by @bretteiffert in https://github.com/deephyper/deephyper/pull/346
  • refactor (logger): use module level logger instead of root level by @wigging in https://github.com/deephyper/deephyper/pull/349
  • chore (docs): mirror content from test_quickstart.py to index.rst by @bretteiffert in https://github.com/deephyper/deephyper/pull/350
  • feat (search): decouple search and ask/tell methods by @Deathn0t in https://github.com/deephyper/deephyper/pull/353

Full Changelog: https://github.com/deephyper/deephyper/compare/0.10.1...0.11.0

Breaking changes

DeepHyper now provides two interfaces for running a search. Instead of passing the evaluator to the Search(..., evaluator) constructor (DEPRECATED), you now pass it to the search.search(evaluator, max_evals) method.
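
For example (a minimal sketch; the CBO search class is used for illustration, and problem and evaluator are assumed to be defined as usual):

# Deprecated: evaluator passed to the Search constructor
search = CBO(problem, evaluator)
results = search.search(max_evals=100)

# New in 0.11.0: evaluator passed to the search() method
search = CBO(problem)
results = search.search(evaluator, max_evals=100)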

Ask and Tell Interface (NEW)

The first interface is the classic configurations = search.ask(...) and search.tell(configurations_with_objective). With it, you manage the computation of objectives yourself. This interface is more flexible, but asynchronous parallel evaluations are not managed for you. You also need to be aware that the behaviour of the ask/tell methods is algorithm-dependent. For example, ask can return the same configuration if called multiple times before tell is called (in Bayesian optimization, tell updates the internal surrogate model state).

Here is an example:

search = create_search()
max_evals = 100
for i in range(max_evals):
    # ask(1) returns a list of configuration dicts
    config = search.ask(1)[0]
    # Compute the objective for this configuration
    y = -config["x"] ** 2
    print(f"[{i=:03d}] >>> f(x={config['x']:.3f}) = {y:.3f}")
    # tell() takes a list of (configuration, objective) tuples
    search.tell([(config, y)])
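
The create_search() helper above is not defined in the release notes; here is a minimal sketch of one possible definition, assuming the deephyper.hpo module layout of recent releases and an illustrative single hyperparameter x:

from deephyper.hpo import CBO, HpProblem

def create_search():
    # Search space with one continuous hyperparameter x in [-10, 10]
    problem = HpProblem()
    problem.add_hyperparameter((-10.0, 10.0), "x")
    # With the 0.11.0 interface, no evaluator is passed to the constructor
    return CBO(problem, random_state=42)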

Search Interface

The second interface is results = search.search(evaluator, ...), which is bound to the evaluator and manages the loop of asynchronous parallel evaluations for you. You can run the search for a given number of evaluations with search.search(evaluator, max_evals=...). It is also possible to use the timeout parameter if you need a specific time budget (e.g., restricted compute time in machine learning competitions, or allocation time on HPC systems).

search = create_search()
results = search.search(evaluator, max_evals)
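
The evaluator above can be built with the Evaluator.create factory; here is a minimal sketch assuming a thread-based backend and an illustrative run function (the worker count is arbitrary):

from deephyper.evaluator import Evaluator

def run(job):
    # Objective to maximize; job.parameters holds the sampled configuration
    return -job.parameters["x"] ** 2

# Thread-based evaluator running 4 evaluations in parallel (illustrative)
evaluator = Evaluator.create(run, method="thread", method_kwargs={"num_workers": 4})

# With a time budget in seconds instead of max_evals
results = search.search(evaluator, timeout=60)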

Files (984.8 kB)

deephyper/deephyper-0.11.0.zip
984.8 kB · md5:9261975b5177f0ab6017a6253f3bcae6
