Published October 30, 2020 | Version v0.4.0
Software
Open
hls-fpga-machine-learning/hls4ml: aster
Creators
1. Fermi National Accelerator Laboratory
2. UC San Diego
3. CERN
4. University of Tokyo
5. University of Illinois at Chicago
Description
What's new:
- Support for GarNet layer (see paper)
- Input layer precision added to config generator utility
- New 'SkipOptimizers' config option: all optimizers run by default (as in v0.3.0), minus any listed in 'SkipOptimizers', e.g. `hls_config['SkipOptimizers'] = ['fuse_consecutive_batch_normalization']`
- Print out the latency report from Cosimulation
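The 'SkipOptimizers' opt-out above amounts to plain dictionary manipulation on the generated config. A minimal sketch (the 'Model' sub-dictionary here is an illustrative stand-in for what the config generator utility would produce, not a complete real config):

```python
# Illustrative stand-in for a config dict from hls4ml's config generator.
hls_config = {
    'Model': {'Precision': 'ap_fixed<16,6>', 'ReuseFactor': 1},
}

# Run all optimizers by default, minus the passes listed here:
hls_config['SkipOptimizers'] = ['fuse_consecutive_batch_normalization']

print(hls_config['SkipOptimizers'])
```

With this key set, the named pass is subtracted from the default optimizer list rather than requiring the full list to be spelled out.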
Bugfixes:
- Fixes related to tensorflow 2.3: new Functional API, changes to handling of Input layer
- Fix error with config generator utility and activation layers for granularity='name'
- Fix issue with reloading of emulation library after configuration change
- Fix to handling of layers with `use_bias=False` and merged Dense and BatchNormalization
Files
- hls-fpga-machine-learning/hls4ml-v0.4.0.zip (322.6 kB, md5:a23ffd9218c6d2c9a39a6a2492d62140)
Additional details
Related works
- Is supplement to https://github.com/hls-fpga-machine-learning/hls4ml/tree/v0.4.0