Published December 21, 2017
| Version v2.5.0
Software
Open
NervanaSystems/neon: Improved CPU performance for SSD and inference with batchnorm, Docker file
Creators
- Scott Leishman
- Alex Park (1)
- Anil Thomas
- yinyinl (2)
- nervetumer
- Urs Köster (3)
- Scott Gray (4)
- Augustus Odena
- wconstab
- Zach Dwiel
- Peng Zhang (5)
- Wei Wang (6)
- baojun
- John Co-Reyes
- Arjun Bansal (2)
- Sébastien Arnold (7)
- Andy
- DawnStone
- Jeevan Shankar (8)
- kulig
- Tyler Lee (9)
- wsokolow
- Tomasz Patejko (5)
- ruby-nervana
- Kai Arulkumaran
- Igor Kaplounenko (9)
- Santi Villalba
- Xin Wang (6)
- Steven Robertson (10)
- Gabriel Pereyra (11)
- 1. Cerebras Systems
- 2. Nervana Systems
- 3. Cerebras
- 4. OpenAI
- 5. Intel
- 6. Intel Corporation
- 7. University of Southern California
- 8. Facebook
- 9. @NervanaSystems
- 10. Intel - Nervana
- 11. Student
Description
- Optimized SSD performance on the MKL backend (~3X version-over-version speedup)
- Bumped aeon version to v1.3.0
- Fixed an inference performance issue with MKL batchnorm
- Fixed a batch prediction issue on the GPU backend
- Enabled subset_pct for the MNIST_DCGAN example
- Updated "make clean" to clean up MKL artifacts
- Added a Dockerfile for IA MKL
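The "Dockerfile for IA MKL" item can be illustrated with a minimal sketch. This is a hypothetical example, not the Dockerfile shipped in the release: the base image, package list, and checkout tag are assumptions; only the repository URL, the v2.5.0 tag, and neon's `make sysinstall` build target come from the project itself.

```dockerfile
# Hypothetical sketch of a Dockerfile that builds neon v2.5.0 with the
# Intel Architecture (MKL) backend; not the file distributed in this release.
FROM ubuntu:16.04

# Assumed build prerequisites for neon's CPU/MKL build.
RUN apt-get update && apt-get install -y \
    git python-pip python-dev libhdf5-dev libyaml-dev pkg-config \
    && rm -rf /var/lib/apt/lists/*

# Fetch the tagged release and install system-wide.
RUN git clone https://github.com/NervanaSystems/neon.git /neon
WORKDIR /neon
RUN git checkout v2.5.0 && make sysinstall
```

A container built this way would run the bundled examples against the MKL backend, e.g. `python examples/mnist_mlp.py -b mkl`.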
Files (3.3 MB)

Name | Size | Download |
---|---|---|
NervanaSystems/neon-v2.5.0.zip (md5:91c3290d160c9ddf38d4404a1f1c803c) | 3.3 MB | Preview Download |
Additional details
Related works
- Is supplement to
- https://github.com/NervanaSystems/neon/tree/v2.5.0 (URL)