Published July 25, 2019
| Version v1
Conference paper
Open
Computational memory-based inference and training of deep neural networks
Creators
- Sebastian, Abu (1)
- Boybat, Irem (1)
- Dazzi, Martino (1)
- Giannopoulos, Iason (1)
- Jonnalagadda, Varaprasad (1)
- Joshi, Vinay (1)
- Karunaratne, Geethan (1)
- Kersting, Benedikt (1)
- Khaddam-Aljameh, Riduan (1)
- Nandakumar, S. R. (1)
- Petropoulos, A. (2)
- Piveteau, C. (1)
- Antonakopoulos, T. (2)
- Rajendran, Bipin (3)
- Le Gallo, Manuel (1)
- Eleftheriou, Evangelos (1)
Affiliations
- 1. IBM Research, Zurich
- 2. University of Patras
- 3. NJIT
Description
In-memory computing is an emerging computing paradigm where certain computational tasks are performed in place in a computational memory unit by exploiting the physical attributes of the memory devices. Here, we present an overview of the application of in-memory computing in deep learning, a branch of machine learning that has significantly contributed to the recent explosive growth in artificial intelligence. The methodology for both inference and training of deep neural networks is presented along with experimental results using phase-change memory (PCM) devices.
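The core operation the description refers to is performing a matrix-vector multiply directly in the memory array: weights are stored as device conductances, and Ohm's law and Kirchhoff's current law compute the dot products in place. The sketch below is an illustrative NumPy simulation of this idea, not code from the paper; the conductance range `g_max`, the differential-pair weight mapping, and the Gaussian programming-noise model are all assumptions for demonstration.

```python
import numpy as np

def crossbar_mvm(W, x, g_max=25e-6, noise_std=0.0, rng=None):
    """Simulate an analog matrix-vector multiply on a memory crossbar.

    Signed weights are mapped to a differential pair of conductances
    (g_pos, g_neg); the result is read as a difference of column
    currents (Kirchhoff's law sums the per-device Ohm's-law currents).
    All parameter values here are illustrative assumptions, not values
    taken from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    w_max = max(np.max(np.abs(W)), 1e-12)
    g_pos = np.clip(W, 0.0, None) / w_max * g_max
    g_neg = np.clip(-W, 0.0, None) / w_max * g_max
    # Optional device-level conductance variability (e.g. PCM
    # programming noise), modeled here as additive Gaussian noise.
    if noise_std > 0.0:
        g_pos = g_pos + rng.normal(0.0, noise_std * g_max, g_pos.shape)
        g_neg = g_neg + rng.normal(0.0, noise_std * g_max, g_neg.shape)
    i_out = g_pos @ x - g_neg @ x   # differential column-current readout
    return i_out * w_max / g_max    # rescale currents back to weight units

W = np.array([[0.5, -1.0], [2.0, 0.25]])
x = np.array([1.0, 2.0])
print(crossbar_mvm(W, x))  # noise-free case: matches W @ x
```

With `noise_std = 0` the simulated readout reproduces the exact product `W @ x`; a nonzero `noise_std` gives a rough feel for how device variability perturbs inference accuracy.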
Files
- Y2019_sebastian_VLSI.pdf (760.8 kB, md5:edd12be82dd8239a504f29cdeee3a0b1)