Published December 21, 2020 | Version: recording made after the presentation, on the 20th
Presentation | Restricted

How to bridge the gap between current deep learning and current neuromorphic computing?

Creators

  • Lambert Schomaker

Description

Presentation given at the CogniGron Center in the CogniGron@Work series

18/12/2020

How to bridge the gap between current deep learning and current neuromorphic computing?

Lambert Schomaker

Abstract

It has become clear that researchers in neuromorphic computing focus on training methods for multi-layer perceptrons, pointing to a large number of layers in a network as the explanation for the 'deep' aspect of neural computing. However, the success of deep learning in current convolutional neural networks (CNNs) is due to many factors, the most important of them being the use of 2D convolutions. This means that each layer consists of trainable filters: a square patch of pixels is dynamically slid over a large 2D field (image) with a stride of 1 pixel in both directions, and a weight-matrix times vector multiplication is performed at each x,y position. For each hidden unit there is also a memory field (the feature map), storing and delivering the input to the filters of the next layer. Such dynamic functionality relies heavily on Turing/von Neumann controllers outside the core of the neuromorphic weight-update hardware (e.g., a crossbar with memristors). These complicating facts weaken the 'low energy' argument: more computations are needed than the multiply-add operation alone. Consequently, there is a strong need to also address trainable filters in materials science. As an example, a tedious convolution in x,y can be replaced by an effective one-shot optical filter over a complete 2D plane. For electrical variants of neuromorphic computing, a similar wide-field lensing would need to be implemented. Intermediate-representation images at the level of a hidden unit require persistence, at least for the time period during which a filter in the next layer is receiving the (usually 2D) pattern. Only when this is addressed can we realize a complete emulation of current deep learning in novel hardware, in CogniGron.
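To make the points in the abstract concrete for readers who are new to deep learning, the sketch below (plain Python/NumPy, not part of the presentation; the names conv2d_layer, feature_maps and kernel_plane are chosen for this illustration only) spells out what a single convolutional layer requires: the crossbar-friendly multiply-add is one inner line, while the stride-1 sliding, patch addressing and persistent feature-map storage are exactly the bookkeeping that would otherwise fall to an external von Neumann controller. The last lines illustrate the convolution theorem, the mathematical counterpart of the 'one-shot' wide-field optical filtering mentioned in the abstract.

```python
import numpy as np


def conv2d_layer(image, kernels, stride=1):
    """One naive convolutional layer: slide each trainable filter over the
    2D input and store the results in persistent feature maps.

    image   : (H, W)      input plane (an image, or a feature map from below)
    kernels : (F, k, k)   F trainable square filters
    returns : (F, H', W') one feature map per filter
    """
    F, k, _ = kernels.shape
    H, W = image.shape
    out_h = (H - k) // stride + 1
    out_w = (W - k) // stride + 1

    # The feature maps must persist until the next layer has read them:
    # the 'memory field' per hidden unit mentioned in the abstract.
    feature_maps = np.zeros((F, out_h, out_w))

    for f in range(F):                       # for every trainable filter
        w = kernels[f].ravel()               # filter as a weight vector
        for y in range(out_h):               # slide the patch over the field...
            for x in range(out_w):           # ...with stride-1 steps by default
                patch = image[y * stride:y * stride + k,
                              x * stride:x * stride + k].ravel()
                # Only this multiply-add (one row of the weight-matrix times
                # patch-vector product across all F filters) maps naturally onto
                # a memristive crossbar; the loops, patch addressing and map
                # storage around it are the external control the abstract points to.
                feature_maps[f, y, x] = w @ patch
    return feature_maps


rng = np.random.default_rng(0)
image = rng.standard_normal((28, 28))
maps = conv2d_layer(image, rng.standard_normal((3, 5, 5)))
print(maps.shape)  # (3, 24, 24)

# Convolution theorem: up to kernel flipping and border handling, the whole
# sliding pass above equals a single elementwise product in the frequency
# domain -- the mathematical counterpart of the 'one-shot' wide-field optical
# filtering suggested in the abstract.
kernel_plane = np.zeros_like(image)
kernel_plane[:5, :5] = rng.standard_normal((5, 5))   # embed a 5x5 filter
one_shot = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel_plane)))
print(one_shot.shape)  # (28, 28), computed without any sliding loop
```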

Target audience: Physicists in materials science with some understanding of traditional neural networks but with an interest in deep learning and how to address it by using neuromorphic materials.

References for several image sources are given within the .pptx, but not for all of them.

Files

Restricted

The record is publicly accessible, but files are restricted to users with access.

Request access

To request access to these files, please send an email to l.r.b.schomaker@rug.nl
(Alt: lrb.schomaker@gmail.com) to ask permission.


Additional details

References

  • Schomaker, L. R. B. (2020). How to bridge the gap between current deep learning and current neuromorphic computing? [Lecture]. Presented at the CogniGron@Work meeting of December 18th, 2020, CogniGron Center, Faculty of Science & Engineering, University of Groningen.
  • LeCun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., Jackel, L. D., & Baird, H. S. (1990). Constrained neural network for unconstrained handwritten digit recognition. In C. Suen (Ed.), Frontiers in Handwriting Recognition. CENPARMI, Concordia University, Montreal.
  • Schomaker, L. R. B. (1992). A neural oscillator-network model of temporal pattern generation. Human Movement Science, 11(1–2), 181–192. ISSN 0167-9457. https://doi.org/10.1016/0167-9457(92)90059-K
  • Hall, T. S., Hasler, P., & Anderson, D. V. (2002). Field-programmable analog arrays: A floating-gate approach. Lecture Notes in Computer Science, 2438, 424–433.
  • Molnár, Z., & Rockland, K. S. (2020). Cortical columns. In Neural Circuit and Cognitive Development (2nd ed., pp. 103–126). Academic Press. ISBN 9780128144114.
  • Heitmann, S., & Ermentrout, G. B. (2020). Direction-selective motion discrimination by traveling waves in visual cortex. PLoS Computational Biology, 16(9), e1008164.
  • Barrett, L., & Simmons, W. (2015). Interoceptive predictions in the brain. Nature Reviews Neuroscience, 16, 419–429. https://doi.org/10.1038/nrn3950
  • Fitzgerald, M. J., Gruener, G., & Mtui, E. (2012). Clinical Neuroanatomy and Neuroscience (6th ed., pp. 300–303).
  • Jang et al. (2020). Cell Reports, 30, 3270–3279. https://doi.org/10.1016/j.celrep.2020.02.038
  • Pedretti, G., Milo, V., Hashemkhani, S., Mannocci, P., Melnic, O., Chicca, E., & Ielmini, D. (2020). IEEE International Symposium on Circuits and Systems (ISCAS), 1–5.