
Published April 1, 2019 | Version v1
Journal article Open

Supporting the Momentum Training Algorithm Using a Memristor-Based Synapse

Description

Despite the increasing popularity of deep neural networks (DNNs), they cannot be trained efficiently on existing platforms, and efforts have thus been devoted to designing dedicated hardware for DNNs. In our recent work, we provided direct support for the stochastic gradient descent (SGD) training algorithm by constructing the basic element of neural networks, the synapse, using emerging technologies, namely memristors. Due to the limited performance of SGD, optimization algorithms are commonly employed in DNN training. Therefore, DNN accelerators that only support SGD might not meet DNN training requirements. In this paper, we present a memristor-based synapse that supports the commonly used momentum algorithm. Momentum significantly improves the convergence of SGD and facilitates the DNN training stage. We propose two design approaches to support momentum: 1) a hardware-friendly modification of the momentum algorithm using memory external to the synapse structure, and 2) updating each synapse with a built-in memory. Our simulations show that the proposed DNN training solutions are as accurate as training on a GPU platform while speeding up performance by 886× and decreasing energy consumption by 7× on average.
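For reference, the momentum algorithm the synapse is designed to support can be sketched in software as below. This is the standard momentum-SGD update, not the paper's hardware implementation; the function name, learning rate, and momentum coefficient are illustrative choices.

```python
import numpy as np

def momentum_step(w, v, grad, lr=0.01, mu=0.9):
    """One momentum-SGD update.

    The velocity v accumulates a decaying sum of past gradients (decay mu),
    which smooths the descent direction compared with plain SGD.
    """
    v = mu * v - lr * grad   # update velocity with the new gradient
    w = w + v                # step the weight along the velocity
    return w, v

# Toy usage: minimize f(w) = w^2, whose gradient is 2w.
w, v = np.array([1.0]), np.array([0.0])
for _ in range(100):
    w, v = momentum_step(w, v, 2 * w)
```

Plain SGD is the special case mu = 0; a nonzero mu is what requires the extra per-synapse state (the velocity) that the paper's two design approaches store either externally or inside each synapse.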

Files

6.pdf (4.1 MB)
md5:2eddd0f97cd9d8f4f7393cb0be3a8552
Additional details

Funding

Real-PIM-System – Memristive In-Memory Processing System (Grant 757259), European Commission