Published May 28, 2022 | Version v1
Conference paper · Open Access

Design of a 5-bit Signed SRAM-based In-Memory Computing Cell for Deep Learning Models

  • CiTIUS, Universidade de Santiago de Compostela

Description

Mixed-mode hardware accelerators for deep convolutional neural
networks (CNNs) must cope with a large number of input feature maps
and increasing bit depths for both weights and inputs. As an example
of this need, the ResNet model for image classification comprises
512 3 × 3 feature filters in its conv5 layer. Truly concurrent
processing of all the input feature maps would therefore require
4608 multipliers driving a single summing node, which poses a
challenge for mixed-mode design. This paper addresses the design of
a 5-bit signed SRAM-based in-memory computing cell in 180 nm 3.3 V
CMOS technology, dealing with the impact of increasing the number of
input feature maps. The data presented in the paper are based on
electrical and post-layout simulations.
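The concurrency figure follows directly from the layer geometry: 512 input feature maps × (3 × 3) kernel taps = 4608 parallel products feeding one summing node. The minimal Python sketch below is not from the paper; the constants and the purely digital reference model are illustrative assumptions meant only to reproduce that count and give a 5-bit signed multiply-accumulate reference against which a mixed-mode in-memory cell could be compared.

```python
# Behavioral sketch only (not the paper's circuit): it reproduces the
# multiplier-count arithmetic for ResNet's conv5 layer and a digital
# reference for the signed multiply-accumulate that an SRAM-based
# in-memory computing cell performs in mixed-mode.
import numpy as np

N_FEATURE_MAPS = 512      # input feature maps in ResNet conv5
KERNEL = 3                # 3 x 3 filter window
BITS = 5                  # signed 5-bit operands, range -16..15

# Concurrent processing of every feature map: 512 * 9 = 4608 multipliers.
n_multipliers = N_FEATURE_MAPS * KERNEL * KERNEL


def quantize_signed(x, bits=BITS):
    """Clip values to the two's-complement range of a signed `bits`-bit word."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return np.clip(np.rint(x), lo, hi).astype(np.int32)


def mac_reference(weights, inputs):
    """Digital reference of the summing node: all 5-bit signed products
    accumulated into a single value."""
    w = quantize_signed(weights)
    x = quantize_signed(inputs)
    return int(np.sum(w * x))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.integers(-16, 16, size=n_multipliers)
    x = rng.integers(-16, 16, size=n_multipliers)
    print(f"multipliers driving the summing node: {n_multipliers}")
    print(f"reference MAC result: {mac_reference(w, x)}")
```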

Files

In_Memory_Computing_ISCAS_2022.pdf (1.9 MB, md5:eda16b926afd8e23ebc9ca9d7cf47c16)

Additional details

Funding

European Commission
MISEL – Multispectral Intelligent Vision System with Embedded Low-Power Neural Computing (grant agreement No. 101016734)