Combining Deep Convolutional Feature Extraction with Hyperdimensional Computing for Visual Object Recognition
Description
This paper proposes a novel hybrid neuromorphic computational architecture for visual data classification, aimed at implementation in energy-efficient, application-specific edge computing devices based on FPGAs or ASICs. The architecture combines a convolutional neural extractor, which produces comprehensive representations of input patterns, with a Hyperdimensional Computing (HDC) module that enables complex data analyses, including vector and vector-sequence classification. Because the biologically inspired HDC paradigm operates on holistic representations of concepts, we design the convolutional extractor to summarize various aspects of objects' appearance. As low energy consumption is the key design constraint, we assume that input images are delivered by energy-efficient dynamic vision sensors (event cameras). The extractor is pretrained using a three-head Convolutional Neural Network (CNN); the three heads (a classifier, a decoder, and a clusterer) implement optimization objectives essential for the "holographic" concept representation. Feature vectors produced by the extractor are projected onto hyperdimensional binary vectors by an encoding unit and then classified in the HDC module. The neural extractor is trained in limited-precision mode to account for ASIC/FPGA hardware constraints. We apply the proposed architecture to classify objects (pedestrians, cars, and cyclists) from two traffic datasets: VIRAT and KAIST. We show that the proposed concept solves classification problems with an accuracy that matches the performance of deep neural classifiers while remaining feasible for implementation in energy-efficient application-specific hardware.
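The pipeline described above (CNN features, projection to binary hypervectors, HDC classification) can be sketched as follows. This is an illustrative toy implementation under assumed choices, not the paper's exact design: a random sign projection for encoding, class prototypes bundled by bit-wise majority vote, and nearest-prototype classification by Hamming distance; the dimensions are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
FEAT_DIM, HD_DIM = 64, 10_000  # assumed feature and hypervector sizes

# Fixed random projection from CNN feature space to hyperdimensional space.
projection = rng.standard_normal((FEAT_DIM, HD_DIM))

def encode(features: np.ndarray) -> np.ndarray:
    """Project a real-valued feature vector onto a binary hypervector."""
    return (features @ projection > 0).astype(np.uint8)

def train_prototypes(X: np.ndarray, y: np.ndarray, n_classes: int) -> np.ndarray:
    """Bundle training hypervectors of each class by bit-wise majority vote."""
    protos = np.zeros((n_classes, HD_DIM), dtype=np.uint8)
    for c in range(n_classes):
        hvs = np.stack([encode(x) for x in X[y == c]])
        protos[c] = (hvs.mean(axis=0) >= 0.5).astype(np.uint8)
    return protos

def classify(x: np.ndarray, protos: np.ndarray) -> int:
    """Return the class whose prototype is nearest in Hamming distance."""
    hv = encode(x)
    return int(np.argmin((protos != hv).sum(axis=1)))
```

Binary hypervectors and Hamming-distance comparison are what make such a module attractive for FPGA/ASIC targets: both reduce to XOR and popcount operations.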
Files
WCCI2022_MISEL.pdf (1.7 MB, md5:df85d66444d9454a732527224ef53c02)