Conference paper Open Access

Evaluating Hebbian Learning in a Semi-supervised Setting

Gabriele Lagani; Fabrizio Falchi; Claudio Gennaro; Giuseppe Amato

We propose a semi-supervised learning strategy for deep Convolutional Neural Networks (CNNs) in which an unsupervised pre-training stage, performed with biologically inspired Hebbian learning algorithms, is followed by supervised end-to-end backprop fine-tuning. We explore two Hebbian learning rules for the unsupervised pre-training stage: soft-Winner-Takes-All (soft-WTA) and nonlinear Hebbian Principal Component Analysis (HPCA). Our approach targets sample-efficiency scenarios, where the amount of available labeled training data is very limited and unsupervised pre-training is therefore beneficial. We perform experiments on the CIFAR10, CIFAR100, and Tiny ImageNet datasets. Our results show that Hebbian pre-training outperforms Variational Auto-Encoder (VAE) pre-training in almost all cases, with HPCA generally performing better than soft-WTA.
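
To make the unsupervised pre-training stage concrete, below is a minimal NumPy sketch of a soft-WTA-style Hebbian weight update for a single fully connected layer, assuming a softmax-based soft competition; the hyperparameter names (lr, temperature) and the training loop are illustrative assumptions, not the authors' implementation. The HPCA rule discussed in the paper would replace the competitive term with a Hebbian principal component update.

    import numpy as np

    # Illustrative sketch (not the authors' code): soft-WTA Hebbian update
    # for one layer. Hyperparameters lr and temperature are assumed values.
    rng = np.random.default_rng(0)
    n_inputs, n_neurons = 32, 8
    W = rng.normal(scale=0.1, size=(n_neurons, n_inputs))  # one weight vector per neuron

    def soft_wta_step(W, x, lr=0.01, temperature=0.1):
        """One unsupervised Hebbian update on a single input vector x."""
        y = W @ x                                  # neuron activations (similarities)
        r = np.exp((y - y.max()) / temperature)
        r /= r.sum()                               # soft competition: softmax over neurons
        # Each weight vector moves toward the input, in proportion to how
        # strongly the corresponding neuron "won" the soft competition.
        W += lr * r[:, None] * (x[None, :] - W)
        return W

    # Toy usage: unsupervised pre-training pass over random unit-norm inputs.
    for _ in range(1000):
        x = rng.normal(size=n_inputs)
        x /= np.linalg.norm(x)
        W = soft_wta_step(W, x)

In the semi-supervised setting described above, weights learned this way would serve as the initialization for the subsequent supervised backprop fine-tuning on the small labeled subset.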

In: Nicosia G. et al. (eds) Machine Learning, Optimization, and Data Science. LOD 2021. Lecture Notes in Computer Science, vol 13163. Springer, Cham. https://doi.org/10.1007/978-3-030-95467-3_2
Files
LOD_2021_paper_152.pdf (433.6 kB, md5:af1fb588a871a8677e7eb624703ebf24)
