Thesis Open Access

A study on the similarities of Deep Belief Networks and Stacked Autoencoders

Andrea de Giorgio

Thesis supervisor(s)

Anders Holst

Restricted Boltzmann Machines (RBMs) and autoencoders have been used, in several variants, for similar tasks such as dimensionality reduction and feature extraction from signals. Even though their structures are quite similar, they rely on different training theories. Lately, they have been widely used as building blocks in deep learning architectures, called deep belief networks (when built from stacked RBMs) and stacked autoencoders. In light of this, this thesis aims to understand the extent of the similarities and the overall pros and cons of using RBMs, autoencoders or denoising autoencoders in deep networks. Important characteristics are tested, such as robustness to noise, the influence of data availability on training, and the tendency to overtrain. Part of the thesis is then dedicated to studying how the three deep networks under examination form their deep internal representations and how similar these representations are to each other. As a result, a novel approach for the evaluation of internal representations, named F-Mapping, is presented. Results are reported and discussed.
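For readers unfamiliar with the building blocks mentioned above, the following is a minimal illustrative sketch, not taken from the thesis, of how a denoising autoencoder layer is trained and how several such layers are stacked greedily, one on top of the codes of the previous one, in the same layer-wise fashion used when stacking RBMs into a deep belief network. Layer sizes, the masking-noise level, the learning rate and the toy data are assumptions chosen only for the example.

```python
# Minimal sketch (assumed values, not from the thesis): a denoising autoencoder
# layer with tied weights, plus greedy layer-wise stacking.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DenoisingAutoencoder:
    """One layer trained to reconstruct its input from a corrupted copy."""
    def __init__(self, n_in, n_hidden, noise=0.3, lr=0.1):
        self.W = rng.normal(0.0, 0.01, size=(n_in, n_hidden))  # tied weights
        self.b = np.zeros(n_hidden)   # hidden bias
        self.c = np.zeros(n_in)       # visible (reconstruction) bias
        self.noise, self.lr = noise, lr

    def encode(self, x):
        return sigmoid(x @ self.W + self.b)

    def train_step(self, x):
        # Corrupt the input by zeroing a random subset of features (masking noise).
        mask = rng.random(x.shape) > self.noise
        x_corrupt = x * mask
        h = self.encode(x_corrupt)
        x_rec = sigmoid(h @ self.W.T + self.c)
        # Squared-error gradients, backpropagated through the tied-weight decoder.
        err = x_rec - x
        d_rec = err * x_rec * (1.0 - x_rec)
        d_hid = (d_rec @ self.W) * h * (1.0 - h)
        self.W -= self.lr * (x_corrupt.T @ d_hid + d_rec.T @ h) / len(x)
        self.b -= self.lr * d_hid.mean(axis=0)
        self.c -= self.lr * d_rec.mean(axis=0)
        return float((err ** 2).mean())

def pretrain_stack(data, layer_sizes, epochs=10):
    """Greedy layer-wise pretraining: each layer models the codes of the previous one,
    mirroring how RBMs are stacked to form a deep belief network."""
    layers, inputs = [], data
    for n_hidden in layer_sizes:
        dae = DenoisingAutoencoder(inputs.shape[1], n_hidden)
        for _ in range(epochs):
            dae.train_step(inputs)
        layers.append(dae)
        inputs = dae.encode(inputs)  # codes become the next layer's training data
    return layers

if __name__ == "__main__":
    X = rng.random((256, 64))        # toy data in place of real signals
    stack = pretrain_stack(X, [32, 16])
    codes = X
    for layer in stack:
        codes = layer.encode(codes)  # deep internal representation of X
    print("deep code shape:", codes.shape)
```

Setting the noise level to zero in this sketch reduces it to a plain stacked autoencoder; the RBM counterpart would replace the reconstruction-error gradient with contrastive-divergence updates while keeping the same layer-wise stacking scheme.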

Files (6.5 MB)

deGiorgio2015.pdf (6.5 MB)
md5:8fcfc325bbfe50a03743a9ae0cf74dfe