Published October 5, 2015 | Version v1
Thesis | Open Access

A study on the similarities of Deep Belief Networks and Stacked Autoencoders

  • KTH Royal Institute of Technology

Contributors

Supervisor:

  • RISE

Description

Restricted Boltzmann Machines (RBMs) and autoencoders have been used, in several variants, for similar tasks such as dimensionality reduction and feature extraction from signals. Even though their structures are quite similar, they rely on different training theories. Lately, they have been widely used as building blocks in deep learning architectures called deep belief networks (rather than stacked RBMs) and stacked autoencoders. In light of this, this thesis aims to understand the extent of the similarities, and the overall pros and cons, of using RBMs, autoencoders or denoising autoencoders in deep networks. Important characteristics are tested, such as robustness to noise, the influence of the amount of available training data, and the tendency to overtrain. Part of the thesis is then dedicated to studying how the three deep networks under examination form their deep internal representations and how similar these representations are to each other. As a result, a novel approach for the evaluation of internal representations is presented, under the name of F-Mapping. Results are reported and discussed.
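To illustrate the greedy layer-wise stacking the description refers to, the following is a minimal sketch, not taken from the thesis, of pretraining a stacked denoising autoencoder with NumPy; the layer sizes, corruption level, learning rate and placeholder data are all illustrative assumptions (setting the corruption to zero gives a plain stacked autoencoder).

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DenoisingAutoencoder:
    """One hidden layer trained to reconstruct its (optionally corrupted) input."""
    def __init__(self, n_visible, n_hidden, noise=0.0, lr=0.1):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))  # tied weights
        self.b_h = np.zeros(n_hidden)
        self.b_v = np.zeros(n_visible)
        self.noise = noise
        self.lr = lr

    def encode(self, x):
        return sigmoid(x @ self.W + self.b_h)

    def decode(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def fit(self, X, epochs=20):
        for _ in range(epochs):
            # Corrupt the input by randomly zeroing entries (noise=0 -> plain AE).
            X_noisy = X * (rng.random(X.shape) > self.noise)
            h = self.encode(X_noisy)
            X_rec = self.decode(h)
            # Gradients of the squared reconstruction error with tied weights.
            err = X_rec - X
            d_v = err * X_rec * (1 - X_rec)          # error at the output layer
            d_h = (d_v @ self.W) * h * (1 - h)       # error back-propagated to hidden
            self.W -= self.lr * (X_noisy.T @ d_h + d_v.T @ h) / len(X)
            self.b_h -= self.lr * d_h.mean(axis=0)
            self.b_v -= self.lr * d_v.mean(axis=0)
        return self

# Greedy layer-wise stacking: each layer is trained on the codes of the previous one.
X = rng.random((256, 64))            # placeholder data, e.g. flattened image patches
layers, inp = [], X
for n_hidden in (32, 16):            # illustrative layer sizes
    dae = DenoisingAutoencoder(inp.shape[1], n_hidden, noise=0.3).fit(inp)
    layers.append(dae)
    inp = dae.encode(inp)
print("deep representation shape:", inp.shape)

A deep belief network is built analogously, except that each layer is an RBM trained with contrastive divergence instead of an autoencoder trained by reconstruction error; comparing the representations produced by the two constructions is the subject of the thesis.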

Files

deGiorgio2015.pdf (6.5 MB)
md5:8fcfc325bbfe50a03743a9ae0cf74dfe

Additional details