Published June 28, 2023 | Version: Review Paper
Journal article | Open Access

Exploring the Limits of Deep Learning: A Study on Overfitting and Regularization Techniques

  • 1. University of North Carolina at Charlotte

Description

This paper investigates the limits of deep learning by examining overfitting in deep neural networks and the regularization techniques used to mitigate it. Overfitting occurs when a model is complex enough to fit the training data too closely, resulting in poor generalization to unseen data. To address this issue, regularization techniques such as L1 and L2 regularization, dropout, and early stopping are discussed and compared, and the effectiveness of each technique is evaluated through experiments on standard benchmark datasets. The use of ensemble methods for deep learning regularization is also explored. The results of this study provide insight into the limitations of deep learning and the approaches that can be used to improve its generalization performance.
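The techniques named in the abstract can be sketched in isolation. The following is a minimal, self-contained illustration (not the paper's experimental code) of L1/L2 penalty terms, inverted dropout, and a patience-based early-stopping rule; all values (the weight matrix `W`, the strength `lam`, the drop probability `p`, and the validation-loss sequence) are hypothetical placeholders, and a NumPy environment is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weight matrix of a single dense layer.
W = rng.normal(size=(4, 3))

# L1 and L2 penalties are added to the training loss, scaled by a
# regularization strength lambda, to discourage large weights.
lam = 0.01
l1_penalty = lam * np.abs(W).sum()
l2_penalty = lam * (W ** 2).sum()

# Inverted dropout: during training, zero each activation with
# probability p and rescale the survivors by 1/(1-p) so the expected
# activation matches the unmodified network at test time.
p = 0.5
a = rng.normal(size=(2, 3))          # activations from some layer
mask = rng.random(a.shape) >= p      # keep each unit with probability 1-p
a_dropped = a * mask / (1.0 - p)

# Early stopping: halt training once the validation loss has not
# improved for `patience` consecutive epochs.
val_losses = [1.0, 0.8, 0.7, 0.72, 0.75, 0.8]   # made-up validation curve
patience = 2
best, wait, stop_epoch = float("inf"), 0, None
for epoch, loss in enumerate(val_losses):
    if loss < best:
        best, wait = loss, 0
    else:
        wait += 1
        if wait >= patience:
            stop_epoch = epoch
            break
```

With this made-up loss curve, training stops at epoch 4, two epochs after the best validation loss at epoch 2.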

Files

Exploring_the_Limits_of_Deep_Learning__A_Study_on_Overfitting_and_Regularization_Techniques-2[1].pdf

Additional details

Related works

Is reviewed by
Journal article: 10.1109/ISMAR.2019.00021 (DOI)