Exploring the Limits of Deep Learning: A Study on Overfitting and Regularization Techniques
Description
This paper investigates the limits of deep learning by examining overfitting in deep neural networks and the regularization techniques used to mitigate it. Overfitting occurs when an overly complex model fits the training data too closely, including its noise, and consequently generalizes poorly to unseen data. To address this issue, regularization techniques such as L1 and L2 regularization, dropout, and early stopping are discussed and compared, and the effectiveness of each is evaluated through experiments on standard benchmark datasets. The use of ensemble methods as a regularization strategy for deep learning is also explored. The results provide insight into the limitations of deep learning and the approaches that can be used to improve its generalization performance.
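To make the compared techniques concrete, here is a minimal PyTorch sketch (not code from the paper; the network, synthetic data, and hyperparameters are all illustrative assumptions) combining three of them: dropout, L2 regularization applied as optimizer weight decay, and early stopping on a held-out validation set. An L1 penalty would instead be added explicitly to the loss, since PyTorch's built-in weight decay is L2-style.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in data (hypothetical; the paper uses standard benchmarks).
x_train, y_train = torch.randn(512, 20), torch.randint(0, 2, (512,))
x_val, y_val = torch.randn(128, 20), torch.randint(0, 2, (128,))

# Small network with dropout, which randomly zeroes activations during training.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(64, 2),
)
loss_fn = nn.CrossEntropyLoss()

# L2 regularization enters through the optimizer's weight_decay term.
optimizer = torch.optim.SGD(model.parameters(), lr=0.05, weight_decay=1e-4)

# Early stopping: halt once validation loss stops improving for `patience` epochs.
best_val, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(200):
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    optimizer.step()

    model.eval()
    with torch.no_grad():
        val_loss = loss_fn(model(x_val), y_val).item()
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break  # generalization stopped improving; stop training early
```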
Files
Exploring_the_Limits_of_Deep_Learning__A_Study_on_Overfitting_and_Regularization_Techniques-2[1].pdf (116.3 kB, md5:0825fd64613453b41e080c07bf116dfd)
Additional details
Related works
- Is reviewed by
  - Journal article: 10.1109/ISMAR.2019.00021 (DOI)