Presentation Open Access

Stochastic Runge-Kutta methods and adaptive SGD-G2 stochastic gradient descent

TURINICI, Gabriel; AYADI, Imen

Presentation at the ICPR 2021 conference

Minimizing the loss function is of paramount importance in deep neural networks. Many popular optimization algorithms have been shown to correspond to some evolution equation of gradient-flow type. Inspired by the numerical schemes used for general evolution equations, we introduce a second-order stochastic Runge-Kutta method and show that it yields a consistent procedure for the minimization of the loss function. In addition, it can be coupled, in an adaptive framework, with Stochastic Gradient Descent (SGD) to automatically adjust the learning rate of the SGD. The resulting adaptive SGD, called SGD-G2, shows good results in terms of convergence speed when tested on standard datasets.
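The abstract describes the idea only at a high level. The sketch below is a hypothetical illustration of a second-order (Heun-type) stochastic Runge-Kutta step applied to the gradient flow dw/dt = -grad L(w) on a mini-batch, together with a step-size adjustment that compares the plain SGD (explicit Euler) proposal with the Heun proposal. The loss function, the names loss_grad, heun_sgd_step and adaptive_learning_rate, and the adjustment formula are assumptions for illustration; they do not reproduce the exact SGD-G2 rule from the paper.

    import numpy as np

    def loss_grad(w, batch):
        # Hypothetical mini-batch gradient: mean least-squares loss
        # 0.5/n * ||X w - y||^2 evaluated on the mini-batch (X, y).
        X, y = batch
        return X.T @ (X @ w - y) / len(y)

    def heun_sgd_step(w, batch, h):
        # One second-order (Heun-type) Runge-Kutta step of the gradient
        # flow dw/dt = -grad L(w), using a single mini-batch.
        g1 = loss_grad(w, batch)              # slope at the current point
        g2 = loss_grad(w - h * g1, batch)     # slope at the Euler predictor
        return w - 0.5 * h * (g1 + g2)        # trapezoidal (Heun) update

    def adaptive_learning_rate(w, batch, h, safety=0.9, tol=1e-3):
        # Illustrative adaptation: the gap between the first-order (SGD)
        # step and the second-order (Heun) step serves as a local error
        # estimate and drives the learning rate h up or down.
        g1 = loss_grad(w, batch)
        w_euler = w - h * g1                  # plain SGD proposal
        w_heun = heun_sgd_step(w, batch, h)   # second-order proposal
        err = np.linalg.norm(w_heun - w_euler)
        if err > 0:
            h = h * min(2.0, safety * np.sqrt(tol / err))
        return h, w_euler

    # Usage on synthetic data (illustration only)
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(256, 10)), rng.normal(size=256)
    w, h = np.zeros(10), 0.1
    for _ in range(100):
        idx = rng.choice(256, size=32, replace=False)
        h, w = adaptive_learning_rate(w, (X[idx], y[idx]), h)

The design choice illustrated here is the one stated in the abstract: the second-order scheme is not used as the optimizer itself but as a reference against which the SGD step is measured, so that the learning rate can be tuned automatically during training.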

Files (466.7 kB)
ICPR_2021_Turinici_Ayadi_v1.pdf (466.7 kB, md5:6c0999494ab95b613fcb977e8f1c4687)
                     All versions    This version
Views                         274             274
Downloads                      31              31
Data volume               14.5 MB         14.5 MB
Unique views                  263             263
Unique downloads               30              30
