Published October 27, 2022 | Version 1
Presentation | Open Access

Adaptive high order stochastic descent algorithms

Authors/Creators

Gabriel Turinici

Description

Slides presented at the NANMATH 2022 conference, Cluj.

Presentation abstract: motivated by statistical learning applications, stochastic descent optimization algorithms are widely used today to tackle difficult numerical problems. One of the best known among them, Stochastic Gradient Descent (SGD), has been extended in various ways, resulting in Adam, Nesterov acceleration, momentum methods, etc. After a brief introduction to this framework, we introduce in this talk a new approach, called SGD-G2, which is a high-order Runge-Kutta stochastic descent algorithm; the procedure allows for step adaptation in order to strike an optimal balance between convergence speed and stability.
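To give a flavor of the general mechanism, the Python sketch below compares a first-order (plain SGD / explicit Euler) update against a second-order Runge-Kutta (Heun) update on the same stochastic gradients and uses their discrepancy to adapt the step size. This is a minimal illustrative sketch of that generic idea, not the SGD-G2 procedure from the slides; the function names, the tolerance tol, and the controller constants are assumptions made for the example.

import numpy as np

def rk2_adaptive_step(w, grad, h, tol=1e-3):
    # First-order (plain SGD / explicit Euler) candidate.
    g1 = grad(w)
    w_euler = w - h * g1
    # Second-order (Heun / RK2) candidate, reusing g1.
    g2 = grad(w_euler)
    w_heun = w - 0.5 * h * (g1 + g2)
    # The gap between the two candidates estimates the local error, O(h^2).
    err = np.linalg.norm(w_heun - w_euler)
    # Classical embedded-RK step controller: scale h toward the tolerance,
    # clipped to avoid violent changes. Constants are illustrative.
    h_new = h * min(2.0, max(0.5, 0.9 * np.sqrt(tol / (err + 1e-12))))
    return w_heun, h_new

# Usage on a toy problem: minimize f(w) = ||w||^2 / 2 with noisy gradients.
rng = np.random.default_rng(0)
noisy_grad = lambda w: w + 0.01 * rng.standard_normal(w.shape)
w, h = np.ones(5), 0.5
for _ in range(200):
    w, h = rk2_adaptive_step(w, noisy_grad, h)
print("final |w| =", np.linalg.norm(w), "final step =", h)

The controller follows the standard embedded Runge-Kutta pattern: the gap between the two candidate updates serves as a local error estimate, and the step size is rescaled to keep that estimate near a target tolerance, trading convergence speed against stability.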

Numerical tests on standard machine learning datasets are also presented, together with further theoretical extensions.

Files

NANMAT_2022_Turinici_v1.pdf (492.3 kB)
md5:ef0200222e8e81772c3a2c59730313a2