Sub-linear convergence of a tamed stochastic optimization method

Monika Eisenmann (Lund University), Tony Stillfjord

R 3.28 Wed Z2 11:10-11:20

A possible approach to solving a minimization problem is to find the steady state of the corresponding gradient flow initial value problem through long-time integration. The well-known stochastic gradient descent (SGD) method then corresponds to the forward Euler scheme with a stochastic approximation of the gradient. Our goal is to find more suitable schemes that work well in the stochastic setting.
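For concreteness, this correspondence can be written out as follows; the symbols f, theta, h_k and g_k are chosen here for illustration and are not notation from the abstract itself.

```latex
% Gradient flow for an objective f and its forward Euler discretization
% (notation chosen for illustration).
\begin{aligned}
  \dot{\theta}(t) &= -\nabla f\bigl(\theta(t)\bigr), \qquad \theta(0) = \theta_0,
    && \text{(gradient flow)}\\
  \theta_{k+1} &= \theta_k - h_k \,\nabla f(\theta_k)
    && \text{(forward Euler, step size } h_k\text{)}\\
  \theta_{k+1} &= \theta_k - h_k \, g_k, \qquad g_k \approx \nabla f(\theta_k)
    && \text{(stochastic gradient: SGD)}
\end{aligned}
```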

In the talk, we present a stochastic version of the tamed Euler scheme in this context. This method is fully explicit but more stable than standard SGD for larger step sizes. We provide convergence results with a sub-linear rate, also in an infinite-dimensional setting, and illustrate the theoretical results with numerical examples. A typical application for such optimization problems is supervised learning.
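A tamed step can be sketched as below. This is a minimal illustration assuming the commonly used taming factor 1/(1 + h‖g‖); the function names and the toy objective are invented for this example and the exact scheme analyzed in the talk may differ.

```python
import numpy as np

def tamed_sgd_step(theta, stochastic_grad, step_size):
    """One tamed stochastic gradient step (illustrative form).

    The stochastic gradient is scaled by 1 / (1 + step_size * ||g||),
    so the increment stays bounded even for large gradients or step
    sizes, while the update remains fully explicit.
    """
    g = stochastic_grad(theta)
    return theta - step_size * g / (1.0 + step_size * np.linalg.norm(g))

# Toy usage: minimize f(theta) = 0.5 * ||theta||^2 with noisy gradients.
rng = np.random.default_rng(0)
theta = np.array([10.0, -10.0])
noisy_grad = lambda th: th + 0.1 * rng.standard_normal(th.shape)
for k in range(1, 101):
    theta = tamed_sgd_step(theta, noisy_grad, step_size=1.0 / k)
```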
