Convergence and stability analysis of stochastic optimization algorithms

Research output: Thesis › Licentiate Thesis


Abstract

This thesis is concerned with stochastic optimization methods. The pioneering work in the field is the article “A Stochastic Approximation Method” by Robbins and Monro [1], in which they proposed stochastic gradient descent, a stochastic version of the classical gradient descent algorithm. Since then, many improvements and extensions of the theory have been published, as well as new versions of the original algorithm. Despite this, a problem that many stochastic algorithms still share is sensitivity to the choice of step size/learning rate. One can view the stochastic gradient descent algorithm as a stochastic version of the explicit Euler scheme applied to the gradient flow equation. There are other schemes for solving differential equations numerically that allow for larger step sizes. In this thesis, we investigate the properties of some of these methods and how they perform when applied to stochastic optimization problems.
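The correspondence mentioned in the abstract can be written out as follows. This is a standard formulation rather than a quotation from the thesis: here f denotes the objective function, ∇f_{i_k} a stochastic gradient estimate drawn at step k, and α_k the step size, all notation assumed for illustration. The implicit Euler variant (the stochastic proximal point iteration) is one example of a scheme that typically remains stable for larger step sizes.

\begin{align*}
  \dot{w}(t) &= -\nabla f\bigl(w(t)\bigr)
    && \text{gradient flow} \\
  w_{k+1} &= w_k - \alpha_k \nabla f_{i_k}(w_k)
    && \text{explicit Euler with a stochastic gradient = SGD} \\
  w_{k+1} &= w_k - \alpha_k \nabla f_{i_k}(w_{k+1})
    && \text{implicit Euler (stochastic proximal point)}
\end{align*}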
Original language: English
Qualification: Licentiate
Awarding Institution
  • Centre for Mathematical Sciences
Supervisors/Advisors
  • Stillfjord, Tony, Supervisor
Award date: 2023 Mar 14
ISBN (print): 978-91-8039-558-8
ISBN (electronic): 978-91-8039-559-5
Publication status: Published - 2023 Feb 21

Subject classification (UKÄ)

  • Computational Mathematics

Free keywords

  • numerical analysis
  • optimization
  • stochastic optimization
  • machine learning
