ACCELERATED FORWARD-BACKWARD OPTIMIZATION USING DEEP LEARNING

Sebastian Banert, Jevgenija Rudzusika, Ozan Öktem, Jonas Adler

Research output: Contribution to journal › Article › peer-review

Abstract

We propose several deep-learning-accelerated optimization solvers with convergence guarantees. We use ideas from the analysis of accelerated forward-backward schemes such as FISTA, but instead of the classical approach of proving convergence for a specific choice of parameters, such as the step size, we show convergence whenever the update is chosen from a specific set. Rather than picking a point in this set using some predefined method, we train a deep neural network to pick the best update within a given space. Finally, we show that the method is applicable to several cases of smooth and nonsmooth optimization, and demonstrate results superior to established accelerated solvers.
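The abstract describes a forward-backward (proximal gradient) scheme in which a learned component proposes each update and a safeguard keeps the iterate inside a set for which convergence is guaranteed. The sketch below is a minimal illustration of that idea for a LASSO problem, not the paper's actual construction: the admissible set, the trust-region safeguard, and the `learned_update` stand-in (a real method would use a trained network here) are all assumptions made for this example.

```python
import numpy as np

def prox_l1(x, t):
    """Soft-thresholding: the prox operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def learned_update(x, grad, step):
    """Hypothetical stand-in for a trained network's proposed update.
    Here: a deliberately aggressive gradient step as a dummy proposal."""
    return x - 1.5 * step * grad

def safeguarded_fb(A, b, lam, x0, n_iter=200):
    """Forward-backward iteration for min 0.5*||Ax - b||^2 + lam*||x||_1,
    accepting the learned proposal only while it stays inside a simple
    trust region around the plain forward-backward point (an assumed
    safeguard, not the paper's admissible set)."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    step = 1.0 / L
    x = x0.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x_fb = prox_l1(x - step * grad, step * lam)        # safe baseline step
        x_prop = prox_l1(learned_update(x, grad, step), step * lam)
        # Safeguard: keep the proposal only if it is at least as close to
        # the baseline point as the baseline is to the current iterate.
        if np.linalg.norm(x_prop - x_fb) <= np.linalg.norm(x_fb - x):
            x = x_prop
        else:
            x = x_fb
    return x

# Usage on a small synthetic sparse-recovery instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true
x_hat = safeguarded_fb(A, b, lam=0.1, x0=np.zeros(100))
```

Whatever the proposal mechanism, the fallback to the plain forward-backward step means the iteration inherits the baseline's convergence behavior; this mirrors the abstract's point that convergence is proved for any update chosen from an admissible set rather than for one fixed parameter choice.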

Original language: English
Pages (from-to): 1236-1263
Number of pages: 28
Journal: SIAM Journal on Optimization
Volume: 34
Issue number: 2
Publication status: Published - 2024

Subject classification (UKÄ)

  • Computational Mathematics

Free keywords

  • convex optimization
  • deep learning
  • inverse problems
  • proximal-gradient algorithm
