ACCELERATED FORWARD-BACKWARD OPTIMIZATION USING DEEP LEARNING

Sebastian Banert, Jevgenija Rudzusika, Ozan Öktem, Jonas Adler

Research output: Contribution to journal › Article in a scientific journal › Peer reviewed

Abstract

We propose several deep-learning accelerated optimization solvers with convergence guarantees. We use ideas from the analysis of accelerated forward-backward schemes such as FISTA, but instead of the classical approach of proving convergence for a particular choice of parameters, such as a step size, we prove convergence whenever the update is chosen within a specific set. Rather than picking a point in this set by some predefined rule, we train a deep neural network to select the best update within a given space. Finally, we show that the method is applicable to several cases of smooth and nonsmooth optimization and that it yields results superior to established accelerated solvers.
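
The abstract describes the scheme only at a high level. The following is a minimal Python sketch of the idea applied to a LASSO problem: a hypothetical `propose` callable stands in for the trained network, and a simple ball around the vanilla forward-backward point stands in for the convergence-guaranteeing set; both are illustrative assumptions, not the construction from the paper.

import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def learned_fb_lasso(A, b, lam, propose, n_iter=100):
    """Forward-backward iteration for min 0.5*||Ax - b||^2 + lam*||x||_1.

    `propose(x, grad)` is a hypothetical callable playing the role of the
    trained network that suggests the next iterate.
    """
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth gradient
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x_fb = soft_threshold(x - step * grad, step * lam)  # vanilla FB update
        x_prop = propose(x, grad)                           # learned proposal
        # Safeguard (illustrative, not the set analyzed in the paper):
        # accept the proposal only if it stays within a shrinking ball
        # around the FB point; otherwise fall back to the provably
        # convergent forward-backward update.
        radius = 0.5 * np.linalg.norm(x_fb - x)
        if np.linalg.norm(x_prop - x_fb) <= radius:
            x = x_prop
        else:
            x = x_fb
    return x

# Usage with a placeholder proposal (a plain gradient step) in lieu of a network:
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
x_hat = learned_fb_lasso(A, b, lam=0.1,
                         propose=lambda x, g: x - g / np.linalg.norm(A, 2) ** 2)

The point of the safeguard is that the learned proposal can only improve on the baseline: whenever it leaves the admissible set, the iteration reduces to the classical forward-backward step, so the convergence guarantee is retained.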

Original language: English
Pages (from-to): 1236-1263
Number of pages: 28
Journal: SIAM Journal on Optimization
Volume: 34
Issue: 2
DOI
Status: Published - 2024

Subject classification (UKÄ)

  • Computational Mathematics
