Abstract
We propose several deep-learning accelerated optimization solvers with convergence guarantees. We use ideas from the analysis of accelerated forward-backward schemes like FISTA; however, instead of the classical approach of proving convergence for a particular choice of parameters, such as a step size, we show convergence whenever the update is chosen from a specific set. Rather than picking a point in this set by some predefined method, we train a deep neural network to pick the best update within that set. Finally, we show that the method is applicable to several cases of smooth and nonsmooth optimization, and that it yields superior results compared to established accelerated solvers.
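The core idea admits a minimal sketch (not the authors' implementation): in a forward-backward iteration for a LASSO-type problem, the classical proximal-gradient point defines a ball-shaped "safe" set, a learned predictor proposes an update, and the proposal is projected back into that set so that the convergence guarantee is preserved. The placeholder `propose_update` and the radius schedule `eps_k` below are illustrative assumptions, not the parameterization studied in the paper.

```python
# Illustrative sketch: forward-backward iteration for
#   min_x 0.5*||Ax - b||^2 + lam*||x||_1,
# where each update may deviate from the classical proximal-gradient point
# as long as it remains inside a shrinking "safe" ball around it.
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def propose_update(x, grad, nominal):
    """Placeholder for a learned update rule; here a simple extrapolation."""
    return nominal + 0.5 * (nominal - x)

def learned_forward_backward(A, b, lam, n_iters=200):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
    t = 1.0 / L                            # step size
    x = np.zeros(A.shape[1])
    for k in range(n_iters):
        grad = A.T @ (A @ x - b)           # gradient of the smooth part
        nominal = soft_threshold(x - t * grad, t * lam)  # classical FB update
        eps_k = 1.0 / (k + 1) ** 2         # safe-set radius (assumed schedule)
        proposal = propose_update(x, grad, nominal)
        # Project the proposal onto the ball of radius eps_k around `nominal`,
        # so the accepted update always lies in the convergence-preserving set.
        d = proposal - nominal
        norm_d = np.linalg.norm(d)
        x = nominal + d * min(1.0, eps_k / norm_d) if norm_d > 0 else nominal
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    x_hat = learned_forward_backward(A, b, lam=0.1)
    obj = 0.5 * np.linalg.norm(A @ x_hat - b) ** 2 + 0.1 * np.linalg.norm(x_hat, 1)
    print("objective:", obj)
```

In the paper the proposal step is a trained network rather than the fixed extrapolation used here; the projection onto the safe set is what allows the convergence analysis to go through regardless of what the network outputs.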
| Original language | English |
| --- | --- |
| Pages (from-to) | 1236-1263 |
| Number of pages | 28 |
| Journal | SIAM Journal on Optimization |
| Volume | 34 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 2024 |
Subject classification (UKÄ)
- Computational Mathematics
Free keywords
- convex optimization
- deep learning
- inverse problems
- proximal-gradient algorithm