Abstract
Optimization problems occur in many areas of science and engineering. When the optimization problem at hand is large-scale, the computational cost of the optimization algorithm is a main concern. First-order optimization algorithms, in which updates are performed using only the gradient or a subgradient of the objective function, have low per-iteration computational cost, which makes them suitable for tackling large-scale optimization problems. Even though the per-iteration cost of these methods is reasonably low, the number of iterations needed to find a solution, especially if medium or high accuracy is required, can in practice be very large; as a result, the overall computational cost of using these methods can still be high.
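For concreteness (standard material, not specific to the thesis), the prototypical first-order update for minimizing a function \(f\) is the (sub)gradient step

\[
x_{k+1} = x_k - \gamma_k g_k, \qquad g_k \in \partial f(x_k),
\]

with step size \(\gamma_k > 0\); each iteration requires only one (sub)gradient evaluation, which is what keeps the per-iteration cost low.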
This thesis focuses on one of the most widely used first-order optimization algorithms, namely the forward–backward splitting algorithm, and attempts to improve its performance. To that end, the thesis proposes novel first-order optimization algorithms, all of which build upon the forward–backward method. An important feature of the proposed methods is their flexibility. Using this flexibility together with the notion of safeguarding, the thesis provides a framework through which many new and efficient optimization algorithms can be developed.
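For reference, forward–backward splitting addresses composite problems of the form \(\min_x f(x) + g(x)\), where \(f\) is smooth with \(L\)-Lipschitz gradient and \(g\) admits an inexpensive proximal operator, via the standard iteration

\[
x_{k+1} = \operatorname{prox}_{\gamma g}\bigl(x_k - \gamma \nabla f(x_k)\bigr), \qquad 0 < \gamma < 2/L,
\]

which combines a forward (gradient) step on \(f\) with a backward (proximal) step on \(g\).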
To improve the efficiency of the forward–backward algorithm, two main approaches are taken in this thesis. In the first, a technique is proposed to adjust the point at which the forward–backward operator is evaluated. This is done by including additive terms, called deviations, in the input argument of the forward–backward operator. To guarantee a convergent algorithm, the deviations must satisfy a safeguard condition at each iteration. Incorporating deviations provides great flexibility and paves the way for designing new and improved forward–backward-based methods. A few instances of employing this flexibility to derive new algorithms are presented in the thesis.
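The following minimal Python sketch illustrates the idea under simplifying assumptions: the deviation perturbs the evaluation point of a standard forward–backward step, and a simple norm-based safeguard caps its size. The momentum-like deviation choice, the safeguard bound, and all names here are illustrative assumptions; the deviations and safeguard condition developed in the thesis are more general.

```python
import numpy as np

def fb_with_deviation(grad_f, prox_g, x0, step, n_iter=500, eps=0.5):
    """Forward-backward splitting with an additive deviation (sketch).

    Illustration only: the deviation u extrapolates along the previous
    step and is shrunk whenever it violates a norm safeguard; the exact
    deviation and safeguard condition in the thesis differ.
    """
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(n_iter):
        u = 0.3 * (x - x_prev)                  # candidate deviation (heuristic choice)
        bound = eps * np.linalg.norm(x - x_prev)
        norm_u = np.linalg.norm(u)
        if norm_u > bound:                      # safeguard: cap the deviation's norm
            u *= bound / norm_u
        y = x + u                               # adjusted evaluation point
        x_prev, x = x, prox_g(y - step * grad_f(y), step)
    return x

# Example use on a lasso problem: minimize 0.5*||A x - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10)); b = rng.standard_normal(20); lam = 0.1
grad_f = lambda x: A.T @ (A @ x - b)
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)  # prox of lam*||.||_1
x_hat = fb_with_deviation(grad_f, soft, np.zeros(10),
                          step=1.0 / np.linalg.norm(A, 2) ** 2)
```

With zero deviations the sketch reduces to the plain forward–backward iteration; the safeguard is what permits aggressive deviation choices without losing convergence.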
In the second proposed approach, a globally convergent (and potentially slow) algorithm is combined with a fast, locally convergent one to form an efficient optimization scheme. The globally convergent method ensures convergence of the overall scheme. The fast local algorithm speeds up convergence: the scheme switches from the globally convergent algorithm to the local one whenever it is safe to do so, that is, when a safeguard condition is satisfied. This approach, which allows different global and local algorithms to be combined within its framework, can yield fast and globally convergent optimization schemes.
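The switching logic can be sketched generically. In the snippet below, `global_step`, `fast_step`, and `residual` are assumed interfaces, not from the thesis: the fast candidate (for instance a quasi-Newton or Anderson-accelerated step) is accepted only when it contracts a residual measure by a factor rho < 1, which plays the role of the safeguard condition; otherwise the scheme falls back to the globally convergent step (for instance a plain forward–backward step).

```python
def safeguarded_scheme(global_step, fast_step, residual, x0, n_iter=200, rho=0.9):
    """Combine a globally convergent step with a fast local step (sketch).

    Assumed interfaces: global_step(x) and fast_step(x) each return a
    candidate iterate; residual(x) >= 0 measures distance to a solution
    (e.g. a fixed-point residual). The thesis's actual safeguard
    condition and algorithm pairings differ.
    """
    x = x0
    for _ in range(n_iter):
        x_fast = fast_step(x)
        if residual(x_fast) <= rho * residual(x):  # safeguard test passed:
            x = x_fast                             # fast local step is safe
        else:
            x = global_step(x)                     # fall back; ensures convergence
    return x
```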
Original language  English
Qualification  Doctor
Awarding institution  
Supervisors  
Thesis sponsors  
Award date  15 Dec 2022
Publisher  
ISBN (print)  9789180394680
ISBN (electronic)  9789180394673
Status  Published - 21 Nov 2022
Bibliographic information
Defence details
Date: 2022-12-15
Time: 10:15
Place: Lecture hall C, building KC4, Naturvetarvägen 18, Lund. Zoom: https://luse.zoom.us/j/62525384194
External reviewer(s)
Name: Dirk Lorenz
Title: Professor
Affiliation: Technical University of Braunschweig, Germany.
Subject classification (UKÄ)
 Control Engineering
 Mathematical Analysis
 Computational Mathematics
Fingerprint
Explore the research topics of "Efficient and Flexible First-Order Optimization Algorithms". Together they form a unique fingerprint.
Projects
 2 Completed

Large-Scale Optimization
Giselsson, P., Fält, M., Sadeghi, H., Morin, M., Banert, S., Upadhyaya, M., Nilsson, M. & Åkerman, A.
2018/01/01 → 2022/12/31
Project: Research

On acceleration of first-order convex optimization methods
2016/08/01 → 2022/12/31
Project: Dissertation