Fixing and extending some recent results on the ADMM algorithm

Sebastian Banert, Radu Ioan Boţ, Ernö Robert Csetnek

Research output: Contribution to journal › Article › peer-review

Abstract

We investigate the techniques and ideas used in Shefi and Teboulle (SIAM J Optim 24(1), 269–297, 2014) in the convergence analysis of two proximal ADMM algorithms for solving convex optimization problems involving compositions with linear operators. In addition, we formulate a variant of the ADMM algorithm that can handle convex optimization problems whose objective contains an additional smooth function, which is evaluated via its gradient. Moreover, in each iteration we allow the use of variable metrics, and the investigations are carried out in the setting of infinite-dimensional Hilbert spaces. We study the convergence properties of this algorithmic scheme.
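The proximal ADMM variants analyzed in the paper are not reproduced in this abstract. As a generic illustration of the alternating structure that ADMM-type methods share, the sketch below applies classical ADMM to a simple one-dimensional splitting, min over x of (1/2)‖x − a‖² + λ‖x‖₁ with the consensus constraint x = z; the problem instance, step size rho, and iteration count are illustrative choices, not taken from the paper.

```python
def soft_threshold(v, t):
    """Proximal map of t * |.|_1, applied element-wise to the list v."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi > 0 else -1.0) for vi in v]

def admm_lasso(a, lam, rho=1.0, iters=300):
    """Classical ADMM for min_x 0.5*||x - a||^2 + lam*||x||_1,
    split as f(x) + g(z) subject to x = z (a textbook instance,
    not the variable-metric scheme of the paper)."""
    n = len(a)
    z = [0.0] * n          # auxiliary variable
    u = [0.0] * n          # scaled dual variable
    for _ in range(iters):
        # x-update: minimize 0.5*(x - a)^2 + (rho/2)*(x - z + u)^2 in closed form
        x = [(a[i] + rho * (z[i] - u[i])) / (1.0 + rho) for i in range(n)]
        # z-update: proximal step on the nonsmooth l1 term
        z = soft_threshold([x[i] + u[i] for i in range(n)], lam / rho)
        # dual update: accumulate the constraint residual x - z
        u = [u[i] + x[i] - z[i] for i in range(n)]
    return z
```

For this problem the solution is the soft-thresholding of a by λ, so e.g. `admm_lasso([3.0, -0.5, 1.0], 1.0)` converges to approximately `[2.0, 0.0, 0.0]`.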

Original language: English
Pages (from-to): 1303-1325
Journal: Numerical Algorithms
Volume: 86
Issue number: 3
Early online date: 14 May 2020
Publication status: Published - 2021

Subject classification (UKÄ)

  • Computational Mathematics
  • Control Engineering

Free keywords

  • ADMM algorithm
  • Lagrangian
  • Positive semidefinite operators
  • Saddle points
  • Variable metrics

