Fixing and extending some recent results on the ADMM algorithm

Research output: Contribution to journal › Article

Abstract

We investigate the techniques and ideas used by Shefi and Teboulle (SIAM J Optim 24(1), 269–297, 2014) in the convergence analysis of two proximal ADMM algorithms for solving convex optimization problems involving compositions with linear operators. In addition, we formulate a variant of the ADMM algorithm that can handle convex optimization problems whose objective contains an additional smooth function, accessed only through its gradient. Moreover, we allow the use of variable metrics in each iteration, and the analysis is carried out in the setting of infinite-dimensional Hilbert spaces. The convergence properties of this algorithmic scheme are investigated.
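For orientation, the classical ADMM template for a problem of the form min_x f(x) + g(Ax), with f and g proper, convex, lower semicontinuous functions and A a linear operator, is sketched below; this is only the standard scheme, not the proximal, variable-metric variant with an additional smooth term analyzed in the paper:

\begin{aligned}
x^{k+1} &\in \operatorname*{argmin}_{x} \; f(x) + \frac{c}{2}\,\Big\| Ax - z^{k} + \tfrac{1}{c} y^{k} \Big\|^{2}, \\
z^{k+1} &= \operatorname*{argmin}_{z} \; g(z) + \frac{c}{2}\,\Big\| Ax^{k+1} - z + \tfrac{1}{c} y^{k} \Big\|^{2}, \\
y^{k+1} &= y^{k} + c\,\big( Ax^{k+1} - z^{k+1} \big),
\end{aligned}

where c > 0 is the penalty parameter and y^k is the dual variable attached to the splitting constraint Ax = z. Roughly speaking, proximal variable-metric variants add quadratic terms of the form ½‖x − x^k‖²_{M_k}, induced by positive semidefinite operators M_k, to these subproblems; the precise scheme, the treatment of the smooth term via its gradient, and the assumptions are given in the paper.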

Details

Authors
Organisations
External organisations
  • University of Vienna
Research areas and keywords

Subject classification (UKÄ)

  • Computational Mathematics
  • Control Engineering

Keywords

  • ADMM algorithm
  • Lagrangian
  • Positive semidefinite operators
  • Saddle points
  • Variable metrics
Original language: English
Pages (from-to): 1303-1325
Journal: Numerical Algorithms
Volume: 86
Issue number: 3
Early online date: 2020 May 14
Publication status: Published - 2021
Publication category: Research
Peer-reviewed: Yes