A generalized distributed accelerated gradient method for distributed model predictive control with iteration complexity bounds

Research output: Chapter in Book/Report/Conference proceeding › Conference paper in proceeding › Peer review

Abstract

Most distributed optimization methods used for distributed model predictive control (DMPC) are gradient-based. Gradient-based optimization algorithms are known to have iterations of low complexity, but the number of iterations needed to achieve satisfactory accuracy can be large. This is undesirable for distributed optimization in distributed model predictive control, since every iteration requires communication between subsystems. Rather, the number of iterations should be kept low to reduce communication requirements, while the computational complexity within an iteration may be significant. By incorporating Hessian information into a distributed accelerated gradient method in a well-defined manner, we significantly reduce the number of iterations needed to achieve satisfactory accuracy in the solutions, compared to distributed methods that are strictly gradient-based. Further, we provide convergence rate results and iteration complexity bounds for the developed algorithm.
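The qualitative effect described in the abstract can be illustrated with a small numerical sketch: a Nesterov-type accelerated gradient iteration in which the gradient step is scaled by a block-diagonal approximation of the inverse Hessian, compared to the plain accelerated gradient step. The quadratic test problem, the block-diagonal scaling, and the parameter choices below are illustrative assumptions only; this is not the algorithm, problem splitting, or bounds developed in the paper.

```python
# Illustrative sketch only (not the paper's algorithm): accelerated gradient
# descent on a strongly convex quadratic, with and without scaling the step
# by a block-diagonal inverse-Hessian approximation. All problem data and
# parameter choices are assumptions made for this example.
import numpy as np


def accelerated_gradient(grad, x0, L, mu, n_iter, metric_inv=None):
    """Nesterov accelerated gradient method.

    If metric_inv (an inverse Hessian approximation) is supplied, the step
    uses the scaled gradient metric_inv @ grad(y); L and mu must then bound
    the spectrum of the correspondingly scaled Hessian.
    """
    x = y = x0.copy()
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    for _ in range(n_iter):
        g = grad(y)
        if metric_inv is not None:
            g = metric_inv @ g            # incorporate second-order information
        x_next = y - g / L                # (scaled) gradient step
        y = x_next + beta * (x_next - x)  # momentum (acceleration) step
        x = x_next
    return x


rng = np.random.default_rng(0)
bs, nb = 10, 3                            # block size and number of "subsystems"
n = bs * nb

# Quadratic f(x) = 0.5 x'Hx - b'x with strong block-diagonal structure and
# weak coupling, loosely mimicking a coupled DMPC problem (assumption).
A = 0.1 * rng.standard_normal((n, n))
for i in range(nb):
    sl = slice(i * bs, (i + 1) * bs)
    A[sl, sl] += (3.0 ** i) * rng.standard_normal((bs, bs))
H = A.T @ A + 0.1 * np.eye(n)
b = rng.standard_normal(n)
grad = lambda x: H @ x - b
x_star = np.linalg.solve(H, b)

# Block-diagonal Hessian approximation: each block uses only local curvature.
M = np.zeros_like(H)
for i in range(nb):
    sl = slice(i * bs, (i + 1) * bs)
    M[sl, sl] = H[sl, sl]
M_inv = np.linalg.inv(M)

# Curvature bounds: plain Hessian vs. the scaled Hessian C^{-1} H C^{-T}.
eigs = np.linalg.eigvalsh(H)
C_inv = np.linalg.inv(np.linalg.cholesky(M))
eigs_scaled = np.linalg.eigvalsh(C_inv @ H @ C_inv.T)

x0 = np.zeros(n)
x_plain = accelerated_gradient(grad, x0, eigs[-1], eigs[0], 200)
x_scaled = accelerated_gradient(grad, x0, eigs_scaled[-1], eigs_scaled[0], 200,
                                metric_inv=M_inv)
print("condition number, plain :", eigs[-1] / eigs[0])
print("condition number, scaled:", eigs_scaled[-1] / eigs_scaled[0])
print("error after 200 iterations, plain :", np.linalg.norm(x_plain - x_star))
print("error after 200 iterations, scaled:", np.linalg.norm(x_scaled - x_star))
```

In this toy setting the Hessian-scaled iteration reaches a far more accurate solution within the same iteration budget, which is the qualitative trade-off the paper addresses: more work per iteration in exchange for fewer communication rounds.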
Original language: English
Host publication title: [Host publication title missing]
Publisher: IEEE - Institute of Electrical and Electronics Engineers Inc.
Pages: 327-333
Status: Published - 2013
Event: American Control Conference, 2013 - Washington, DC, USA
Duration: 17 June 2013 - 19 June 2013

Publication series

Name
ISSN (Print): 0743-1619

Conference

Conference: American Control Conference, 2013
Country/Territory: USA
City: Washington, DC
Period: 2013/06/17 - 2013/06/19

Subject classification (UKÄ)

  • Control Engineering
