Adaptive FISTA for nonconvex optimization

Peter Ochs, Thomas Pock

Research output: Contribution to journal › Article › peer-review


In this paper we propose an adaptively extrapolated proximal gradient method, which is based on the accelerated proximal gradient method (also known as FISTA); however, we locally optimize the extrapolation parameter by carrying out an exact (or inexact) line search. It turns out that in some situations, the proposed algorithm is equivalent to a class of SR1 (identity minus rank 1) proximal quasi-Newton methods. Convergence is proved in a general nonconvex setting, and hence, as a byproduct, we also obtain new convergence guarantees for proximal quasi-Newton methods. The efficiency of the new method is shown in numerical experiments on a sparsity regularized nonlinear inverse problem.
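The idea described in the abstract can be illustrated with a minimal sketch: a proximal gradient step where the FISTA extrapolation parameter is chosen by an inexact (grid-based) line search on the objective, applied to an l1-regularized least-squares problem. This is only an illustrative assumption-laden toy, not the authors' exact scheme (the paper uses an exact or inexact line search and establishes the SR1 quasi-Newton equivalence, which is not reproduced here); the function names and the grid search are hypothetical choices.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1 (soft thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def adaptive_fista(A, b, lam, n_iter=200, n_grid=21):
    """Illustrative sketch: proximal gradient with the extrapolation
    parameter picked by a crude grid line search over the objective.
    Not the paper's exact algorithm."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of grad of 0.5*||Ax-b||^2
    n = A.shape[1]
    x_prev = np.zeros(n)
    x = np.zeros(n)

    def F(z):
        # composite objective: smooth least squares + l1 penalty
        r = A @ z - b
        return 0.5 * r @ r + lam * np.abs(z).sum()

    for _ in range(n_iter):
        d = x - x_prev
        best_F, best_x = np.inf, x
        # inexact line search over the extrapolation parameter beta
        for beta in np.linspace(0.0, 1.0, n_grid):
            y = x + beta * d
            grad = A.T @ (A @ y - b)
            x_cand = soft_threshold(y - grad / L, lam / L)
            Fc = F(x_cand)
            if Fc < best_F:
                best_F, best_x = Fc, x_cand
        x_prev, x = x, best_x
    return x
```

In standard FISTA the extrapolation parameter follows a fixed recursion; here each candidate beta is evaluated through the resulting proximal gradient step and the best one is kept, which is the adaptive flavor the abstract describes.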

Original language: English
Pages (from-to): 2482-2503
Number of pages: 22
Journal: SIAM Journal on Optimization
Issue number: 4
Publication status: Published - 1 Jan 2019

Keywords

  • Proximal algorithm
  • Proximal quasi-Newton
  • SR1 quasi-Newton

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science

