Variational Networks: An Optimal Control Approach to Early Stopping Variational Methods for Image Restoration

Alexander Effland*, Erich Kobler, Karl Kunisch, Thomas Pock

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


We investigate a well-known phenomenon of variational approaches in image processing, where typically the best image quality is achieved when the gradient flow process is stopped before converging to a stationary point. This paradox originates from a tradeoff between optimization and modeling errors of the underlying variational model and holds true even if deep learning methods are used to learn highly expressive regularizers from data. In this paper, we take advantage of this paradox and introduce an optimal stopping time into the gradient flow process, which in turn is learned from data by means of an optimal control approach. After a time discretization, we obtain variational networks, which can be interpreted as a particular type of recurrent neural network. The learned variational networks achieve competitive results for image denoising and image deblurring on a standard benchmark data set. One of the key theoretical results is the development of first- and second-order conditions to verify the optimality of the stopping time. A nonlinear spectral analysis of the gradient of the learned regularizer gives further insight into its regularization properties.
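The early-stopping effect described in the abstract can be illustrated with a minimal sketch. The code below is not the paper's method: it replaces the learned regularizer with a simple quadratic smoothness term on a 1-D signal, discretizes the gradient flow by explicit Euler steps (each step playing the role of one stage of a variational network), and then locates the step at which the reconstruction error against the ground truth is smallest. All names, parameter values, and the choice of regularizer are illustrative assumptions.

```python
import numpy as np

def grad_energy(x, y, lam):
    """Gradient of E(x) = 0.5*||x - y||^2 + 0.5*lam*||Dx||^2.

    The quadratic smoothness term (discrete Laplacian) is a simple
    stand-in for the learned regularizer of the paper (assumption).
    """
    data_grad = x - y
    lap = np.zeros_like(x)                     # discrete Laplacian with
    lap[1:-1] = 2 * x[1:-1] - x[:-2] - x[2:]   # Neumann-type boundaries
    lap[0] = x[0] - x[1]
    lap[-1] = x[-1] - x[-2]
    return data_grad + lam * lap

def gradient_flow(y, lam, step, n_steps):
    """Explicit Euler discretization of the gradient flow x' = -grad E(x)."""
    x = y.copy()
    traj = [x.copy()]
    for _ in range(n_steps):
        x = x - step * grad_energy(x, y, lam)
        traj.append(x.copy())
    return traj

# Noisy piecewise-constant test signal (synthetic, for illustration only).
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.3 * rng.standard_normal(100)

# Run the flow well past the useful range, then pick the empirically
# optimal stopping index -- the quantity that the paper proposes to
# learn from data via optimal control.
traj = gradient_flow(noisy, lam=20.0, step=0.02, n_steps=500)
errors = [np.linalg.norm(x - clean) for x in traj]
t_star = int(np.argmin(errors))
```

Because the stationary point of this energy over-smooths the jump in the signal, the error typically decreases at first (noise is removed) and then grows again (the edge is blurred), so the best reconstruction is obtained at an intermediate step rather than at convergence.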

Original language: English
Pages (from-to): 396-416
Number of pages: 21
Journal: Journal of Mathematical Imaging and Vision
Issue number: 3
Publication status: Published - 1 Apr 2020


Keywords

  • Deep learning
  • Early stopping
  • Gradient flow
  • Optimal control theory
  • Variational networks
  • Variational problems

ASJC Scopus subject areas

  • Condensed Matter Physics
  • Applied Mathematics
  • Geometry and Topology
  • Computer Vision and Pattern Recognition
  • Statistics and Probability
  • Modelling and Simulation
