Learned Variational Video Color Propagation

Markus Hofinger*, Erich Kobler, Alexander Effland, Thomas Pock

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

In this paper, we propose a novel method for color propagation that is used to recolor gray-scale videos (e.g., historic movies). Our energy-based model combines deep learning with a variational formulation. At its core, the method optimizes over a set of plausible color candidates extracted from motion and semantic feature matches, together with a learned regularizer that resolves color ambiguities by enforcing spatial smoothness.
Our approach makes it possible to interpret intermediate results and to incorporate extensions, such as using multiple reference frames, even after training.
We achieve state-of-the-art results on a number of standard benchmark datasets across multiple metrics and also obtain convincing results on real historical videos, even though such videos are not present during training.
Moreover, a user evaluation shows that our method propagates initial colors more faithfully and with better temporal consistency.
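To make the abstract's description concrete, the following is a minimal toy sketch of the general idea of variational color propagation: fusing several per-pixel color candidates (as would be gathered from motion and semantic feature matches) via a data term, while a smoothness regularizer resolves ambiguities. It is purely illustrative: it uses a simple hand-crafted quadratic regularizer and gradient descent, not the paper's learned regularizer or optimization scheme, and all names and parameters are assumptions.

```python
import numpy as np

def propagate_colors(candidates, weights, lam=0.1, tau=0.2, iters=200):
    """Toy variational color fusion (illustrative sketch, not the paper's model).

    candidates : (K, H, W, C) array of K plausible color proposals per pixel.
    weights    : (K, H, W) confidence of each proposal.
    lam        : weight of the smoothness regularizer.
    tau        : gradient-descent step size.
    """
    K, H, W, C = candidates.shape
    # Data term target: confidence-weighted fusion of the color candidates.
    w = weights / (weights.sum(axis=0, keepdims=True) + 1e-8)
    target = (w[..., None] * candidates).sum(axis=0)
    u = target.copy()
    for _ in range(iters):
        # Gradient of the quadratic data term: (u - target).
        grad = u - target
        # Gradient of the quadratic smoothness term: discrete Laplacian
        # (stands in for the learned regularizer described in the abstract).
        lap = (-4.0 * u
               + np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)
               + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1))
        u -= tau * (grad - lam * lap)
    return u
```

In this sketch the energy being minimized is E(u) = ½‖u − target‖² + (λ/2)‖∇u‖², whose minimizer trades fidelity to the fused candidates against spatial smoothness; the paper replaces the quadratic regularizer with a learned one and keeps the candidate set explicit, which is what makes intermediate results interpretable.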
Original language: English
Title of host publication: Computer Vision – ECCV 2022
Place of publication: Cham
Publisher: Springer
Pages: 512-530
Number of pages: 19
Publication status: Published - 2022
Event: 2022 European Conference on Computer Vision (ECCV 2022), Hybrid event, Tel Aviv, Israel
Duration: 23 Oct 2022 – 27 Oct 2022

Publication series

Name: Lecture Notes in Computer Science
Volume: 13683

Conference

Conference: 2022 European Conference on Computer Vision
Abbreviated title: ECCV 2022
Country/Territory: Israel
City: Hybrid event, Tel Aviv
Period: 23/10/22 – 27/10/22

