CycDA: Unsupervised Cycle Domain Adaptation to Learn from Image to Video

Wei Lin, Anna Kukleva, Kunyang Sun, Horst Possegger, Hilde Kuehne, Horst Bischof

Publication: Contribution to book/report/conference proceedings › Conference paper › Peer-reviewed

Abstract

Although action recognition has achieved impressive results over recent years, both the collection and annotation of video training data remain time-consuming and cost-intensive. Image-to-video adaptation has therefore been proposed to exploit web images, which come essentially free of labeling effort, as a source for adaptation to unlabeled target videos. This poses two major challenges: (1) spatial domain shift between web images and video frames; (2) modality gap between image and video data. To address these challenges, we propose Cycle Domain Adaptation (CycDA), a cycle-based approach for unsupervised image-to-video domain adaptation that, on the one hand, leverages the joint spatial information in images and videos and, on the other hand, trains an independent spatio-temporal model to bridge the modality gap. We alternate between spatial and spatio-temporal learning, with knowledge transfer between the two in each cycle. We evaluate our approach on benchmark datasets for image-to-video as well as mixed-source domain adaptation, achieving state-of-the-art results and demonstrating the benefits of our cyclic adaptation.
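The cyclic alternation described in the abstract can be illustrated with a minimal sketch. The Python outline below is based solely on the description above (spatial and spatio-temporal learning alternating with pseudo-label exchange per cycle); all names, signatures, and the two-stage split are assumptions for illustration and do not reflect the authors' released implementation.

```python
from typing import Any, Callable, Optional, Sequence

def cycda_cycle(
    web_images: Sequence[Any],                 # labeled source web images
    target_videos: Sequence[Any],              # unlabeled target videos
    train_spatial: Callable[..., Any],         # hypothetical: trains an image model, returns frame-level pseudo-labels
    train_spatiotemporal: Callable[..., Any],  # hypothetical: trains a video model, returns video-level pseudo-labels
    num_cycles: int = 2,
) -> Optional[Any]:
    """Sketch of alternating spatial / spatio-temporal learning with pseudo-label exchange."""
    video_pseudo_labels: Optional[Any] = None  # no video-level supervision before the first cycle
    for _ in range(num_cycles):
        # Spatial step: align web images with target video frames; pseudo-labels
        # from the previous cycle's video model (if any) can refine this step.
        frame_pseudo_labels = train_spatial(web_images, target_videos, video_pseudo_labels)
        # Spatio-temporal step: train an independent video model on the target
        # videos using the frame-level pseudo-labels, bridging the modality gap.
        video_pseudo_labels = train_spatiotemporal(target_videos, frame_pseudo_labels)
    return video_pseudo_labels
```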
Original language: English
Title: ECCV
Number of pages: 17
Publication status: Accepted/In press - 2022
Event: 2022 European Conference on Computer Vision: ECCV 2022 - Hybrid event, Tel Aviv, Israel
Duration: 23 Oct 2022 – 27 Oct 2022

Conference

Conference: 2022 European Conference on Computer Vision
Abbreviated title: ECCV 2022
Country/Territory: Israel
Location: Hybrid event, Tel Aviv
Period: 23/10/22 – 27/10/22
