ATLAS-MVSNet: Attention Layers for Feature Extraction and Cost Volume Regularization in Multi-View Stereo

Publication: Contribution in book/report/conference proceedings › Conference contribution › Peer-reviewed

Abstract

We present ATLAS-MVSNet, an end-to-end deep learning architecture relying on local attention layers for depth map inference from multi-view images. Distinct from existing works, we introduce a novel module design for neural networks, which we term the hybrid attention block, that utilizes the latest insights into attention in vision models. We are able to reap the benefits of attention in both the carefully designed multi-stage feature extraction network and the cost volume regularization network. Our new approach displays significant improvement over its counterpart based purely on convolutions. While many state-of-the-art methods need multiple high-end GPUs in the training phase, we are able to train our network on a single consumer-grade GPU. ATLAS-MVSNet exhibits excellent performance, especially in terms of accuracy, on the DTU dataset. Furthermore, ATLAS-MVSNet ranks amongst the top published methods on the online Tanks and Temples benchmark.
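The abstract describes a hybrid attention block that brings local attention into both the multi-stage feature extraction network and the cost volume regularization network. The paper's exact block design is not reproduced here; the following PyTorch sketch is only a hypothetical illustration of the general idea (a convolution branch followed by window-restricted self-attention with residual connections). The class name, window size, and head count are assumptions chosen for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class HybridAttentionBlock(nn.Module):
    """Illustrative block mixing convolution with local (windowed) self-attention.

    This is a sketch of the general concept, not the ATLAS-MVSNet module.
    """

    def __init__(self, channels: int, window: int = 8, heads: int = 4):
        super().__init__()
        self.window = window
        # Convolution branch: standard local feature aggregation.
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Attention branch: multi-head self-attention applied within windows.
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        ws = self.window
        assert h % ws == 0 and w % ws == 0, "H and W must be divisible by window"

        y = self.conv(x)

        # Partition the feature map into non-overlapping ws x ws windows and
        # run self-attention inside each window (local attention).
        t = y.view(b, c, h // ws, ws, w // ws, ws)
        t = t.permute(0, 2, 4, 3, 5, 1).reshape(-1, ws * ws, c)  # (B*nWin, ws*ws, C)
        q = self.norm(t)
        a, _ = self.attn(q, q, q)
        t = t + a  # residual around the attention

        # Merge the windows back into a feature map.
        t = t.view(b, h // ws, w // ws, ws, ws, c)
        t = t.permute(0, 5, 1, 3, 2, 4).reshape(b, c, h, w)
        return x + t  # residual around the whole block
```

As a usage sanity check, `HybridAttentionBlock(32)(torch.randn(1, 32, 64, 80))` returns a tensor of the same shape; stacking such blocks at several resolutions would loosely mirror the multi-stage feature extraction described in the abstract.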
Original language: English
Title: 2022 26th International Conference on Pattern Recognition, ICPR 2022
Publisher: ACM/IEEE
Pages: 3557-3563
Number of pages: 7
ISBN (electronic): 9781665490627
ISBN (print): 978-1-6654-9063-4
DOIs
Publication status: Published - 25 Aug 2022
Event: 26th International Conference on Pattern Recognition: ICPR 2022 - Montreal, Canada
Duration: 21 Aug 2022 - 25 Aug 2022

Conference

Conference: 26th International Conference on Pattern Recognition
Abbreviated title: ICPR 2022
Country/Territory: Canada
City: Montreal
Period: 21/08/22 - 25/08/22

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
