Abstract
While 3D object detection in LiDAR point clouds is well-established in academia and industry, the explainability of these models is a largely unexplored field. In this paper, we propose a method to generate attribution maps for the detected objects in order to better understand the behavior of such models. These maps indicate the importance of each 3D point in predicting the specific objects. Our method works with black-box models: We do not require any prior knowledge of the architecture nor access to the model’s internals, like parameters, activations or gradients. Our efficient perturbation-based approach empirically estimates the importance of each point by testing the model with randomly generated subsets of the input point cloud. Our sub-sampling strategy takes into account the special characteristics of LiDAR data, such as the depth-dependent point density. We show a detailed evaluation of the attribution maps and demonstrate that they are interpretable and highly informative. Furthermore, we compare the attribution maps of recent 3D object detection architectures to provide insights into their decision-making processes.
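The perturbation-based estimation described above can be sketched in a few lines: random subsets of the point cloud are fed to the black-box model, and each point's importance is the average model score over the subsets that contained it, relative to the overall average. This is a minimal illustration of the idea, not the paper's implementation; `score_fn` is a hypothetical stand-in for the detector's confidence for one specific object, and the uniform `keep_prob` ignores the depth-dependent sub-sampling strategy the paper proposes.

```python
import numpy as np

def attribution_map(points, score_fn, n_samples=1000, keep_prob=0.5, rng=None):
    """Estimate per-point importance for a black-box 3D detector.

    points   : (N, 3) array, the input LiDAR point cloud.
    score_fn : callable mapping a point subset to a scalar detection
               score for the object of interest (hypothetical API).
    Returns an (N,) array: the average score of the random subsets each
    point appeared in, minus the average score over all subsets.
    """
    rng = np.random.default_rng(rng)
    n = len(points)
    totals = np.zeros(n)   # sum of scores of subsets containing each point
    counts = np.zeros(n)   # number of subsets containing each point
    scores = []
    for _ in range(n_samples):
        mask = rng.random(n) < keep_prob   # random subset of the cloud
        score = score_fn(points[mask])     # query the black-box model
        totals[mask] += score
        counts[mask] += 1
        scores.append(score)
    # Importance: how much better the model scores when the point is kept.
    return totals / np.maximum(counts, 1) - np.mean(scores)
```

With a toy `score_fn` that simply counts points in a region of interest, points inside that region receive higher attribution than points outside it, which matches the intuition that the maps highlight the points driving a detection.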
Original language | English |
---|---|
Title | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition |
Pages | 1131-1140 |
Number of pages | 10 |
ISBN (electronic) | 9781665469463 |
DOIs | |
Publication status | Published - 2022 |
Event | 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition: CVPR 2022 - New Orleans Ernest N. Morial Convention Center, Hybrid event, New Orleans, United States Duration: 21 June 2022 → 24 June 2022 Conference number: 2022 |
Konferenz
Konferenz | 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition |
---|---|
Short title | CVPR 2022 |
Country/Territory | United States |
Location | Hybrid event, New Orleans |
Period | 21/06/22 → 24/06/22 |
ASJC Scopus subject areas
- Software
- Computer Vision and Pattern Recognition