Abstract
Understanding the actions of other agents increases the efficiency of autonomous mobile robots (AMRs), since actions encode intention and indicate future movements. We propose a new method that infers vehicle actions using a shallow image-based classification model. Actions are classified from bird's-eye-view scene crops, in which we project the detections of a 3D object detection model onto a context map. We learn map context information and aggregate temporal sequence information without requiring object tracking. The result is a highly efficient classification model that can easily be deployed on embedded AMR hardware. To evaluate our approach, we create new large-scale synthetic datasets of warehouse traffic based on real vehicle models and geometry.
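The abstract outlines a pipeline: per-frame 3D detections are projected onto a bird's-eye-view context map, a crop around the target vehicle is taken, a few past frames are stacked as image channels, and a shallow classifier predicts the action. The following minimal PyTorch sketch illustrates that idea only; the class name `ShallowActionClassifier`, the crop resolution, the number of stacked frames, and the action set are all assumptions made for illustration, not the paper's implementation.

```python
# Hypothetical sketch of the described idea: BEV crops around a detected
# vehicle, with T past frames stacked as channels, fed to a shallow CNN.
# All names, sizes, and the action set are illustrative assumptions.
import torch
import torch.nn as nn

NUM_ACTIONS = 4   # e.g. {go straight, turn left, turn right, stop} (assumed)
T_FRAMES = 5      # number of stacked past frames (assumed)
CROP = 64         # BEV crop resolution in pixels (assumed)

class ShallowActionClassifier(nn.Module):
    """Small CNN over temporally stacked BEV crops. No object tracking is
    needed, since each frame's detections are rasterized independently onto
    the context map and only stacked along the channel dimension."""
    def __init__(self, in_ch: int = T_FRAMES, n_actions: int = NUM_ACTIONS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),     # 32 -> 16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),     # 16 -> 8
            nn.AdaptiveAvgPool2d(1),                                   # 8 -> 1
        )
        self.head = nn.Linear(64, n_actions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T_FRAMES, CROP, CROP), one channel per past time step
        return self.head(self.features(x).flatten(1))

# Usage: a batch of 8 crops, each a stack of 5 rasterized BEV frames.
model = ShallowActionClassifier()
logits = model(torch.rand(8, T_FRAMES, CROP, CROP))
print(logits.shape)  # torch.Size([8, 4])
```

Stacking frames as channels is one simple way to aggregate temporal information without tracking, because each frame is rasterized independently onto the map; the abstract does not specify the aggregation mechanism, so this should be read as a sketch under that assumption.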
| Original language | English |
| --- | --- |
| Title | 2024 IEEE International Conference on Robotics and Automation (ICRA) |
| Publisher | IEEE |
| Pages | 10757-10763 |
| Number of pages | 7 |
| ISBN (electronic) | 9798350384574 |
| DOIs | |
| Publication status | Published - 8 Aug 2024 |
| Event | 2024 IEEE International Conference on Robotics and Automation: ICRA 2024 - Yokohama, Japan. Duration: 13 May 2024 → 17 May 2024 |
Conference

| Conference | 2024 IEEE International Conference on Robotics and Automation |
| --- | --- |
| Country/Territory | Japan |
| City | Yokohama |
| Period | 13/05/24 → 17/05/24 |
ASJC Scopus subject areas
- Software
- Control and Systems Engineering
- Electrical and Electronic Engineering
- Artificial Intelligence