Abstract
Understanding the actions of other agents increases the efficiency of autonomous mobile robots (AMRs), since actions convey intention and indicate future movements. We propose a new method for inferring vehicle actions with a shallow image-based classification model. Actions are classified from bird's-eye-view scene crops, obtained by projecting the detections of a 3D object detection model onto a context map. We learn map context information and aggregate temporal sequence information without requiring object tracking. The result is a highly efficient classification model that can easily be deployed on embedded AMR hardware. To evaluate our approach, we create new large-scale synthetic datasets of warehouse traffic based on real vehicle models and geometry.
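The pipeline described in the abstract can be pictured with a small sketch: a detected vehicle's world position is used to cut a bird's-eye-view patch out of a rasterized context map, patches from several past frames are stacked along the channel axis (aggregating temporal information without tracking), and a shallow CNN classifies the action. The crop size, map resolution, action set, and all function and class names below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): crop a BEV patch of a context map
# around each 3D detection, stack T past crops along the channel axis, and
# classify the vehicle action with a shallow CNN. Sizes and labels are assumed.
import numpy as np
import torch
import torch.nn as nn

ACTIONS = ["stop", "drive_straight", "turn_left", "turn_right"]  # assumed label set
CROP = 64   # crop size in pixels (assumption)
RES = 0.1   # map resolution in metres per pixel (assumption)

def bev_crop(context_map: np.ndarray, xy: np.ndarray) -> np.ndarray:
    """Cut a CROP x CROP patch of the (H, W, C) map centred on a detection
    whose position xy is given in metres; out-of-map regions are zero-padded."""
    h, w = context_map.shape[:2]
    cx, cy = int(xy[0] / RES), int(xy[1] / RES)
    x0, y0 = cx - CROP // 2, cy - CROP // 2
    patch = np.zeros((CROP, CROP, context_map.shape[2]), dtype=context_map.dtype)
    # clip the crop window to the map bounds before copying
    sx0, sy0 = max(x0, 0), max(y0, 0)
    sx1, sy1 = min(x0 + CROP, w), min(y0 + CROP, h)
    patch[sy0 - y0:sy1 - y0, sx0 - x0:sx1 - x0] = context_map[sy0:sy1, sx0:sx1]
    return patch

class ShallowActionNet(nn.Module):
    """Small CNN over stacked BEV crops; stacking T frames along the channel
    axis aggregates temporal context without any object tracking."""
    def __init__(self, in_ch: int, n_actions: int = len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_actions),
        )
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Usage: stack crops from T frames around one detection and classify.
T, C = 4, 3
context_map = np.random.rand(500, 500, C).astype(np.float32)  # placeholder map
crops = [bev_crop(context_map, np.array([25.0, 30.0])) for _ in range(T)]
x = torch.from_numpy(np.concatenate(crops, axis=-1)).permute(2, 0, 1)[None]
logits = ShallowActionNet(in_ch=T * C)(x)
print(ACTIONS[int(logits.argmax())])
```

Channel-stacking is one simple way to realize the abstract's "temporal aggregation without tracking": the network sees motion implicitly across the stacked frames, which keeps the model shallow enough for embedded AMR hardware.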
Original language | English |
---|---|
Title of host publication | 2024 IEEE International Conference on Robotics and Automation (ICRA) |
Publisher | Institute of Electrical and Electronics Engineers |
Pages | 10757-10763 |
Number of pages | 7 |
ISBN (Electronic) | 9798350384574 |
DOIs | |
Publication status | Published - 8 Aug 2024 |
Event | 2024 IEEE International Conference on Robotics and Automation: ICRA 2024, Yokohama, Japan, 13 May 2024 → 17 May 2024 |
Conference
Conference | 2024 IEEE International Conference on Robotics and Automation |
---|---|
Country/Territory | Japan |
City | Yokohama |
Period | 13/05/24 → 17/05/24 |
ASJC Scopus subject areas
- Software
- Control and Systems Engineering
- Electrical and Electronic Engineering
- Artificial Intelligence