FWF - V-MAV: Cooperative micro aerial vehicles using onboard visual sensors

  • Holzmann, Thomas (Co-Investigator (CoI))
  • Bischof, Horst (Co-Investigator (CoI))

Project: Research project

Project Details


The overall aim of the project is to advance the capabilities of visually controlled MAVs in the areas of flight behavior and autonomy, cooperative operation, and cognitive abilities, and in addition to decrease the size of such MAVs. Advances in these areas would enable new fields of application for MAVs and pave the way to further research topics in mobile robotics. The proposal is structured into three work packages:

1. Visual-inertial MAV pose estimation and localization using multi-camera systems
2. Embedded vision algorithms for dynamic flight of small-scale MAVs
3. Methods for cooperative visual localization and semantic mapping

Work package 1 will investigate the suitability of multi-camera systems for 6DOF pose estimation and localization of MAVs performing dynamic maneuvers. This includes the development of visual-inertial pose estimation algorithms that exploit the advantages of multi-camera system geometries.

Work package 2 will investigate embedded computer vision algorithms to facilitate dynamic control and flight as well as further miniaturization of MAVs. To this end, specific components of the visual control system will be moved to dedicated embedded processors to achieve the high frame rates necessary for dynamic flight.

Work package 3 will investigate cooperative operation of MAVs, focusing on cooperative visual localization, mapping, and cognitive scene understanding and interpretation. In cooperative operation, MAVs should be able to share their individual knowledge of the environment and incorporate the knowledge of others, improving both environment mapping and self-localization. An important part of this work package is cognitive scene understanding: the MAVs should use object detection and classification methods to generate a semantic description of the environment, producing a semantically annotated 3D environment map, and also use this meta-information to improve the mapping process (e.g. adapting parameters based on the semantics) or the localization process.

The proposed project combines the competences of the three involved partners: ETHZ, TUM, and TUG. All three partners have years of experience in vision-controlled MAVs gained through various projects and have performed groundbreaking work in this area. The joint project will ensure that the combined expertise of the partners is fully utilized.
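To illustrate the kind of map sharing envisioned in work package 3, the following is a minimal sketch, not the project's actual implementation: each MAV holds a map of landmark IDs to positions and semantic labels, and two such maps are merged by averaging the positions of commonly observed landmarks (a crude stand-in for proper probabilistic fusion) and carrying over landmarks seen by only one MAV. All names and the map structure are hypothetical.

```python
def merge_maps(map_a, map_b):
    """Merge two landmark maps of the form {id: ((x, y, z), semantic_label)}.

    Landmarks observed by both MAVs have their positions averaged;
    landmarks observed by only one MAV are carried over unchanged.
    Conflicting semantic labels are kept side by side for later resolution.
    """
    merged = {}
    for lid in set(map_a) | set(map_b):
        if lid in map_a and lid in map_b:
            (pos_a, label_a), (pos_b, label_b) = map_a[lid], map_b[lid]
            # Naive fusion: midpoint of the two position estimates.
            pos = tuple((a + b) / 2.0 for a, b in zip(pos_a, pos_b))
            label = label_a if label_a == label_b else f"{label_a}|{label_b}"
            merged[lid] = (pos, label)
        else:
            # Landmark known to only one MAV: adopt it as-is.
            merged[lid] = map_a.get(lid, map_b.get(lid))
    return merged
```

In a real system the averaging step would be replaced by a covariance-weighted update, but the sketch shows how shared observations let each MAV refine its map beyond what it could measure alone.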
Effective start/end date: 1/08/14 - 31/01/18

