Robust Edge-based Visual Odometry using Machine-Learned Edges

Fabian Schenk, Friedrich Fraundorfer

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review


In this work, we present a real-time robust edge-based visual odometry framework for RGBD sensors (REVO). Even though our method is independent of the edge detection algorithm, we show that the use of state-of-the-art machine-learned edges gives significant improvements in terms of robustness and accuracy compared to standard edge detection methods. In contrast to approaches that heavily rely on the photo-consistency assumption, edges are less influenced by lighting changes, and the sparse edge representation offers a larger convergence basin while keeping pose estimates fast to compute. Further, we introduce a measure for tracking quality, which we use to determine when to insert a new key frame. We show the feasibility of our system on real-world datasets and extensively evaluate it on standard benchmark sequences to demonstrate the performance in a wide variety of scenes and camera motions. Our framework runs in real-time on the CPU of a laptop computer and is available online.
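The key-frame policy sketched in the abstract (monitor a tracking-quality measure, insert a new key frame when it degrades) can be illustrated as follows. This is a minimal sketch under assumed definitions: the quality measure here is the inlier fraction of edge reprojection residuals, and both function names and thresholds are hypothetical, not taken from the paper.

```python
import numpy as np

def tracking_quality(residuals, inlier_thresh=2.0):
    """Illustrative quality measure (assumption, not the paper's):
    fraction of edge reprojection residuals (pixels) below a threshold."""
    residuals = np.asarray(residuals, dtype=float)
    return float(np.mean(residuals < inlier_thresh))

def need_new_keyframe(residuals, quality_thresh=0.6):
    """Insert a new key frame once tracking quality drops below a
    threshold (illustrative value); otherwise keep tracking against
    the current key frame."""
    return tracking_quality(residuals) < quality_thresh
```

With most residuals small, `need_new_keyframe` returns `False`; when alignment degrades and large residuals dominate, it returns `True`, triggering a key-frame switch.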
Original language: English
Title of host publication: Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems (IROS)
Publisher: Institute of Electrical and Electronics Engineers
Number of pages: 8
ISBN (Electronic): 978-1-5386-2682-5
Publication status: Published - 2017
Event: International Conference on Intelligent Robots and Systems 2017 - Vancouver, Canada
Duration: 24 Sept 2017 - 28 Sept 2017


Conference: International Conference on Intelligent Robots and Systems 2017
Abbreviated title: IEEE/RSJ


