Semantic Segmentation for 3D Localization in Urban Environments

Anil Armagan, Martin Hirzer, Vincent Lepetit

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

We show how to use simple 2.5D maps of buildings and recent advances in image segmentation and machine learning to geo-localize an input image of an urban scene: We first extract the façades of the buildings and their edges from the image, and then look for the orientation and location that align a 3D rendering of the map with these segments. We discuss how we use a 3D tracking system to acquire the data required for training the segmentation method, how we perform the segmentation itself, and how we use the segmentations to evaluate the quality of the alignment.
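
The abstract gives no implementation details, but the core idea it describes can be illustrated with a minimal, hypothetical sketch: render the 2.5D building map at candidate camera poses and score each pose by how well the rendered façade/edge labels agree with the per-pixel segmentation probabilities. The renderer, class layout, and scoring function below are assumptions made for illustration only, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): scoring camera-pose hypotheses
# by how well a rendering of the 2.5D map agrees with per-pixel segmentation
# probabilities over three classes: background (0), facade (1), facade edge (2).
import numpy as np

def render_map_labels(pose, shape):
    """Hypothetical stand-in for rendering the 2.5D map from `pose`.
    Returns a label image; here it just shifts a fixed facade rectangle
    by the pose's yaw offset instead of doing real 3D rendering."""
    h, w = shape
    labels = np.zeros(shape, dtype=np.int64)
    x0 = int(w * 0.3 + pose["yaw_offset"])
    labels[h // 4 : 3 * h // 4, x0 : x0 + w // 4] = 1   # facade region
    labels[h // 4 : 3 * h // 4, x0] = 2                 # left facade edge
    return labels

def pose_score(seg_probs, pose):
    """Sum of log-probabilities that the segmentation assigns to the class
    predicted by the map rendering at `pose` (higher = better alignment)."""
    labels = render_map_labels(pose, seg_probs.shape[1:])
    h_idx, w_idx = np.indices(labels.shape)
    return np.log(seg_probs[labels, h_idx, w_idx] + 1e-9).sum()

# Toy usage: seg_probs is a (3, H, W) softmax output of a segmentation network.
H, W = 64, 96
seg_probs = np.full((3, H, W), 1.0 / 3.0)
candidates = [{"yaw_offset": d} for d in range(-10, 11, 5)]
best = max(candidates, key=lambda p: pose_score(seg_probs, p))
print("best pose hypothesis:", best)
```

In this toy setup the segmentation probabilities are uniform, so all hypotheses score equally; with real network outputs the score would favor poses whose rendered façades and edges overlap the segmented ones.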
Original language: English
Title of host publication: Proceedings of the Joint Urban Remote Sensing Event (JURSE)
Publication status: Published - 2017
Event: Joint Urban Remote Sensing Event 2017 - Dubai, United Arab Emirates
Duration: 6 Mar 2017 - 8 Mar 2017

Conference

Conference: Joint Urban Remote Sensing Event 2017
Abbreviated title: JURSE 2017
Country/Territory: United Arab Emirates
City: Dubai
Period: 6/03/17 - 8/03/17

  • Best Paper Award at JURSE 2017

    Armagan, Anil (Recipient), Hirzer, Martin (Recipient) & Lepetit, Vincent (Recipient), 8 Mar 2017

    Prize: Prizes / Medals / Awards
