Learning to Align Semantic Segmentation and 2.5D Maps for Geolocalization

Anil Armagan, Martin Hirzer, Peter M. Roth, Vincent Lepetit

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

We present an efficient method for geolocalization in urban environments that starts from a coarse location estimate provided by GPS and uses a simple, untextured 2.5D model of the surrounding buildings. Our key contribution is a novel, efficient, and robust method to optimize the pose: we train a deep network to predict the best direction in which to improve a pose estimate, given a semantic segmentation of the input image and a rendering of the buildings from that estimate. We then apply this CNN iteratively until it converges to a good pose. This approach avoids the use of reference images of the surroundings, which are difficult to acquire and match, whereas 2.5D models are broadly available. We can therefore apply the method to places unseen during training.
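The iterative refinement described in the abstract can be sketched as a simple loop: render the 2.5D model from the current pose, ask the network which discrete pose update best aligns the rendering with the image's semantic segmentation, apply it, and repeat. The sketch below is illustrative only, under assumed interfaces: `POSE_UPDATES`, `render_fn`, and `predict_fn` are hypothetical stand-ins for the paper's rendering pipeline and trained CNN, and the 3-DoF pose parameterization is an assumption.

```python
import numpy as np

# Hypothetical discrete set of pose updates the network chooses among:
# small translations (tx, ty) and yaw rotations of the camera pose.
POSE_UPDATES = [
    np.array([ 0.5,  0.0,  0.0 ]),  # step right
    np.array([-0.5,  0.0,  0.0 ]),  # step left
    np.array([ 0.0,  0.5,  0.0 ]),  # step forward
    np.array([ 0.0, -0.5,  0.0 ]),  # step backward
    np.array([ 0.0,  0.0,  0.02]),  # rotate left
    np.array([ 0.0,  0.0, -0.02]),  # rotate right
    np.array([ 0.0,  0.0,  0.0 ]),  # null update, interpreted as "converged"
]

def refine_pose(pose, segmentation, render_fn, predict_fn, max_iters=50):
    """Iteratively apply the update predicted by the CNN until it outputs
    the null update (convergence) or the iteration budget is exhausted.

    render_fn(pose)                 -> rendering of the 2.5D model from `pose`
    predict_fn(segmentation, rend)  -> index into POSE_UPDATES
    """
    for _ in range(max_iters):
        rendering = render_fn(pose)
        k = predict_fn(segmentation, rendering)
        update = POSE_UPDATES[k]
        if not update.any():  # null update: the network sees no improvement
            break
        pose = pose + update
    return pose
```

In this reading, the network acts as a learned descent direction: because it only compares a segmentation with a rendering, it needs no reference imagery of the specific place, which is what lets the approach generalize to unseen locations.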
Original language: English
Title of host publication: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Publication status: Published - 2017
Event: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017) - Honolulu, United States
Duration: 21 Jul 2017 - 26 Jul 2017

Conference

Conference: 2017 IEEE Conference on Computer Vision and Pattern Recognition
Abbreviated title: CVPR 2017
Country/Territory: United States
City: Honolulu
Period: 21/07/17 - 26/07/17
