3D Pose Estimation from Color Images without Manual Annotations

Mahdi Rad, Markus Oberweger, Vincent Lepetit

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review


3D pose estimation is an important problem with many potential applications. However, acquiring 3D annotations for color images is a difficult task. To create training data, annotation is usually done with the help of markers or a robotic system, which in both cases is cumbersome, expensive, or sometimes even impossible, especially for color images. Another option is to use synthetic images for training. However, synthetic images do not resemble real images exactly. To bridge this domain gap, Generative Adversarial Networks or transfer learning techniques can be used, but they require some annotated real images to learn the domain transfer. To overcome these problems, we propose a novel approach in this paper. Section II gives a short summary of our approach, which uses synthetic data only, and Section III shows some results.
Original language: English
Title of host publication: Proceedings of the joint OAGM & ARW Workshop 2019
Editors: Andreas Pichler, Peter M. Roth, Robert Slabatnig, Gernot Stübl
Place of Publication: Graz
Publisher: Verlag der Technischen Universität Graz
Number of pages: 1
ISBN (Electronic): 9783851256635
Publication status: Published - 2019
Event: OAGM/ARW 2019: ARW & OAGM Workshop 2019 - Steyr, Austria
Duration: 9 May 2019 - 10 May 2019


Conference: OAGM/ARW 2019
Other: Austrian Robotics Workshop and OAGM Workshop 2019


  • 3D Object Pose Estimation
  • 3D Hand Pose Estimation
  • Domain Transfer
