Abstract
In this paper we propose an efficient method for computing a high-quality depth map from a single raw image captured by a light field (plenoptic) camera. The proposed model combines the main idea of Active Wavefront Sampling (AWS) with the light field technique: we extract so-called sub-aperture images from the raw image of a plenoptic camera such that the virtual viewpoints are arranged on circles around a fixed center view. By tracking an imaged scene point over a sequence of sub-aperture images corresponding to a common circle, one can observe a virtual rotation of the scene point on the image plane. Our model measures a dense field of these rotations, which are inversely related to the scene depth.
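To illustrate the sub-aperture extraction step described above, the following sketch assumes a simple interleaved plenoptic layout in which each micro-image is m×m pixels, so the sub-aperture view for angular offset (u, v) is obtained by strided slicing. The function name, the layout assumption, and all parameters are illustrative, not the paper's actual implementation; real plenoptic raw data requires calibration and demosaicing that are omitted here.

```python
import numpy as np

def subaperture_views_on_circle(raw, m, radius, n_views):
    """Extract sub-aperture images from a plenoptic raw image whose
    micro-images are m x m pixels, with virtual viewpoints sampled on a
    circle of the given radius (in viewpoint units) around the center view.
    Illustrative layout assumption: views are interleaved with stride m.
    """
    c = m // 2  # angular index of the central viewpoint
    views = []
    for k in range(n_views):
        theta = 2.0 * np.pi * k / n_views
        # Round the circular viewpoint position to the nearest angular sample.
        u = int(round(c + radius * np.cos(theta)))
        v = int(round(c + radius * np.sin(theta)))
        # Strided slicing picks one pixel per micro-image -> one sub-aperture view.
        views.append(raw[u::m, v::m])
    return views
```

Tracking a scene point across the views returned for one circle would then reveal the virtual rotation whose magnitude, per the abstract, is inversely related to depth.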
Original language | English |
---|---|
Title | Energy Minimization Methods in Computer Vision and Pattern Recognition |
Subtitle | 9th International Conference, EMMCVPR 2013, Lund, Sweden, August 19-21, 2013. Proceedings |
Publisher | Springer Berlin Heidelberg |
Pages | 66-79 |
Volume | 8081 |
ISBN (electronic) | 978-3-642-40395-8 |
ISBN (print) | 978-3-642-40394-1 |
DOIs | |
Publication status | Accepted/In press - 2013 |