High-quality 3D content generation requires high-quality depth maps. In practice, depth maps generated by stereo matching, depth-sensing cameras, or decoders have low resolution and suffer from unreliable estimates and noise; depth post-processing is therefore necessary. In this paper we benchmark state-of-the-art filter-based depth upsampling methods on depth accuracy and interpolation quality, conducting a parameter space search to find the optimal set of parameters for various upscale factors and noise levels. Additionally, we analyze each method's computational complexity in big-O notation and measure the runtime of the GPU implementation that we built for each method.
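For illustration only (the sketch is not taken from the paper), one canonical filter-based depth upsampling method is joint bilateral upsampling, in which a low-resolution depth map is upsampled under the guidance of the high-resolution image. The minimal NumPy sketch below shows the idea; the function name, the parameters `sigma_s`, `sigma_r`, and `radius`, and the use of a luminance guide are assumptions made for this example, not the paper's benchmarked implementation.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, factor, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Illustrative joint bilateral upsampling sketch (not the paper's code).

    depth_lr : (h, w)  low-resolution depth map
    guide_hr : (H, W)  high-resolution guidance image (e.g. luminance in [0, 1])
    factor   : integer upscale factor, with H = h * factor and W = w * factor
    """
    H, W = guide_hr.shape
    out = np.zeros((H, W), dtype=np.float64)

    for y in range(H):
        for x in range(W):
            # Low-resolution coordinate corresponding to this output pixel.
            yl, xl = y / factor, x / factor
            num, den = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    qy, qx = int(round(yl)) + dy, int(round(xl)) + dx
                    if not (0 <= qy < depth_lr.shape[0] and 0 <= qx < depth_lr.shape[1]):
                        continue
                    # Spatial weight, measured in low-resolution coordinates.
                    ws = np.exp(-((qy - yl) ** 2 + (qx - xl) ** 2) / (2 * sigma_s ** 2))
                    # Range weight from the high-resolution guide image.
                    gy, gx = min(qy * factor, H - 1), min(qx * factor, W - 1)
                    wr = np.exp(-((guide_hr[y, x] - guide_hr[gy, gx]) ** 2) / (2 * sigma_r ** 2))
                    w = ws * wr
                    num += w * depth_lr[qy, qx]
                    den += w
            out[y, x] = num / den if den > 0 else depth_lr[int(yl), int(xl)]
    return out
```

Per output pixel the cost is O(r^2) for filter radius r, i.e. O(HW r^2) per frame, which is the kind of complexity figure such a benchmark would report alongside measured GPU runtimes.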
Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can only be reliably estimated on edges. Therefore, Bea et al. [1] first proposed an optimization-based approach to propagate focus to non-edge image portions, for single-image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements of solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose a fast, efficient, low-latency, line-scanning-based focus propagation, which obviates the need for complex multigrid or (multilevel) preconditioning techniques. In addition, we propose facial blur compensation to correct for false shading edges that cause incorrect blur estimates in people's faces. In general, shading leads to incorrect focus estimates, which may result in unnatural 3D and visual discomfort. Since visual attention tends mostly toward faces, our solution addresses the most distracting errors. A subjective assessment by paired comparison on a set of challenging low depth-of-field images shows that the proposed approach achieves 3D image quality equal to that of optimization-based approaches, and that facial blur compensation yields a significant improvement.
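To make the line-scanning idea concrete, the sketch below shows one way sparse, edge-only blur estimates could be propagated along scanlines, carrying a confidence that decays across strong guide-image transitions. The function name, the forward/backward two-pass scheme, and the parameters `alpha` and `sigma_r` are assumptions made for illustration; this is not the authors' algorithm, and the facial blur compensation step is not shown.

```python
import numpy as np

def line_scan_propagate(blur, edge_mask, guide, alpha=0.9, sigma_r=0.05):
    """Illustrative line-scan focus propagation sketch (not the paper's method).

    blur      : (H, W) blur estimates, only valid where edge_mask is True
    edge_mask : (H, W) boolean mask of edge pixels with reliable estimates
    guide     : (H, W) guidance image (e.g. luminance in [0, 1])
    """
    H, W = blur.shape
    out = np.where(edge_mask, blur, 0.0).astype(np.float64)
    conf = edge_mask.astype(np.float64)          # confidence: 1 at edges, 0 elsewhere

    for direction in (1, -1):                    # left-to-right, then right-to-left
        cols = range(W) if direction == 1 else range(W - 1, -1, -1)
        for y in range(H):
            run_val, run_conf, prev_g = 0.0, 0.0, None
            for x in cols:
                g = guide[y, x]
                if prev_g is not None:
                    # Confidence decays with distance and across guide transitions.
                    run_conf *= alpha * np.exp(-((g - prev_g) ** 2) / (2 * sigma_r ** 2))
                if edge_mask[y, x]:
                    # Reset the running estimate at pixels with a reliable measurement.
                    run_val, run_conf = blur[y, x], 1.0
                total = conf[y, x] + run_conf
                if total > 0.0:
                    # Blend the propagated value into the output by confidence.
                    out[y, x] = (out[y, x] * conf[y, x] + run_val * run_conf) / total
                    conf[y, x] = total
                prev_g = g
    return out
```

Because each scanline is processed independently with constant work per pixel, the cost is linear in the number of pixels and the rows parallelize naturally, in contrast to solving a global sparse linear system.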