Depth prediction is essential for three-dimensional optical displays, and the accuracy of the depth map directly influences the quality of virtual viewpoint synthesis. Because of the relatively simple end-to-end structures of CNNs, performance on poorly textured and repetitively textured regions is barely satisfactory. To address this shortcoming of existing network structures, two modifications are proposed to optimize the depth map: (i) inspired by GoogLeNet, an inception module is added at the beginning of the network; (ii) assuming that the disparity map contains only horizontal disparity, rectangular convolution kernels of two sizes are introduced into the network structure. Experimental results demonstrate that our CNN structures reduce the error rate from 19.23% to 14.08%.
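The intuition behind the rectangular kernels can be illustrated with a minimal, self-contained sketch (this is not the paper's architecture; the kernel sizes and the naive convolution routine below are illustrative assumptions). Because stereo disparity is horizontal only, a wide 1×k kernel aggregates evidence along the disparity direction more cheaply than a square kernel of similar receptive width:

```python
def conv2d_valid(image, kernel):
    """Naive 2D 'valid' cross-correlation, for illustration only."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = 0.0
            for di in range(kh):
                for dj in range(kw):
                    s += image[i + di][j + dj] * kernel[di][dj]
            row.append(s)
        out.append(row)
    return out

# A 1x5 horizontal kernel spans five candidate positions along a
# scanline (the disparity direction) at the cost of five weights,
# whereas a 5x1 vertical kernel of the same cost would add little,
# since rectified stereo has no vertical disparity.
horizontal_kernel = [[0.2, 0.2, 0.2, 0.2, 0.2]]  # 1x5 averaging kernel

feature_map = [[float(c) for c in range(8)] for _ in range(4)]
smoothed = conv2d_valid(feature_map, horizontal_kernel)
print(len(smoothed), len(smoothed[0]))  # → 4 4
```

The output stays 4 rows tall (the 1-row kernel does not shrink the vertical extent) but loses four columns, matching the horizontal-only aggregation the abstract describes.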
Light field display requires a large number of views to achieve an ideal three-dimensional display. Techniques have been proposed for generating virtual views between cameras using depth information and feature matching across multiple images. However, these methods cannot generate views in front of or behind the cameras, which is required for a free-view walkthrough on a light field display. Here a simple and robust method is presented to synthesize virtual views. The key to this technique lies in interpreting the input images as a 4D optical function, the light field; new views are then generated in real time by tracing rays in the appropriate directions. The 4D optical function completely describes the flow of light in unobstructed space. Once a light field is created, views from arbitrary camera positions can be constructed by combining and resampling the pre-acquired images. The pixel information composing a new view is obtained through interpolation, with weighting factors that vary with the positions of the corresponding pixels determined by ray tracing.
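The position-dependent weighting can be sketched as standard bilinear interpolation on the camera plane of a two-plane light field parameterization (a common choice; the abstract does not specify the exact scheme, so treat this as an assumption). A traced ray crosses the camera plane at a fractional position (s, t), and the four surrounding cameras contribute with weights that fall off linearly with distance:

```python
def bilinear_weights(s, t):
    """Weights of the four nearest cameras for a ray hitting the
    camera plane at fractional position (s, t).

    Assumes a two-plane light field parameterization with cameras
    on an integer (s, t) grid; weights always sum to 1.
    """
    s0, t0 = int(s), int(t)
    fs, ft = s - s0, t - t0
    return {
        (s0,     t0):     (1 - fs) * (1 - ft),
        (s0 + 1, t0):     fs * (1 - ft),
        (s0,     t0 + 1): (1 - fs) * ft,
        (s0 + 1, t0 + 1): fs * ft,
    }

# A ray crossing the camera plane at (2.25, 3.5) draws most heavily
# on the nearest camera (2, 3) and least on the farthest, (3, 4).
w = bilinear_weights(2.25, 3.5)
print(w[(2, 3)], w[(3, 4)])  # → 0.375 0.125
```

A new-view pixel is then the weighted sum of the corresponding pixels in the four camera images, which is why the weighting factor varies with pixel position as the abstract notes.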