In this paper, we propose a GPU-based parallel processing method for a two-step wave field projection algorithm. In the first step, the 2D projection of the wave field of a 3D object is calculated at the reference depth by the radial symmetric interpolation (RSI) method; in the second step, it is translated in the depth direction by a Fresnel transformation. In each step, the object points are divided into small groups and processed in parallel on CUDA cores. Experimental results show that the proposed method is 5901 times faster than the Rayleigh-Sommerfeld method for one million object points at full HD SLM resolution.
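As a rough illustration of the first step, the sketch below (a Python/NumPy approximation, not the paper's CUDA implementation; all parameter names are placeholders) shows the redundancy that RSI exploits: a point's fringe pattern at the reference depth is radially symmetric, so a 1D radial profile is evaluated once and spread over the 2D grid by interpolation instead of evaluating the wave equation at every pixel.

```python
import numpy as np

# Hypothetical RSI sketch for a single object point (assumed parameters):
# compute a 1D radial profile of the spherical wave at the reference depth,
# then duplicate it over the 2D plane by interpolating at each pixel radius.
def point_field_rsi(nx, ny, pitch, depth, wavelength, cx, cy):
    k = 2.0 * np.pi / wavelength
    r_max = np.hypot(nx * pitch, ny * pitch)
    r = np.arange(0.0, r_max + pitch, pitch)              # radial sample positions
    profile = np.exp(1j * k * np.sqrt(r**2 + depth**2))   # spherical wave phase
    y, x = np.meshgrid((np.arange(ny) - cy) * pitch,
                       (np.arange(nx) - cx) * pitch, indexing="ij")
    rho = np.hypot(x, y)                                   # distance of each pixel from the point axis
    return np.interp(rho, r, profile.real) + 1j * np.interp(rho, r, profile.imag)
```

In the paper, groups of such object points are assigned to CUDA cores and the per-point fields are accumulated in parallel.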
In this paper, we present a fast hologram pattern generation method to overcome the accumulation problem of point-source-based methods. The proposed method consists of two steps. In the first step, the 2D projection of the wave field of a 3D object is calculated at multiple reference depth planes by the radial symmetric interpolation (RSI) method. In the second step, each 2D wave field is translated to the SLM plane by an FFT-based algorithm. The final hologram pattern is obtained by adding them. The effectiveness of the method is demonstrated by computer simulation and optical experiment. Experimental results show that the proposed method is 3878 times faster than the analytic method and 226 times faster than the RSI method.
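The following sketch (an assumed single-FFT Fresnel transfer-function formulation, not necessarily the authors' exact algorithm; function and variable names are illustrative) shows the second step: each depth-plane field is propagated to the SLM plane in the frequency domain and the propagated fields are summed.

```python
import numpy as np

# Illustrative FFT-based Fresnel translation of a 2D wave field by `distance`,
# followed by summation over all reference depth planes (assumed formulation).
def fresnel_to_slm(field, distance, pitch, wavelength):
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # paraxial Fresnel transfer function (constant phase factor exp(jkz) omitted)
    H = np.exp(-1j * np.pi * wavelength * distance * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def hologram_from_planes(plane_fields, plane_depths, pitch, wavelength):
    slm = np.zeros_like(plane_fields[0], dtype=complex)
    for field, z in zip(plane_fields, plane_depths):
        slm += fresnel_to_slm(field, z, pitch, wavelength)  # accumulate each propagated field
    return slm
```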
KEYWORDS: Video, Visualization, Video processing, Video coding, Volume rendering, Computer programming, 3D modeling, 3D displays, 3D video compression, Data communications
A depth dilation filter is proposed for a free viewpoint video system based on mixed-resolution multi-view video plus depth (MVD). By applying a grayscale dilation filter to depth images, foreground regions are extended into the background so that synthesis artifacts occur outside the object boundary edges. Thus, the objective and subjective quality of the view synthesis results is improved. The depth dilation filter is applied to the in-loop resampling part during encoding/decoding and to the post-processing part after decoding. Accurate view synthesis is important for virtual view generation in autostereoscopic displays; moreover, many coding tools in 3D video coding use view synthesis to reduce inter-view redundancy, such as view synthesis prediction (VSP) and depth-based motion vector prediction (DMVP), so compression efficiency can also be improved by accurate view synthesis. Coding and synthesis experiments are performed to evaluate the dilation filter on MPEG test sequences. The dilation filter was implemented on top of the MPEG reference software for AVC-based 3D video coding. By applying the depth dilation filter, BD-rate gains of 0.5% and 6.0% are obtained in terms of PSNR of decoded views and synthesized views, respectively.
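A minimal sketch of the filtering operation itself, assuming a depth convention where larger values are nearer (kernel size and library choice are illustrative, not the reference-software implementation):

```python
import numpy as np
from scipy.ndimage import grey_dilation

# Grayscale dilation of a depth map: each pixel takes the maximum depth value
# in its neighborhood, which grows foreground regions outward over the
# background around object boundaries before resampling or view synthesis.
def dilate_depth(depth_map, kernel_size=5):
    return grey_dilation(depth_map, size=(kernel_size, kernel_size))
```

In the pipeline described above, this operation would be applied both in the in-loop resampling stage and as a post-process on decoded depth before warping.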
In this paper, we present a fast hologram pattern generation method based on radial symmetric interpolation. In the spatial domain, the concentric redundancy of each point hologram is removed by replacing the calculation of wave propagation with interpolation and duplication. In addition, a background mask, which represents stationary points in the temporal domain, is used to remove temporal redundancy in hologram video. Frames are grouped over a predefined time interval, each group shares the background information, and the hologram pattern at each time instant is updated only for the foreground part. The effectiveness of the proposed algorithm is demonstrated by simulation and experiment.
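A hedged sketch of the temporal-redundancy idea (the data layout, tolerance, and helper names are assumptions, not the paper's structures): within a group of frames, object points whose coordinates and amplitude never change are marked as background, their field is computed once per group, and only foreground points are recomputed per frame.

```python
import numpy as np

def background_mask(frames, tol=1e-6):
    # frames: list of (N, 4) arrays holding x, y, z, amplitude for N object points
    ref = frames[0]
    diffs = [np.max(np.abs(f - ref), axis=1) for f in frames[1:]]
    return np.all(np.stack(diffs) < tol, axis=0)        # True where a point never moves

def hologram_video(frames, compute_field, tol=1e-6):
    # compute_field: hypothetical routine that turns a point set into a hologram field
    mask = background_mask(frames, tol)
    bg_field = compute_field(frames[0][mask])            # shared background hologram
    return [bg_field + compute_field(f[~mask]) for f in frames]
```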
KEYWORDS: Image resolution, Video, 3D displays, 3D video streaming, Image processing, Image quality, Video compression, Detection and tracking algorithms, Stereoscopic displays, Cameras
For a full motion parallax 3D display, it is necessary to supply multiple views obtained from a series of different locations. However, it is impractical to deliver all of the required views because doing so would result in a huge bit stream. In previous work, the authors proposed a mixed-resolution 3D video format composed of color and depth information pairs with heterogeneous resolutions, and also suggested a view synthesis algorithm for mixed-resolution videos. This paper reports a more refined view interpolation method and improved results.
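For orientation only, the sketch below shows the basic depth-image-based warping step on which such view interpolation rests (the pure horizontal-disparity model and all names are assumptions; the refined algorithm of the paper is not reproduced here). In a mixed-resolution setting, the lower-resolution view would first be upsampled to the target grid before warping and blending.

```python
import numpy as np

# Forward-warp a color image to a horizontally shifted virtual viewpoint
# using its per-pixel depth (simplified 1D-parallax camera model).
def warp_horizontal(color, depth, baseline, focal):
    h, w = depth.shape
    out = np.zeros_like(color)
    disparity = np.round(baseline * focal / np.maximum(depth, 1e-6)).astype(int)
    for y in range(h):
        for x in range(w):
            xs = x + disparity[y, x]
            if 0 <= xs < w:
                out[y, xs] = color[y, x]   # nearer pixels may overwrite farther ones
    return out
```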
KEYWORDS: Video, Video compression, 3D video compression, 3D video streaming, Data centers, Laser Doppler velocimetry, Image resolution, Video processing, Image quality, 3D displays
A new 3D video format consisting of one full-resolution mono video and half-resolution left/right videos is proposed. The proposed 3D video format can generate high-quality virtual views from a small amount of input data while preserving compatibility with legacy mono and frame-compatible stereo video systems. The center-view video is the same as normal mono video data, while the left/right views form frame-compatible stereo video data. The format was tested in terms of compression efficiency, rendering capability, and backward compatibility. In particular, we compared view synthesis quality when virtual views are generated from two full-resolution views versus from one original view and one half-resolution view. For the frame-compatible stereo format, experiments were performed with the interlaced method. The proposed format gives BD bit-rate gains of 15%.
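A small sketch of the interlaced frame-compatible packing assumed for the half-resolution left/right views (the exact row assignment is illustrative, not the test configuration): the two vertically downsampled views are interleaved row by row into a single frame the same size as the center-view video.

```python
import numpy as np

# Pack two half-vertical-resolution views into one interlaced frame:
# even rows carry the left view, odd rows carry the right view.
def pack_interlaced(left_half, right_half):
    h2, w = left_half.shape
    packed = np.empty((2 * h2, w), dtype=left_half.dtype)
    packed[0::2] = left_half    # even rows  <- left view
    packed[1::2] = right_half   # odd rows   <- right view
    return packed
```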
KEYWORDS: Cameras, Video, Video coding, Video compression, 3D vision, 3D video compression, Quality measurement, Scalable video coding, Image quality, Matrices
One of the critical issues for a successful 3D video service is how to compress the huge amount of multi-view video data efficiently. In this paper, we describe a geometric prediction structure for multi-view video coding. By exploiting the geometric relations between camera poses, we can form prediction pairs that maximize the spatial correlation between views. To analyze the relationship between camera poses, we define a mathematical view center and view distance in 3D space. We calculate the virtual center pose from the mean rotation matrix and mean translation vector. We propose an algorithm for establishing the geometric prediction structure based on the view center and view distance. Using this prediction structure, inter-view prediction is performed on the camera pairs with maximum spatial correlation. The prediction structure also takes into account scalability in coding and transmitting the multi-view videos. Experiments are performed using the JMVC (Joint Multiview Video Coding) software on MPEG-FTV test sequences. The overall performance of the proposed prediction structure is measured by PSNR and by a subjective image quality measure such as PSPNR.
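A hedged sketch of the view-center and view-distance computation (the paper's exact rotation-averaging scheme is not specified here; this version projects the arithmetic mean of the rotation matrices back onto a valid rotation via SVD, and all names are illustrative):

```python
import numpy as np

def view_center(rotations, translations):
    # rotations: list of 3x3 camera rotation matrices, translations: list of 3-vectors
    t_mean = np.mean(translations, axis=0)          # mean translation vector
    R_mean = np.mean(rotations, axis=0)
    U, _, Vt = np.linalg.svd(R_mean)                # nearest orthogonal matrix to the mean
    R_center = U @ Vt
    if np.linalg.det(R_center) < 0:                 # enforce determinant +1 (proper rotation)
        U[:, -1] *= -1
        R_center = U @ Vt
    return R_center, t_mean

def view_distance(t_view, t_center):
    # Euclidean distance of a camera position from the virtual center pose
    return np.linalg.norm(np.asarray(t_view) - np.asarray(t_center))
```

Prediction pairs would then be chosen so that views close in this distance measure predict one another.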
In multi-exposure image fusion, alignment is an essential prerequisite to prevent ghost artifacts after blending. Compared to the usual matching problem, registration is more difficult when each image is captured under different photographing conditions. In HDR imaging, we use long- and short-exposure images, which have different brightness and contain over/under-saturated regions. In the motion deblurring problem, we use a blurred and noisy image pair, and the amount of motion blur varies from one image to another due to the different exposure times. The main difficulty is that the luminance levels of the two images are not linearly related, so we cannot perfectly equalize or normalize the brightness of each image, which leads to unstable and inaccurate alignment results. To solve this problem, we apply a probabilistic measure, mutual information, to represent the similarity between images after alignment. In this paper, we describe the characteristics of multi-exposure input images from the registration point of view and analyze the magnitude of camera hand shake. By exploiting the luminance independence of mutual information, we propose a fast and practically useful image registration technique for multiple capturing. Our algorithm can be applied to extreme HDR scenes and motion-blurred scenes with over 90% success rate, and its simplicity enables it to be embedded in digital cameras and mobile camera phones. The effectiveness of our registration algorithm is examined by various experiments on real HDR and motion deblurring cases using a hand-held camera.
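The sketch below shows why mutual information suits this setting (bin count, search range, and the pure-translation motion model are assumptions, not the paper's full algorithm): MI measures the statistical dependence between the two images rather than their absolute intensities, so it stays meaningful for long/short exposure pairs whose brightness cannot be equalized.

```python
import numpy as np

# Mutual information from the joint histogram of two grayscale images.
def mutual_information(a, b, bins=64):
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

# Exhaustive search over small integer shifts, keeping the shift with maximum MI.
def align_translation(ref, mov, search=8):
    best, best_mi = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            mi = mutual_information(ref, shifted)
            if mi > best_mi:
                best, best_mi = (dy, dx), mi
    return best
```

In practice the search would be restricted to the hand-shake range analyzed in the paper (and typically run coarse-to-fine), which keeps the method light enough for in-camera use.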