KEYWORDS: 3D displays, Eye, Lenses, Stereoscopic displays, 3D image enhancement, 3D image processing, Yield improvement, LCDs, Liquid crystals, Current controlled current source
Stereoscopic 3D (S3D) displays provide an enhanced sense of depth by sending different images to the two eyes. But these displays do not reproduce focus cues (blur and accommodation) correctly. Specifically, the eyes must accommodate to the display screen to create sharp retinal images even when binocular disparity drives the eyes to converge to other distances. This mismatch causes discomfort, reduces performance, and distorts 3D percepts. We developed two techniques designed to reduce vergence-accommodation conflicts and thereby improve comfort, performance, and perception. One uses focus-tunable lenses between the display and viewer’s eyes. Lens power is yoked to expected vergence distance, creating a stimulus to accommodation that is consistent with the stimulus to vergence. This yoking should reduce the vergence-accommodation mismatch. The other technique uses a fixed lens before one eye and relies on binocularly fused percepts being determined by one eye and then the other, depending on simulated distance. This is meant to drive accommodation with one eye when simulated distance is far and with the other eye when simulated distance is near. We conducted performance tests and discomfort assessments with both techniques and with conventional S3D displays. We also measured accommodation. The focus-tunable technique, but not the fixed-lens technique, produced appropriate stimulus-driven accommodation, thereby minimizing the vergence-accommodation conflict. Because of this, the tunable technique yielded clear improvements in comfort and performance, while the fixed technique did not. The focus-tunable lens technique therefore offers a relatively easy means for reducing the vergence-accommodation conflict and thereby improving viewer experience.
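As context for the yoking described above, the following is a minimal sketch (assumed, not the authors' code) of how the required tunable-lens power follows from the screen and simulated distances, treating the lens as thin and ignoring the lens-eye separation:

```python
# Minimal sketch, assuming a thin tunable lens directly in front of the eye
# (lens-eye separation ignored). Not the authors' implementation.

def tunable_lens_power(screen_distance_m: float, simulated_distance_m: float) -> float:
    """Lens power (diopters) that shifts the accommodative stimulus from the
    screen distance to the simulated (vergence) distance."""
    screen_demand = 1.0 / screen_distance_m        # accommodative demand of the screen (D)
    simulated_demand = 1.0 / simulated_distance_m  # demand matching the vergence stimulus (D)
    # A lens of power P in front of the eye changes the demand by -P,
    # so the required power is the difference of the two demands.
    return screen_demand - simulated_demand

# Example: screen at 0.5 m (2 D) and a simulated object at 0.25 m (4 D)
# call for a -2 D lens to pull the focal stimulus to the nearer distance.
print(tunable_lens_power(0.5, 0.25))  # -2.0
```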
Common stereoscopic 3D (S3D) displays utilize either spatial or temporal interlacing to send different images to each
eye. Temporal interlacing sends content to the left and right eyes in alternation over time, and is prone to artifacts such as
flicker, unsmooth motion, and depth distortion. Spatial interlacing sends even pixel rows to one eye and odd rows to the
other eye, and has a lower effective spatial resolution than temporal interlacing unless the viewing distance is large. We
propose a spatiotemporal hybrid protocol that interlaces the left- and right-eye views spatially, but the rows
corresponding to each eye alternate every frame. We performed psychophysical experiments to compare this novel
stereoscopic display protocol to existing methods in terms of spatial and temporal properties. Using a haploscope to
simulate the three protocols, we determined perceptual thresholds for flicker, motion artifacts, and depth distortion, and
we measured the effective spatial resolution. Spatial resolution is improved, flicker and motion artifacts are reduced, and
depth distortion is eliminated. These results suggest that the hybrid protocol maintains the benefits of spatial and
temporal interlacing while eliminating the artifacts, thus creating a more realistic viewing experience.
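To make the row assignment concrete, here is a small illustrative sketch (not code from the study) of the hybrid protocol's rule that even and odd rows swap eyes every frame:

```python
# Illustrative sketch of the hybrid spatiotemporal row assignment: rows are
# spatially interlaced, and the even/odd-row-to-eye mapping swaps each frame.

def eye_for_row(row: int, frame: int) -> str:
    """Which eye ('L' or 'R') a pixel row feeds on a given frame."""
    return 'L' if (row + frame) % 2 == 0 else 'R'

# Over any two consecutive frames each row serves both eyes, so each eye
# receives full vertical resolution once the frames are temporally integrated.
for frame in range(2):
    print(frame, [eye_for_row(row, frame) for row in range(4)])
# 0 ['L', 'R', 'L', 'R']
# 1 ['R', 'L', 'R', 'L']
```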
Prolonged use of conventional stereo displays causes viewer discomfort and fatigue because of the vergence-accommodation conflict. We used a novel volumetric display to examine how viewing distance, the sign of the vergence-accommodation conflict, and the temporal properties of the conflict affect discomfort and fatigue. In the first experiment, we presented a fixed conflict at short, medium, and long viewing distances. We compared subjects’ symptoms in that condition and one in which there was no conflict. We observed more discomfort and fatigue with a given vergence-accommodation conflict at the longer distances. The second experiment compared symptoms when the conflict had one sign compared to when it had the opposite sign at short, medium, and long distances. We observed greater symptoms with uncrossed disparities at long distances and with crossed disparities at short distances. The third experiment compared symptoms when the conflict changed rapidly as opposed to slowly. We observed more serious symptoms when the conflict changed rapidly. These findings help define comfortable viewing conditions for stereo displays.
In the conventional temporally interlaced S3D protocol, red, green, and blue are presented simultaneously to one eye and then to the other eye. Thus, images are presented in alternating fashion to the two eyes. Moving objects presented with this protocol are often perceived at incorrect depth relative to stationary parts of the scene. We implemented a color-interlaced protocol that could in principle minimize or even eliminate such depth distortions. In this protocol, green is presented to one eye and red and blue to the other eye at the same time. Then red and blue are presented to the first eye and green to the second. Using a stereoscope, we emulated the color-interlaced protocol and measured the magnitude of depth distortions as a function of object speed. The results showed that color interlacing yields smaller depth distortions than temporal interlacing in most cases and never yields larger distortions. Indeed, when color interlacing produces no brightness change within sub-frames, the distortions are eliminated altogether. The results also show that the visual system’s calculation of depth from disparity is based on luminance, not chromatic signals. In conclusion, color interlacing provides great potential for improved stereo presentation.
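The sub-frame schedule described above can be summarized in a small illustrative sketch (not from the paper); which eye starts with green is arbitrary here:

```python
# Illustrative sketch of the color-interlaced schedule: within each sub-frame,
# one eye receives the green channel while the other receives red and blue,
# and the assignment swaps on the next sub-frame.

def subframe_channels(subframe: int) -> dict:
    """Channel-to-eye assignment for even/odd sub-frames."""
    if subframe % 2 == 0:
        return {'left': ('G',), 'right': ('R', 'B')}
    return {'left': ('R', 'B'), 'right': ('G',)}

print(subframe_channels(0))  # {'left': ('G',), 'right': ('R', 'B')}
print(subframe_channels(1))  # {'left': ('R', 'B'), 'right': ('G',)}
```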
The vergence-accommodation conflict associated with viewing stereoscopic 3D (S3D) content can cause visual
discomfort. Previous studies of vergence and accommodation have shown that the coupling between the two responses is
driven by a fast, phasic component. We investigated how the temporal properties of vergence-accommodation conflicts
affect discomfort. Using a unique volumetric display, we manipulated the stimulus to vergence and the stimulus to
accommodation independently. There were two experimental conditions: 1) natural viewing in which the stimulus to
vergence was perfectly correlated with the stimulus to accommodation; and 2) conflict viewing in which the stimulus to
vergence varied while the stimulus to accommodation remained constant (thereby mimicking S3D viewing). The
stimulus to vergence (and accommodation in natural viewing) varied at one of three temporal frequencies in those
conditions. The magnitude of the conflict was the same for all three frequencies. The young adult subjects reported more
visual discomfort when vergence changes were faster, particularly in the conflict condition. Thus, the temporal
properties of the vergence-accommodation conflict in S3D media affect visual discomfort. The results can help content
creators minimize discomfort by making conflict changes sufficiently slow.
Prolonged use of conventional stereo displays causes viewer discomfort and fatigue because of the vergence-accommodation conflict. We used a novel volumetric display to examine how viewing distance and the sign of the
vergence-accommodation conflict affect discomfort and fatigue. In the first experiment, we presented a fixed conflict at
short, medium, and long viewing distances. We compared subjects' symptoms in that condition and one in which there
was no conflict. We observed more discomfort and fatigue with a given vergence-accommodation conflict at the longer
distances. The second experiment compared symptoms when the conflict had one sign compared to when it had the
opposite sign at short, medium, and long distances. We observed greater symptoms with uncrossed disparities at long
distances and with crossed disparities at short distances. These findings help define comfortable viewing conditions for
stereo displays.
In this paper, a high-definition integral floating display is implemented. An integral floating display is composed of an integral imaging system and a floating lens. The integral imaging system consists of a two-dimensional (2D) display and a lens array. In this work, we substituted multiple spatial light modulators (SLMs) for the 2D display to achieve higher definition. Unlike a conventional integral floating display, there are gaps between the display regions of the SLMs. Therefore, the SLMs must be carefully aligned to provide a continuous viewing region and a seamless image. The implementation of the system is explained, and a three-dimensional (3D) image displayed by the system is presented.
KEYWORDS: 3D image processing, Integral imaging, Image resolution, LCDs, 3D displays, Liquid crystals, 3D image reconstruction, Imaging arrays, 3D vision, Optical resolution
In this paper, we propose resolution-enhanced integral imaging using pinhole arrays on a liquid crystal (LC) panel. Since the light passing through a pinhole corresponds to a pixel in the 3D image, we electrically shift the pinhole arrays on the LC panel fast enough to exploit the after-image effect, displaying the corresponding elemental images synchronously, without reducing the 3D viewing characteristics of the reconstructed image. An explanation of the proposed system is provided, and experimental results are also presented.
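The abstract does not give implementation details; the following is a minimal, hypothetical sketch of the time-multiplexing loop it describes, in which shifted pinhole masks and their matching elemental images are cycled fast enough for the after-image (persistence-of-vision) effect. All function and variable names are illustrative placeholders, not real APIs.

```python
# Hypothetical sketch of the time-multiplexed pinhole-array idea described above.
# 'show_mask' and 'show_image' stand in for whatever drives the LC panel and
# the display panel.

def run_display(pinhole_masks, elemental_images, show_mask, show_image, n_cycles=60):
    """Cycle through shifted pinhole masks and their matching elemental images.

    If the full cycle repeats faster than the eye's integration time, the
    viewer perceives the union of all pinhole positions, i.e. a denser
    effective pinhole array and hence higher 3D resolution.
    """
    assert len(pinhole_masks) == len(elemental_images)
    for _ in range(n_cycles):
        for mask, image in zip(pinhole_masks, elemental_images):
            show_mask(mask)    # shift the pinhole pattern on the LC panel
            show_image(image)  # present the elemental image matched to this pattern
```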
In this paper, we propose integral imaging with variable image planes using polymer-dispersed liquid crystal (PDLC) films. Parallel layered PDLC films and a projector are adopted as the display system, enabling the location of the image plane to be varied. The transparency of the PDLC films can be controlled electrically, making each film diffuse the projected light in succession at a different depth from the lens array. An explanation of the proposed system is provided, and experimental results are also presented.
KEYWORDS: 3D image processing, 3D displays, Displays, Integral imaging, Lithium, Imaging systems, Geometrical optics, Mirrors, Stereoscopy, 3D image reconstruction
Integral floating imaging is a 3D display method that combines integral imaging and floating display. It is a promising 3D display technique because it provides full parallax and continuous viewpoints and can produce a large sense of depth. In this paper, we explain the principle of the integral floating 3D display system and analyze its viewing characteristics, such as the viewing angle, the viewing window, and the expressible depth range. We analyze these characteristics using geometrical optics, building on existing analyses of integral imaging. Experimental results that verify the analyses are provided.
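For reference (assumed context, not a formula quoted from this abstract), such geometrical-optics analyses typically start from the standard integral-imaging viewing-angle relation, where p is the elemental-lens pitch and g the gap between the lens array and the display:

```latex
% Standard integral-imaging viewing-angle relation (assumed context, not a
% result quoted from the paper): p = elemental-lens pitch, g = gap between
% the lens array and the display plane.
\theta = 2\arctan\!\left(\frac{p}{2g}\right)
```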
3D-2D convertibility is essential for 3D displays to penetrate the current 2D display market, and various methods have been proposed to realize it. In this paper, a thin 3D-2D convertible display using a pinhole array on a polarizer is proposed. The thickness of the proposed system can be below one centimeter. Additionally, the use of a pinhole array on a polarizer makes the light efficiency of the proposed system in the 2D mode more than ten times that in the 3D mode. This is also essential since the 3D mode is an additional function for a 3D-2D convertible system. As a result, the 2D image quality of the proposed system is comparable to that of existing 2D displays in most respects. The method is verified by experimental results.
KEYWORDS: Image segmentation, Cameras, Motion estimation, Image processing, Data acquisition, Image processing algorithms and systems, Video, Detection and tracking algorithms, 3D displays, 3D image processing
We present an algorithm for stereoscopic conversion of a two-dimensional movie encoded in MPEG-2. The stereoscopic conversion algorithm consists of a segmentation process and a depth-determination process. In the segmentation process, we segment the image based on the DC coefficient information and the motion vector information encoded in the MPEG-2 stream. After the segmentation, the depth of each segment is determined by examining its motion vectors and the overlapped region of the segment.
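The abstract does not specify the depth rule; the following is a minimal, hypothetical sketch assuming the common heuristic that segments with larger motion are assigned nearer depths. Function and variable names are illustrative, not from the paper.

```python
# Hypothetical sketch of the depth-determination step described above: each
# segment gets a relative depth from the average magnitude of its MPEG-2
# motion vectors (faster -> assumed nearer). Names are illustrative only.
import numpy as np

def assign_depths(segment_labels: np.ndarray, motion_vectors: np.ndarray) -> dict:
    """Map each segment label to a relative depth in [0, 1] (0 = far, 1 = near).

    segment_labels: (H, W) integer label per macroblock.
    motion_vectors: (H, W, 2) motion vector per macroblock.
    """
    speeds = np.linalg.norm(motion_vectors, axis=-1)
    max_speed = float(speeds.max()) or 1.0  # avoid division by zero for static scenes
    depths = {}
    for label in np.unique(segment_labels):
        mean_speed = speeds[segment_labels == label].mean()
        depths[int(label)] = float(mean_speed / max_speed)
    return depths
```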
In spite of significant improvements in the three-dimensional (3D) display field, the commercialization of a 3D-only display system has not yet been achieved. The mainstream of the display market is the high-performance two-dimensional (2D) flat panel display (FPD), and the start of high-definition (HD) broadcasting is accelerating the golden age of HD FPDs. Therefore, a 3D display system needs to be able to display a 2D image with high quality. In this paper, two different 3D-2D convertible methods based on integral imaging are compared and categorized by their applications. One method uses a point light source array, a polymer-dispersed liquid crystal, and a single display panel. The other system adopts two display panels and a lens array. The former system is suitable for mobile applications, while the latter is for home applications such as monitors and TVs.
Recently, a floating display system based on integral imaging (InIm) was proposed. Though the floating display system based on InIm can provide moving pictures with a strong sense of depth to the observer, its expressible depth range is limited because the expressible depth range of InIm is limited. In this paper, the expressible depth range of the floating display system based on InIm is analyzed, building on the analysis of the expressible depth range of InIm. Also, a depth-enhanced floating display system based on InIm is proposed, in which the lens array of the InIm is placed at the focal plane of the floating lens. Additionally, the seams on the lens array become less distinct since they are also placed at the focal plane of the floating lens. However, the size of the object changes when the object lies away from the overall central depth plane. Thus, the size of objects in the elemental images should be rescaled to display a correct three-dimensional image.
KEYWORDS: 3D image reconstruction, 3D displays, Holograms, 3D image processing, Computer generated holography, Integral imaging, Spatial light modulators, Holography, Displays, 3D image enhancement
For large viewing-angle enhancement in three-dimensional (3D) display, a dynamic computer-generated holographic display system combined with integral imaging is proposed and implemented using a single phase-type spatial light modulator and an elemental lens array. For viewing-angle-enhanced colorized 3D integral image display, the computer-generated holograms have been synthesized and scaled to minimize the color dispersion error in the hologram plane. Using integral imaging and synthetic phase holography, we obtain 3D images with full parallax over a continuous viewing-angle range of ±6 degrees. Finally, we show experimental results that verify our concept.
KEYWORDS: LCDs, 3D displays, 3D image processing, Transmittance, 3D image enhancement, Integral imaging, Stereoscopy, Distortion, Imaging arrays, Imaging systems
In this paper, the authors propose a novel method to construct a wide-viewing two-dimensional/three-dimensional (2D/3D) convertible system with two parallel display devices. By changing the role of each display device, it is possible to convert the display mode between 2D and 3D electrically without any mechanical movement. In 2D display mode, the rear display device is used as a white light source and the front display device displays the 2D images. In 3D display mode, the rear display device and the lens array construct 3D images, while the front display device displays electrical masks to enhance the viewing angle of the 3D images. Since the basic principles of the 2D and 3D display modes are the same as those of an LCD and integral imaging, respectively, improved techniques for both display modes developed in future research can be easily applied to the system. The proposed method is also demonstrated by experimental results.
Integral imaging attracts much attention as an autostereoscopic three-dimensional display technique because of its many advantages. However, the disadvantage of integral imaging is that the expressible depth of the three-dimensional image is limited and the image can be displayed only around the central depth plane. This paper proposes depth-enhanced integral imaging with multiple central depth planes using multilayered display devices. Transparent display devices using liquid crystal are located in parallel to each other and incorporated into an integral imaging system in place of a conventional display device. As a result, the proposed method has multiple central depth planes and permits the limitation of expressible depth to be overcome. The principle of the proposed method is explained, and some experimental results are presented.
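For context (basic thin-lens optics, not a formula quoted from the abstract): a display layer at gap g_i behind a lens array of focal length f is imaged to a central depth plane at distance l_i in front of the array, so layers at different gaps yield different central depth planes:

```latex
% Thin-lens relation behind the multiple-central-depth-plane idea (assumed
% context): a display layer at gap g_i behind a lens array of focal length f
% images to a central depth plane at distance l_i.
\frac{1}{l_i} = \frac{1}{f} - \frac{1}{g_i}
\qquad\Longrightarrow\qquad
l_i = \frac{f\,g_i}{g_i - f}
```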
Recently, a two-dimensional/three-dimensional (2D/3D) convertible display system based on the integral imaging technique was proposed. The 2D/3D convertible system shares similar characteristics with integral imaging, but the analysis of integral imaging cannot be directly applied to the 2D/3D convertible system. In this paper, the 2D/3D convertible display system is analyzed using wave optics to account for the effect of diffraction, which plays an important role since the dimensions of the 2D/3D convertible system are fairly small. First, the equation that governs the diffraction effect in the three-dimensional image display mode of the 2D/3D convertible system is derived assuming a monochromatic, coherent point light source. Based on this equation, simulations and analyses of the 2D/3D convertible system are given.
KEYWORDS: Imaging systems, Integral imaging, 3D displays, Image enhancement, 3D image processing, Displays, Image processing, 3D applications, Stereoscopic displays, Glasses
An enhanced integral imaging system based on the image floating method is proposed. Integral imaging is one of the most promising autostereoscopic display methods, and the integrated image has volumetric characteristics unlike other stereoscopic images. Image floating is a common 3D display technique that uses a large convex lens or a concave mirror to present the image of a real object to the observer. The image floating method can be used to emphasize the viewing characteristics of the volumetric image, and the noise image located on a fixed plane can be eliminated by the floating lens through control of its focal length. In this paper, a solution to the seam noise and image flipping of the integral imaging system is proposed using the image floating method. Moreover, the advanced techniques of the integral imaging system can be directly applied to the proposed system. The proposed system can be successfully applied to many 3D applications such as 3D television.
KEYWORDS: 3D displays, Displays, Integral imaging, 3D image processing, Imaging systems, 3D applications, Mirrors, Glasses, Image processing, Stereoscopic displays
A new three-dimensional (3D) display system that combines two different display techniques is proposed. One of the techniques is integral imaging. The integral imaging system consists of a lens array and a 2D display device, and the 3D image is integrated by the lens array from the elemental images. The other technique is image floating, which uses a large convex lens or a concave mirror to present the image of a real object to the observer. An electro-floating display system, which does not use a real object, needs a volumetric 3D display component because the floating optics alone cannot create a 3D image, but only bring the image closer to the observer. The integral imaging system can be adopted in the electro-floating display system because the integrated image has the characteristics of a volumetric image within the viewing angle. Moreover, the many methods for enhancing the viewing angle of the integral imaging system can be applied directly to the proposed system. The optimum value of the focal length of the floating lens is related to the central depth plane and the viewing angle. The proposed system can be successfully applied to many 3D applications such as 3D TV.
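The abstract states that the optimal floating-lens focal length is tied to the central depth plane; below is a minimal sketch of the underlying thin-lens bookkeeping. It is an assumed illustration (standard Gaussian optics), not the paper's analysis, and the function name is hypothetical.

```python
# Hypothetical sketch: where a thin floating lens of focal length f places the
# image of the integral-imaging central depth plane (CDP), and at what lateral
# magnification. Standard Gaussian optics; distances in meters, measured from
# the lens, with the CDP on the object side.

def float_cdp(object_distance_m: float, focal_length_m: float):
    """Return (image_distance_m, lateral_magnification) for a thin lens.

    A positive image distance means a real floated image in front of the lens;
    a negative value means a virtual image behind it.
    """
    d_o, f = object_distance_m, focal_length_m
    if abs(d_o - f) < 1e-9:
        raise ValueError("object at the focal plane images to infinity")
    d_i = 1.0 / (1.0 / f - 1.0 / d_o)   # Gaussian lens equation: 1/d_i = 1/f - 1/d_o
    m = -d_i / d_o                       # lateral magnification (negative = inverted)
    return d_i, m

# Example: CDP 0.4 m from a 0.2 m focal-length floating lens
# -> floated image 0.4 m in front of the lens, magnification -1.
print(float_cdp(0.4, 0.2))
```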
Integral imaging is a promising three-dimensional display method because it provides observers with full parallax and continuous viewpoints without the use of glasses. However, the limitations on the viewing angle and the expressible depth should be overcome for integral imaging to be applied in real systems. Various methods, such as mechanical movement or polarization switching, have been proposed to improve the viewing angle of integral imaging. In this paper, we propose a viewing-angle-enhanced integral imaging system without any mechanical movement or polarization control. This new system uses a lenticular lens sheet to angularly multiplex the information emitted from each pixel. Thus each pixel can contribute to multiple lenses, and the effective area of an elemental image is increased, which yields the enhanced viewing angle. Simulation and experimental results for the proposed system are provided.
KEYWORDS: 3D displays, Distortion, Image quality, Integral imaging, Image analysis, Image enhancement, Image processing, Diffraction, 3D image processing, Analytical research
Recently, integral imaging has attracted many researchers as a powerful candidate for three-dimensional display. However, the limitation on the image depth of integral imaging is considered the major obstacle to practical use. Previously, a number of studies analyzed this limitation based on the diffraction of light, but there is a severe mismatch between the experimental results and the simulation results reported in those studies. In this paper, we propose a new assumption that accurately predicts the experimental results. Based on this new assumption, we propose a quantitative method to evaluate the image depth of integral imaging. We also propose an elemental image correction scheme that removes the distortion of integrated images located away from the central depth plane.
Detecting depth information from a pickup image in integral imaging is of great importance since it is the first step toward providing integral imaging systems with flexibility against various system specifications and display environments. One problem of depth extraction in integral imaging is that the gap between the lens array and the elemental image plane cannot be determined exactly, since the desired value depends on the object depth itself. Moreover, the object to be picked up should be located close to the lens array to ensure sufficient resolution in the elemental image and the detected depth profile, which makes the gap deviate far from the focal length of the lens array and thus hinders exact depth extraction. In this paper, we propose a depth extraction method using a uniaxial crystal in addition to the lens array. The proposed system can detect the depth without prior knowledge of the gap between the lens array and the elemental image plane. We explain the principle and verify it by simulation and experimental results.