A 10.1-inch 2D/3D switchable display using an integrated single light-guide plate (LGP) with a trapezoidal light-extraction (TLE) film was designed and fabricated. The integrated single LGP combines inverted trapezoidal line structures, formed by attaching a TLE film to its top surface, with cylindrical lens structures on its bottom surface. The top surface of the TLE film was also bonded to the bottom surface of an LCD panel to maintain 3D image quality, which can be seriously degraded by gap variations between the LCD panel and the LGP. The inverted trapezoidal line structures act as the slit apertures of a parallax barrier in 3D mode. Light beams from LED light sources placed along the left and right edges of the LGP bounce between the top and bottom surfaces of the LGP and, when they strike the inclined surfaces of the inverted trapezoidal structures, are emitted toward the LCD panel. In 2D mode, light beams from LED light sources arranged along the top and bottom edges of the LGP are emitted toward the lower surface when they strike the cylindrical lens structures, and are reflected to the front surface by a reflective film. Applying the integrated single LGP with a TLE film, we constructed a 2D/3D switchable display prototype with a 10.1-inch tablet panel of WUXGA resolution (1,200×1,920). The prototype shows light-field 3D and 2D images without interference artifacts between the two modes, and achieves luminance uniformity of over 80%. The display generates both 2D and 3D images without increasing the thickness or power consumption of the device.
To commercialize glasses-free 3D displays more widely, the display device should also be able to present 2D images without image quality degradation. Moreover, the thickness of the display panel including the backlight unit (BLU) and the power consumption should not increase much, especially for mobile applications. In this paper, we present a 10.1-inch 2D-3D switchable display using an integrated single light guide plate (LGP) that increases neither thickness nor power consumption. The integrated single LGP, which has a wedge shape, is composed of prismatic line patterns on its top surface and straight bump patterns on its bottom surface. The prismatic line patterns, composed of micro prisms with a light aperture on one side, act as the slit apertures of a parallax barrier in 3D mode. The linear bump patterns arranged along the vertical direction scatter the light uniformly, together with the reflective film disposed under the LGP, for 2D mode. LED light sources are arranged edge-lit along the left and right sides of the LGP for 2D mode, and along the thicker top edge of the LGP for 3D mode. Display modes are switched simply by alternately turning the LED light sources on and off. Applying the integrated single LGP, we realized a 2D-3D switchable display prototype with a 10.1-inch tablet panel of WQXGA resolution (2,560×1,600), and demonstrated light-field 3D display with 27-ray mapping as well as 2D display. Consequently, we obtained brightness uniformity over 70% in both 2D and 3D modes.
Light-field displays are good candidates in the field of glasses-free 3D display for showing real 3D images without reducing image resolution. Light-field displays can create light rays using a large number of projectors to express natural 3D images. In light-field displays using multiple projectors, however, compensation is critical because each projector differs in characteristics and mounting position. In this paper, we present an enhanced 55-inch, 100-Mpixel multi-projection 3D display consisting of 96 micro projectors for immersive, natural 3D viewing in medical and educational applications. To achieve enhanced image quality, color and brightness uniformity compensation methods are used along with an improved projector configuration design and a real-time calibration process for projector alignment. For color uniformity compensation, the projected images from each projector are captured by a camera placed in front of the screen, the number of pixels at each RGB color intensity in each captured image is analyzed, and the distributions of RGB color intensities are adjusted using the respective maximum values of the RGB color intensities. For brightness uniformity compensation, each light-field ray emitted from a screen pixel is modeled by a radial basis function, and compensating weights for each screen pixel are computed and transferred to the projection images through the mapping between screen and projector coordinates. Finally, brightness-compensated images are rendered for each projector. Consequently, the display shows improved color and brightness uniformity, and consistent, exceptional 3D image quality.
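As an illustration of the brightness compensation step, the sketch below fits a luminance field with Gaussian radial basis functions and derives per-pixel compensation weights. It is a minimal reading of the approach: the Gaussian RBF form, the `sigma` width, and the flatten-to-darkest-level target are assumptions, not the paper's exact formulation.

```python
import numpy as np

def brightness_weights(sample_xy, sample_lum, grid_xy, sigma=0.1):
    """Fit screen luminance with Gaussian RBFs centered at the measured
    sample points, evaluate the field at every screen pixel, and return
    multiplicative weights that flatten it to the darkest level."""
    def phi(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    # Interpolation coefficients for the measured luminance samples.
    coef = np.linalg.solve(phi(sample_xy, sample_xy)
                           + 1e-8 * np.eye(len(sample_xy)), sample_lum)
    field = phi(grid_xy, sample_xy) @ coef      # predicted luminance per pixel
    return field.min() / field                  # weight <= 1 everywhere
```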
In this paper, we analyze the modulation transfer function (MTF) of coded aperture imaging in a flat panel display. The flat panel display with a sensor panel forms lens-less multi-view cameras through the imaging pattern of modified uniformly redundant arrays (MURA) on the display panel. To analyze the MTF of the coded aperture imaging implemented on the display panel, we first mathematically model the encoding process of coded aperture imaging, where the image projected onto the sensor panel is modeled as a convolution of the scaled object with a function of the imaging pattern. Then, the system point spread function is determined by incorporating a decoding process that depends on the pixel pitch of the display screen and the decoding function. Finally, the MTF of the system is derived as the magnitude of the Fourier transform of the determined system point spread function. To validate the mathematically derived MTF, we build a coded aperture imaging system, consisting of a display screen and a sensor panel, that can capture the scene in front of the display. Experimental results show that the derived MTF of coded aperture imaging in a flat panel display corresponds well to the measured MTF.
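The final step of the derivation, taking the MTF as the magnitude of the Fourier transform of the system PSF, can be sketched numerically as below. The random aperture and the `2A - 1` decoder are stand-ins for illustration only; the paper's MURA pattern and its decoding function would replace them.

```python
import numpy as np

def system_mtf(psf):
    """MTF = normalized magnitude of the Fourier transform of the PSF."""
    mtf = np.abs(np.fft.fftshift(np.fft.fft2(psf)))
    return mtf / mtf.max()

# Stand-in coded aperture and decoder (the real system uses a MURA pair).
aperture = np.random.randint(0, 2, (67, 67)).astype(float)
decoder = 2.0 * aperture - 1.0
# System PSF: circular correlation of the aperture with the decoder.
psf = np.real(np.fft.ifft2(np.fft.fft2(aperture) * np.conj(np.fft.fft2(decoder))))
mtf = system_mtf(psf)
```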
In this paper, we propose a GPU-based parallel implementation of the two-step wave field projection method. In the first step, the 2D projection of the wave field of a 3D object is calculated at a reference depth by the radial symmetric interpolation (RSI) method; in the second step, it is propagated along the depth direction by a Fresnel transform. In each step, the object points are divided into small groups and processed in parallel on CUDA cores. Experimental results show that the proposed method is 5,901 times faster than the Rayleigh-Sommerfeld method for 1 million object points at full-HD SLM resolution.
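For reference, the second step's depth-direction propagation can be written as a single FFT-based Fresnel transfer-function pass, as in the minimal NumPy sketch below; the paper executes the equivalent computation in parallel on CUDA cores, and the grid spacing `dx` and transfer-function form here are the standard ones rather than values taken from the paper.

```python
import numpy as np

def fresnel_propagate(field, wavelength, dz, dx):
    """Propagate a 2-D complex wave field by distance dz using the
    FFT-based Fresnel transfer-function method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)               # spatial frequencies
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    k = 2.0 * np.pi / wavelength
    H = np.exp(1j * k * dz) * np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```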
KEYWORDS: 3D modeling, 3D image processing, Cameras, 3D displays, 3D image reconstruction, Detection and tracking algorithms, Data modeling, Motion models, Kinematics, Head
This paper presents a human pose recognition method that simultaneously reconstructs a human volume from a single depth image in real time, based on an ensemble of voxel classifiers. Human pose recognition is difficult because a single depth camera can capture only the visible surfaces of a human body. To recognize the invisible (self-occluded) surfaces, the proposed algorithm employs voxel classifiers trained with multi-layered synthetic voxels. Specifically, ray casting onto a volumetric human model generates synthetic voxels, where each voxel consists of a 3D position and an ID corresponding to a body part. The synthesized volumetric data, which contain both visible and invisible body voxels, are used to train the voxel classifiers. As a result, the voxel classifiers not only identify the visible voxels but also reconstruct the 3D positions and IDs of the invisible voxels. The experimental results show improved performance in estimating human poses owing to the capability of inferring invisible human body voxels. The proposed algorithm is expected to be applicable to many fields such as telepresence, gaming, virtual fitting, wellness, and direct 3D content control on real 3D displays.
In this paper, we propose a novel depth estimation method using multiple coded apertures for 3D interaction. A flat panel display is transformed into lens-less multi-view cameras consisting of multiple coded apertures. The sensor panel behind the display captures the scene in front of the display through the imaging pattern of modified uniformly redundant arrays (MURA) on the display panel. To estimate the depth of an object in the scene, we first generate a stack of synthetically refocused images at various distances by shifting and averaging the captured coded images. Then, an initial depth map is obtained by applying a focus operator to the stack of refocused images at each pixel. Finally, the depth is refined by fitting a parametric focus model to the response curves near the initial depth estimates. To demonstrate the effectiveness of the proposed algorithm, we construct an imaging system that captures the scene in front of the display. The system consists of a display screen and an X-ray detector without a scintillator layer, which thus acts as a visible-light sensor panel. Experimental results confirm that the proposed method accurately determines the depth of an object, including a human hand, in front of the display by capturing multiple MURA-coded images, generating refocused images at different depth levels, and refining the initial depth estimates.
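The shift-and-average refocusing and the per-pixel focus operator can be sketched as follows; the per-view, per-depth shift table and the Laplacian focus measure are illustrative assumptions (the paper fits a parametric focus model afterwards to refine the argmax result).

```python
import numpy as np
from scipy.ndimage import laplace, shift as nd_shift

def refocus_stack(views, offsets_per_depth):
    """Synthetic refocusing: shift each captured view by its depth-dependent
    offset (dy, dx) and average over views, once per candidate depth."""
    return np.stack([np.mean([nd_shift(v, o) for v, o in zip(views, offs)], axis=0)
                     for offs in offsets_per_depth])

def initial_depth_map(stack):
    """Initial depth index per pixel from a simple Laplacian focus operator."""
    focus = np.abs(np.stack([laplace(s) for s in stack]))
    return np.argmax(focus, axis=0)
```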
In this paper, we present a fast hologram pattern generation method that overcomes the accumulation problem of point-source-based methods. The proposed method consists of two steps. In the first step, 2D projections of the wave field of a 3D object are calculated at multiple reference depth planes by the radial symmetric interpolation (RSI) method. In the second step, each 2D wave field is propagated to the SLM plane by an FFT-based algorithm, and the final hologram pattern is obtained by summing them. The effectiveness of the method is demonstrated by computer simulation and optical experiment. Experimental results show that the proposed method is 3,878 times faster than the analytic method and 226 times faster than the RSI method.
KEYWORDS: Video, Visualization, Video processing, Video coding, Volume rendering, Computer programming, 3D modeling, 3D displays, 3D video compression, Data communications
A depth dilation filter is proposed for a free-viewpoint video system based on mixed-resolution multi-view video plus depth (MVD). Applying a gray-scale dilation filter to depth images extends foreground regions into the background, so that synthesis artifacts fall outside object boundary edges; both the objective and subjective quality of view synthesis results are thereby improved. The depth dilation filter is applied in the in-loop resampling stage during encoding/decoding and in the post-processing stage after decoding. Accurate view synthesis is important for virtual view generation on autostereoscopic displays; moreover, many coding tools in 3D video coding, such as view synthesis prediction (VSP) and depth-based motion vector prediction (DMVP), use view synthesis to reduce inter-view redundancy, so compression efficiency can also be improved by accurate view synthesis. Coding and synthesis experiments were performed on MPEG test sequences to evaluate the dilation filter, which was implemented on top of the MPEG reference software for AVC-based 3D video coding. Applying the depth dilation filter yields BD-rate gains of 0.5% and 6.0% in terms of the PSNR of decoded views and synthesized views, respectively.
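The core filter is a standard gray-scale dilation applied to the depth map, as in the one-liner below; the 5×5 window is an assumed size, and the sketch assumes the common MVD convention that larger depth values mean nearer objects, so dilation grows the foreground.

```python
from scipy.ndimage import grey_dilation

def dilate_depth(depth, size=5):
    """Gray-scale dilation of a depth map: foreground (larger, nearer values)
    expands into the background, pushing synthesis artifacts outside
    object boundary edges."""
    return grey_dilation(depth, size=(size, size))
```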
In this paper, we propose an efficient synthetic refocusing method from multiple coded aperture images for 3D user interaction. The method is applied to a flat panel display with a sensor panel, which together form lens-less multi-view cameras. To capture the scene in front of the display, modified uniformly redundant array (MURA) patterns are displayed on the LCD screen without the backlight, and MURA-coded images are captured by the sensor panel through these imaging patterns. Instead of decoding all coded images to synthesize a refocused image, the proposed method circularly shifts and averages all coded images and then decodes only the single coded image corresponding to the refocusing distance. Furthermore, based on the proposed refocusing method, the depth of an object in front of the display is estimated by finding the most focused image for each pixel in a stack of refocused images at different depth levels. Experimental results show that the proposed method captures an object in front of the display, generates refocused images at different depth levels, and accurately determines the depth of objects, including real human hands, near the display.
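The decode-once idea can be illustrated as below: after circularly shifting and averaging the coded images for a target distance, a single FFT-based circular correlation with the decoding pattern recovers the refocused image. The MURA decoding array itself is pattern-specific and not reproduced here.

```python
import numpy as np

def decode_coded_image(coded, decoding):
    """Decode one (already shift-and-averaged) coded image by circular
    cross-correlation with the decoding pattern, computed via FFT."""
    spectrum = np.fft.fft2(coded) * np.conj(np.fft.fft2(decoding, s=coded.shape))
    return np.real(np.fft.ifft2(spectrum))
```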
In this paper, we present a fast hologram pattern generation method based on radial symmetric interpolation. In the spatial domain, the concentric redundancy of each point hologram is removed by replacing the wave propagation calculation with interpolation and duplication. In the temporal domain, a background mask that marks stationary points is used to remove temporal redundancy in hologram video: frames are grouped over a predefined time interval, each group shares the background information, and the hologram pattern at each time instant is updated only for the foreground part. The effectiveness of the proposed algorithm is demonstrated by simulation and experiment.
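The spatial-domain idea, computing one radial profile per point and duplicating it by symmetry instead of evaluating the propagation at every pixel, can be sketched as follows. The nearest-neighbor lookup stands in for the paper's interpolation, and the Fresnel phase profile is an assumed form.

```python
import numpy as np

def point_hologram_rsi(n, dx, wavelength, z):
    """One point-source hologram via radial symmetric interpolation:
    the Fresnel phase is evaluated once along a 1-D radius and then
    duplicated over the 2-D grid by radial distance lookup."""
    k = 2.0 * np.pi / wavelength
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] * dx
    r = np.hypot(x, y)
    radii = np.arange(0.0, r.max() + dx, dx)          # 1-D radial samples
    profile = np.exp(1j * k * radii**2 / (2.0 * z))   # computed once per radius
    idx = np.minimum(np.round(r / dx).astype(int), len(radii) - 1)
    return profile[idx]                               # duplicated by symmetry
```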
In this paper, we propose a novel multi-view generation framework that considers the spatiotemporal consistency of the synthesized multi-view images. Rather than independently filling the holes of each generated image, the proposed framework gathers the hole information from every synthesized multi-view image at a reference viewpoint. The method then constructs a hole map and a single view reference layer (SVRL) at the reference viewpoint before restoring the holes in the SVRL, thereby generating spatiotemporally consistent views. The hole map is constructed from the depth information of the reference viewpoint and the input/output baseline length ratio, so the holes in the SVRL also represent the holes in the other multi-view images. To achieve temporally consistent hole filling in the SVRL, the holes in the current SVRL are restored by propagating pixel values from the previous SVRL; remaining holes are filled using a depth- and exemplar-based inpainting method. The experimental results show that the proposed method generates high-quality, spatiotemporally consistent multi-view images in various input/output environments. In addition, the proposed framework reduces the complexity of the hole-filling process by avoiding repeated hole filling.
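One plausible reading of the hole map construction is sketched below: the reference-view depth is converted to disparity, scaled by the input/output baseline ratio, and pixels whose disparity jump would uncover a gap wider than one pixel are marked as holes. The one-pixel threshold and the depth-to-disparity gain are hypothetical simplifications.

```python
import numpy as np

def hole_map(depth_ref, baseline_ratio, gain=0.05):
    """Mark pixels that become holes after view synthesis: a large
    horizontal disparity jump between neighbors uncovers occluded area
    (simplified rule; 'gain' converts depth to disparity in pixels)."""
    disparity = gain * depth_ref.astype(float) * baseline_ratio
    jump = np.abs(np.diff(disparity, axis=1))
    holes = np.zeros(depth_ref.shape, dtype=bool)
    holes[:, 1:] = jump > 1.0          # gap wider than one pixel opens a hole
    return holes
```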
KEYWORDS: Image resolution, Video, 3D displays, 3D video streaming, Image processing, Image quality, Video compression, Detection and tracking algorithms, Stereoscopic displays, Cameras
For a full motion parallax 3D display, it is necessary to supply multiple views obtained from a series of different locations. However, it is impractical to deliver all of the required views, because doing so would result in huge bitstreams. In previous work, the authors proposed a mixed-resolution 3D video format composed of color and depth information pairs with heterogeneous resolutions, together with a view synthesis algorithm for mixed-resolution videos. This paper reports a more refined view interpolation method and improved results.
In this paper, an inversion-free subpixel rendering method using eye tracking in a multiview display is proposed. A multiview display suffers from an inversion problem when one eye of the user falls in the main viewing region and the other eye falls in a side region. In the proposed method, the subpixel values are rendered adaptively according to the user's eye position to solve the inversion problem. In addition, to enhance the 3D resolution without color artifacts, a subpixel rendering algorithm using subpixel area weighting, rather than whole pixel values, is proposed. In the experiments, 36-view images were displayed using active subpixel rendering with the eye tracking system on a four-view display.
KEYWORDS: Video, Video compression, 3D video compression, 3D video streaming, Data centers, Laser Doppler velocimetry, Image resolution, Video processing, Image quality, 3D displays
A new 3D video format consisting of one full-resolution mono video and half-resolution left/right videos is proposed. The proposed 3D video format can generate high-quality virtual views from a small amount of input data while preserving compatibility with legacy mono and frame-compatible stereo video systems. The center view is identical to normal mono video data, while the left/right views form frame-compatible stereo video data. The format was tested in terms of compression efficiency, rendering capability, and backward compatibility. In particular, we compared the view synthesis quality when virtual views are generated from two full-resolution views versus from one original view and one half-resolution view. For the frame-compatible stereo format, experiments were performed on the interlaced method. The proposed format gives BD bit-rate gains of 15%.
This study aims to enhance the cubic (depth) effect by reproducing images with depth perception using chromostereopsis in human visual perception. Psychophysical experiments, based on the theory that the cubic effect depends on the lightness of the background through the chromostereoptic and chromostereoptic-reversal effects, showed that the perceived cubic effect differs depending on the lightness of the background and the hue combination of neighboring colors.

In the cubic-effect enhancement algorithm derived from these experimental results, the input image is segmented into foreground, middle, and background layers according to depth. For each layer, the color factors identified through the psychophysical experiments are adaptively controlled to produce an enhanced cubic effect appropriate to the properties of human visual perception and the characteristics of the input image.
This paper presents an efficient depth map coding method based on color information in a multi-view plus depth (MVD) system. In contrast to conventional depth map coding, in which the depth video is coded separately, the proposed scheme uses color information for depth map coding. In detail, the proposed algorithm subsamples the input depth data along the temporal direction to reduce the bit rate, and the non-encoded depth frames are recovered at the decoder side, guided by the motion information extracted from the decoded color video. The simulation results show the high coding efficiency of the proposed scheme, and also that the recovered depth frames differ little from the reconstructed ones. Furthermore, the scheme provides temporally consistent depth maps, which yield better subjective quality for view interpolation.
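A minimal sketch of the decoder-side recovery step: a skipped depth frame is rebuilt by motion-compensating the previous decoded depth frame with block motion vectors taken from the co-located color video. The 16×16 block size and integer (dy, dx) vectors are assumptions.

```python
import numpy as np

def recover_depth_frame(prev_depth, mvs, block=16):
    """Motion-compensated recovery of a temporally subsampled depth frame.
    mvs[i, j] holds the (dy, dx) vector of block (i, j) from the color
    video; frame dimensions are assumed divisible by the block size."""
    h, w = prev_depth.shape
    out = np.empty_like(prev_depth)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = mvs[by // block, bx // block]
            sy = int(np.clip(by + dy, 0, h - block))
            sx = int(np.clip(bx + dx, 0, w - block))
            out[by:by + block, bx:bx + block] = prev_depth[sy:sy + block, sx:sx + block]
    return out
```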
KEYWORDS: Cameras, Video, Video coding, Video compression, 3D vision, 3D video compression, Quality measurement, Scalable video coding, Image quality, Matrices
One of the critical issues for successful 3D video services is how to compress the huge amount of multi-view video data efficiently. In this paper, we describe a geometric prediction structure for multi-view video coding. By exploiting the geometric relations between camera poses, we can form prediction pairs that maximize the spatial correlation between views. To analyze the relationship between camera poses, we define a mathematical view center and view distance in 3D space: the virtual center pose is calculated from the mean rotation matrix and mean translation vector. We propose an algorithm that establishes the geometric prediction structure based on the view center and view distances; using this structure, inter-view prediction is performed between the camera pairs with maximum spatial correlation. Our prediction structure also considers scalability in coding and transmitting the multi-view videos. Experiments were done using the JMVC (Joint Multiview Video Coding) software on MPEG-FTV test sequences. The overall performance of the proposed prediction structure is measured in PSNR and in subjective image quality measures such as PSPNR.
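The view-center computation can be sketched with standard pose averaging, as below; the chordal mean of rotations and the translation-plus-angle distance are one plausible concretization of the paper's "mean rotation matrix and mean translation vector" and "view distance", with a hypothetical weighting `w`.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def virtual_center_pose(rotations, translations):
    """Virtual center pose of the camera rig: mean rotation (chordal mean)
    and arithmetic mean translation over all camera poses."""
    R_c = Rotation.from_matrix(np.stack(rotations)).mean().as_matrix()
    t_c = np.mean(np.stack(translations), axis=0)
    return R_c, t_c

def view_distance(R, t, R_c, t_c, w=1.0):
    """Distance of one view from the center pose: translation offset plus
    a weighted rotation angle."""
    angle = Rotation.from_matrix(R_c.T @ R).magnitude()
    return np.linalg.norm(t - t_c) + w * angle
```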
This paper proposes a novel 2D-to-3D conversion system based on visual attention analysis. The system generates stereoscopic video from monocular video robustly and without human intervention. According to our experiments, visual attention information can provide a rich 3D experience even when the depth cues available in a monocular view are insufficient. Using the algorithm introduced in this paper, 3D display users can watch 2D media in 3D. In addition, the algorithm can be embedded into 3D displays to deliver a better, more immersive viewing experience. To the best of our knowledge, this is the first work to use visual attention information to create a 3D effect.
We present a simple depth estimation framework for 2D-to-3D media conversion. Perceptual depth information is estimated from a monocular image by optimal use of the relative height cue, one of the well-known depth recovery cues, which is very common in photographic images. We propose a novel line tracing method and a depth refinement filter as the core of our depth estimation framework. The line tracing algorithm traces strong edge positions to generate an initial staircase depth map, which is then improved by a recursive depth refinement filter. We present visual results of depth estimation and stereo image generation.
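The relative-height cue translates into a simple rule: lower image rows are nearer. Below is a minimal sketch of the initial staircase depth map, assuming the traced edges are summarized by the rows where they start and a uniform depth step per edge; the actual line tracer follows strong edges across the image.

```python
import numpy as np

def staircase_depth(edge_rows, height, width):
    """Initial staircase depth map from the relative-height cue: each
    traced edge row starts a new, nearer (larger-valued) depth stair,
    so depth increases step-wise from top to bottom."""
    depth = np.zeros((height, width), dtype=np.float32)
    step = 1.0 / max(len(edge_rows), 1)
    for i, row in enumerate(sorted(edge_rows)):
        depth[row:, :] = (i + 1) * step     # rows below the edge are nearer
    return depth
```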
We present a collection of principles for comparing two sets of color primaries for wide-gamut displays. A new, algorithmic three-dimensional method to find optimal color primaries for both three-primary and multiprimary displays is described. The method was implemented in a computer program, and the resulting optimal color primary sets are discussed. We show that two-dimensional methods that find optimal color primaries using a chromaticity diagram are inferior to three-dimensional optimization techniques that include luminance information.
This paper proposes a framework of colour preference control to satisfy consumers' colour-related emotions. A colour harmony algorithm based on two-colour combinations is developed for displaying images that contain several complementary colour pairs. The colours of pixels belonging to complementary colour areas in HSV colour space are shifted toward the target hues, while the other pixels are left unchanged. With the developed technique, the proposed hue conversion enhances the dynamic emotion of the image, and the controlled output image shows improved colour emotions according to the preference of the human viewer. Psychophysical experiments are conducted to find the optimal model parameters that produce the most pleasant image for users in terms of colour emotions.
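A per-pixel sketch of the hue conversion, assuming RGB values in [0, 1] and hues in [0, 1): pixels whose hue falls inside a complementary-colour band are pulled toward the target hue along the shortest arc, and all other pixels pass through unchanged. The band limits, target hue, and `strength` are hypothetical tuning parameters.

```python
import colorsys
import numpy as np

def shift_hue_band(rgb, lo, hi, target_h, strength=0.5):
    """Shift the hue of pixels with hue in [lo, hi] toward target_h
    (shortest arc on the hue circle); other pixels are unchanged.
    Slow per-pixel loop, for illustration only."""
    out = rgb.copy()
    for idx in np.ndindex(rgb.shape[:2]):
        h, s, v = colorsys.rgb_to_hsv(*rgb[idx])
        if lo <= h <= hi:
            h = (h + strength * ((target_h - h + 0.5) % 1.0 - 0.5)) % 1.0
            out[idx] = colorsys.hsv_to_rgb(h, s, v)
    return out
```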
The purpose of this study is to examine gray matching between dark and ambient viewing conditions and to use the matching results to improve the visibility of a mobile display; the target ambient illumination for the experiments is 30,000 lux. First, to measure visibility under the ambient condition, a patch-counting experiment was conducted in which observers reported how many patches of the original images remained visible under ambient light; visibility in the ambient condition differed significantly from the dark condition. Next, a gray matching experiment was conducted by comparing gray patches between the dark and ambient conditions using the method of adjustment. For white and bright gray patches, participants could not find a patch of matching brightness under the ambient condition. To confirm the visibility improvement obtainable from the gray matching results, visibility was measured under ambient light after a simple implementation, following the same procedure as the first visibility experiment. After applying the gray matching curve, visibility improved further: a t-test between the patches with the gray curve applied and the maximum-visibility patches of the dark condition was not significant, indicating that visibility no longer differed between the original patches viewed in the dark and the curve-corrected patches viewed under ambient light.
Image acquisition devices inherently lack the color constancy mechanism of the human visual system. The machine color constancy problem can be circumvented using a white balancing technique based on accurate illumination estimation. Unfortunately, no previous study gives satisfactory results for both accuracy and stability under various conditions. To overcome these problems, we suggest a new method: spatial and temporal illumination estimation. This method, an evolution of the Retinex and Color by Correlation methods, predicts an initial illuminant point and estimates the scene illumination between that point and sub-gamuts derived from luminance levels. The proposed method raises the estimation probability not only by detecting motion of the scene reflectance but also by finding valid scenes using information that differs between sequential scenes. The proposed method outperforms recently developed algorithms.
Owing to subtle misalignment of optical components in the fabrication process, images projected by an optical light modulator exhibit severe line artifacts along the direction of the optical scan. In this paper, we propose a novel methodology to calibrate the modulator and to generate a compensation image for the misaligned optical modulator in order to eliminate the line artifacts. A camera system is employed to construct the luminance transfer function (LTF) that characterizes the optical modulator array. Spatial uniformity is obtained by redefining the dynamic range and compensating the characteristic curvature of the LTF for each optical modulator array element. Simulation results show a significant reduction in the visibility of line artifacts.
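The uniformity correction amounts to inverting each element's measured LTF over a dynamic range achievable by every element. A minimal sketch, assuming the LTF is sampled per drive level and monotonically increasing:

```python
import numpy as np

def compensation_luts(ltf):
    """Per-element inverse lookup tables for a measured LTF of shape
    (n_elements, levels). The common dynamic range is redefined as the
    span reachable by every element, then each element's curve is
    inverted so all elements emit the same luminance per input level."""
    n_el, levels = ltf.shape
    lo = ltf.min(axis=1).max()          # brightest "black" across elements
    hi = ltf.max(axis=1).min()          # dimmest "white" across elements
    targets = np.linspace(lo, hi, levels)
    lut = np.empty((n_el, levels), dtype=np.int32)
    for i, curve in enumerate(ltf):
        lut[i] = np.round(np.interp(targets, curve, np.arange(levels)))
    return lut
```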
A series of psychophysical experiments using the paired comparison method was performed to investigate the visual attributes affecting the image quality of a mobile display. An image quality difference model was developed that shows high correlation with the visual results. The results showed that naturalness and clearness are the most significant attributes among the perceptions. A colour quality difference model based on image statistics was also constructed, and it was found that colour difference and colour naturalness are important attributes for predicting image colour quality differences.
We develop a methodology to find the optimal memory color (colors of familiar objects) boundary in YCbCr color space and a local image adjustment technique called preferred color reproduction (PCR) to improve image quality. The optimal memory color boundary covers most familiar object colors captured under various viewing conditions. The PCR algorithm is based on the idea that the colors of familiar objects (memory colors) are key factors in judging the naturalness of an image. It is applied to pixels detected as having a memory color; memory color detection checks whether an input color lies inside the predetermined memory color boundary. The PCR algorithm then shifts colors inside the memory color boundary toward the preferred colors at its center. The algorithm is applied to skin colors, and psychophysical experiments using real images were conducted to determine the parameters that yield the most preferred images.
This paper proposes a method of filtering a digital sensor image to efficiently reduce noise and improve the sharpness of the image. To reduce the noise in an image captured by a conventional image sensor, the proposed noise reduction filter selectively outputs one of the results obtained by recursive temporal and spatial noise filtering. With the proposed noise filtering method, image detail is well preserved, and the artifacts that temporal noise filtering can generate along moving object boundaries in image sequences are prevented. Since noise filtering inevitably degrades the sharpness of the filtered image, an adaptive noise-suppressed sharpening filter is also proposed. The proposed sharpening filter generates its filter mask adaptively according to the pixel similarity within the mask, and achieves continuous image quality through an easily controllable gain control algorithm without amplifying noise in smooth regions.
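The selective temporal/spatial switching can be sketched per pixel as below; the recursive blend factor `alpha` and the motion threshold are assumed parameters, and the spatially filtered frame is taken as given.

```python
import numpy as np

def motion_adaptive_denoise(cur, prev_out, spatial, alpha=0.5, thresh=10.0):
    """Per-pixel selection between recursive temporal filtering (static
    areas) and spatial filtering (moving areas, to avoid ghosting along
    moving object boundaries)."""
    temporal = alpha * prev_out + (1.0 - alpha) * cur     # recursive blend
    moving = np.abs(cur - prev_out) > thresh              # crude motion test
    return np.where(moving, spatial, temporal)
```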
On a plasma display panel (PDP), the red, green, and blue luminous elements have different time responses; therefore, colored trails and edges appear behind and in front of moving objects. To reduce these color artifacts, this paper proposes a motion-based discoloring method. The discoloring values are modeled as linear functions of the motion vector to reduce hardware complexity. Experimental results show that the proposed method effectively removes the colored trails and edges of moving objects, and visibly clearer image sequences are obtained compared to conventional ones.
A preferred skin color reproduction algorithm is developed for mobile displays, especially for portrait images in which one person is the main object occupying most of the screen. In the developed technique, the skin area in an image is detected using the color value of each pixel in YCbCr color space, with the skin color boundary defined as a quadrangle in the Cb-Cr plane. The colors of pixels belonging to the skin area are shifted toward the preferred colors, while the other pixels are left unchanged. Psychophysical experiments are conducted to find the optimal model parameters that provide the most pleasant images to users, and the performance of the developed algorithm is then tested using these optimal parameters. The results show that in more than 95% of cases, observers prefer the images processed by the developed algorithm to the original images. The developed algorithm can be applied in mobile applications to improve image quality regardless of the input source.
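A sketch of the detection and shift steps, assuming the quadrangle is given as four (Cb, Cr) vertices and the preferred color as a single point; the shift `strength` is a hypothetical stand-in for the experimentally optimized parameters.

```python
import numpy as np
from matplotlib.path import Path

def preferred_skin_shift(cb, cr, quad, preferred, strength=0.3):
    """Detect skin pixels as those inside a quadrangle in the Cb-Cr plane
    and shift them toward the preferred color; others are unchanged."""
    pts = np.stack([cb.ravel(), cr.ravel()], axis=1)
    inside = Path(quad).contains_points(pts).reshape(cb.shape)
    cb_out = np.where(inside, cb + strength * (preferred[0] - cb), cb)
    cr_out = np.where(inside, cr + strength * (preferred[1] - cr), cr)
    return cb_out, cr_out
```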
The image processor in digital TV has come to play an important role due to customers' growing desire for higher image quality: customers want more vivid and natural images without visual artifacts. Image processing techniques aim to meet these needs despite the physical limitations of the panel. In this paper, developments in image processing techniques for DTV, in conjunction with developments in display technologies at Samsung R&D, are reviewed. The algorithms introduced cover techniques for solving problems caused by the characteristics of the panel itself, as well as techniques for enhancing the quality of input signals, optimized for the panel and human visual characteristics.
The term "color temperature" usually represents the color of light source or the white point of image displaying devices. The color temperature can be an effective bridge between images' characteristic and human's perceptual temperature feeling against the images. It can capture human's high-level perception to improve image browsing. In this paper, our goal is to demonstrate how well the color temperature is connected to such human's perception. We demonstrate the method of subjective experiment, a color temperature mapping range for each perceptual category, and browsing accuracy of the color temperature ranges obtained from the experiment. The compact representation for the color temperature and some of usage scenarios are explained as presented in the amendment of MPEG-7 standard.
The term "color temperature" represents the color of light source or the white point of image displaying devices such as TV and PC monitor. By controlling the color temperature, we can convert the reference white color of images. This is equivalent to the illuminant change, which alters all colors in the scene. In this paper, our goal is to find an appropriate method of converting the color temperature in order to reproduce the user-preferred color temperature in video displaying devices. It is essential that the relative difference of color temperature between successive image frames should be well preserved as well as the appearance of images should seem natural after applying the user-preferred color temperature. In order to satisfy these conditions, we propose an adaptive color temperature conversion method that estimates the color temperature of an input image and determines the output color temperature in accordance with the value of the estimated one.
In this paper, a method to calculate the illuminant chromaticity of an image is proposed that combines the perceived illumination and highlight approaches. The hybrid approach is more stable and accurate than either approach alone. The algorithm can be applied in two ways: for simple and quick implementations, perceived illumination alone is sufficient, while for more accurate cases, the hybrid approach can be used. Conversion of the image illuminant chromaticity is also proposed, which can be applied to create special effects in images.
We have developed color reproduction software for a digital still camera. The image taken by the camera is colorimetrically reproduced on the monitor after characterizing the camera and the monitor and matching colors between the two devices. The reproduction is performed in three stages: level processing, gamma correction, and color transformation. Level processing adjusts the levels of the dark and bright portions of the image, increasing image contrast. The relationship between the level-processed digital values and the measured luminance values of test gray samples is computed to obtain the camera gamma, and a method for estimating the unknown monitor gamma is proposed; the level-processed values are then adjusted by a look-up table created from the camera and monitor gamma corrections. For the camera's color transformation, a 3×3 or 3×4 matrix is used, calculated by regression between the gamma-corrected values and the measured tristimulus values of the test color samples. The various reproduced images, generated for four illuminations of the camera and three color temperatures of the monitor, are displayed in a dialog box implemented in our software, so that a user can easily choose the best reproduction by comparing them.
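The color transformation step reduces to a linear least-squares fit between the gamma-corrected camera values and the measured tristimulus values; a minimal sketch for the 3×3 case follows (append a column of ones to `rgb` for the 3×4 affine variant).

```python
import numpy as np

def fit_color_matrix(rgb, xyz):
    """Fit the 3x3 matrix M minimizing ||rgb @ M.T - xyz|| over N samples
    (rgb, xyz: arrays of shape (N, 3)), by linear least squares."""
    M_T, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
    return M_T.T    # xyz ≈ M @ rgb per sample
```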