Glare is always present in optical acquisition systems, such as photographic cameras or the human eye. As a consequence, images captured by sensors are not an accurate reproduction of the scene, but rather a combination of scene content and glare. We discuss the reasons why this unwanted addition of spread light cannot be removed from an acquired image. To this aim, we cast the problem of glare removal as an estimation task and focus on the aspects that make the unfolding (deconvolution) of glare an ill-posed or ill-conditioned problem, such as nonlinearity, information loss, or uncertainty in the eye model. For each mechanism of glare formation, we point out the corresponding influence in terms of ill-posedness and ill-conditioning of the problem. We do not aim at proposing or reviewing solutions to the glare problem, but rather at identifying more precisely the challenges it poses.
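To make the ill-conditioning concrete, a common simplification models glare as the convolution of the ideal scene with a glare spread function (GSF). Inverting that convolution by division in the frequency domain amplifies sensor noise wherever the GSF spectrum is small. The following minimal sketch (our illustration, not the paper's model; the Gaussian GSF, test signal, and noise level are arbitrary choices) shows the effect on a 1-D signal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ideal 1-D "scene" and a Gaussian glare spread function (GSF).
n = 256
scene = np.zeros(n)
scene[100:130] = 1.0                      # a bright patch
x = np.arange(n) - n // 2
gsf = np.exp(-(x / 6.0) ** 2)
gsf /= gsf.sum()                          # unit energy

# Forward model: acquired = scene convolved with gsf + small sensor noise.
H = np.fft.fft(np.fft.ifftshift(gsf))     # GSF spectrum
acquired = np.real(np.fft.ifft(np.fft.fft(scene) * H))
acquired += rng.normal(scale=1e-4, size=n)

# Naive "unfolding": divide by the GSF spectrum. Where |H| is tiny,
# the 1e-4 noise is amplified by many orders of magnitude.
restored = np.real(np.fft.ifft(np.fft.fft(acquired) / H))
print("max |scene|    :", np.abs(scene).max())     # 1.0
print("max |restored| :", np.abs(restored).max())  # enormous
```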
Several different implementations of the Retinex model have been derived from the original paper by Land and McCann. This paper presents the Milano-Retinex family, a collection of slightly different Retinex implementations developed at the Department of Computer Science of the Università degli Studi di Milano. One important difference lies in their goals: while the original Retinex aims at modeling vision, the Milano-Retinex family is mainly applied as an image enhancer, mimicking some mechanisms of the human visual system.
When we perform a visual analysis of a photograph of a cosmic object, contrast plays a fundamental role. A linear distribution of the observable values is not necessarily the best possible for the Human Visual System (HVS). In fact, the HVS has a nonlinear response and exploits contrast locally, with different stretching for different lightness areas. As a consequence, depending on the observation task, local contrast can be adjusted to ease the detection of relevant information. The proposed approach is based on Spatial Color Algorithms (SCAs), which mimic HVS behavior. These algorithms compute each pixel value by a spatial comparison with all (or a subset of) the other pixels of the image. The comparison can be implemented as a weighted difference or as a ratio product over a given sampling of the neighboring region, as sketched below. A final mapping allows exploiting the whole available dynamic range. In the case of color images, SCAs process the three chromatic channels separately, producing an effect of color normalization without introducing channel cross-correlation. We present very promising results on amateur photographs of deep-sky objects. The results are presented both for a qualitative and subjective visual evaluation and for a quantitative evaluation through image quality measures, in particular to quantify the effect of the algorithms on noise. Moreover, our results help to better characterize contrast measures.
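As an illustration of the family of computations described above, the following sketch (our loose reconstruction, not the exact algorithm evaluated in the paper; the sample count and slope are illustrative choices) recomputes each pixel of one channel via a distance-weighted, saturated difference against a random subset of other pixels, then linearly maps the result onto the full dynamic range:

```python
import numpy as np

def sca_channel(channel, n_samples=200, slope=8.0, seed=0):
    """Toy spatial color algorithm for a single channel in [0, 1].

    Each pixel is compared with a random subset of other pixels via a
    saturated, distance-weighted difference, and the output is then
    linearly stretched to the full [0, 1] range.
    """
    rng = np.random.default_rng(seed)
    h, w = channel.shape
    ys, xs = np.mgrid[0:h, 0:w]

    acc = np.zeros_like(channel)
    norm = np.zeros_like(channel)
    for _ in range(n_samples):
        # Pick one random comparison pixel for the whole image.
        j, i = rng.integers(h), rng.integers(w)
        d = np.hypot(ys - j, xs - i) + 1.0          # spatial weighting
        diff = np.clip(slope * (channel - channel[j, i]), -1.0, 1.0)
        acc += diff / d
        norm += 1.0 / d

    r = acc / norm
    # Final mapping: exploit the whole available dynamic range.
    return (r - r.min()) / (r.max() - r.min() + 1e-12)

# Color images: process the three chromatic channels independently,
# e.g. img = np.stack([sca_channel(img[..., c]) for c in range(3)], axis=-1)
```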
The color rendering index (CRI) of a light source attempts to measure how well the color appearance of objects is preserved when they are illuminated by that light source. The problem is of great importance for various industrial and scientific fields, such as lighting architecture, design, and ergonomics. Usually a light source is specified through its Correlated Color Temperature (CCT). However, two (or more) light sources with the same CCT but different spectral power distributions can exist, so color samples viewed under two light sources with equal CCTs can appear different. Hence the need for a method to assess the quality of a given illuminant in relation to color. Recently, the CRI has attracted renewed interest because of new LED-based lighting systems: they usually have a rather low color rendering index, yet good preservation of color appearance and a pleasant visual appearance (visual appeal). Various attempts to develop a new color rendering index have been made so far, but research on a better one is still ongoing. This article describes an experiment with human observers concerning the preservation of color appearance under several light sources, comparing the results with a range of available color rendering indices.
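For reference, the classical CIE test-sample method behind the standard CRI (CIE 13.3) reduces to a simple aggregation once the color shifts of the test samples between the test source and its reference illuminant are known. The sketch below shows only that final step; computing the ΔE values themselves (in the CIE 1964 U*V*W* space, after chromatic adaptation) is assumed done elsewhere, and the sample shifts here are made up for illustration:

```python
def special_cri(delta_e_uvw):
    """Special color rendering index R_i = 100 - 4.6 * DeltaE_i (CIE 13.3)."""
    return 100.0 - 4.6 * delta_e_uvw

def general_cri(delta_e_first8):
    """General index Ra: mean of R_i over the first 8 CIE test samples."""
    assert len(delta_e_first8) == 8
    return sum(special_cri(de) for de in delta_e_first8) / 8.0

# Hypothetical color shifts of the 8 test samples under some LED source:
shifts = [3.1, 2.4, 5.0, 4.2, 3.8, 2.9, 4.5, 6.1]
print(f"Ra = {general_cri(shifts):.1f}")   # 81.6 for these made-up shifts
```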
In this paper we present an upgraded version of an image database (IDB), presented here in 2003, for testing color constancy and other kinds of visual and image processing algorithms. Big technological improvements have been made in the last ten years; however, the motivations for this upgrade are not only technological. We decided to address other visual features, such as dynamic range and stereo vision. Moreover, to address computer-vision-related problems (e.g. illuminant or reflectance estimation), we have made available a set of data regarding the objects, backgrounds, and illuminants used. Here we present the characteristics of the images in the IDB, the choices made, and the acquisition setup.
We present a methodology to calculate the color appearance of advertising billboards placed in indoor and outdoor environments, printed on different types of paper support, and viewed under different illuminations. The aim is to simulate the visual appearance of an image printed on a specific support, observed in a certain context, and illuminated with a specific light source. Knowing in advance the visual rendering of an image in different conditions can avoid problems related to its visualization. The proposed method applies a sequence of transformations to convert a four-channel (CMYK) image into a spectral one, taking the paper support into account; it then simulates the chosen illumination and finally computes an estimate of the appearance.
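The last step of such a pipeline, turning per-pixel spectral reflectance into colorimetric values under a chosen illuminant, is standard colorimetry. The sketch below assumes the CMYK-to-spectral conversion has already produced a reflectance spectrum per pixel (the `reflectance` array, the 10 nm sampling, and the placeholder illuminant and CMF data are our assumptions, not details from the paper) and integrates it against the illuminant and the CIE color matching functions:

```python
import numpy as np

def spectra_to_xyz(reflectance, illuminant, cmf, d_lambda=10.0):
    """Integrate reflectance * illuminant against the CIE color matching functions.

    reflectance : (..., n_bands) per-pixel spectral reflectance in [0, 1]
    illuminant  : (n_bands,) spectral power distribution of the source
    cmf         : (n_bands, 3) x-bar, y-bar, z-bar sampled on the same bands
    """
    # Normalization so that a perfect reflector has Y = 100.
    k = 100.0 / np.sum(illuminant * cmf[:, 1] * d_lambda)
    stimulus = reflectance[..., None] * illuminant[:, None] * cmf
    return k * np.sum(stimulus * d_lambda, axis=-2)

# Example on a single pixel with flat 50% reflectance over 31 bands
# (400-700 nm at 10 nm); illuminant and CMFs here are placeholders,
# to be replaced with tabulated CIE data in any real use.
n_bands = 31
reflectance = np.full(n_bands, 0.5)
illuminant = np.ones(n_bands)                        # equal-energy stand-in
cmf = np.random.default_rng(0).random((n_bands, 3))  # stand-in for real CMFs
print(spectra_to_xyz(reflectance, illuminant, cmf))
```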
The relationship between color and lightness appearance and the perception of depth has been studied for some time in the fields of perceptual psychology and psychophysiology. It has been found that depth perception affects the final object color and lightness appearance. In the stereoscopy research field, many studies have addressed human physiological effects, considering e.g. geometry, motion sickness, etc., but little has been done regarding lightness and color information.

The goal of this paper is to carry out some preliminary experiments in Virtual Reality in order to determine the effects of depth perception on object color and lightness appearance. We have created a virtual test scene with a simple 3D simultaneous contrast configuration, in three different versions, each with different choices of relative positions and apparent sizes of the objects. We have collected the perceptual responses of several users after observation of the test scene in the Virtual Theater of the University of Milan, a VR immersive installation characterized by a semi-cylindrical screen that covers 120° of horizontal field of view from an observation distance of 3.5 m.

We describe the experimental setup and procedure, and we discuss the obtained results.
The interest in the production of stereoscopic content is growing rapidly. Stereo material can be produced with different solutions, from high-end devices to suitably coupled standard digital cameras. In the latter case, color correction in stereoscopic images is complex, due to possibly different Color Filter Arrays or settings in the two acquisition devices: users must often tune each camera separately, and this can lead to visible color differences within the stereo pair. The color correction methods usually considered in the post-processing stage of stereoscopic production are mainly based on global transformations between the two views, but this approach cannot fully recover relevant limits in the gamuts of each image due to color distortions. In this paper we evaluate the application of perceptually based spatial color computational models, based on or inspired by Retinex theory, to pre-filter the stereo pairs. Spatial color algorithms apply an unsupervised local color correction to each pixel, based on a simulation of color perception mechanisms, and have been proven to effectively reduce color casts and adjust local contrast in images. We filtered different stereoscopic streams with visible color differences between right and left frames, using a GPU version of the Random Spray Retinex (RSR) algorithm, which applies an unsupervised color correction in a few seconds, and the Automatic Color Equalization (ACE) algorithm, which considers both White Patch and Gray World equalization mechanisms. We analyze the effect of the computational models both by visual assessment and by considering the changes in the image gamuts before and after the filtering.
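To give a concrete sense of the per-pixel computation, here is a minimal single-channel Random Spray Retinex sketch (our simplified reconstruction of the published RSR idea, not the GPU implementation used in the paper; spray size and count are illustrative): each pixel is divided by the maximum intensity found in each of several random "sprays" of points scattered around it with radially decreasing density, and the resulting ratios are averaged.

```python
import numpy as np

def rsr_channel(channel, n_sprays=20, spray_size=100, seed=0):
    """Simplified Random Spray Retinex on one channel in [0, 1].

    For every pixel, draw `n_sprays` random sprays (points with radially
    decreasing density around the pixel), divide the pixel by the spray
    maximum, and average the ratios. Naive and memory-hungry: real
    implementations, like the GPU one above, are heavily optimized.
    """
    rng = np.random.default_rng(seed)
    h, w = channel.shape
    ys, xs = np.mgrid[0:h, 0:w]
    radius = float(max(h, w))
    out = np.zeros_like(channel)

    for _ in range(n_sprays):
        # Uniform radius + uniform angle gives radially decreasing density.
        theta = rng.uniform(0.0, 2.0 * np.pi, spray_size)
        rho = rng.uniform(0.0, radius, spray_size)
        dy = np.rint(rho * np.sin(theta)).astype(int)
        dx = np.rint(rho * np.cos(theta)).astype(int)
        # One spray shape, rigidly translated to every pixel (clipped at borders).
        sy = np.clip(ys[..., None] + dy, 0, h - 1)
        sx = np.clip(xs[..., None] + dx, 0, w - 1)
        spray_max = np.maximum(channel[sy, sx].max(axis=-1), 1e-6)
        out += np.minimum(channel / spray_max, 1.0)   # ratio, reset to 1 at max

    return out / n_sprays

# Color images are processed channel by channel, e.g.:
# corrected = np.stack([rsr_channel(img[..., c]) for c in range(3)], axis=-1)
```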
In recent years, considerable effort has been dedicated to the development of advanced technological solutions for the immersive visualization of Virtual Reality (VR) scenarios, with particular attention to stereoscopic image formation. Among the various solutions proposed, INFITEC™ technology is particularly interesting, because it allows the reproduction of a more accurate chromatic range than anaglyph or polarization-based approaches. Recently, this technology was adopted in the Virtual Theater of the University of Milan, an immersive VR installation used for research purposes in the fields of human-machine interaction and photorealistic, perception-based visualization of virtual scenarios. In this paper, we present a first set of measurements aimed at an accurate chromatic, colorimetric, and photometric characterization of this visualization system. The acquired data are analyzed in order to evaluate the effective inter-calibration between the four devices and to obtain an accurate description of the actual effect of the INFITEC™ technology. This analysis will be the basis for the future integration of visual perception and color appearance principles into the visualization pipeline, and for the development of robust computational models and instruments for correct color management in the visualization of immersive virtual environments.