It is important to understand how humans view images and how their behavior is affected by changes in the properties of the viewed images and the task they are given, particularly the task of scoring image quality (IQ). This is a complex behavior that holds great importance for the field of image-quality research. This work builds upon four years of research spanning three databases that study image-viewing behavior. Using eye-tracking equipment, it was possible to collect information on human viewing behavior for different kinds of stimuli under different experimental settings. This work performs a cross-analysis of the results from all these databases using state-of-the-art similarity measures. The results clearly show that asking viewers to score the IQ significantly changes their viewing behavior. Muting the color saturation also appears to affect the saliency of the images. However, a change in IQ was not consistently found to modify visual attention deployment, neither under free looking nor during scoring. These results help in gaining a better understanding of image-viewing behavior under different conditions. They also have important implications for work that collects subjective image-quality scores from human observers.
Manufacturers of commercial display devices continuously try to improve the perceived image quality of their products. By applying postprocessing techniques to the incoming signal, they aim to enhance the quality level perceived by the viewer. These postprocessing techniques are usually applied globally over the whole image but may cause side effects, the visibility and annoyance of which differ with local content characteristics. To better understand and utilize this, a three-phase experiment was conducted where observers were asked to score images that had different levels of quality in their regions of interest and in the background areas. The results show that the region of interest has a greater effect on the overall quality of the image than the background. This effect increases with the increasing quality difference between the two regions. Based on the subjective data, we propose a model to predict the overall quality of images with different quality levels in different regions. This empirically constructed model can help craft weighted objective metrics that better approximate subjective quality scores.
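The abstract proposes an empirical model combining region-of-interest and background quality. One plausible form is a weighted average whose ROI weight grows with the quality gap between the two regions, matching the reported finding. This is a minimal illustrative sketch, not the paper's actual model; the function name and the parameter `alpha` are assumptions.

```python
def predict_overall_quality(q_roi, q_bg, alpha=0.05):
    """Illustrative region-weighted quality model (hypothetical form).

    The ROI weight starts at 0.5 (equal weighting) and grows with the
    quality difference between the two regions, up to full ROI dominance,
    reflecting the finding that the ROI effect increases with the gap.
    """
    diff = abs(q_roi - q_bg)
    w_roi = 0.5 + min(0.5, alpha * diff)  # ROI weight in [0.5, 1.0]
    return w_roi * q_roi + (1.0 - w_roi) * q_bg
```

With equal region quality the model returns that common score; as the gap widens, the prediction moves toward the ROI score.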
Research has shown that when viewing still images, people will look at these images in a different manner if instructed
to evaluate their quality. They will tend to focus less on the main features of the image and, instead, scan the entire image
area, looking for clues to its quality level. It is questionable, however, whether this finding can be extended to videos,
considering their dynamic nature. One can argue that when watching a video the viewer will always focus on the
dynamically changing features of the video regardless of the given task. To test whether this is true, an experiment was
conducted where half of the participants viewed videos with the task of quality evaluation while the other half were
simply told to watch the videos as if they were watching a movie on TV or a video downloaded from the internet. The
videos contained content that was degraded with compression artifacts spanning a wide range of quality levels. An eye-tracking
device was used to record the viewing behavior in both conditions. By comparing the behavior during each task, it was
possible to observe a systematic difference in viewing behavior that appeared to correlate with the quality of the videos.
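Comparisons of viewing behavior between tasks, as in the eye-tracking studies above, are typically made by correlating fixation-density maps from the two conditions. A minimal sketch of the Pearson correlation coefficient (CC), one of the standard similarity measures for such maps; the helper name and flattened-map representation are illustrative, not taken from the papers.

```python
import math

def pearson_cc(map_a, map_b):
    """Correlation coefficient between two flattened fixation-density maps.

    Values near 1 indicate very similar attention deployment between the
    two viewing conditions; values near 0 indicate unrelated patterns.
    """
    n = len(map_a)
    mean_a = sum(map_a) / n
    mean_b = sum(map_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(map_a, map_b))
    var_a = sum((a - mean_a) ** 2 for a in map_a)
    var_b = sum((b - mean_b) ** 2 for b in map_b)
    return cov / math.sqrt(var_a * var_b)
```

In practice the maps are first built by smoothing the recorded fixation points (e.g., with a Gaussian kernel) before comparison.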
Reliably assessing the overall quality of JPEG/JPEG2000 coded images without having the original image as a reference is still challenging, mainly due to our limited understanding of how humans combine the various perceived artifacts into an overall quality judgment. A known approach to avoid the explicit simulation of human assessment of overall quality is the use of a neural network. Neural network approaches usually start by selecting active features from a set of generic image characteristics, a process that is, to some extent, ad hoc and computationally expensive. This paper shows that the complexity of the feature selection procedure can be considerably reduced by using dedicated features that describe a given artifact. An adaptive neural network is then used to learn the highly nonlinear relationship between the features describing an artifact and the overall quality rating. Experimental results show that the simplified feature selection procedure, in combination with the neural network, is indeed able to accurately predict the perceived image quality of JPEG/JPEG2000 coded images.
This paper presents a novel system that employs an adaptive neural network for the no-reference assessment of perceived
quality of JPEG/JPEG2000 coded images. The adaptive neural network simulates the human visual system as a black
box, avoiding its explicit modeling. It uses image features and the corresponding subjective quality score to learn the
unknown relationship between an image and its perceived quality. Related approaches in the literature extract a considerable
number of features to form the input to the neural network. This potentially increases the system's complexity, and
consequently, may affect its prediction accuracy. Our proposed method optimizes the feature-extraction stage by
selecting the most relevant features. It shows that one can largely reduce the number of features needed for the neural
network when using gradient-based information. Additionally, the proposed method demonstrates that a common
adaptive framework can be used to support the quality estimation for both compression methods. The performance of the
method is evaluated on a publicly available database of images and their quality scores. The results show that our
proposed no-reference method for the quality prediction of JPEG and JPEG2000 coded images has a comparable
performance to the leading metrics available in the literature, but at considerably lower complexity.
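The mapping the two abstracts above describe, from a few artifact-specific features (e.g., blockiness for JPEG, blur or ringing for JPEG2000) to a quality score, can be sketched as a small feed-forward network. The architecture, unit counts, and weights below are illustrative placeholders, not the trained parameters or exact network from the papers.

```python
import math

def mlp_quality(features, w_hidden, b_hidden, w_out, b_out):
    """Tiny feed-forward network: one tanh hidden layer, linear output.

    features : list of artifact-feature values for one image
    w_hidden : per-hidden-unit weight vectors over the features
    b_hidden : per-hidden-unit biases
    w_out, b_out : output-layer weights and bias
    """
    hidden = [math.tanh(sum(w * x for w, x in zip(ws, features)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# Hypothetical example: two artifact features, two hidden units.
score = mlp_quality([0.3, 0.7],
                    w_hidden=[[1.0, -0.5], [-0.8, 1.2]],
                    b_hidden=[0.0, 0.1],
                    w_out=[2.0, 1.5], b_out=3.0)
```

In the actual system the weights would be learned from subjective quality scores; gradient-based relevance of each input weight is what allows pruning the feature set.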
Manufacturers of commercial display devices continuously try to improve the perceived image quality of their products.
By applying post-processing techniques to the incoming image signal, they aim to enhance the quality level
perceived by the viewer. Applying such techniques may cause side effects on different portions of the processed image.
In order to apply these techniques effectively to improve the overall quality, it is vital to understand how important
quality is for different parts of the image. To study this, a three-phase experiment was conducted in which observers
were asked to score images whose salient regions differed in quality from the background areas.
The results show that the salient area has a greater effect on the overall quality of the image than the background. This
effect increases with the increasing quality difference between the two regions. It is, therefore, important to take this
effect into consideration when trying to enhance the appearance of specific image regions.
The Single Stimulus (SS) method is often chosen to collect subjective data for testing no-reference objective metrics, as it is
straightforward to implement and well standardized. At the same time, it exhibits some drawbacks: the spread between
different assessors is relatively large, and the measured ratings depend on the quality range spanned by the test samples,
so the results from different experiments cannot easily be merged. The Quality Ruler (QR) method has been
proposed to overcome these inconveniences. This paper compares the performance of the SS and QR methods for
pictures impaired by Gaussian blur. The research goal is, on one hand, to analyze the advantages and disadvantages of
both methods for quality assessment and, on the other, to make quality data of blur-impaired images publicly available.
The obtained results show that the confidence intervals of the QR scores are narrower than those of the SS scores. This
indicates that the QR method enhances consistency across assessors. Moreover, QR scores exhibit a higher linear
correlation with the distortion applied. In summary, for the purpose of building datasets of subjective quality, the QR
approach seems promising from the viewpoint of both consistency and repeatability.
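The confidence-interval comparison reported above can be illustrated with a simple normal-approximation half-width of the mean score. The helper and the sample score lists below are hypothetical, intended only to show the computation, not the paper's data or its exact statistical procedure.

```python
import math
import statistics

def ci_halfwidth(scores, z=1.96):
    """Approximate 95% confidence-interval half-width of the mean score,
    using the sample standard deviation and a normal approximation."""
    return z * statistics.stdev(scores) / math.sqrt(len(scores))

# Hypothetical ratings of one image: widely spread SS scores vs.
# tightly clustered QR scores from the same number of assessors.
ss_scores = [3.0, 5.5, 2.0, 6.0, 4.5, 3.5]
qr_scores = [4.0, 4.3, 3.8, 4.1, 4.2, 3.9]
```

A narrower half-width for the QR-style scores corresponds to the reported higher consistency across assessors.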