Video format conversion is required when the received video format differs from the display format, spatially and/or temporally. In addition, a high-quality video format converter can eliminate movie judder (from 2:2 and 3:2 pull-down) by applying motion-compensated temporal up-conversion techniques, which provide smooth motion portrayal. In this paper we present our architecture, the associated algorithms that provide high-quality video format conversion, and their mutual implications.
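The motion-compensated temporal up-conversion mentioned in this abstract can be illustrated with a toy sketch. The function name and the assumption of a given integer per-pixel motion field are mine, not the paper's; a real converter would use sub-pixel vectors, occlusion handling, and robust fallback blending.

```python
import numpy as np

def mc_interpolate(f0, f1, mv, alpha=0.5):
    """Toy motion-compensated temporal interpolation.

    f0, f1 : consecutive luma frames (2-D float arrays)
    mv     : integer motion field of shape (H, W, 2) holding the
             (dy, dx) displacement from f0 to f1 at each pixel
    alpha  : temporal position of the new frame between f0 and f1
    """
    h, w = f0.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = mv[..., 0], mv[..., 1]
    # Fetch along the vector: backwards into f0, forwards into f1.
    y0 = np.clip(ys - np.round(alpha * dy).astype(int), 0, h - 1)
    x0 = np.clip(xs - np.round(alpha * dx).astype(int), 0, w - 1)
    y1 = np.clip(ys + np.round((1 - alpha) * dy).astype(int), 0, h - 1)
    x1 = np.clip(xs + np.round((1 - alpha) * dx).astype(int), 0, w - 1)
    # Average the two motion-compensated predictions.
    return 0.5 * (f0[y0, x0] + f1[y1, x1])
```

For a 24 Hz film source displayed at 50/60 Hz, frames interpolated at intermediate values of `alpha` replace the repeated pull-down frames, which is what removes the judder.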
Video processing algorithms, and in particular those found in high-end television receivers, often place challenging demands on system resources. Therefore, dedicated IC solutions are most often proposed to meet both the system and the economic constraints. However, as the functional requirements increase and more diversity in application support is required, dedicated solutions become less economically attractive, and a more heterogeneous architecture becomes the more economical choice. In this paper, we present an architecture that is suited to run multiple very demanding video processing applications in real time for the consumer market.
The advent of High Definition television has increased the demand for displays capable of showing higher-resolution pictures. However, the availability of high-definition video broadcasts is still very limited, so one might expect that only in these rare cases can the fine details of a High Definition picture be fully appreciated. This is not entirely correct: the high display resolution can be exploited by up-converting Standard Definition video to the higher resolution, creating a close-to-High-Definition experience. In this paper we present a methodology to convert standard definition video into video with up to four times the picture resolution.
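The up-conversion this abstract describes (2x per spatial dimension, 4x the pixels) can be sketched as zero-insertion followed by an interpolation filter. The short linear kernel below is my simplification; the paper's converter would use longer and likely content-adaptive filters to approach a High Definition look.

```python
import numpy as np

def upconvert_2x(frame):
    """Toy 2x up-conversion per dimension (4x picture resolution).

    Zero-insertion followed by a short interpolation filter; the
    3-tap kernel here amounts to linear interpolation and stands in
    for the longer filters a production converter would use.
    """
    kernel = np.array([0.5, 1.0, 0.5])

    def up1d(x, axis):
        shape = list(x.shape)
        shape[axis] *= 2
        up = np.zeros(shape, dtype=float)
        sl = [slice(None)] * x.ndim
        sl[axis] = slice(0, None, 2)
        up[tuple(sl)] = x                        # zero insertion
        return np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode="same"), axis, up)

    return up1d(up1d(frame, 0), 1)
```

Flat areas are preserved exactly in the interior; only the last row/column, where the filter runs off the signal, deviates in this minimal version.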
Arriving at improved picture quality often requires combining various video enhancement algorithms. These algorithms may be interdependent and global or local in nature. It is, therefore, far from trivial to tune the parameters that control an individual algorithm such that the optimal output quality is achieved. Moreover, due to the nature of various algorithms and their interdependencies consistency in both space and time is often sub-optimal. In this paper we introduce an enhancement model that guarantees spatio-temporal consistency while optimizing the parameters that control video enhancement.
De-interlacing of interlaced video doubles the number of lines per picture. As the video signal is sub-Nyquist sampled in the vertical and temporal dimensions, standard up-conversion or interpolation filters cannot be applied. This may explain the large number of de-interlacing algorithms that have been proposed in the literature, ranging from simple intra-field methods to advanced motion-compensated (MC) methods. MC de-interlacing methods are generally far superior to non-MC ones. However, it seems difficult to combine the robustness of an MC de-interlacing algorithm against incorrect motion vectors with the ability to preserve high spatial frequencies. The Majority-Selection de-interlacer, as proposed in this paper, provides a means to combine several strengths of individual de-interlacing algorithms into a single output signal.
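A simple way to see how fusing candidate de-interlacers can work is a per-pixel median, which falls back on the two agreeing candidates when the third (typically the MC candidate with a wrong vector) is an outlier. This is only a stand-in for the paper's Majority-Selection scheme; the function and candidate names are illustrative.

```python
import numpy as np

def fuse_deinterlace(line_avg, temporal, mc):
    """Per-pixel median fusion of three de-interlacing candidates.

    line_avg : intra-field line average (robust, soft)
    temporal : sample from the neighbouring field (sharp on statics)
    mc       : motion-compensated candidate (sharp, but fails on
               incorrect vectors)
    The median keeps the MC sharpness where the candidates agree and
    rejects the MC value where it is an outlier.
    """
    return np.median(np.stack([line_avg, temporal, mc]), axis=0)
```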
KEYWORDS: Video, Video coding, Digital filtering, Receivers, Motion estimation, Televisions, Digital video discs, Semantic video, Computer programming, Image processing
Although comparisons of the effectiveness of MPEG-2 coding on interlaced and progressive sources have been reported in the literature, we think some very important aspects are missing in the research so far. In particular, the differences in resulting blocking artifacts are neglected, while usually only scenes with abundant vertical detail are evaluated. From our experiments, we conclude that the general opinion concerning the effectiveness of MPEG-2 coding on interlaced picture material is likely biased by the focus on challenging sequences only, while the omission of blockiness metrics in the evaluation increases this bias significantly further.
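One way to include blockiness in such an evaluation is a boundary-to-interior step ratio: compare the mean absolute luma step across 8x8 coding-block boundaries with the mean step everywhere else. The metric below is a generic sketch of that idea, not the metric used in the paper.

```python
import numpy as np

def blockiness(img, bs=8):
    """Blockiness score: mean absolute neighbour step across bs x bs
    block boundaries divided by the mean step in block interiors.
    Roughly 1 for an artifact-free image, > 1 when block edges show."""
    dh = np.abs(np.diff(img, axis=1))      # horizontal neighbour steps
    dv = np.abs(np.diff(img, axis=0))      # vertical neighbour steps
    hb = dh[:, bs - 1::bs]                 # steps across column boundaries
    vb = dv[bs - 1::bs, :]                 # steps across row boundaries
    h_in = np.delete(dh, np.s_[bs - 1::bs], axis=1)
    v_in = np.delete(dv, np.s_[bs - 1::bs], axis=0)
    boundary = (hb.mean() + vb.mean()) / 2
    interior = (h_in.mean() + v_in.mean()) / 2
    return boundary / interior
```

Applied per frame over a sequence, such a score would make the blocking differences between interlaced and progressive coding visible instead of relying on vertical-detail scenes alone.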
KEYWORDS: Motion estimation, Linear filtering, Signal to noise ratio, Motion analysis, Spatial frequencies, Optical filters, Statistical analysis, Error analysis, Video processing, Video
The use of interpolation filters in a motion estimator to realize sub-pixel shifts may lead to unintentional preferences for some velocities over others. In this paper we analyze this phenomenon, focusing on the case of interlaced image data, where the problem leads to the most pronounced errors. Linear interpolators, applied either directly or indirectly using generalized sampling, are discussed. The conclusions are applicable to any type of motion estimator.
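The velocity preference can be demonstrated numerically: a bilinear half-pel interpolator low-pass filters the noise in the reference frame, so the match error at a half-pel candidate is systematically lower than at the true integer position. The setup below (flat area plus independent noise, true motion zero) is my minimal illustration of the effect, not the paper's analysis.

```python
import numpy as np

def sad(a, b):
    """Mean absolute difference, the usual block-matching criterion."""
    return np.abs(a - b).mean()

rng = np.random.default_rng(0)
sigma = 2.0
# Flat image area with independent sensor noise in both frames;
# the true displacement between them is exactly zero.
prev = 128 + sigma * rng.standard_normal(10000)
cur = 128 + sigma * rng.standard_normal(10000)

sad_integer = sad(prev[:-1], cur[:-1])            # candidate v = 0
half_pel = 0.5 * (prev[:-1] + prev[1:])           # bilinear v = 0.5
sad_half = sad(half_pel, cur[:-1])
# sad_half comes out smaller although the true displacement is 0,
# i.e. the estimator is biased towards the half-pel velocity.
```

On interlaced data the intra-field interpolation needed to build the reference makes this attenuation, and hence the bias, stronger, which is why the paper focuses on that case.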
Many video processing algorithms can profit from motion information; therefore, motion estimation is often an integral part of advanced video processing algorithms. This paper focuses on the estimation of true-motion vectors, which are required for scan-rate conversion. Two recent motion estimation methods are discussed. By combining these two methods, the major drawbacks of the individual estimators are eliminated. The resulting motion estimator proves superior to alternatives in an evaluation.
We have reported on a single-exposure dual-energy system based on computed radiography (CR) technology. In a clinical study conducted over a two year period, the dual-energy system proved to be highly successful in improving the detection (p=0.0005) and characterization (p=0.005) of pulmonary nodules when compared to conventional screen-film radiography. The basic components of our dual-energy detector system include source filtration with gadolinium to produce a bi-modal x-ray spectrum and a cassette containing four CR imaging plates. The front and back plates record the low-energy and high-energy images, respectively, and the middle two plates serve as an intermediate filter. Since our initial report, a number of improvements have been made to make the system more practical. An automatic registration algorithm based on image features has been developed to align the front and back image plates. There have been two improvements in scatter correction: a simple correction is now made to account for scatter within the multi-plate detector; and a correction algorithm is applied to account for scatter variations between patients. An improved basis material decomposition (BMD) algorithm has been developed to facilitate automatic operation of the algorithm. Finally, two new noise suppression techniques are under investigation: one adjusts the noise filtering parameters depending on the strength of edge signals in the detected image in order to greatly reduce quantum mottle while minimizing the introduction of artifacts; a second routine uses knowledge of the region of valid low-energy and high-energy image data to suppress noise with minimal introduction of artifacts. This paper is a synthesis of recent work aimed at improving the performance of dual-energy CR conducted at three institutions: Philips Medical Systems, the University of Wisconsin, and Duke University.
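The core of any dual-energy technique, including the basis material decomposition this abstract refines, builds on weighted log subtraction of the low- and high-energy images. The sketch below shows that classic step under a simple Beer-Lambert model; the paper's BMD algorithm is more elaborate, and the weight value here is illustrative.

```python
import numpy as np

def dual_energy_subtract(low, high, w):
    """Weighted log subtraction of a dual-energy image pair (sketch).

    low, high : low- and high-energy transmission images (values > 0)
    w         : tissue-cancellation weight; choosing w as the ratio of
                a tissue's attenuation at the two energies cancels
                that tissue in the output
    """
    return np.log(high) - w * np.log(low)
```

With the weight set to the soft-tissue attenuation ratio, the output depends on bone thickness only, which is what makes nodule detection against rib shadows easier.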