Video compression algorithms such as H.264 offer much potential for parallel processing that is not always exploited by
the technology of a particular implementation. Consumer mobile encoding devices often achieve real-time performance
and low power consumption through parallel processing in Application Specific Integrated Circuit (ASIC) technology,
but many other applications require a software-defined encoder. High quality compression features needed for some
applications such as 10-bit sample depth or 4:2:2 chroma format often go beyond the capability of a typical consumer
electronics device. An application may also need to efficiently combine compression with other functions such as noise
reduction, image stabilization, real time clocks, GPS data, mission/ESD/user data or software-defined radio in a low
power, field upgradable implementation.
Low power, software-defined encoders may be implemented using a massively parallel memory-network processor array
with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to
operate more efficiently than conventional DSP or CPU technology. A dataflow programming methodology may be
used to express all of the encoding processes including motion compensation, transform and quantization, and entropy
coding. This is a declarative programming model in which the parallelism of the compression algorithm is expressed as
a hierarchical graph of tasks with message communication. Data parallel and task parallel design patterns are supported
without the need for explicit global synchronization control.
An example is described of an H.264 encoder developed for a commercially available, massively parallel memory-network processor device.
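The dataflow model described above, in which tasks communicate only by messages and need no global synchronization, can be illustrated with a toy two-stage pipeline. This is a hedged sketch in plain Python, not HyperX tooling; the task names and the trivial stand-ins for the transform/quantization and entropy-coding stages are illustrative assumptions:

```python
import threading, queue

def transform_quantize(inbox, outbox):
    """First pipeline stage: a toy quantization of each incoming block."""
    for block in iter(inbox.get, None):          # None is the end-of-stream token
        outbox.put([round(v / 8) for v in block])
    outbox.put(None)                             # forward end-of-stream

def entropy_code(inbox, results):
    """Second pipeline stage: a trivial stand-in for entropy coding."""
    for block in iter(inbox.get, None):
        results.append(sum(block))

# Wire the tasks together with message queues; each runs independently.
q1, q2, coded = queue.Queue(), queue.Queue(), []
t1 = threading.Thread(target=transform_quantize, args=(q1, q2))
t2 = threading.Thread(target=entropy_code, args=(q2, coded))
t1.start(); t2.start()
for blk in ([8, 16, 24], [32, 40, 48]):
    q1.put(blk)
q1.put(None)
t1.join(); t2.join()
# coded == [6, 15]
```

Note that the producer and both consumers never share state directly; all coordination happens through the queues, mirroring the hierarchical task-graph model.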
Coherent Logix has implemented a digital video stabilization algorithm for use in soldier systems and small unmanned
air / ground vehicles that focuses on significantly reducing the size, weight, and power as compared to current
implementations. The stabilization application was implemented on the HyperX architecture using a dataflow
programming methodology and the ANSI C programming language. The initial implementation is capable of stabilizing
an 800 x 600, 30 fps, full-color video stream with a 53 ms frame latency using a single 100-DSP-core HyperX hx3100™
processor running at less than 3 W power draw. By comparison, an Intel Core 2 Duo processor running the same base
algorithm on a 320 x 240, 15 fps stream consumes on the order of 18 W. The HyperX implementation is an overall 100x
improvement in performance (processing bandwidth increase times power improvement) over the GPP-based platform.
In addition, the implementation requires only a minimal number of components to interface directly to the imaging sensor
and helmet-mounted display, or the same computing architecture can be used to generate software-defined radio
waveforms for communications links. In this application, the global motion due to the camera is measured using a
feature-based algorithm (an 11 x 11 Difference-of-Gaussian filter and Features from Accelerated Segment Test) and model
fitting (Random Sample Consensus). Features are matched in consecutive frames and a control system determines the
affine transform to apply to the captured frame that will remove or dampen the camera / platform motion on a frame-by-frame
basis.
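As a rough illustration of the model-fitting stage, the following numpy sketch estimates a 2-D affine motion model from feature matches with Random Sample Consensus. This is a minimal sketch of the generic RANSAC-plus-affine idea, not the HyperX implementation; the function names, iteration count, and inlier tolerance are assumptions:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src -> dst (both Nx2 arrays)."""
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src; A[0::2, 2] = 1.0   # rows for the x equations
    A[1::2, 3:5] = src; A[1::2, 5] = 1.0   # rows for the y equations
    params, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
    return params.reshape(2, 3)            # [[a, b, tx], [c, d, ty]]

def apply_affine(M, pts):
    return pts @ M[:, :2].T + M[:, 2]

def ransac_affine(src, dst, iters=200, tol=2.0, rng=None):
    """Sample minimal 3-point sets, fit, and keep the model with most inliers."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        inliers = np.linalg.norm(apply_affine(M, src) - dst, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on all inliers for the final model
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers
```

The final refit over all inliers is the usual way to reduce the noise of the minimal-sample estimate before the transform is handed to the stabilization control loop.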
Defect inspection metrology is an integral part of the yield ramp and process monitoring phases of semiconductor manufacturing. High aspect ratio structures have been identified in the ITRS as critical structures where there are no known manufacturable solutions for defect detection. We present case studies of a new inspection technology based on digital holography that addresses this need. Digital holography records the amplitude and phase of the wavefront from the target object directly to a single image acquired by a CCD camera. Using deep ultraviolet laser illumination, digital holography is capable of resolving phase differences corresponding to height differences as small as several nanometers. Thus, the technology is well suited to the task of finding defects on semiconductor wafers. We present a study of several defect detection benchmark wafers, and compare the results of digital holographic inspection to other wafer inspection technologies. Specifically, digital holography allows improved defect detection on high aspect ratio features, such as improperly etched contacts. In addition, the phase information provided by digital holography allows us to visualize the topology of defects, and even generate three-dimensional images of the wafer surface comparable to scanning electron microscope (SEM) images. These results demonstrate the unique defect detection capabilities of digital holography.
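The phase-to-height relationship that underlies these nanometer-scale measurements can be sketched as follows. In reflection, a height step h changes the optical path by 2h, so phase = 4πh/λ. This is a generic interferometric relation, not the paper's calibration; the 266 nm wavelength is an assumed deep-UV value:

```python
import numpy as np

# Assumed deep-UV illumination wavelength; the abstract does not specify it.
WAVELENGTH_NM = 266.0

def phase_to_height(phase_rad):
    """Surface height (nm) from measured phase, for reflection geometry:
    phase = 4*pi*h/lambda  =>  h = phase * lambda / (4*pi)."""
    return phase_rad * WAVELENGTH_NM / (4.0 * np.pi)
```

Under this assumption, resolving one hundredth of a fringe (2π/100) corresponds to a height of λ/200, i.e. about 1.3 nm, consistent with the few-nanometer sensitivity quoted above.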
Automated image registration based on pattern recognition is a critical procedure in many applications of machine vision and is essential for accurate navigation and change detection. In this paper, an overview of the specific applications of image registration in wafer inspection is given, followed by a case study in the application of image registration for direct-to-digital holography (DDH) wafer inspection. A complete system of novel algorithms for holographic image registration is presented, capable of accepting a variety of data streams as inputs: (1) complex frequency data; (2) complex spatial data; (3) magnitude data extracted from holograms; (4) phase data extracted from holograms; and (5) intensity-only data. This flexibility facilitates the development of faster, more reliable, and more efficient DDH processing systems, which is important in system optimization and production. In particular, the system enables the use of the full complex wavefront, which contains both reflectance and structural topology information, in the registration process. The added information contained in the wavefront can be utilized for increased robustness and computational efficiency. Both the theory and implementation of the proposed registration system are briefly described within the framework of DDH processing for wafer inspection tasks. Several examples of defect detection and wafer alignment are given with estimates of accuracy and robustness.
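For context, a standard frequency-domain registration baseline is phase correlation, which likewise operates on complex spectra. The sketch below is that generic baseline, not the DDH system's algorithm; it recovers an integer translation from the normalized cross-power spectrum:

```python
import numpy as np

def phase_correlation(ref, test):
    """Estimate the integer (dy, dx) translation taking ref into test
    via the normalized cross-power spectrum."""
    cross = np.fft.fft2(test) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12        # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real       # impulse at the true shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts beyond half the image size wrap around to negative offsets
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

Because only the spectral phase is kept, the peak is sharp and largely insensitive to global intensity differences between the two images, which is one reason frequency-domain methods suit holographic data.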
C. Thomas, Tracy Bahm, Larry Baylor, Philip Bingham, Steven Burns, Matt Chidley, Long Dai, Robert Delahanty, Christopher Doti, Ayman El-Khashab, Robert Fisher, Judd Gilbert, James Goddard, Gregory Hanson, Joel Hickson, Martin Hunt, Kathy Hylton, George John, Michael Jones, Ken Macdonald, Michael Mayo, Ian McMackin, Dave Patek, John Price, David Rasmussen, Louis Schaefer, Thomas Scheidt, Mark Schulze, Philip Schumaker, Bichuan Shen, Randall Smith, Allen Su, Kenneth Tobin, William Usry, Edgar Voelkl, Karsten Weber, Paul Jones, Robert Owen
KEYWORDS: Holograms, Digital holography, Holography, Semiconducting wafers, Cameras, Deep ultraviolet, Spatial frequencies, Beam splitters, Digital video recorders, Fourier transforms
A method for recording true holograms directly to a digital video medium in a single image has been invented. This technology makes the amplitude and phase for every pixel of the target object wave available. Since phase is proportional wavelength, this makes high-resolution metrology an implicit part of the holographic recording. Measurements of phase can be made to one hundredth or even one thousandth of a wavelength, so the technology is attractive for dining defects on semiconductor wafers, where feature sizes are now smaller than the wavelength of even deep UV light.
The automatic classification of defects found on semiconductor wafers using a scanning electron microscope (SEM) is a complex task that involves many steps. The process includes re-detecting the defect, measuring attributes of the defect, and automatically assigning a classification. In many cases, especially during product ramp-up, and when multiple products are manufactured in the same line, there are few training examples for an automatic defect classification (ADC) system. This condition presents a problem for traditional supervised parametric and nonparametric learning techniques. In this paper we investigate the attributes of several approaches to ADC and compare their performance under a variety of available training data scenarios. We have chosen to characterize the attributes and performance of a traditional K-nearest neighbor classifier, a probabilistic neural network (PNN), and a rule-based classifier in the context of SEM ADC. The PNN classifier is a nonparametric supervised classifier that is built around a radial basis function (RBF) neural network architecture. A basic summary of the PNN will be presented along with the generic strengths and weaknesses described in the literature and observed with actual semiconductor defect data. The PNN classifier is able to manage conditions such as non-convex class distributions and single-class multiple clusters in feature space. A rule-based classifier producing built-in core classes provided by the Applied Materials SEMVision tool will be characterized in the context of both few examples and no examples. An extensive set of fab-generated data is used to characterize the performance of these ADC approaches. Typical data sets contain from 30 to greater than 200 examples. The number of classes in the data sets ranges from 4 to more than 12.
The conclusions reached from this analysis indicate that the strengths of each method are evident under specific conditions that are related to different stages within the VLSI yield curve, and to the number of different products in the line.
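A minimal sketch of the PNN decision rule described above: one Gaussian RBF kernel is centered on each training sample, and the class score is the average kernel response over that class. The kernel width and toy data are assumptions; this is the textbook PNN, not the tool's implementation:

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=1.0):
    """Probabilistic neural network: Parzen-window density estimate per
    class (one Gaussian kernel per training sample), highest score wins."""
    d2 = np.sum((train_X - x) ** 2, axis=1)          # squared distances
    k = np.exp(-d2 / (2.0 * sigma ** 2))             # RBF kernel responses
    classes = np.unique(train_y)
    scores = np.array([k[train_y == c].mean() for c in classes])
    return classes[np.argmax(scores)]
```

Because each class density is a sum of local kernels, the rule handles non-convex class regions and multiple clusters per class, which matches the strengths noted above; its weakness is that every training sample must be stored and evaluated.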
Researchers at the Oak Ridge National Laboratory have been developing a method for measuring color quality in textile products using a tri-stimulus color camera system. Initial results of the Imaging Tristimulus Colorimeter (ITC) were reported during 1999. These results showed that the projection onto convex sets (POCS) approach to color estimation could be applied to complex printed patterns on textile products with high accuracy and repeatability. Image-based color sensors used for on-line measurement are not colorimetric by nature and require a non-linear transformation of the component colors based on the spectral properties of the incident illumination, imaging sensor, and the actual textile color. Our earlier work reports these results for a broad-band, smoothly varying D65 standard illuminant. To move the measurement to the on-line environment with continuously manufactured textile webs, the illumination source becomes problematic. The spectral content of these light sources varies substantially from the D65 standard illuminant and can greatly impact the measurement performance of the POCS system. Although absolute color measurements are difficult to make under different illumination, referential measurements to monitor color drift provide a useful indication of product quality. Modifications to the ITC system have been implemented to enable the study of different light sources. These results and the subsequent analysis of relative color measurements will be reported for textile products.
This paper describes an algorithm for the automatic segmentation and representation of surface structures and non-uniformities in an industrial setting. The automatic image processing and analysis algorithm is developed as part of a complete on-line web characterization system of a paper making process at the wet end. The goal is to: (1) link certain types of structures on the surface of the web to known machine parameter values, and (2) find the connection between detected structures at the beginning of the line and defects seen on the final product. Images of the pulp mixture, carried by a fast moving table, are obtained using a stroboscopic light and a CCD camera. This characterization algorithm succeeded where conventional contrast and edge detection techniques failed due to a poorly controlled environment. The images obtained have poor contrast and contain noise caused by a variety of sources.
The paper industry has long had a need to better understand and control its papermaking process upstream, specifically at the wet end in the forming section of a paper machine. A vision-based system is under development that addresses this need by automatically measuring and interpreting the pertinent paper web parameters at the wet end in real time. The wet-end characterization of the paper web by a vision system involves a 4D measurement of the slurry in real time. These measurements include the 2D spatial information, the intensity profile, and the depth profile. This paper describes the real-time depth profile measurement system for the high-speed moving slurry. A laser line-based measurement method is used with a high-speed programmable camera to directly measure slurry height. The camera is programmed with a profile algorithm, producing depth data at fast sampling rates. Analysis and experimentation have been conducted to optimize the system for the characteristics of the slurry and laser line image. On-line experimental results are presented.
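The per-column line-extraction step at the heart of such a profile algorithm can be sketched as follows. The parabolic subpixel refinement is a common choice assumed here for illustration, not taken from the paper; the triangulation calibration that converts line displacement to height is omitted:

```python
import numpy as np

def line_profile(frame):
    """Per-column laser-line row position with subpixel refinement.
    frame: 2-D intensity image containing one bright, roughly horizontal
    stripe. A parabola is fitted through the brightest pixel of each
    column and its two vertical neighbors."""
    p = np.clip(frame.argmax(axis=0), 1, frame.shape[0] - 2)  # coarse peak rows
    cols = np.arange(frame.shape[1])
    a, b, c = frame[p - 1, cols], frame[p, cols], frame[p + 1, cols]
    denom = a - 2.0 * b + c
    denom = np.where(denom == 0.0, 1.0, denom)        # flat peak: no refinement
    return p + np.clip(0.5 * (a - c) / denom, -0.5, 0.5)
```

Each column reduces to a single height sample, which is what allows a camera programmed with such a kernel to emit depth data at line rate rather than full frames.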
The high-speed production of textiles with complicated printed patterns presents a difficult problem for a colorimetric measurement system. Accurate assessment of product quality requires a repeatable measurement using a standard color space, such as CIELAB, and the use of a perceptually based color difference formula, e.g. the ΔE_CMC color-difference formula. Image based color sensors used for on-line measurement are not colorimetric by nature and require a non-linear transformation of the component colors based on the spectral properties of the incident illumination, imaging sensor, and the actual textile color. This research and development effort describes a benchtop, proof-of-principle system that implements a projection onto convex sets (POCS) algorithm for mapping component color measurements to standard tristimulus values and incorporates structural and color based segmentation for improved precision and accuracy. The POCS algorithm consists of determining the closed convex sets that describe the constraints on the reconstruction of the true tristimulus values based on the measured imperfect values. We show that using a simulated D65 standard illuminant, commercial filters and a CCD camera, accurate (under perceptibility limits) per-region ΔE_CMC values can be measured on real textile samples.
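The POCS mechanism itself, alternating projections onto closed convex sets until a point in their intersection is reached, can be demonstrated on a toy pair of sets (a hyperplane and a box). The actual constraint sets for color reconstruction are those of the paper; the sets below are illustrative stand-ins:

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    """Projection onto the box constraint lo <= x <= hi."""
    return np.clip(x, lo, hi)

def project_hyperplane(x, a, b):
    """Projection onto the hyperplane {x : a.x = b}."""
    return x - (a @ x - b) / (a @ a) * a

def pocs(x0, a, b, iters=200):
    """Alternate projections onto the two convex sets; converges to a
    point in their intersection when the intersection is nonempty."""
    x = np.asarray(x0, float).copy()
    for _ in range(iters):
        x = project_box(project_hyperplane(x, a, b))
    return x
```

Each projection is cheap and closed-form, which is what makes POCS attractive for on-line measurement: the cost per iteration is a few vector operations regardless of how the constraint sets are defined.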
Inspection and identification of cylindrical or conical shaped objects presents a unique challenge for a machine vision system. Due to the circular nature of the objects it is difficult to image the whole object using traditional area cameras and image capture methods. This work describes a unique technique to acquire a 2D image of the entire surface circumference of a cylindrical/conical shaped object. The specific application of this method is the identification of large caliber ammunition rounds in the field as they are transported between or within vehicles. The proposed method utilizes a line scan camera in combination with high speed image acquisition and processing hardware to acquire images from multiple cameras and generate a single, geometrically accurate, surface image. The primary steps involved are the capture of multiple images as the ammunition moves by on the conveyor followed by warping to correct for the distortion induced by the curved projectile surface. The individual images are then tiled together to form one 2D image of the complete circumference. Once this image has been formed an automatic identification algorithm begins the feature extraction and classification process.
The approach presented in this work combines the high-speed nature of pixel-based processing with a standard feature-based classifier to obtain a fast, robust identification algorithm for artillery ammunition. The algorithm uses the Sobel kernel to estimate the vertical intensity gradient of an electronic image of a projectile's circumference. This operation is followed with a directed Hough transform at a theta of 0 degrees, resulting in a one-dimensional vector representing the magnitude and location of horizontal attributes. This sequence of operations generates a compact description of the attributes of interest which can be computed at high speed, has no threshold-based parameters, and is robust to degraded images. In the classification stage a fixed-length feature vector is generated by sampling the Hough vector at the spatial locations included in the union of attribute locations from each possible projectile type. The advantages of generating a feature set in this manner are that no high-level algorithms are necessary to detect the spatial location of attributes and the feature vector is compact. Features generated using this method have been used with a Mahalanobis distance, nearest mean classifier for the successful demonstration of a proof-of-concept system that automatically identifies 155 mm projectiles.
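For a Hough transform directed at theta = 0 degrees, the accumulator collapses to a row-wise accumulation of the vertical-gradient magnitude, one bin per row. The sketch below illustrates that reduction; the small convolution helper is ours (to avoid a SciPy dependency), and the pipeline is a simplified reading of the algorithm, not the fielded code:

```python
import numpy as np

SOBEL_Y = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], float)

def filter2d_valid(img, k):
    """Tiny 'valid'-mode 2-D filtering (correlation) without SciPy."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * img[i:i + out.shape[0], j:j + out.shape[1]]
    return out

def horizontal_attribute_vector(img):
    """Sobel vertical gradient followed by a theta = 0 directed Hough
    transform, i.e. summing gradient magnitude along each row. The result
    is a 1-D vector of horizontal-attribute strength versus row."""
    gy = filter2d_valid(img, SOBEL_Y)
    return np.abs(gy).sum(axis=1)
```

A horizontal band in the image produces peaks in the vector at the band's top and bottom edges, with no thresholds involved, matching the threshold-free property claimed above.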
The use of moment invariants for the detection of flaws in automated image processing inspection of printed graphic material is investigated. Prior work with moment invariants has concentrated on two-dimensional image pattern recognition. A major limitation in pattern recognition applications has been the segmentation of the image from its background. Automated image processing inspection of printed material does not suffer from this limitation because a standard image background exists. The potential for separating flawed and unflawed printed material using moment invariants is demonstrated with formal statistical experiments.
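The simplest of the classical moment invariants, phi1 = eta20 + eta02, can be sketched directly from its definition (central moments normalized by powers of the zeroth moment). This is the standard Hu formulation, shown here for illustration rather than taken from the paper's feature set:

```python
import numpy as np

def hu_first_invariant(img):
    """First Hu moment invariant phi1 = eta20 + eta02, invariant to
    translation (via central moments) and scale (via normalization)."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()                              # zeroth moment (mass)
    xbar = (x * img).sum() / m00                 # centroid
    ybar = (y * img).sum() / m00
    mu20 = (((x - xbar) ** 2) * img).sum()       # second central moments
    mu02 = (((y - ybar) ** 2) * img).sum()
    # normalized central moments: eta_pq = mu_pq / m00**((p+q)/2 + 1)
    return (mu20 + mu02) / m00 ** 2
```

Because the moments are computed over the full image against a standard background, no segmentation step is needed, which is the property the abstract highlights.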
A digital image processing inspection system is under development at Oak Ridge National Laboratory that will locate image features on printed material and measure distances between them to accuracies of 0.001 in. An algorithm has been developed for this system that can locate unique image features to subpixel accuracies. It is based on a least-squares fit of a paraboloid function to the surface generated by correlating a reference image feature against a test image search area. Normalizing the correlation surface makes the algorithm robust in the presence of illumination variations and local flaws. Subpixel accuracies of better than 1/16 of a pixel have been achieved using a variety of different reference image features.
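The paraboloid-fit idea can be sketched with a common simplification: fit a quadratic through the correlation peak and its two neighbors along each axis, instead of the full least-squares paraboloid over a neighborhood. This sketch assumes the integer peak is not on the border of the surface:

```python
import numpy as np

def subpixel_peak(corr):
    """Refine the integer maximum of a correlation surface to subpixel
    accuracy via a 3-point quadratic fit along each axis. The vertex of a
    parabola through (-1, a), (0, b), (1, c) lies at 0.5*(a - c)/(a - 2b + c)."""
    r, c = np.unravel_index(np.argmax(corr), corr.shape)
    def refine(lo, mid, hi):
        d = lo - 2.0 * mid + hi
        return 0.0 if d == 0.0 else 0.5 * (lo - hi) / d
    return (r + refine(corr[r - 1, c], corr[r, c], corr[r + 1, c]),
            c + refine(corr[r, c - 1], corr[r, c], corr[r, c + 1]))
```

On a surface that is locally quadratic near its peak, the refinement is exact; on real normalized-correlation surfaces it typically recovers the peak to a small fraction of a pixel, consistent with the 1/16-pixel figure quoted above.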
A digital machine-inspection system is being developed at Oak Ridge National Laboratory to detect flaws on printed graphic images. The inspection is based on subtraction of a digitized test image from a reference image to determine the location, number, extent, and contrast of potential flaws. When performing subtractive analysis on the digitized information, two sources of errors in the amplitude of the difference image can develop: (1) spatial misregistration of the reference and test sample, or (2) random fluctuations in the printing process. Variations in printing and registration between samples will generate topological artifacts related to surface structure, which is referred to as edge noise in the difference image. Most feature extraction routines require that the difference image be relatively free of noise to perform properly. A novel algorithm has been developed to filter edge noise from the difference images. The algorithm relies on the a priori assumption that edge noise will be located near locations having a strong intensity gradient in the reference image. The filter is based on the structure of the reference image and is used to attenuate edge features in the difference image. The filtering algorithm, consisting of an image multiplication, a global intensity threshold, and an erosion/dilation, has reduced edge noise by 98% over the unfiltered image and can be implemented using off-the-shelf hardware.
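A simplified sketch of the reference-gradient masking step follows. This version zeroes rather than attenuates masked pixels, uses a one-pixel dilation only, and the gradient threshold is an assumed value; the fielded algorithm combines multiplication, a global threshold, and erosion/dilation as described above:

```python
import numpy as np

def gradient_magnitude(img):
    """Gradient magnitude of the reference image (strong-edge locator)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def suppress_edge_noise(diff, ref, grad_thresh=0.2):
    """Zero out difference-image responses near strong reference edges,
    where misregistration artifacts ('edge noise') concentrate."""
    edges = gradient_magnitude(ref) > grad_thresh
    # 3x3 dilation without SciPy: OR of the eight shifted copies
    dil = edges.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            dil |= np.roll(np.roll(edges, dy, axis=0), dx, axis=1)
    return np.where(dil, 0.0, diff)
```

The a priori assumption is the same one the abstract states: edge noise lives where the reference has a strong intensity gradient, so masking those neighborhoods leaves genuine flaws in flat regions untouched.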