Our goals in hyperspectral point target detection have been to develop a methodology for algorithm comparison and to advance point target detection algorithms through a fundamental understanding of spatial and spectral statistics. In this paper, we review our methodology and present new metrics. We demonstrate improved performance by making better estimates of the covariance matrix. We have found that the use of covariance matrices of statistically stationary segments in the matched-filter algorithm improves the receiver operating characteristic curves; proper segment selection for each pixel should be based on its neighboring pixels. We develop a new type of local covariance matrix, which can be implemented in principal-component space and which also shows improved performance based on our metrics. Finally, methods of fusing the segmentation approach with the local covariance matrix dramatically improve performance at low false-alarm rates while maintaining performance at higher false-alarm rates.
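As an illustration of the segment-based approach, the sketch below applies a spectral matched filter using per-segment background statistics; the function, its diagonal-loading regularization, and the scoring convention are our own assumptions, not the authors' implementation.

```python
import numpy as np

def segment_matched_filter(cube, labels, target):
    """Matched filter driven by per-segment background statistics.

    cube   : (rows, cols, bands) hyperspectral image
    labels : (rows, cols) integer map of statistically stationary segments
    target : (bands,) target signature
    Assumes each segment holds enough pixels for a stable covariance.
    """
    rows, cols, bands = cube.shape
    scores = np.zeros((rows, cols))
    for seg in np.unique(labels):
        pix = cube[labels == seg].reshape(-1, bands)
        mu = pix.mean(axis=0)
        # Diagonal loading (illustrative) guards against ill-conditioning
        cov = np.cov(pix, rowvar=False) + 1e-6 * np.eye(bands)
        w = np.linalg.solve(cov, target - mu)        # whitened target direction
        scores[labels == seg] = (pix - mu) @ w / np.sqrt((target - mu) @ w)
    return scores
```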
A recently developed technique of histogram-based segmentation of hyperspectral data allows for a plethora of segmentations. The user can specify the desired number of segmentation levels, the minimum number of pixels defining a peak, and the degree of non-linearity in the mapping from principal-component floating-point values to histogram bins, all of which affect the derived segmentation. In the present work, we extend previous work, which arrives at a small range of clusters or segmentation levels from the image itself. Within this range, we seek "better" segmentations or possibly a unique representative segmentation. The method starts with an over-fine segmentation, i.e., more segmentation levels than needed, and uses quantitative metrics to measure the "quality" of that segmentation and to guide a compression into a reduced segmentation. If the method has merit, different starts should compress down into comparable segmentations; therefore, a measure to establish the similarity of two or more segmentations was developed. Different quantitative metrics were studied and several modes of compression were examined. Some impressive results are presented, but the methods are not yet robust with respect to segmentation starts, and the best modes of compression remain image dependent.
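The paper's similarity measure is not specified here; as one hedged stand-in, the sketch below scores the agreement between two label maps with normalized mutual information.

```python
import numpy as np

def segmentation_similarity(a, b):
    """Normalized mutual information between two segmentation label maps
    (returns 1.0 for identical partitions; an illustrative choice only)."""
    ua, ia = np.unique(a.ravel(), return_inverse=True)
    ub, ib = np.unique(b.ravel(), return_inverse=True)
    joint = np.zeros((ua.size, ub.size))
    np.add.at(joint, (ia, ib), 1)                    # contingency table
    p = joint / joint.sum()
    pa, pb = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    mi = (p[nz] * np.log(p[nz] / (pa @ pb)[nz])).sum()
    ha = -(pa[pa > 0] * np.log(pa[pa > 0])).sum()
    hb = -(pb[pb > 0] * np.log(pb[pb > 0])).sum()
    return mi / np.sqrt(ha * hb)
```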
In earlier work, we have shown that, starting with the first two or three principal-component images, one can form a two- or three-dimensional histogram and cluster all pixels on the basis of their proximity to the peaks of the histogram. Here, we discuss two major issues which arise in all classification/segmentation algorithms. The first issue concerns the desired range of segmentation levels. We explore this issue by means of plots of histogram peaks versus the scaling parameter used to map into integer bins. By taking into account the role of Pmin, the minimum number of pixels defining a peak in the histogram, we demonstrate the viability of this approach. The second issue is that of devising a merit function for assessing segmentation quality. Our approach is based on statistical tests used in the Automatic Classification of Time Series (ACTS) algorithm and is shown to support and be consistent with the histogram plots.
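A minimal sketch of peak-proximity clustering on a 2-D principal-component histogram follows; the bin count, the 3 x 3 peak test, and the nearest-peak assignment are illustrative stand-ins for the paper's choices.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def histogram_peak_segmentation(pc1, pc2, bins=64, p_min=50):
    """Cluster pixels by proximity to peaks of the 2-D histogram of the
    first two principal-component images; p_min plays the role of Pmin."""
    h, xe, ye = np.histogram2d(pc1.ravel(), pc2.ravel(), bins=bins)
    # Peaks: bins that are 3x3 local maxima holding at least p_min pixels
    peaks = np.argwhere((h == maximum_filter(h, size=3)) & (h >= p_min))
    ix = np.clip(np.digitize(pc1.ravel(), xe) - 1, 0, bins - 1)
    iy = np.clip(np.digitize(pc2.ravel(), ye) - 1, 0, bins - 1)
    # Label each pixel by its nearest peak in histogram-bin coordinates
    d2 = (ix[:, None] - peaks[:, 0]) ** 2 + (iy[:, None] - peaks[:, 1]) ** 2
    return d2.argmin(axis=1).reshape(pc1.shape), len(peaks)
```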
Several approaches for segmenting hyperspectral data and automatically detecting unusual objects in natural scenes are discussed. We demonstrate segmentations of hyperspectral imagery based on the most significant principal components of the hyperspectral data cube. Several applications of the segmented data are treated. Digital morphological operations can be used to isolate segments that match target criteria. Alternatively, background segments can be used to define background endmembers; pixels that are quantitatively spectrally different from the background can then be designated. Analog morphological operations can then be used for clutter rejection and for the detection of objects of particular size and shape.
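As one example of a morphological screen of the kind described, the sketch below keeps only regions of a segment class too small to survive an opening; the structuring-element size is an illustrative parameter.

```python
import numpy as np
from scipy.ndimage import binary_opening

def isolate_small_segments(labels, seg_id, size=5):
    """Opening with a size x size structuring element removes regions large
    enough to contain it; the difference keeps only small, point-like
    candidates of the chosen segment class."""
    mask = labels == seg_id
    opened = binary_opening(mask, structure=np.ones((size, size), bool))
    return mask & ~opened
```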
Two techniques for detecting point targets in hyperspectral imagery are described. The first technique is based on a principal component analysis of the hyperspectral data. We combine the information of the first two principal-component images; the result is a single-image display "summary" of the data cube. The summary frame is used to define image segments. The statistics (means and variances) of each segment of the principal-component images are calculated and a covariance matrix is constructed. The local pixel statistics and the segment statistics are then used to evaluate the extent to which each pixel differs from its surroundings. Point target pixels will have abnormally high values. The second technique operates on each band of the hypercube. A local anti-median of each pixel is taken and is weighted by the standard deviation of the local neighborhood. The results of each band are then combined. Results will be shown for visible, SWIR, and MWIR hyperspectral imagery.
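A minimal per-band sketch of the anti-median technique follows; reading the weighting as division by the local standard deviation, with a 5 x 5 window, is our assumption.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def anti_median_score(band, size=5, eps=1e-6):
    """Anti-median (pixel minus local median) normalized by the local
    standard deviation; point targets give abnormally high scores."""
    band = band.astype(float)
    anti = band - median_filter(band, size=size)
    mean = uniform_filter(band, size=size)
    var = np.maximum(uniform_filter(band ** 2, size=size) - mean ** 2, 0.0)
    return anti / (np.sqrt(var) + eps)

# Per-band scores can then be combined across the cube, e.g. by summation:
# score = sum(anti_median_score(cube[..., b]) for b in range(cube.shape[-1]))
```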
Further refinements are presented on a simple and fast way to cluster/segment hyperspectral imagery. In earlier work, it was shown that, starting with the first two principal-component images, one could form a two-dimensional histogram and cluster all pixels on the basis of their proximity to the peaks. Issues analyzed this year include the proper weighting of the different principal components as a function of peak shape, and automatic methods, based on an entropy measure, to control the number of clusters so that the segmentation of the data produces the most meaningful results. Examples from both visible and infrared hyperspectral data will be shown.
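The abstract does not define its entropy measure; one simple possibility, shown below, is the Shannon entropy of the segment-occupancy histogram, which grows with the number and balance of clusters.

```python
import numpy as np

def segmentation_entropy(labels):
    """Shannon entropy (bits) of the segment occupancy histogram; one
    candidate control quantity for the number of clusters."""
    counts = np.bincount(labels.ravel())
    p = counts[counts > 0] / labels.size
    return -(p * np.log2(p)).sum()
```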
A very simple and fast technique for clustering/segmenting hyperspectral images is described. The technique is based on the histogram of divergence images; namely, single image reductions of the hyperspectral data cube whose values reflect spectral differences. Multi-value thresholds are set from the local extrema of such a histogram. Two methods are identified for combining the information of a pair of divergence images: a dual method of combining thresholds generated from 1D histograms; and a true 2D histogram method. These histogram-based segmentations have a built-in fine to coarse clustering depending on the extent of smoothing of the histogram before determining the extrema. The technique is useful at the fine scale as a powerful single image display summary of a data cube or at the coarser scales as a quick unsupervised classification or a good starting point for an operator-controlled supervised classification. Results will be shown for visible, SWIR, and MWIR hyperspectral imagery.
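A sketch of multi-value thresholding from the extrema of a smoothed divergence-image histogram follows; the Gaussian smoothing and bin count are illustrative, with heavier smoothing merging extrema to give the coarser clusterings described.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def thresholds_from_histogram(divergence_img, bins=256, sigma=2.0):
    """Set class boundaries at the local minima of the smoothed histogram of
    a divergence image; returns the label image and the thresholds."""
    h, edges = np.histogram(divergence_img.ravel(), bins=bins)
    hs = gaussian_filter1d(h.astype(float), sigma)
    # Interior local minima of the smoothed histogram become thresholds
    mins = np.flatnonzero((hs[1:-1] < hs[:-2]) & (hs[1:-1] <= hs[2:])) + 1
    thresholds = edges[mins]
    return np.digitize(divergence_img, thresholds), thresholds
```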
In a previous conference, we described a powerful class of temporal filters with excellent signal to clutter gains in evolving cloud scenes of consecutive IR sequences. The generic temporal filter is a zero-mean damped sinusoid, implemented recursively. The full algorithm, a triple temporal filter (TTF), consists of a sequence of two zero-mean damped sinusoids followed by an exponential averaging filter. The outputs of the first two filters are weakened at strong local edges. Analysis of a real-world database led to two optimized filters: one dedicated to noise-dominated scenes, the other to cloud clutter-dominated scenes; a dual-channel fusion of the two filters has also been implemented in hardware. This paper describes the post-processing and thresholding of the outputs of the filter algorithms. Post-processing on each output frame is implemented by a simple spatial algorithm which searches for maximum linear or pseudo-linear streaks made up of three linked pixels. The output histogram after post-processing is more robust to histogram-based thresholding and in some cases has an improved signal to clutter ratio. The threshold is based on a simple level-occupancy (binary) histogram in which the first gap of four empty levels is determined and a threshold is established based on this gap value and the number of occupied levels in the histogram above the gap. The post-processing and thresholding of the filter outputs are now operating in real-time hardware. Preliminary flight tests of the algorithms operating in real time on a small aircraft demonstrate the viability of the approach on a moving platform. Specific examples and a video of the real-time performance on fixed and moving platforms will be presented at the conference.
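A sketch of the gap-based threshold follows; scanning upward for the first run of four empty gray levels matches the description, while the final adjustment using the count of occupied levels above the gap is simplified away here.

```python
import numpy as np

def gap_threshold(frame, gap_len=4):
    """Threshold from the binary level-occupancy histogram: place the
    threshold at the first run of gap_len empty levels (simplified)."""
    levels = frame.astype(int).ravel()
    lo = levels.min()
    occupied = np.bincount(levels - lo) > 0       # binary occupancy histogram
    run = 0
    for lvl, occ in enumerate(occupied):
        run = 0 if occ else run + 1
        if run == gap_len:
            return lo + lvl - gap_len + 1         # first level of the gap
    return levels.max()                           # no gap found
```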
We describe a recursive temporal filter based on a running estimate of the temporal variance followed by removal of the baseline variance of each pixel. The algorithm is designed for detection/tracking of 'point' targets moving at sub-pixel/frame velocities, 0.02 to 0.50 p/f, in noise-dominated scenarios on staring IR camera data. The technique responds to targets of either polarity. A preprocessing technique, morphological in origin but implemented by median filters, further improves the S/N sensitivity of the algorithm while restricting the result to positive-contrast targets. The computationally simple algorithm has been implemented in hardware and real-time operation is under evaluation. The performance is characterized by specific examples as well as by plots over our extensive database of real data. Detection down to S/N of approximately 3 or less and sensitivity to the appropriate range of velocities is demonstrated.
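A minimal per-pixel sketch of the running-variance idea follows; the exponential weighting, its rate, and the fixed baseline window are our assumptions.

```python
import numpy as np

def variance_filter(frames, alpha=0.1, baseline_frames=50):
    """Recursive (exponentially weighted) temporal variance per pixel, with
    each pixel's quiescent baseline variance removed. Yields variance-excess
    images once the baseline has been formed; moving point targets of either
    polarity raise the variance above baseline."""
    mean = var = baseline = None
    for k, f in enumerate(frames):
        f = f.astype(float)
        if mean is None:
            mean, var = f.copy(), np.zeros_like(f)
            continue
        d = f - mean                       # running mean/variance update
        mean = mean + alpha * d
        var = (1 - alpha) * (var + alpha * d * d)
        if k == baseline_frames:
            baseline = var.copy()          # per-pixel baseline variance
        elif baseline is not None:
            yield var - baseline
```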
In an earlier conference, we introduced a powerful class of temporal filters which have outstanding signal to clutter gains in evolving cloud scenes. The basic temporal filter is a zero-mean damped sinusoid, implemented recursively. Our final algorithm, a triple temporal filter, consists of a sequence of two zero-mean damped sinusoids followed by an exponential averaging filter along with an edge suppression factor. The algorithm was designed, optimized, and tested using a real-world database. We applied the Simplex algorithm to a representative subset of our database to find an improved set of filter parameters. Analysis led to two improved filters: one dedicated to benign clutter conditions and the other to cloud clutter-dominated scenes. In this paper, we demonstrate how a fused version of the two optimized filters further improves performance in severe cloud clutter scenes. The performance characteristics of the filters will be detailed by specific examples and plots. Real-time operation has been demonstrated on laboratory IR cameras.
KEYWORDS: Clouds, Digital filtering, Detection and tracking algorithms, Optical filters, Cameras, Databases, Electronic filtering, Target detection, Interference (communication), Signal to noise ratio
To realize the potential of modern staring IR technology as the basis for an improved IRST, one requires better algorithms for detecting unresolved targets moving at fractions of a pixel per frame time. While available algorithms for such targets in white noise are reasonably good, they have high false alarm rates in non-stationary clutter, such as evolving clouds. We review here a new class of temporal filters which have outstanding signal to clutter gains in evolving clouds and still retain good signal to temporal noise sensitivity in blue sky or night data. The generic temporal filter is a damped sinusoid, implemented recursively. Our final algorithm, a triple temporal filter (TTF) based on six parameters, consists of a sequence of two damped sinusoids followed by an exponential averaging filter, along with an edge suppression feature. Initial tests of the TTF filter concept demonstrated excellent performance in evolving cloud scenes. Three 'trackers' based on the TTF operate in real-time hardware on laboratory IR cameras, including an empirical initial version and two recent forms identified by an optimization routine. The latter two operate best in two distinct realms: one for evolving cloud clutter, the other for temporal noise-dominated scenes such as blue sky or stagnant clouds. Results are presented both as specific examples and as metric plots over an extensive database of local scenes with targets of opportunity.
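To make the TTF building block concrete, the sketch below implements one recursively realized damped sinusoid, forced to zero mean by subtracting a DC-matched exponential average; this construction, the pole placement, and the parameter values are our illustration, not the paper's six-parameter filter or its edge suppression.

```python
import numpy as np

class DampedSinusoidFilter:
    """Second-order recursion whose impulse response is a damped sinusoid
    (pole radius r, frequency omega), applied per pixel to a frame stream.
    Subtracting a DC-matched exponential average zeroes the response to a
    static background."""

    def __init__(self, shape, r=0.9, omega=0.5, beta=0.9):
        self.r, self.omega, self.beta = r, omega, beta
        self.y1 = np.zeros(shape)   # y[n-1]
        self.y2 = np.zeros(shape)   # y[n-2]
        self.e = np.zeros(shape)    # exponential average of the input
        # DC gain of the recursion, used to cancel the mean
        self.dc = r * np.sin(omega) / (1 - 2 * r * np.cos(omega) + r * r)

    def step(self, frame):
        x = frame.astype(float)
        y = (self.r * np.sin(self.omega) * x
             + 2 * self.r * np.cos(self.omega) * self.y1
             - self.r * self.r * self.y2)
        self.y2, self.y1 = self.y1, y
        self.e = (1 - self.beta) * x + self.beta * self.e
        return y - self.dc * self.e
```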
The problem of detection of aircraft at long range in a background of evolving cloud clutter is treated. A staring infrared camera is favored for this application due to its passive nature, day/night operation, and rapid frame rate. The rapid frame rate increases the frame-to-frame correlation of the evolving cloud clutter; cloud-clutter leakage is a prime source of false alarms. Targets of opportunity in daytime imagery were used to develop and compare two algorithm approaches: banks of spatio-temporal velocity filters followed by dynamic-programming-based stage-to-stage association, and a simple recursive temporal filter arrived at from a singular-value decomposition analysis of the data. To quantify the relative performance of the two approaches, we modify conventional metrics for signal-to-clutter gains in order to make them more germane to consecutive frame real data processing. The temporal filter, in responding preferentially to pixels influenced by moving point targets over those influenced by drifting clouds, achieves impressive cloud-clutter suppression without requiring subpixel frame registration. The velocity filter technique is roughly half as effective in clutter suppression but is twice as sensitive to weak targets in white noise (close to blue sky conditions). The real-time hardware implementation of the temporal filter is far more practical.
In the companion paper, two algorithms for tracking point targets in consecutive frame staring IR imagery with evolving cloud clutter are described and compared using representative example scenes. Here, our total database of local airborne scenes with targets of opportunity is used for a more quantitative and comprehensive comparison. The use of real-world data, as well as our focus on temporal filtering over large numbers of consecutive frames, triggered a search for more relevant metrics than those available. We present two new metrics which have most of the attributes sought. In each metric, gain is taken as a ratio of output to input signal to clutter. Maximum values rather than statistical measures are used for clutter. In the variation metric (VM), a temporal standard deviation for each pixel over 95 consecutive frames is computed and the maximum non-target result is taken as the input clutter. The input signal, a real target moving with sub-pixel velocity through sampled imagery, is estimated by a reference mean technique. Output signal and clutter are taken as the maximum target-affected and clutter-affected pixels in the algorithm-filtered outputs. In the second metric, the use of an anti-median filter (AM) provides symmetric treatment of input and output as well as of signal and clutter. The maximum target and non-target responses to the AM filter on input frames and output frames define the signal and clutter measures. Our set of real-world data is plotted as output versus input signal to clutter for each metric and each algorithm, and the pros and cons of each metric are discussed. With either metric, the signal to clutter gain ratios are approximately 5-6 dB greater with the temporal filter algorithm than with the velocity filter algorithm.
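A simplified sketch of a VM-style gain follows; for brevity it uses the temporal standard deviation for both input and output and a max over known target pixels in place of the paper's reference-mean signal estimate, so it is a variant, not the published metric.

```python
import numpy as np

def vm_style_gain_db(in_frames, out_frames, target_mask):
    """Signal-to-clutter gain (dB) from temporal standard deviations.

    in_frames, out_frames : (T, rows, cols) stacks before/after filtering
    target_mask           : boolean map of target-affected pixels
    Clutter is the maximum non-target value, as in the variation metric."""
    def s_over_c(stack):
        sd = stack.std(axis=0)
        return sd[target_mask].max() / sd[~target_mask].max()
    return 20 * np.log10(s_over_c(out_frames) / s_over_c(in_frames))
```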
We treat the problem of long range aircraft detection in the presence of evolving cloud clutter. The advantages of a staring infrared camera for this application include passive performance, day and night operation, and rapid frame rate. The latter increases frame correlation of evolving clouds and favors temporal processing. We used targets of opportunity in daytime imagery, which had sub-pixel velocities from 0.1 - 0.5 pixels per frame, to develop and assess two algorithmic approaches. The approaches are: (1) banks of spatio-temporal velocity filters followed by dynamic programming based stage-to-stage association, and (2) a simple recursive temporal filter suggested by a singular value decomposition of the consecutive frame data. In this paper, we outline the algorithms, present representative results in a pictorial fashion, and draw general conclusions on the relative performance. In a second paper, we quantify the relative performance of the two algorithms by applying newly developed metrics to extensive real world data. The temporal filter responds preferentially to pixels influenced by moving point targets over those influenced by drifting clouds and thus achieves impressive cloud clutter suppression without requiring sub-pixel frame registration. It is roughly twice as effective in clutter suppression when results are limited by cloud evolution. However when results are limited by temporal noise (close to blue sky conditions), the velocity filter approach is roughly twice as sensitive to weak targets in our velocity range. Real-time hardware implementation of the temporal filter is far more practical and is underway.
Our algorithm development for point target surveillance is closely meshed with our laboratory IR cameras. The two-stage approach falls into the category of `track before detect' and incorporates dynamic programming optimization techniques. The first stage generates merit scores for each pixel and suppresses clutter by spatial/temporal subtractions from N registered frames of data. The higher the value of the merit score, the more likely that a target is present. In addition to the merit score, the best track associated with each score is stored; together they comprise the merit function. In the second stage, merit functions are associated and dynamic programming techniques are used to create combined merit functions. Nineteen and thirteen frames of data are used to accumulate merit functions. Results using a total of 38 and 39 frames of data are presented for a set of simulated targets embedded in white noise. The result is a high probability of detection and a low false alarm rate down to a signal to noise ratio of about 2.0. Preliminary results for some real targets (extracted from real scenes and then re-embedded in white noise) show a graceful degradation from the results obtained on simulated targets.
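A bare-bones sketch of the dynamic-programming association step follows; the velocity-limited neighborhood maximum is our reading of stage-to-stage association, and the stored best tracks (back-pointers) are omitted.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def accumulate_merit(score_frames, vmax=1):
    """Track-before-detect accumulation: each pixel's merit is its current
    score plus the best predecessor merit within vmax pixels/frame."""
    merit = np.zeros_like(score_frames[0], dtype=float)
    for score in score_frames:
        merit = score + maximum_filter(merit, size=2 * vmax + 1)
    return merit   # threshold high values to declare detections
```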
A survey is presented of algorithms for the display and enhancement of infrared images developed over the last several years by the author and his colleagues. Our algorithms were driven by the need to map the raw recorded signal (digitized to 12 bits for single frames grabbed with our cameras) into 8-bit values for soft-copy display or real-time automatic-contrast operation on live camera imagery. We group the algorithms into two broad classes: global monotonic mappings, in which the radiometric trend from low to high in the recorded image is retained in the displayed images; and mappings which depart from the global/monotonic constraint for purposes of local contrast enhancement. In the discussion of global display algorithms, we compare the Histogram Projection algorithm, devised several years ago and widely used in auto-contrast circuits, with the standard technique of Histogram Equalization. For complex histograms and wide-dynamic-range scenes, a compromise or `hybrid' of the display allocation specified by these two histogram techniques is desirable.
Low contrast details in wide-dynamic-range IR imagery may well be lost in global, monotonic mappings from raw signal levels to 8-bit displays. Local contrast enhancement techniques can bring out such detail through mappings which are no longer global or monotonic. Three groups of algorithms for this purpose are described and compared with regard to degree of enhancement, artifacts, and computational costs. The groups are: local implementations of histogram projection; "modulo" processing, which is a triangular-shaped many-to-one mapping; and high frequency enhancement as implemented in the spatial domain but designed in the spatial frequency domain. Results are illustrated by a representative subset of the numerous IR images surveyed.
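As a concrete reading of "modulo" processing, the sketch below folds raw levels through a triangular many-to-one map; the period is an illustrative tuning parameter.

```python
import numpy as np

def modulo_triangle_map(raw, period=256):
    """Triangular many-to-one mapping onto 8 bits: levels ramp up for one
    period and back down for the next, preserving small local differences
    at the cost of global radiometric order."""
    phase = raw.astype(int) % (2 * period)
    tri = np.where(phase < period, phase, 2 * period - 1 - phase)
    return (tri * 255 // (period - 1)).astype(np.uint8)
```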
Twelve-bit digitized images taken with PtSi Schottky barrier detector arrays have been processed on Sun workstations. Two techniques for 8-bit global display are compared: the standard method of histogram equalization and a newly devised technique of histogram projection. The latter assigns equal dynamic range to each occupied level, while the former does so according to the density of the occupied levels. The projection technique generally gives distinctly superior results based on an extensive set of indoor, outdoor, day, and night imagery. For cases in which the two algorithms have complementary advantages, the techniques can be combined in effect by a weighting of their distribution functions, which often gives the desirable features of each. The new projection algorithm can also be used as a powerful and robust local contrast enhancement technique. An alternative method of contrast enhancement, a global algorithm based on modular (sawtooth) displays, affords a comparable degree of enhancement at less computational cost.
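Histogram projection is simple enough to state in a few lines; the sketch below allocates the 8-bit range equally among occupied 12-bit levels, per the description above (the rounding scheme is our choice).

```python
import numpy as np

def histogram_projection(img12, out_levels=256):
    """Map a 12-bit image to 8 bits by giving each *occupied* input level an
    equal share of the output range, independent of its pixel count."""
    img12 = img12.astype(int)
    occupied = np.unique(img12)
    lut = np.zeros(img12.max() + 1, dtype=np.uint8)
    lut[occupied] = np.round(
        np.linspace(0, out_levels - 1, occupied.size)).astype(np.uint8)
    return lut[img12]
```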