Better understanding of Multi-Domain Battle (MDB) challenges in complex military environments may start by gaining a basic scientific appreciation of the level of generalization and scalability offered by Machine Learning (ML) solutions that are designed, trained, and optimized to achieve a single, specific task, continuously, day and night. We examine the generalization and scalability promises of a modern deep ML solution, applied to a unique spatial-spectral dataset that consists of blackbody-calibrated, longwave infrared spectra of a fixed target site containing three painted metal surrogate tanks deployed in a field of mixed vegetation. Data were collected at roughly six-minute intervals, nearly continuously, for over a year. This includes collection in many atmospheric conditions (rain, snow, sleet, fog, etc.) throughout the year. This paper focuses on data collected by a Telops Hyper-Cam from a 65 m observation tower at a slant range of roughly 550 m from the targets. The dataset is very complex. There are no obvious spectral signatures from the target surfaces. The complexity is due in part to the natural variations of the various types of vegetation, cloud presence, and the changing solar loading conditions over time. This is precisely the environment in which MDB applications must function. We detail some of the many training sets extracted to train different deep learning stacked autoencoder networks. We present performance results with receiver operating characteristic curves, confusion matrices, metric-versus-time plots, and classification maps. We show the performance of ML models trained with data from various time windows, including over complete diurnal cycles, and their performance processing data from different days and environmental conditions.
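For illustration only, the sketch below shows one common way to build a stacked autoencoder for per-pixel spectral classification. It is a minimal PyTorch example with hypothetical band counts, layer widths, and class counts, not the specific network configuration used in this work.

```python
import torch
import torch.nn as nn

N_BANDS = 256          # hypothetical number of LWIR spectral bands
HIDDEN = [128, 64, 32] # hypothetical encoder layer widths
N_CLASSES = 4          # e.g., three target paints plus background (assumed)

class StackedAutoencoderClassifier(nn.Module):
    """Encoder stack (intended to be pretrained layer by layer as
    autoencoders) followed by a small classification head."""
    def __init__(self):
        super().__init__()
        dims = [N_BANDS] + HIDDEN
        self.encoder = nn.Sequential(*[
            layer
            for d_in, d_out in zip(dims[:-1], dims[1:])
            for layer in (nn.Linear(d_in, d_out), nn.ReLU())
        ])
        self.classifier = nn.Linear(HIDDEN[-1], N_CLASSES)

    def forward(self, x):
        return self.classifier(self.encoder(x))

# Greedy pretraining would train each Linear layer to reconstruct its own
# input before fine-tuning the whole stack with a cross-entropy loss.
model = StackedAutoencoderClassifier()
spectra = torch.randn(8, N_BANDS)   # stand-in for calibrated radiance spectra
logits = model(spectra)             # class scores per pixel spectrum
```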
KEYWORDS: Short wave infrared radiation, Data fusion, 3D modeling, Image fusion, Visualization, LIDAR, Data modeling, Image registration, Orthophoto maps, 3D image processing
We focus on the problem of spatial feature correspondence between images generated by sensors operating in different regions of the spectrum, in particular the Visible (Vis: 0.4-0.7 μm) and Shortwave Infrared (SWIR: 1.0-2.5 μm). Under the assumption that only one of the available datasets is geospatially ortho-rectified (e.g., Vis), this spatial correspondence can play a major role in enabling a machine to automatically register SWIR and Vis images representing the same swath, as the first step toward achieving a full geospatial ortho-rectification of, in this case, the SWIR dataset. Assuming further that the Vis images are associated with a Lidar-derived Digital Elevation Model (DEM), corresponding local spatial features between SWIR and Vis images can also lead to the association of all of the additional data available in these sets, to include SWIR hyperspectral and elevation data. Such a data association may also be interpreted as data fusion from these two sensing modalities: hyperspectral and Lidar. We show that, using the Scale-Invariant Feature Transform (SIFT) and the Optimal Randomized RANSAC (RANdom SAmple Consensus) algorithm, a software method can successfully find spatial correspondence between SWIR and Vis images for a complete pixel-by-pixel alignment. Our method is validated through an experiment using a large SWIR hyperspectral data cube, representing a portion of Los Angeles, California, and a DEM with associated Vis images covering a significantly wider area of Los Angeles.
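As a general illustration of the SIFT-plus-RANSAC registration pattern described above (not the authors' exact pipeline), a minimal OpenCV sketch follows; the file names, ratio-test threshold, and reprojection tolerance are placeholders.

```python
import cv2
import numpy as np

# Hypothetical inputs: a single SWIR band and an ortho-rectified Vis image.
swir = cv2.imread("swir_band.png", cv2.IMREAD_GRAYSCALE)
vis = cv2.imread("vis_ortho.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(swir, None)
kp2, des2 = sift.detectAndCompute(vis, None)

# Match descriptors and keep the best candidates (Lowe ratio test).
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# RANSAC rejects mismatched features while estimating a homography,
# which then warps the SWIR band into pixel-by-pixel alignment with Vis.
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
registered = cv2.warpPerspective(swir, H, (vis.shape[1], vis.shape[0]))
```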
We study the generalization and scalability behavior of a deep belief network (DBN) applied to a challenging long-wave infrared hyperspectral dataset, consisting of radiance from several manmade and natural materials within a fixed site located 500 m from an observation tower. The collections cover multiple full diurnal cycles and include different atmospheric conditions. Using complementary priors, the DBN is trained with a greedy algorithm that learns deep, directed belief networks one layer at a time, with the top two layers forming an undirected associative memory. The greedy algorithm initializes a slower learning procedure, which fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of spectral data and their labels, despite significant data variability between and within classes due to environmental and temperature variation occurring within and between full diurnal cycles. We argue, however, that more questions than answers are raised regarding the generalization capacity of these deep nets through experiments aimed at investigating their training and augmented learning behavior.
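To make the greedy, layer-by-layer pretraining idea concrete, here is a minimal numpy sketch of one-step contrastive divergence (CD-1) for a binary restricted Boltzmann machine, with each trained layer's hidden activations feeding the next. The layer sizes, learning rate, and stand-in data are assumptions, not the network used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=10, lr=0.05):
    """One-step contrastive divergence (CD-1) for a binary RBM."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)
    b_h = np.zeros(n_hidden)
    for _ in range(epochs):
        # Positive phase: sample hidden units from the data.
        p_h = sigmoid(data @ W + b_h)
        h = (rng.random(p_h.shape) < p_h).astype(float)
        # Negative phase: one Gibbs step back to visible and hidden units.
        p_v = sigmoid(h @ W.T + b_v)
        p_h_neg = sigmoid(p_v @ W + b_h)
        # Parameter updates from the difference of correlations.
        W += lr * (data.T @ p_h - p_v.T @ p_h_neg) / len(data)
        b_v += lr * (data - p_v).mean(axis=0)
        b_h += lr * (p_h - p_h_neg).mean(axis=0)
    return W, b_h

# Greedy stacking: each RBM's hidden activations become the next layer's input.
x = rng.random((100, 64))          # stand-in for normalized spectra
W1, b1 = train_rbm(x, 32)
h1 = sigmoid(x @ W1 + b1)
W2, b2 = train_rbm(h1, 16)
```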
Following the public release of the Spectral and Polarimetric Imagery Collection Experiment (SPICE) dataset, a persistent imaging experiment dataset collected by the Army Research Laboratory (ARL), the data were analyzed and the materials in the scene characterized temporally and spatially using radiance data. The noise-equivalent spectral radiance provided by the sensor manufacturer was compared with instrument noise calculated from in-scene information and found to be comparable, given differences between laboratory settings and real-life conditions. The processed dataset contains recurring "inconsistent cubes," specifically for data collected immediately after blackbody measurements, which were executed automatically at approximately each hour mark. Omitting these erroneous data, three target detection algorithms (adaptive coherence/cosine estimator, spectral angle mapper, and spectral matched filter) were tested on the temporal data using two target spectra (noon and midnight). The spectral matched filter produced the best detection rate for both the noon and midnight target spectra over a 24-hour period.
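For readers unfamiliar with the best-performing detector above, a compact numpy sketch of a standard spectral matched filter score is shown below; the data layout, background statistics, and target spectrum are placeholders rather than the SPICE processing chain.

```python
import numpy as np

def spectral_matched_filter(cube, target):
    """Score each pixel spectrum against a reference target spectrum (SMF).

    cube:   (n_pixels, n_bands) radiance spectra
    target: (n_bands,) reference target spectrum (e.g., noon or midnight)
    """
    mu = cube.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(cube, rowvar=False))
    d = target - mu
    x = cube - mu
    return (x @ cov_inv @ d) / (d @ cov_inv @ d)

# Hypothetical usage with random stand-in data:
rng = np.random.default_rng(1)
cube = rng.normal(size=(5000, 128))
target = cube[0] + 0.1              # placeholder "target" spectrum
scores = spectral_matched_filter(cube, target)
```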
We study the transfer learning behavior of a Hybrid Deep Network (HDN) applied to a challenging longwave infrared hyperspectral dataset, consisting of radiance from several manmade and natural materials within a fixed site located 500 m from an observation tower, over multiple full diurnal cycles and different atmospheric conditions. The HDN architecture adopted in this study stacks a number of Restricted Boltzmann Machines to form a deep belief network for generative pre-training, or initialization of the weight parameters, and then applies a discriminative learning procedure that fine-tunes all of the weights jointly to improve the network's performance. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of spectral data and their labels, despite significant data variability observed between and within classes due to environmental and temperature variation occurring within full diurnal cycles. We argue, however, that more questions are raised than answered regarding the generalization capacity of these deep nets through experiments aimed at investigating their training and transfer learning behavior in the longwave infrared region of the electromagnetic spectrum.
We continue to highlight the pattern recognition challenges associated with solid target spectral variability in the longwave infrared (LWIR) region of the electromagnetic spectrum for a persistent imaging experiment. The experiment focused on the collection and exploitation of LWIR hyperspectral imagery. We propose two methods for target detection, one based on a repeated-random-sampling adaptation of a single-class support vector machine, and the other based on a longitudinal data model. The defining characteristic of a longitudinal study is that objects are measured repeatedly through time and, as a result, data are dependent. This is in contrast to cross-sectional studies, in which the outcomes of a specific event are observed by randomly sampling from a large population of relevant objects and data are assumed independent. Researchers in the remote sensing community generally assume the problem of object recognition to be cross-sectional. Performance contrast is quantified using a LWIR hyperspectral dataset acquired during three consecutive diurnal cycles, and the results reinforce the need for data models that are more realistic for LWIR spectral data.
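The general flavor of a repeated-random-sampling, single-class support vector machine detector can be sketched as follows; the sampling scheme, kernel, and parameters below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
cube = rng.normal(size=(4000, 60))        # stand-in for LWIR spectra
n_trials, sample_size = 25, 300
votes = np.zeros(len(cube))

# Repeated random-sampling trials: each trial fits a one-class SVM to a random
# background sample and flags the spectra falling outside its learned support.
for _ in range(n_trials):
    idx = rng.choice(len(cube), size=sample_size, replace=False)
    model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(cube[idx])
    votes += (model.predict(cube) == -1)

anomaly_fraction = votes / n_trials   # pixels voted anomalous most often
```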
We introduce an algorithm based on morphological filters with the Stokes parameters that augments the daytime and nighttime detection of weak-signal manmade objects immersed in a predominantly natural background scene. The approach features a tailored sequence of signal-enhancing filters, consisting of core morphological operators (dilation, erosion) and higher-level morphological operations (e.g., spatial gradient, opening, closing) to achieve a desired overarching goal. Using representative data from the SPICE database, the results show that the approach was able to automatically and persistently detect, with a high confidence level, the presence of three mobile military howitzer surrogates (targets) in natural clutter.
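As a minimal sketch of chaining grayscale morphological operators on polarimetric imagery (not the paper's tailored filter sequence), the following uses scipy; the stand-in Stokes images, structuring-element size, and threshold are assumptions.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
S0 = rng.normal(size=(256, 320))          # stand-in total-intensity image
S1 = 0.05 * rng.normal(size=(256, 320))   # stand-in linear-polarization image
dolp = np.abs(S1) / (np.abs(S0) + 1e-6)   # degree-of-linear-polarization proxy

k = np.ones((5, 5))                       # structuring element (assumed size)
opened = ndimage.grey_opening(dolp, footprint=k)     # suppress small bright noise
closed = ndimage.grey_closing(opened, footprint=k)   # fill small dark gaps
gradient = ndimage.morphological_gradient(closed, footprint=k)  # edge emphasis

# A simple detection map: strong, spatially coherent polarimetric responses.
detections = gradient > (gradient.mean() + 3 * gradient.std())
```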
We give updates on a persistent imaging experiment dataset, being considered for public release in the foreseeable future, and present additional observations from analyzing a subset of the dataset. The experiment is a long-term collaborative effort among the Army Research Laboratory, the Army Armament RDEC, and the Air Force Institute of Technology that focuses on the collection and exploitation of longwave infrared (LWIR) hyperspectral imagery. We emphasize the inherent challenges associated with using remotely sensed LWIR hyperspectral imagery for material recognition, and show that this data type violates key data assumptions conventionally used in the scientific community to develop detection/ID algorithms, i.e., normality, independence, and identical distribution. We treat LWIR hyperspectral imagery as longitudinal data, aim to propose a more realistic framework for material recognition as a function of spectral evolution through time, and discuss limitations. The defining characteristic of a longitudinal study is that objects are measured repeatedly through time and, as a result, data are dependent. This is in contrast to cross-sectional studies, in which the outcomes of a specific event are observed by randomly sampling from a large population of relevant objects and data are assumed independent. Researchers in the remote sensing community generally assume the problem of object recognition to be cross-sectional. But through a longitudinal analysis of a fixed site with multiple material types, we quantify and argue that, as data evolve through a full diurnal cycle, pattern recognition problems are longitudinal in nature, and that applying this knowledge may lead to better algorithms.
Our first observations using the longwave infrared (LWIR) hyperspectral data subset of the Spectral and Polarimetric Imagery Collection Experiment (SPICE) database are summarized in this paper, focusing on the inherent challenges associated with using this sensing modality for the purpose of object pattern recognition. Emphasis is also placed on data quality, qualitative validation of expected atmospheric spectral features, and qualitative comparison against another dataset of the same site collected using a different LWIR hyperspectral sensor. SPICE is a collaborative effort between the Army Research Laboratory, the U.S. Army Armament RDEC, and, more recently, the Air Force Institute of Technology. It focuses on the collection and exploitation of longwave and midwave infrared (LWIR and MWIR) hyperspectral and polarimetric imagery. We concluded from this work that the quality of the SPICE hyperspectral LWIR data is categorically comparable to other datasets recorded by a different sensor of similar specifications, and adequate for algorithm research given the scope of SPICE. That scope was to conduct a long-term infrared data collection of the same site with targets, using both sensing modalities, under various weather and non-ideal conditions, and then to use the vast dataset and associated ground truth information to assess the performance of state-of-the-art algorithms while determining sources of performance degradation. The expectation is that results from these assessments will spur new algorithmic ideas with the potential to augment pattern recognition performance in remote sensing applications. Over time, we are confident the SPICE database will prove to be an asset to the wider remote sensing community.
This paper proposes a new anomaly detection algorithm for polarimetric remote sensing applications based on the M-Box covariance test, taking advantage of key features found in a multi-polarimetric data cube. The paper demonstrates: 1) that independent polarization measurements contain information suitable for discriminating manmade objects from natural clutter; 2) an analysis of the variability exhibited by manmade objects relative to natural clutter; 3) a comparison of the proposed M-Box covariance test against detectors based on the Stokes parameters S0 and S1, DoLP, RX-Stokes, and PCA RX-Stokes; and finally 4) that the data used for the comparison span a full 24-hour measurement period.
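The M-Box test referenced above is the classical Box's M test for equality of covariance matrices; a minimal numpy sketch of the textbook statistic and its standard chi-square approximation follows. The grouping of samples, window sizes, and chi-square correction shown here are generic textbook forms, not the paper's specific implementation.

```python
import numpy as np
from scipy.stats import chi2

def box_m_statistic(samples):
    """Box's M test for equality of covariance matrices across k groups.

    samples: list of (n_i, p) arrays, e.g., polarimetric feature vectors from
    a local test window and a surrounding background window.
    """
    k = len(samples)
    p = samples[0].shape[1]
    n_i = np.array([len(s) for s in samples])
    N = n_i.sum()
    covs = [np.cov(s, rowvar=False) for s in samples]
    pooled = sum((n - 1) * c for n, c in zip(n_i, covs)) / (N - k)

    M = (N - k) * np.log(np.linalg.det(pooled)) - sum(
        (n - 1) * np.log(np.linalg.det(c)) for n, c in zip(n_i, covs)
    )
    # Standard chi-square approximation (Box, 1949).
    c1 = ((2 * p**2 + 3 * p - 1) / (6 * (p + 1) * (k - 1))) * (
        np.sum(1.0 / (n_i - 1)) - 1.0 / (N - k)
    )
    dof = 0.5 * p * (p + 1) * (k - 1)
    return M * (1 - c1), chi2.sf(M * (1 - c1), dof)

# Hypothetical usage: compare a candidate window against a background sample.
rng = np.random.default_rng(4)
stat, p_value = box_m_statistic([rng.normal(size=(50, 3)),
                                 rng.normal(scale=2.0, size=(50, 3))])
```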
A multistage algorithm suite is proposed for a specific target detection/verification scenario, where a visible/near
infrared hyperspectral (HS) sample is assumed to be available as the only cue from a reference image frame. The target
is a suspicious dismount. The suite first applies a biometric based human skin detector to focus the attention of the
search. Using as reference all of the bands in the spectral cue, the suite follows with a Bayesian Lasso inference stage
designed to isolate pixels representing the specific material type cued by the user and worn by the human target (e.g.,
hat, jacket). In essence, the search focuses on testing material types near skin pixels. The third stage imposes an
additional constraint through RGB color quantization and distance metric checking, limiting even further the search for
material types in the scene having a visible color similar to the target's visible color. The proposed cumulative-evidence
strategy produced some encouraging range-invariant results on real HS imagery, dramatically reducing the false alarm
rate to zero on the example dataset. These results were in contrast to the results independently produced by each
one of the suite’s stages, as the spatial areas of each stage’s high false alarm outcome were mutually exclusive in the
imagery. These conclusions also apply to results produced by other standard methods, in particular the kernel SVDD
(support vector data description) and matched filter, as shown in the paper.
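The cumulative-evidence idea above can be summarized as sequential gating, where each stage only examines pixels that survived the previous one. The short sketch below is purely illustrative; the stage functions are hypothetical callables, not the paper's skin detector, Bayesian Lasso stage, or color-quantization test.

```python
def cumulative_evidence(cube, skin_detector, material_test, color_test):
    """Chain three detection stages so each stage only examines pixels that
    survived the previous stage. The stage functions are hypothetical
    callables that take the HS cube (and a boolean mask) and return a
    refined boolean mask over the pixel grid."""
    skin_mask = skin_detector(cube)                  # Stage 1: focus of attention
    material_mask = material_test(cube, skin_mask)   # Stage 2: cued material near skin
    return color_test(cube, material_mask)           # Stage 3: visible-color check
```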
This paper describes the end-to-end processing of imaging Fourier transform spectrometry data taken of surrogate tank targets at Picatinny Arsenal in New Jersey with the long-wave hyperspectral camera Hyper-Cam from Telops. The first part of the paper discusses the processing from raw data to calibrated radiance and emissivity data. The second part discusses the application of a range-invariant anomaly detection approach to calibrated radiance, emissivity, and brightness temperature data for different spatial resolutions and compares it to the Reed-Xiaoli detector.
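The Reed-Xiaoli (RX) detector used as the comparison baseline is essentially a Mahalanobis-distance test of each spectrum against the scene background; a compact numpy sketch with placeholder data follows.

```python
import numpy as np

def rx_detector(cube):
    """Global RX anomaly score: Mahalanobis distance of each spectrum from
    the scene mean under the scene covariance.

    cube: (n_pixels, n_bands) calibrated radiance or emissivity spectra.
    """
    mu = cube.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(cube, rowvar=False))
    diff = cube - mu
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

rng = np.random.default_rng(5)
scores = rx_detector(rng.normal(size=(2000, 80)))   # stand-in data cube
```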
We demonstrate that human skin biometrics in the visible to near infrared (VNIR) regime can be used as reliable features
in a multistage human target tracking algorithm suite. We collected outdoor VNIR hyperspectral data of human skin,
consisting of two human subjects of different skin types in the Fitzpatrick Scale (Type I [Very Fair] and Type III [White
to Olive]), standing side by side at seven ranges (50 ft to 370 ft) in a suburban background. At some of these ranges, the
subjects fall under the small target category. We propose a three-step approach: Step 1, reflectance retrieval; Step 2,
exploitation of the absorption line at 577 nanometers due to oxygenated hemoglobin in blood near the surface
of skin; and Step 3, matched filtering on candidate patches in the input imagery that successfully passed Step 2, using as
input all of the available bands in a spectral average representation of human skin. Step-3 functionality is only applied to
patches in the imagery showing evidence of human skin (Step 2 output). Regardless of the targets' kinematic states, the
approach produced some excellent results locating the presence of human skin in the example dataset, yielding zero false
alarms from potential confusers in the scene. The approach is expected to function as the focus of attention stage of a
multistage algorithm suite for human target tracking.
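One simple way to express a Step 2-style absorption test is as a continuum-removed band depth around the 577 nm feature; the sketch below is illustrative only, with assumed shoulder offsets and no claim to the paper's actual decision rule.

```python
import numpy as np

def hemoglobin_band_depth(spectra, wavelengths, center=577.0, shoulder=25.0):
    """Continuum-removed depth of the ~577 nm oxyhemoglobin absorption feature.

    spectra:     (n_pixels, n_bands) retrieved reflectance
    wavelengths: (n_bands,) band centers in nanometers
    """
    i_c = np.argmin(np.abs(wavelengths - center))
    i_l = np.argmin(np.abs(wavelengths - (center - shoulder)))
    i_r = np.argmin(np.abs(wavelengths - (center + shoulder)))
    continuum = 0.5 * (spectra[:, i_l] + spectra[:, i_r])
    return 1.0 - spectra[:, i_c] / np.maximum(continuum, 1e-6)

# Pixels whose band depth exceeds a (hypothetical) threshold would pass to
# the matched-filtering stage.
```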
This paper offers an innovative image processing technique (smart data compression) for some Department of Defense
and Government users, who may be disadvantaged in terms of network and resource availability as they operate at the
tactical edge. Specifically, we propose using the concept of autonomous anomaly detection to significantly reduce the
amount of data transmitted to the disadvantaged user. The primary sensing modality is hyperspectral, where a national
asset is expected to fly over the region of interest acquiring and processing data in real time, but transmitting only the
corresponding data of scene anomalies, their spatial relationships in the imagery, range and navigational direction.
Results from a proof of principle experiment using real hyperspectral imagery are encouraging.
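The data-reduction idea above amounts to transmitting only the anomalous pixels, their locations, and any associated metadata; the sketch below illustrates that packaging step under assumed array layouts and is not the system's actual downlink format.

```python
import numpy as np

def package_anomalies(cube, scores, threshold, lat_lon=None):
    """Keep only the spectra, locations, and scores of flagged anomalies,
    forming the small payload actually transmitted to the tactical user.

    cube:   (rows, cols, bands) hyperspectral scene
    scores: (rows, cols) anomaly-detector output
    """
    rows, cols = np.nonzero(scores > threshold)
    payload = {
        "pixels": np.stack([rows, cols], axis=1),   # spatial relationships
        "spectra": cube[rows, cols, :],             # anomaly spectra only
        "scores": scores[rows, cols],
    }
    if lat_lon is not None:                         # optional navigation info
        payload["lat_lon"] = lat_lon[rows, cols]
    return payload
```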
This paper introduces a simple approach for object tracking using hyperspectral (HS) spectral features. The approach
addresses the object tracking problem using a small object sample size. For a particular application, the key
challenges are: (i) Offline training cannot be utilized; (ii) motor vehicles of interest (targets) have a small
sample size (e.g., less than 9); and (iii) kinematic states of targets cannot be used for tracking, since
stationary targets are also of interest. Using HS imagery, this paper introduces a method that exploits the mean and
median average spectra to estimate higher moments of the underlying (and unknown) probability distribution function
of spectra; in particular, skew tendency and sign. Tracking HS targets is then possible using this algorithm to test a
sequence of HS imagery, given that target spectra are initially cued by the user. The approach was implemented into a
commercial off-the-shelf workstation featuring the IBM Cell Processor and a GA-180 add-in board. Preliminary
results are promising using a challenging HS data cube.
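The mean-versus-median idea can be illustrated in a few lines: for a skewed distribution the mean is pulled away from the median, so their per-band difference carries a skew tendency and sign. The decision rule below is a minimal sketch with an assumed agreement threshold, not the paper's tracker.

```python
import numpy as np

def skew_signature(patch):
    """Per-band skew tendency of a small sample of spectra.

    patch: (n_samples, n_bands) spectra drawn from a cued target region.
    Returns the signed mean-minus-median gap per band, whose sign tracks the
    direction of skew of the underlying (unknown) distribution.
    """
    return patch.mean(axis=0) - np.median(patch, axis=0)

def same_target(patch_a, patch_b, tol=0.6):
    """Crude association test: do two patches share skew direction in most bands?"""
    agreement = np.mean(np.sign(skew_signature(patch_a)) ==
                        np.sign(skew_signature(patch_b)))
    return agreement >= tol
```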
The Spectral and Polarimetric Imagery Collection Experiment (SPICE) is a collaborative effort between the US Army
ARDEC and ARL for the collection of mid-wave and long-wave infrared imagery using hyperspectral, polarimetric, and
broadband sensors.
The objective of the program is to collect a comprehensive database of the different modalities over the course of 1 to 2
years to capture sensor performance over a wide variety of adverse weather conditions, diurnal, and seasonal changes
inherent to Picatinny's northern New Jersey location.
Using the Precision Armament Laboratory (PAL) tower at Picatinny Arsenal, the sensors will autonomously collect the
desired data around the clock at different ranges where surrogate 2S3 Self-Propelled Howitzer targets are positioned at
different viewing perspectives at 549 and 1280 m from the sensor location. The collected database will allow for: 1) understanding of signature variability under different weather conditions; 2) development of robust algorithms; 3) development of new sensors; 4) evaluation of hyperspectral and polarimetric technologies; and 5) evaluation of fusing the different sensor modalities.
In this paper, we will present the SPICE data collection objectives, the ongoing effort, the sensors that are currently
deployed, and how this work will assist researchers in the development and evaluation of sensors, algorithms, and fusion
applications.
The midwave and longwave infrared regions of the electromagnetic spectrum contain rich information which can be
captured by hyperspectral sensors thus enabling enhanced detection of targets of interest. A continuous hyperspectral
imaging measurement capability operated 24/7 over varying seasons and weather conditions permits the evaluation of
hyperspectral imaging for detection of different types of targets in real world environments. Such a measurement site
was built at Picatinny Arsenal under the Spectral and Polarimetric Imagery Collection Experiment (SPICE), where two
Hyper-Cam hyperspectral imagers are installed at the Precision Armament Laboratory (PAL) and have been operated
autonomously since fall 2009. The Hyper-Cam sensors are currently collecting a complete hyperspectral database that
contains the MWIR and LWIR hyperspectral measurements of several targets under day, night, sunny, cloudy, foggy,
rainy and snowy conditions.
The Telops Hyper-Cam sensor is an imaging spectrometer that provides spatial and spectral analysis capabilities using
a single sensor. It is based on the Fourier-transform technology yielding high spectral resolution and enabling high
accuracy radiometric calibration. It provides datacubes of up to 320x256 pixels at spectral resolutions of up to 0.25 cm-1.
The MWIR version covers the 3 to 5 μm spectral range and the LWIR version covers the 8 to 12 μm spectral range.
This paper describes the automated operation of the two Hyper-Cam sensors being used in the SPICE data collection.
The Reveal Automation Control Software (RACS), developed collaboratively by Telops, ARDEC, and ARL, enables flexible operating parameters and autonomous calibration. Under the RACS software, the Hyper-Cam sensors can autonomously calibrate themselves using their internal blackbody targets, with calibration events initiated at user-defined time intervals or triggered by internal beamsplitter temperature monitoring. The RACS software is the first software developed for COTS hyperspectral sensors that allows fully autonomous data collection for the user. The
accuracy of the automatic calibration was characterized and is presented in this paper.
We present a proof-of-principle demonstration using Sony's IBM Cell processor-based PlayStation 3 (PS3) to run, in near real time, a hyperspectral anomaly detection algorithm (HADA) on real hyperspectral (HS) long-wave infrared
imagery. The PS3 console proved to be ideal for doing precisely the kind of heavy computational lifting HS based
algorithms require, and the fact that it is a relatively open platform makes programming scientific applications feasible.
The PS3 HADA is a unique parallel-random sampling based anomaly detection approach that does not require prior
spectra of the clutter background. The PS3 HADA is designed to handle known underlying difficulties (e.g., target
shape/scale uncertainties) often ignored in the development of autonomous anomaly detection algorithms. The effort is
part of an ongoing cooperative contribution between the Army Research Laboratory and the Army's Armament,
Research, Development and Engineering Center, which aims at demonstrating performance of innovative algorithmic
approaches for applications requiring autonomous anomaly detection using passive sensors.
Hyperspectral technology is currently being used by the military to detect regions of interest where potential targets may
be located. Weather variability, however, may affect an algorithm's ability to discriminate possible targets from background clutter. Nonetheless, different background characterization approaches may improve an algorithm's ability to discriminate potential targets over a variety of weather conditions. In a previous paper, we introduced a new autonomous, target-size-invariant background characterization process, the Autonomous Background Characterization (ABC), also known as the Parallel Random Sampling (PRS) method, which features a random sampling stage; a parallel process to mitigate the inclusion by chance of target samples into clutter background classes during random sampling; and a fusion of results at the end. In this paper, we demonstrate how different background characterization
approaches are able to improve performance of algorithms over a variety of challenging weather conditions. By using the
Mahalanobis distance as the standard algorithm for this study, we compare the performance of different characterization
methods such as: the global information, 2 stage global information, and our proposed method, ABC, using data that was
collected under a variety of adverse weather conditions. For this study, we used ARDEC's Hyperspectral VNIR Adverse
Weather data collection, comprising heavy, light, and transitional fog, light and heavy rain, and low-light conditions.
Most hyperspectral (HS) anomaly detectors in the literature have been evaluated using a few HS imagery sets to
estimate the well-known ROC curve. Although this evaluation approach can be helpful in assessing detectors' rates of
correct detection and false alarm on a limited dataset, it does not shed light on the reasons for these detectors' strengths
and weaknesses using a significantly larger sample size. This paper discusses a more rigorous approach to testing and
comparing HS anomaly detectors, and it is intended to serve as a guide for such a task. Using randomly generated
samples, the approach introduces hypothesis tests for two idealized homogeneous sample experiments, where model
parameters can vary the difficulty level of these tests. These simulation experiments are devised to address a more
generalized concern, i.e., the expected degradation of correct detection as a function of increasing noise in the
alternative hypothesis.
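The style of simulation experiment described above can be sketched with randomly generated homogeneous samples, a two-sample test statistic, and the detection rate tracked as noise in the alternative hypothesis grows. The statistic (a Welch t-test) and all parameters below are illustrative assumptions, not the paper's hypothesis tests.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def detection_rate(noise_sigma, n_trials=2000, n=50, alpha=0.05):
    """Fraction of trials in which a two-sample test separates a shifted,
    increasingly noisy alternative sample from a clean reference sample."""
    hits = 0
    for _ in range(n_trials):
        reference = rng.normal(0.0, 1.0, n)
        alternative = rng.normal(1.0, noise_sigma, n)   # shifted mean plus noise
        _, p = stats.ttest_ind(reference, alternative, equal_var=False)
        hits += p < alpha
    return hits / n_trials

for sigma in (1.0, 2.0, 4.0, 8.0):
    print(sigma, detection_rate(sigma))   # detection degrades as noise grows
```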
Hyperspectral ground to ground viewing perspective presents major challenges for autonomous window based detection.
One of these challenges has to do with object scale uncertainty that occurs when using a window-based detection
approach. In a previous paper, we introduced a fully autonomous parallel approach to address the scale uncertainty
problem. The proposed approach featured a compact test statistic for anomaly detection, which is based on a principle of
indirect comparison; a random sampling stage, which does not require secondary information (range or size) about the
targets; a parallel process to mitigate the inclusion by chance of target samples into clutter background classes during
random sampling; and a fusion of results at the end. In this paper, we demonstrate the effectiveness and robustness of
this approach on different scenarios using hyperspectral imagery, where for most of these scenarios, the parameter
settings were fixed. We also investigated the performance of this suite over different times of the day, where the spectral
signatures of materials varied in relation to diurnal changes over the course of the day. Both visible-to-near-infrared
and longwave imagery are used in this study.
Ground to ground, sensor to object viewing perspective presents a major challenge for autonomous window based object
detection, since object scales at this viewing perspective cannot be approximated. In this paper, we present a fully
autonomous parallel approach to address this challenge. Using hyperspectral (HS) imagery as input, the approach
features a random sampling stage, which does not require secondary information (range) about the targets; a parallel
process is introduced to mitigate the inclusion by chance of target samples into clutter background classes during random
sampling; and a fusion of results. The probability of sampling targets by chance within the parallel processes is modeled
by the binomial distribution family, which can assist in tradeoff decisions. Since this approach relies on the effectiveness
of its core algorithmic detection technique, we also propose a compact test statistic for anomaly detection, which is based
on a principle of indirect comparison. This detection technique has been shown to preserve meaningful detections (genuine
anomalies in the scene) while significantly reducing the number of false positives (e.g. transitions of background
regions). To capture the influence of parametric changes using both the binomial distribution family and actual HS imagery, we conducted a series of rigorous statistical experiments and present the results in this paper.
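The binomial bookkeeping described above can be illustrated with scipy: given an assumed target fraction in the scene and a random sample size, the chance that a background sample stays essentially target-free, and the benefit of fusing several parallel samples, follow directly. All numbers below are placeholders.

```python
from scipy.stats import binom

n_sample = 500          # pixels drawn at random for background characterization
target_fraction = 0.01  # assumed fraction of target pixels in the scene
max_target_pixels = 5   # contamination level considered tolerable

# Probability a single random sample stays essentially target-free.
p_clean = binom.cdf(max_target_pixels, n_sample, target_fraction)

# With several independent parallel samples fused at the end, the chance that
# at least one of them is clean grows quickly.
n_parallel = 4
p_at_least_one_clean = 1.0 - (1.0 - p_clean) ** n_parallel
print(p_clean, p_at_least_one_clean)
```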
We present a multistage anomaly detection algorithm suite and suggest its application to chemical plume detection
using hyperspectral (HS) imagery. This approach is proposed to handle underlying difficulties (e.g., plume shape/scale
uncertainties) facing the development of autonomous anomaly detection algorithms. The approach features four stages:
(i) scene random sampling, which does not require secondary information (shape and scale) about potential effluent
plumes; (ii) anomaly detection; (iii) parallel processes, which are introduced to mitigate the inclusion by chance of
potential plume samples into clutter background classes; and (iv) fusion of results. The probabilities of taking plume
samples by chance within the parallel processes are modeled by the binomial distribution family, which can be used to
assist in tradeoff decisions. Since this approach relies on the effectiveness of its core anomaly detection technique, we
present a compact test statistic for anomaly detection, which is based on an asymmetric hypothesis test. This anomaly
detection technique has been shown to preserve meaningful detections (genuine anomalies in the scene) while significantly
reducing the number of meaningless detections (transitions of background regions). Results of a proof of principle
experiment are presented using this approach to test real HS background imagery with synthetically embedded gas
plumes. Initial results are encouraging.
The Army has gained a renewed interest in hyperspectral (HS) imagery for military surveillance. As a result, a HS
research team has been established at the Army Research Lab (ARL) to focus exclusively on the design of innovative
algorithms for target detection in natural clutter. In 2005 at this symposium, we presented comparison performances
between a proposed anomaly detector and existing ones testing real HS data. Herein, we present some insightful results
on our general approach using analyses of the statistical performance of an additional ARL anomaly detector tested on 1500
simulated realizations of model-specific data to shed some light on its effectiveness. Simulated data of increasing
background complexity will be used for the analysis, where highly correlated multivariate Gaussian random samples
will model homogeneous backgrounds and mixtures of Gaussians will model non-homogeneous backgrounds. Distinct
multivariate random samples will model targets, and targets will be added to backgrounds. The principle that led to the
design of our detectors employs an indirect sample comparison to test the likelihood that local HS random samples
belong to the same population. Let X and Y denote two random samples, and let Z = X ∪ Y, where ∪ denotes set union.
We showed that X can be indirectly compared to Y by comparing, instead, Z to Y (or to X). Mathematical
implementations of this simple idea have shown a remarkable ability to preserve performance of meaningful detections
(e.g., full-pixel targets), while significantly reducing the number of meaningless detections (e.g., transitions of
background regions in the scene).
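The indirect-comparison principle can be sketched with a simple distance between sample statistics: instead of comparing X with Y directly, compare the pooled sample Z = X ∪ Y with Y. The specific Mahalanobis-style score below is only illustrative of the idea, not the ARL detectors' actual test statistics.

```python
import numpy as np

def indirect_comparison(X, Y):
    """Compare Z = X ∪ Y against Y instead of comparing X against Y directly.

    X, Y: (n, p) multivariate samples (e.g., local HS windows). Returns a
    scalar score that stays small when X and Y come from the same population.
    """
    Z = np.vstack([X, Y])
    mu_z, mu_y = Z.mean(axis=0), Y.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(Z, rowvar=False))
    d = mu_z - mu_y
    return float(d @ cov_inv @ d)

rng = np.random.default_rng(7)
same = indirect_comparison(rng.normal(size=(60, 10)), rng.normal(size=(60, 10)))
diff = indirect_comparison(rng.normal(3.0, 1.0, size=(60, 10)),
                           rng.normal(size=(60, 10)))
print(same, diff)   # the score grows when X departs from Y's population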
Remote collections of hyperspectral sensor imagery (HSI) often produce extremely large data sets that make storage and transmission difficult. Smart reduction of such a large data set has been a challenge. Automatic anomaly detection has been cited as a suitable method for remote processing of HSI, although automatic anomaly detection using HSI is itself a challenging problem owing to the impact of the atmosphere on spectral content and the variability of spectral signatures. In this paper, we present the performance of an anomaly detection algorithm known as the approximation to semiparametric (AsemiP) anomaly detector. This detector was conceptualized and developed at the Army Research Laboratory (ARL), where it became a favored technique for this purpose using HSI. The detector uses fundamental theorems of large sample theory to implement a notion of indirect comparison, and it supersedes an earlier ARL technique that uses a semiparametric (SemiP) model as a basis for statistical inference. The strength of both algorithms is that no prior knowledge is assumed about the target and/or the clutter statistics, although AsemiP has the advantage over SemiP of not requiring an iterative algorithm that is sensitive to arbitrary initial conditions. The AsemiP anomaly detector was tested using real hyperspectral data and compared to alternative techniques, including a benchmark approach, yielding some good results.
Automatic anomaly detection has been cited as a candidate method for remote processing of hyperspectral sensor imagery (HSI) to promote reduction of the extremely large data sets that make storage and transmission difficult. But automatic anomaly detection in HSI is itself a challenging problem owing to the impact of the atmosphere on spectral content and the variability of spectral signatures. In this paper, I propose to use the discriminant metric SAM (spectral angle mapper) and some of the advances made on the theory of semiparametric inference to design an anomaly detector that assumes no prior knowledge about the target and the clutter statistics. The detector will assume that the probability distribution function (pdf) of any object in a scene can be modeled as a distortion of a reference pdf. The maximum-likelihood method for the model is discussed along with its asymptotic behavior. The proposed anomaly detector is tested using real hyperspectral data and compared to a benchmark approach.
In the last ten years, many approaches have been proposed to address automatic target detection (ATD) in hyperspectral sensor imagery (HSI). Conspicuously missing from that list is a relatively unknown approach to time series analysis, called higher order zero-crossings (HOC). In this paper, we investigate HOC sequences and their application to target detection in HSI. HOC sequences are based on a surprisingly fruitful connection between filter banks and signal zero-crossings. They are generated by applying a bank of filters to a finite signal or time series having zero mean. The application of each filter from the bank changes the signal's oscillation pattern and alters the zero-crossing count. Accordingly, the application of each member filter gives rise to a zero-crossing count. We consider the oscillation pattern changes, or variations thereof, as a function of frequency modulation (FM) that may be intrinsic to hyperspectral signatures and that apparently has never been exploited by the target detection community. Investigating FM functions in signatures naturally led us to investigate the existence of intrinsic AM (amplitude modulation) as well. Preliminary results indicate that intrinsic AM-FM characteristics of objects' hyperspectral signatures may be useful for target detection.
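A simple instance of the HOC idea uses the repeated difference operator as the filter bank: apply it again and again to a zero-mean spectrum and count zero crossings after each application. The sketch below is illustrative, with an assumed order and stand-in data, not the paper's chosen filter bank.

```python
import numpy as np

def zero_crossings(x):
    """Number of sign changes in a zero-mean sequence."""
    return int(np.sum(np.signbit(x[:-1]) != np.signbit(x[1:])))

def hoc_sequence(signal, order=8):
    """Higher-order crossings: zero-crossing counts after repeated differencing.

    signal: 1-D spectrum (or time series); its mean is removed first.
    """
    x = np.asarray(signal, dtype=float) - np.mean(signal)
    counts = []
    for _ in range(order):
        counts.append(zero_crossings(x))
        x = np.diff(x)          # each differencing raises the oscillation rate
    return np.array(counts)

rng = np.random.default_rng(8)
print(hoc_sequence(rng.normal(size=128)))   # counts typically grow with order
```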
I promote an alternative philosophy for the design of infrared-target-detection algorithms. This philosophy focuses on first finding and eliminating natural clutter from a scene, and then finding and preserving candidate targets in that scene. The reverse approach is the most commonly adopted one in the infrared ATR (automatic target recognition) community. This alternative is appealing because it should significantly reduce the amount of out-of-context information to be processed by a classifier. I show how to apply sensor domain knowledge, common sense, and multivariate regression to the problem of infrared target detection. A proof-of-principle experiment and its results are discussed.
KEYWORDS: Digital signal processing, Monte Carlo methods, Systems modeling, Stochastic processes, Statistical analysis, Automatic target recognition, Mathematical modeling, Performance modeling, Target recognition, Surveillance
Higher-level decisions for AiTR (aided target recognition) networks have so far been made in our community in an ad hoc fashion. Higher-level decisions in this context do not involve target recognition performance per se, but other inherent output measures of performance, e.g., expected response time and the long-term electronic memory required to achieve a tolerable level of image losses. Those measures usually require knowledge of the steady-state, stochastic behavior of the entire network, which in practice is mathematically intractable. Decisions requiring those and similar output measures will become very important as AiTR networks are permanently deployed to the field. To address this concern, I propose to model AiTR systems as an open stochastic-process network and to conduct Monte Carlo simulations based on this model to estimate steady-state performance. To illustrate this method, I modeled, as proposed, a familiar operational scenario and an existing baseline AiTR system. Details of the stochastic model and its corresponding Monte Carlo simulation results are discussed in the paper.
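The kind of steady-state estimate described above can be approached with a small discrete-event simulation. The sketch below treats a single AiTR processing node as a first-come-first-served queue with assumed Poisson image arrivals and exponential service times, reporting mean response time and a Little's-law backlog estimate; it is not the paper's network model.

```python
import numpy as np

rng = np.random.default_rng(9)

def simulate_node(arrival_rate, service_rate, n_images=50_000):
    """Single-server FCFS queue fed by Poisson image arrivals.

    Returns the mean response time (wait plus processing) and the mean
    backlog, two of the steady-state measures of interest for an AiTR node.
    """
    inter_arrivals = rng.exponential(1.0 / arrival_rate, n_images)
    services = rng.exponential(1.0 / service_rate, n_images)
    arrivals = np.cumsum(inter_arrivals)
    finish = np.empty(n_images)
    prev_finish = 0.0
    for i in range(n_images):
        start = max(arrivals[i], prev_finish)   # wait if the server is busy
        finish[i] = start + services[i]
        prev_finish = finish[i]
    response = finish - arrivals
    backlog = response.mean() * arrival_rate    # Little's law estimate
    return response.mean(), backlog

print(simulate_node(arrival_rate=0.8, service_rate=1.0))
```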
The TESAR [Tactical Endurance Synthetic Aperture Radar (SAR)] system uses four algorithms in its three-stage algorithmic approach to the detection and identification of targets in continuous real-time, 1-ft-resolution, strip SAR image data. The first stage employs a multitarget detector with a built-in natural/cultural false-alarm mitigator. The second stage provides target hypotheses for the candidate targets and refines their angular pose. The third stage, consisting of two template-based algorithms, produces final target-identification decisions. This paper reviews the end-to-end ATR performance achieved by the TESAR system in preparation for a 1998 field demonstration at Aberdeen Proving Ground, Aberdeen, MD. The discussion includes an overview of the algorithm suite, the system's unique capabilities, and its overall performance against eight ground targets.
This paper describes a technique recently developed for target detection and false alarm reduction for the Predator unmanned aerial vehicle (UAV) tactical endurance synthetic aperture radar (TESAR) automatic target recognition (ATR) system. The approach does not attempt to label various objects in the SAR image (i.e., buildings, trees, roads); instead, it finds target-like characteristics in the image and compares their statistical/spatial relationship to larger structures in the scene. To do this, the approach merges the output of multiple CFAR (constant false alarm rate) surfaces through a sequence of mathematical morphology tests. The output is further tested by a 'smart' clustering procedure, which performs an object-size test. With the use of these CFAR surfaces, a methodical sequence of morphological tests finds and retains large structures in the scene and eliminates cues that fall within these structures. The presence of supporting shadow downrange from the sensor is also used to eliminate objects with heights not typical of targets. Finally, a fast procedure performs a size test on elongated streaks. This procedure allows long objects to be smartly clustered as a single object while ensuring that target-proximity scenarios suffer no performance degradation. Application of this false alarm mitigator/detector to the Predator's SAR ATR algorithm suite produced a stunning reduction of one order of magnitude in the number of cues yielded by its baseline detector. This performance was consistent in scenes having natural and/or cultural clutter.
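For readers unfamiliar with CFAR surfaces, the sketch below computes a basic cell-averaging CFAR map over a SAR magnitude image with scipy; the window, guard, and threshold settings are placeholders, and the morphological merging of multiple surfaces described above is not reproduced here.

```python
import numpy as np
from scipy import ndimage

def ca_cfar(image, window=21, guard=5, scale=3.0):
    """Cell-averaging CFAR surface for a SAR magnitude image.

    The local clutter level is estimated from an annulus around each pixel
    (outer window minus guard cells); pixels exceeding scale * clutter level
    are flagged as detections.
    """
    outer = ndimage.uniform_filter(image, size=window)
    inner = ndimage.uniform_filter(image, size=guard)
    n_outer, n_inner = window**2, guard**2
    clutter = (outer * n_outer - inner * n_inner) / (n_outer - n_inner)
    return image > scale * clutter

rng = np.random.default_rng(10)
sar = rng.rayleigh(scale=1.0, size=(256, 256))   # stand-in clutter
sar[100:104, 120:124] += 8.0                     # injected bright "target"
detections = ca_cfar(sar)
```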