In this paper, we introduce a new technique that relates the split of polarization states through various scattering
mechanisms. We use the finite-difference time-domain (FDTD) method in our computations since, by its nature, FDTD
can model an ultra-wideband source and can separate the various scattering mechanisms by exploiting causality. The key
idea is that, once a non-monochromatic wave is incident upon a scattering object, the various spectral components will
be differently depolarized upon scattering depending upon the shape and material composition of the object. In the case
studied here, all of the impinging spectral components are co-polarized (whereas arbitrary polarization distributions are
permitted more generally). Fundamentally, we are exploring a concept similar to the split or quantization of energy states
in quantum mechanics. We first introduce the concept of the quantization of polarization states, and then we explain the
formulation of the "State Space Matrix" in relationship to the polarization gaps. Once the technique is introduced, we
demonstrate its potential applications to realistic problems such as materials detection.
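Because the technique rests on FDTD's ability to drive a scatterer with a broadband pulse and to separate returns in time, a minimal one-dimensional Yee update loop with a Gaussian pulse source may help fix ideas. The grid size, time-step count, and source parameters below are illustrative assumptions, not the authors' simulation setup.

```python
import numpy as np

# Minimal 1D FDTD (Yee) sketch: a broadband Gaussian pulse propagating in vacuum.
# Grid size, time steps, and source parameters are illustrative assumptions only.
nz, nt = 400, 1000
ez = np.zeros(nz)                    # electric field samples
hy = np.zeros(nz - 1)                # magnetic field, staggered half a cell
src, t0, spread = 50, 60.0, 15.0     # soft-source location and Gaussian pulse shape

for t in range(nt):
    # Update H from the spatial difference of E (normalized units, Courant number 0.5)
    hy += 0.5 * (ez[1:] - ez[:-1])
    # Update E from the spatial difference of H
    ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])
    # Inject an ultra-wideband Gaussian pulse as a soft source
    ez[src] += np.exp(-0.5 * ((t - t0) / spread) ** 2)

# Because FDTD marches in time, returns from different scattering mechanisms
# arrive at different times and can be separated by gating the recorded field.
```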
Performance of Automatic Target Recognition (ATR) algorithms for Synthetic Aperture Radar (SAR) systems relies
heavily on the system performance and specifications of the SAR sensor. A representative multi-stage SAR ATR
algorithm [1, 2] is analyzed across imagery containing phase errors in the down-range direction induced during the
transmission of the radar's waveform. The degradation induced on the SAR imagery by the phase errors is
measured in terms of peak phase error, Root-Mean-Square (RMS) phase error, and multiplicative noise. The ATR
algorithm consists of three stages: a two-parameter CFAR, a discrimination stage to reduce false alarms, and a
classification stage to identify targets in the scene. The end-to-end performance of the ATR algorithm is quantified
as a function of the multiplicative noise present in the SAR imagery through Receiver Operating Characteristic
(ROC) curves. Results indicate that the performance of the ATR algorithm presented is robust over a 3 dB change in
multiplicative noise.
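The detection stage named above is the standard two-parameter CFAR, which thresholds each pixel against the mean and standard deviation of a surrounding clutter annulus. The sketch below is a straightforward NumPy rendering of that idea; the window sizes and threshold are placeholder values rather than the settings of the referenced algorithm.

```python
import numpy as np

def two_parameter_cfar(img, guard=4, clutter=8, threshold=3.0):
    """Flag pixels whose amplitude exceeds the local clutter mean by more than
    `threshold` local standard deviations (a two-parameter CFAR).
    Window sizes and threshold are illustrative, not the referenced settings."""
    img = np.asarray(img, dtype=float)
    rows, cols = img.shape
    detections = np.zeros_like(img, dtype=bool)
    half = guard + clutter
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            window = img[r - half:r + half + 1, c - half:c + half + 1].copy()
            # Mask out the guard region (and the cell under test) so only the
            # surrounding clutter contributes to the statistics.
            window[clutter:-clutter, clutter:-clutter] = np.nan
            mu, sigma = np.nanmean(window), np.nanstd(window)
            if sigma > 0 and (img[r, c] - mu) / sigma > threshold:
                detections[r, c] = True
    return detections
```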
Template-based classification algorithms used with synthetic aperture radar (SAR) automatic target recognition (ATR)
degrade in performance when used with spatially mismatched imagery. The degradation, caused by a spatial mismatch
between the template and image, is analyzed to show acceptable tolerances for SAR systems. The mismatch between
the image and template is achieved by resampling the test imagery to different pixel spacings. A consistent SAR dataset
is used to examine pixel spacings between 0.1069 and 0.2539 meters with a nominal spacing of 0.2021 meters.
Performance degradation is observed as the pixel spacing is adjusted. Small amounts of variation in the pixel spacing
cause little change in performance and allow design engineers to set reliable tolerances. The results also show
that using templates and images collected from slightly different sensor platforms is feasible, with predictable
classification performance.
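The spatial mismatch described above can be reproduced by resampling test chips to a different pixel spacing before scoring them against fixed-spacing templates. The snippet below sketches that step with SciPy's spline zoom; the spacings are the values quoted in the abstract, but the normalized-correlation matcher is a generic stand-in rather than the paper's classifier.

```python
import numpy as np
from scipy.ndimage import zoom

NOMINAL_SPACING = 0.2021  # meters per pixel, as quoted above

def resample_to_spacing(chip, new_spacing, old_spacing=NOMINAL_SPACING):
    """Resample a SAR chip so each pixel represents `new_spacing` meters."""
    return zoom(chip, old_spacing / new_spacing, order=3)

def match_score(chip, template):
    """Generic normalized-correlation score between a chip and a template,
    cropped to a common size (a stand-in for the paper's classifier)."""
    r = min(chip.shape[0], template.shape[0])
    c = min(chip.shape[1], template.shape[1])
    a = chip[:r, :c] - chip[:r, :c].mean()
    b = template[:r, :c] - template[:r, :c].mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Example: score the nominal-spacing template against chips resampled to the
# extreme spacings examined in the study.
# for spacing in (0.1069, 0.2021, 0.2539):
#     score = match_score(resample_to_spacing(test_chip, spacing), template)
```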
A multi-stage Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) system is analyzed across images
of various pixel areas achieved by both square and non-square resolution. Non-square resolution offers the ability to
achieve finer resolution in the range or cross-range direction with a corresponding degradation of resolution in the cross-range
or range direction, respectively. The algorithms examined include a standard 2-parameter Constant False Alarm
Rate (CFAR) detection stage, a discrimination stage, and a template-based classification stage. Performance for each
stage with respect to both pixel area and square versus non-square resolution is shown via cascaded Receiver Operating
Characteristic (ROC) curves. The results indicate that, for fixed pixel areas, non-square resolution imagery can achieve
statistically similar performance to square pixel resolution imagery in a multi-stage SAR ATR system.
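For a fixed pixel area, non-square resolution simply trades sample spacing in range against cross-range. The small helper below enumerates such trades; it is an illustration of the bookkeeping only, not part of the referenced ATR system.

```python
# For a fixed pixel area, enumerate (range, cross-range) spacings that trade
# resolution in one direction for the other. Aspect ratios are illustrative.
def nonsquare_spacings(pixel_area_m2, aspect_ratios=(0.5, 0.75, 1.0, 1.5, 2.0)):
    """Return (range_spacing, crossrange_spacing) pairs whose product equals the
    requested pixel area; aspect ratio 1.0 recovers the square-pixel case."""
    pairs = []
    for ratio in aspect_ratios:  # ratio = range spacing / cross-range spacing
        cross = (pixel_area_m2 / ratio) ** 0.5
        pairs.append((ratio * cross, cross))
    return pairs

# e.g. nonsquare_spacings(0.2021 ** 2) holds the nominal pixel area fixed
```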
This investigation discusses the challenge of target classification in terms of intrinsic dimensionality estimation and selection of appropriate feature manifolds with object-specific classifier optimization. The feature selection process will be developed via nonlinear characterization and extraction of the target-conditional manifolds derived from the training data. We investigate defining the feature space used for classification as a class-conditioned nonlinear embedding, i.e., each training and test image is mapped into a target-specific embedding and the resultant embeddings are used for statistical characterization. We compare and contrast this novel embedding technique with Principal Component Analysis. The α-Jensen Entropy Difference measure is used to quantify the object-conditioned separation between the target distributions in the feature spaces. We discuss and demonstrate the effect of feature space extraction on classification efficacy.
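The α-Jensen Entropy Difference compares the Rényi entropy of a mixture of two class distributions with the weighted average of their individual entropies, so it grows as the classes separate. The histogram-based version below is a simplified one-dimensional illustration of that measure; work of this kind typically relies on more sophisticated (e.g., graph-based) entropy estimators over the embedded features.

```python
import numpy as np

def renyi_entropy(p, alpha=0.5):
    """Renyi entropy of a discrete distribution p (alpha != 1)."""
    p = p[p > 0]
    return np.log((p ** alpha).sum()) / (1.0 - alpha)

def alpha_jensen_difference(x, y, alpha=0.5, beta=0.5, bins=64):
    """Histogram-based alpha-Jensen entropy difference between two 1-D samples.
    A simplified estimator; graph-based estimators used in practice differ."""
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    px, _ = np.histogram(x, bins=bins, range=(lo, hi))
    py, _ = np.histogram(y, bins=bins, range=(lo, hi))
    px = px / px.sum()
    py = py / py.sum()
    mix = beta * px + (1 - beta) * py
    return renyi_entropy(mix, alpha) - (beta * renyi_entropy(px, alpha)
                                        + (1 - beta) * renyi_entropy(py, alpha))
```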
The 'curse of dimensionality' has limited the application of statistical modeling techniques to low-dimensional spaces, yet typical data resides in high-dimensional spaces (at least initially, for instance images represented as arrays of pixel values). Indeed, approaches such as Principal Component Analysis and Independent Component Analysis attempt to extract a set of meaningful linear projections while minimizing interpoint distance distortions. The counterintuitive yet effective random projections approach of Johnson and Lindenstrauss defines a sample-based dimensionality reduction technique with probabilistically provable distortion bounds. We investigate and report on the relative efficacy of two random projection techniques for Synthetic Aperture Radar images in a classification setting.
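As a concrete illustration, the sketch below forms two common Johnson-Lindenstrauss-style projection matrices, a dense Gaussian matrix and Achlioptas' sparse ±1 variant, applies them to flattened image vectors, and measures how well pairwise distances are preserved. Whether these are the two variants studied in the paper is an assumption, and the dimensions and data are placeholders.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n, d, k = 200, 64 * 64, 300           # images, ambient dim, projected dim (placeholders)
X = rng.standard_normal((n, d))       # stand-in for flattened SAR chips

# Dense Gaussian projection, scaled so expected squared norms are preserved
R_gauss = rng.standard_normal((d, k)) / np.sqrt(k)
# Sparse projection (Achlioptas): entries +1, 0, -1 with probabilities 1/6, 2/3, 1/6
R_sparse = rng.choice([1.0, 0.0, -1.0], size=(d, k), p=[1/6, 2/3, 1/6]) * np.sqrt(3.0 / k)

orig = pdist(X)
for name, R in [("gaussian", R_gauss), ("sparse", R_sparse)]:
    proj = pdist(X @ R)
    distortion = np.abs(proj / orig - 1.0)   # relative change in pairwise distance
    print(name, "median distortion:", np.median(distortion), "max:", distortion.max())
```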
Selection of the kernel parameters is critical to the performance of Support Vector Machines (SVMs), directly impacting the generalization and classification efficacy of the SVM. An automated procedure for parameter selection is clearly desirable given the intractable problem of exhaustive search methods. The authors' previous work in this area involved analyzing the SVM training data margin distributions for a Gaussian kernel in order to guide the kernel parameter selection process. The approach entailed several iterations of training the SVM in order to minimize the number of support vectors. Our continued investigation of unsupervised kernel parameter selection has led to a scheme employing selection of the parameters before training occurs. Statistical methods are applied to the Gram matrix to select the kernel parameters in an unsupervised fashion. This preprocessing framework removes the requirement for iterative SVM training. Empirical results will be presented for the "toy" checkerboard and quadboard problems.
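One simple statistic in this spirit is to sweep the Gaussian kernel width and keep the value that maximizes the variance of the off-diagonal Gram-matrix entries, since a very small width drives them all toward zero and a very large one toward one. The criterion below is a common heuristic offered only as an assumption; the paper's specific statistic may differ.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def select_gaussian_width(X, widths):
    """Pick the Gaussian kernel width whose Gram matrix has the most spread-out
    off-diagonal entries. A heuristic stand-in for the paper's statistic."""
    sq_dists = squareform(pdist(X, "sqeuclidean"))
    mask = ~np.eye(len(X), dtype=bool)
    best_w, best_score = None, -np.inf
    for w in widths:
        K = np.exp(-sq_dists / (2.0 * w ** 2))
        score = K[mask].var()          # degenerate widths collapse this variance
        if score > best_score:
            best_w, best_score = w, score
    return best_w

# e.g. select_gaussian_width(X_train, widths=np.logspace(-2, 2, 25))
```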
Infrared imagers used to acquire data for automatic target recognition are inherently limited by the physical properties of their components. Fortunately, image super-resolution techniques can be applied to overcome the limits of these imaging systems. This increase in resolution can have potentially dramatic consequences for
improved automatic target recognition (ATR) on the resultant higher-resolution images. We will discuss super-resolution techniques in general and specifically review the details of one such algorithm from the literature suited to real-time application on forward-looking infrared (FLIR) images. Following this tutorial, a numerical analysis of the algorithm applied to synthetic IR data will be presented, and we will conclude by discussing the implications of the analysis for improved ATR accuracy.
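To make the idea concrete, the sketch below implements the simplest multi-frame super-resolution scheme, shift-and-add with known sub-pixel shifts, rather than the specific FLIR algorithm reviewed in the paper; treat it only as an illustration of how several undersampled frames can be fused onto a finer grid.

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Fuse low-resolution frames onto a grid `factor` times finer.
    `shifts` are known sub-pixel offsets (dy, dx) of each frame in
    low-resolution pixels. A simplified stand-in for the reviewed algorithm."""
    h, w = frames[0].shape
    hi_sum = np.zeros((h * factor, w * factor))
    hi_cnt = np.zeros_like(hi_sum)
    for frame, (dy, dx) in zip(frames, shifts):
        # Nearest high-resolution bin for each low-resolution sample
        ys = np.clip(np.round((np.arange(h) + dy) * factor).astype(int), 0, h * factor - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * factor).astype(int), 0, w * factor - 1)
        hi_sum[np.ix_(ys, xs)] += frame
        hi_cnt[np.ix_(ys, xs)] += 1.0
    hi_cnt[hi_cnt == 0] = 1.0      # leave never-observed bins at zero
    return hi_sum / hi_cnt
```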
Support Vector Machines (SVMs) have generated excitement and interest in the pattern recognition community due to their generalization performance and ability to operate in high-dimensional feature spaces. Although SVMs are generated without the use of user-specified models, required hyperparameters, such as Gaussian kernel width, are usually user-specified and/or experimentally derived. This effort presents an alternative approach for the selection of the Gaussian kernel width via analysis of the distributional characteristics of the training data projected on the 'trained' SVM (margin values). The efficacy of a particular kernel width can be visually determined via one-dimensional density estimate plots of the training data margin values. Projecting the data onto the SVM hyperplane allows the one-dimensional analysis of the data from the viewpoint of the 'trained' SVM. The effect of kernel parameter selection on class-conditional margin distributions is demonstrated in the one-dimensional projection subspace, and a criterion for unsupervised optimization of kernel width is discussed. Empirical results are given for two classification problems: the 'toy' checkerboard problem and a high-dimensional classification problem using simulated High-Resolution Radar (HRR) targets projected into a wavelet packet feature space.
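A minimal version of this diagnostic, training an RBF SVM with scikit-learn and plotting kernel-density estimates of the signed margin values per class, could look like the sketch below; the data and candidate widths are placeholders, and the plotting choices are not the paper's.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde
from sklearn.svm import SVC

def plot_margin_densities(X, y, gammas):
    """Train an RBF SVM per candidate kernel width and plot 1-D density
    estimates of the training-data margins for each class."""
    grid = np.linspace(-3, 3, 400)
    for gamma in gammas:
        clf = SVC(kernel="rbf", gamma=gamma).fit(X, y)
        margins = clf.decision_function(X)   # signed margin values of the training data
        for label in np.unique(y):
            density = gaussian_kde(margins[y == label])
            plt.plot(grid, density(grid), label=f"gamma={gamma}, class={label}")
    plt.xlabel("margin value")
    plt.ylabel("density")
    plt.legend()
    plt.show()
```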
The local discriminant bases (LDB) method is a powerful algorithmic framework that was originally developed by Coifman and Saito in 1994 as a technique for analyzing object classification problems. LDB is a feature extraction algorithm which selects a best basis from a library of orthogonal bases based on relative entropy or a similar metric. The localized nature of these orthogonal basis functions often results in features that are easier to interpret and more intuitive than those obtained from more conventional methods. An evaluation of the best-basis technique using LDB was conducted with IR sensor data. In particular, our data set consisted of the intensity fluctuations of subpixel targets collected on a focal plane array. This 1D data set provides a useful benchmark against current feature estimation/extraction algorithms as well as preparation for the much more difficult 2D problem. Significantly, LDB is an automated procedure. This has a number of potential advantages, including the ability to: (1) easily handle an increased threat set; and (2) significantly improve the productivity of the feature estimation 'expert' by removing the expert from the mechanics of the classification process.
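The sketch below caricatures the LDB idea on 1-D signals: build a full Haar wavelet-packet table for each signal, average the normalized subband energies per class, and rank subbands by the relative entropy between the two class energy maps. It is a simplified illustration of the selection criterion, not Coifman and Saito's full best-basis search, and the Haar filter and depth are assumptions.

```python
import numpy as np

def haar_packet_energies(x, levels=3):
    """Energies of the full Haar wavelet-packet subbands at the deepest level.
    Signal length is assumed to be divisible by 2**levels."""
    bands = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        nxt = []
        for b in bands:
            approx = (b[0::2] + b[1::2]) / np.sqrt(2.0)
            detail = (b[0::2] - b[1::2]) / np.sqrt(2.0)
            nxt.extend([approx, detail])
        bands = nxt
    return np.array([np.sum(b ** 2) for b in bands])

def rank_subbands(class_a, class_b, levels=3):
    """Rank wavelet-packet subbands by relative entropy between the two classes'
    average normalized energy maps (a simplified LDB-style criterion)."""
    ea = np.mean([haar_packet_energies(x, levels) for x in class_a], axis=0)
    eb = np.mean([haar_packet_energies(x, levels) for x in class_b], axis=0)
    pa, pb = ea / ea.sum(), eb / eb.sum()
    rel_entropy = pa * np.log((pa + 1e-12) / (pb + 1e-12))
    return np.argsort(rel_entropy)[::-1]   # most discriminative subbands first
```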