We present results comparing black-box and physics-guided neural network architectures for hyperspectral target identification. Specifically, our physics-guided neural networks operate on at-sensor overhead long-wave infrared hyperspectral imaging radiances to predict not only the material class, but also physically meaningful quantities of interest, such as the atmospheric transmission factor, the temperature, and the underlying material emissivity. In this way, our models are decoupled from traditional preprocessing routines and provide independently verifiable and interpretable quantities alongside the class predictions. We compare our physics-guided models to more traditional black-box models with respect to classification accuracy and representational similarity, and assess performance in predicting physical quantities across a variety of training schemes.
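The multi-output structure described above can be made concrete with a small sketch. The following is a minimal, purely illustrative PyTorch module, assuming a shared encoder over the radiance spectrum with separate heads for the class and the physical quantities; all layer sizes and names are our assumptions, not the authors' architecture.

```python
# Illustrative sketch of a physics-guided multi-head network: a shared encoder
# over LWIR radiance spectra, with heads for material class and physical
# quantities. Layer sizes and head names are assumptions for illustration.
import torch
import torch.nn as nn

class PhysicsGuidedNet(nn.Module):
    def __init__(self, n_bands: int, n_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.class_head = nn.Linear(128, n_classes)   # material class logits
        self.tau_head = nn.Linear(128, 1)             # atmospheric transmission factor
        self.temp_head = nn.Linear(128, 1)            # surface temperature
        self.emis_head = nn.Linear(128, n_bands)      # per-band material emissivity

    def forward(self, radiance):
        z = self.encoder(radiance)
        return {
            "class_logits": self.class_head(z),
            "transmission": torch.sigmoid(self.tau_head(z)),  # bounded in [0, 1]
            "temperature": self.temp_head(z),
            "emissivity": torch.sigmoid(self.emis_head(z)),   # bounded in [0, 1]
        }
```

Because the physical outputs (transmission, temperature, emissivity) have independent meaning, they can be checked against conventional atmospheric compensation results, which is the interpretability benefit the abstract describes.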
In remote sensing image analysis, change detection approaches typically compare two images captured by the same airborne or spaceborne sensor at different points in time. However, as airborne and spaceborne imaging platforms have become increasingly accessible, the variety of sensor designs has grown in tandem. The ability to combine these multi-modal remote sensing images for change detection would provide a far more frequent view of the earth, but traditional approaches are challenged by the intrinsic data variation across sensor designs. The recently introduced multi-sensor anomalous change detection (MSACD) framework addresses this challenge by using a data-driven machine learning approach that can effectively account for differences in sensor modality and design, and does not require any signal resampling of the pixels. This flexible framework enables the use of satellite image pairs from different sensor platforms. Here, we perform experiments to further evaluate the efficacy of the MSACD change detection framework; these experiments include augmenting the images with engineered features that seek to increase the mutual information of the image backgrounds and, in turn, better emphasize the anomalous changes. While these initial results are demonstrated on same-sensor spectral data, the experiments naturally extend to the multi-sensor domain.
Machine learning approaches, such as deep neural networks, have shown recent success for target detection and identification problems in hyperspectral imagery. However, when deployed “in the wild,” there are no guarantees about how these black-box algorithms will behave when encountering new materials or environmental conditions that were not part of the training data. In addition, neural networks typically lack a property of linear identification methods: their predictions tend to select a single class with high confidence even when there are multiple classes that could match a given input spectrum. To provide estimates of confidence in neural network predictions (i.e., target identifications) and to produce indicators of uncertainty, we apply state-of-the-art uncertainty quantification techniques to neural networks trained on hyperspectral data. Specifically, we assess recently proposed methods from the machine learning community including Monte Carlo dropout, ensembles of neural networks, and variational Bayesian neural networks. We report not only the accuracy of the resulting model-averaged networks on in-distribution data, but also the usefulness of uncertainty metrics on noisy or out-of-distribution data. We also compare ensemble neural network target identification results to a linear method on airborne long-wave infrared (LWIR) hyperspectral data with real targets. Finally, we offer some guidelines for applying these methods to hyperspectral target detection/identification problems.
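The ensemble approach mentioned above reduces to a simple model-averaging computation at prediction time. The sketch below, with illustrative names and under the assumption that each trained member returns softmax probabilities, shows the standard recipe: average the member predictions and use predictive entropy as the uncertainty score.

```python
# Minimal sketch of ensemble-based uncertainty quantification: average the
# softmax outputs of independently trained networks, and score uncertainty
# with the predictive entropy of the averaged distribution. `models` is any
# iterable of callables mapping spectra to class probabilities; training
# itself is omitted.
import numpy as np

def ensemble_predict(models, x):
    """Return model-averaged class probabilities and predictive entropy for x."""
    probs = np.stack([m(x) for m in models])   # (n_models, n_pixels, n_classes)
    mean_probs = probs.mean(axis=0)            # model-averaged prediction
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=-1)
    return mean_probs, entropy                 # high entropy => low confidence
```

On out-of-distribution or noisy spectra, disagreement among the ensemble members inflates the entropy even when each individual network is overconfident, which is the behavior the abstract evaluates.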
Combining multiple satellite remote sensing sources provides a far richer, more frequent view of the earth than that of any single source; the challenge is in distilling these petabytes of heterogeneous sensor imagery into meaningful characterizations of the imaged areas. Meeting this challenge requires effective algorithms for combining multi-modal imagery over time to identify subtle but real changes among the intrinsic data variation. Here, we implement a joint-distribution framework for multi-sensor anomalous change detection (MSACD) that can effectively account for these differences in modality, and does not require any signal resampling of the pixel measurements. This flexibility enables the use of satellite imagery from different sensor platforms and modalities. We use multi-year construction of the SoFi Stadium in California as our testbed, and exploit synthetic aperture radar imagery from Sentinel-1 and multispectral imagery from both Sentinel-2 and Landsat 8. We show results for MSACD using real imagery with implanted, measurable changes, as well as real imagery with real, observable changes, including scaling our analysis over multiple years.
In this work we utilize generative adversarial networks (GANs) to synthesize realistic transformations for remote sensing imagery in the multispectral domain. Despite the apparent perceptual realism of the transformed images at a first glance, we show that a deep learning classifier can very easily be trained to differentiate between real and GAN-generated images, likely due to subtle but pervasive artifacts introduced by the GAN during the synthesis process. We also show that a very low-amplitude adversarial attack can easily fool the aforementioned deep learning classifier, although these types of attacks can be partially mitigated via adversarial training. Finally, we explore the features utilized by the classifier to differentiate real images from GAN-generated ones, and how adversarial training causes the classifier to focus on different, lower-frequency features.
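The abstract does not name the low-amplitude attack; the fast gradient sign method (FGSM) is the standard example of this kind, and a minimal sketch of it is below. The function name and the epsilon value are illustrative assumptions.

```python
# Minimal FGSM-style sketch of a low-amplitude adversarial attack: perturb
# each input by a small step in the direction that increases the classifier's
# loss. This is one standard choice; the paper's exact attack may differ.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.01):
    """Return adversarially perturbed copies of `images`."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Small signed-gradient step: imperceptible, but enough to flip predictions.
    return (images + epsilon * images.grad.sign()).detach()
```

Adversarial training, as described above, simply mixes such perturbed examples into the training set, which pushes the classifier toward the more robust, lower-frequency features the authors observe.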
As West Nile Virus (WNV) and St. Louis Encephalitis (SLE) become more prevalent across North America, there is an increased risk of fatal neuroinvasive cases. In order for public health officials to prepare for these cases and potentially intervene, the ability to forecast mosquito-borne disease outbreaks is paramount. In practice, however, such vector-borne diseases are notoriously difficult to predict due to their seemingly sporadic spatial and temporal outbreak patterns. Recent research has demonstrated that mosquito abundance is causally related to WNV/SLE prevalence, providing a practical starting point for developing mosquito-borne disease forecasting systems. When building mosquito population models, understanding the reproduction environment of Culex mosquitoes (the primary vectors of WNV and SLE) is key: they rely on warmth, water, and vegetation to reproduce. Previous work has shown that global-coverage multispectral imagery (MSI) (i.e., Landsat 8, Sentinel-2) is a valuable resource for characterizing vegetation health as a predictor of mosquito population, but it may not provide the spatial resolution necessary to distinguish between, e.g., a well-fertilized lawn (poor Culex habitat) and a stand of trees (good Culex habitat). The backscatter information collected by synthetic aperture radar (SAR) imagery provides an opportunity to distinguish between broader categories of vegetation type, potentially helping to fill this gap. This research uses publicly available global-coverage MSI and SAR imagery (Landsat 8, Sentinel-2, and Sentinel-1) to explore whether vegetation type, in tandem with vegetation health, improves our ability to forecast mosquito populations. Vegetation characterization is done over the Greater Toronto Area from 2014 to 2017, and we derive weekly time series from MSI, spectral indices, and SAR for this time period. We then quantify the strength of vegetation health and type as predictors of Culex abundance.
Global climate warming is rapidly reducing Arctic sea ice volume and extent. The associated perennial sea ice loss has economic and global security implications associated with Arctic Ocean navigability, since sea ice cover dictates whether an Arctic route is open to shipping. Thus, understanding changes in sea ice thickness, concentration, and drift is essential for operation planning and routing. However, changes in sea ice cover on scales up to a few days and kilometers are challenging to detect and forecast; current sea ice models may not capture quickly changing conditions on the short timescales needed for navigation. Assimilating data into these predictive models requires frequent, high-resolution morphological information about the pack, which is operationally difficult to obtain. We suggest an approach to mitigate this challenge by using machine learning (ML) to interpret satellite-based synthetic aperture radar (SAR) imagery. In this study, we derive ML models for the analysis of SAR data to improve short-term local sea ice monitoring at high spatial resolutions, enabling more accurate analysis of Arctic navigability. We develop a classifier that can analyze Sentinel-1 SAR imagery with the potential to inform operational sea ice forecasting models. We focus on detecting two sea ice features of interest to Arctic navigability: ridges and leads (fractures in the ice pack). These can be considered local extremes in terms of ice thickness, a crucial parameter for navigation. We build models to detect these ice features using machine learning techniques. Both our ridge and lead detection models perform as well as, if not better than, state-of-the-art methods. These models demonstrate Sentinel-1's ability to capture sea ice conditions, suggesting the potential for Sentinel-1 global coverage imagery to inform sea ice forecasting models.
Outbreaks of West Nile Virus (WNV) and St. Louis Encephalitis (SLE) are projected to increase in frequency and intensity with climate change, underlining the need to develop better mosquito-borne disease (MBD) forecasting systems. Spread by Culex mosquitoes, WNV and SLE exhibit seemingly random spatial and temporal outbreak patterns, making outbreaks difficult to predict. However, recent studies have found that mosquito abundance and WNV/SLE transmission are strongly correlated, providing researchers with a foundation for the development of MBD forecasting systems. Mosquito populations are impacted by several environmental variables, such as humidity, temperature, vegetation, and available breeding habitat. Mosquito-population forecasting models are beginning to incorporate spectral data, such as the normalized difference vegetation index (NDVI). Vegetation is a crucial habitat for some mosquito species, and spectral data offer the best estimate of this habitat virtually anywhere on Earth. Additionally, vegetation offers a proxy for understanding how water flows across a landscape, a crucial consideration in urban areas with high landscape heterogeneity. This research explores how the spatial scale (extent) of multispectral imagery used in mosquito population prediction models influences mosquito population forecasts, specifically in the Greater Toronto Area. We derive three monthly time series of standard spectral indices from multispectral imagery over the Greater Toronto Area from 2004 to 2015; each time series is derived from images taken over the same locations, but using different spatial footprints. We then explore how spectral indices across the three spatial scales perform as predictors for combined Cx. restuans and Cx. pipiens populations.
The Reed-Xiaoli (RX) detector is used to identify spatial anomalies in multispectral imagery: pixels whose spectra are anomalous relative to the other pixels in a scene. The distribution of the spectra in an image is used to represent the background, and the anomalies are the pixels whose spectra deviate statistically from this distribution. While RX is used to identify spatial anomalies, in this research we have instead developed a method to capture temporal anomalies, or fleeting changes, such as a music festival in the desert. Using the annual Burning Man festival as a test case, we use a time series of multispectral images and iterate through each pixel, drawing the "background" distribution from a particular pixel location over time. Temporal RX (TRX) thus compares a pixel against itself through time, which enables us to capture normal seasonal trends and identify fleeting changes. We also describe a local window variant called Local Temporal RX (LTRX). Using k-means clustering and a new approach dubbed Meta-RX, we investigate the nature of the temporal anomalies detected by TRX and LTRX to infer types and causes of change.
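The TRX idea has a compact expression: for each pixel location, the "background" is that pixel's own spectral time series, and the anomalousness of each date is its Mahalanobis distance from that temporal distribution. A minimal numpy sketch follows, assuming a co-registered image stack; the function and variable names are illustrative.

```python
# Minimal sketch of Temporal RX (TRX): score each date at each pixel location
# by its Mahalanobis distance from that location's own temporal distribution.
import numpy as np

def temporal_rx(cube):
    """cube: (T, H, W, B) time series of B-band images. Returns (T, H, W) scores."""
    T, H, W, B = cube.shape
    scores = np.zeros((T, H, W))
    for i in range(H):
        for j in range(W):
            ts = cube[:, i, j, :]                        # (T, B) one pixel through time
            mu = ts.mean(axis=0)
            cov = np.cov(ts, rowvar=False) + 1e-6 * np.eye(B)  # regularized covariance
            d = ts - mu
            scores[:, i, j] = np.einsum('tb,bc,tc->t', d, np.linalg.inv(cov), d)
    return scores
```

Because the background model absorbs the pixel's own seasonal cycle, only departures from that cycle (fleeting changes) receive high scores; the LTRX variant described above would instead draw the temporal statistics from a local spatial window around each location.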
The "Viareggio 2013 Trial" is a hyperspectral dataset obtained from multiple overflights of the Italian city of Viareggio. Careful management of panels and vehicles in the scene enabled the development of valuable ground truth information. One pair of overflights occurred at different times on the same day, and another pair took place over different days. These data were used to compare and evaluate a variety of automated approaches for discovering anomalous changes. Co-registration of the images is acknowledged to be imprecise, so part of the challenge is to identify anomalous changes in a way that is robust to this misregistration. In particular, we employed a local co-registration adjustment (LCRA) algorithm to ameliorate the effects of misregistration; we employed non-maximal suppression (NMS) to take advantage of the discrete nature of the changes; and we used canonical correlation analysis (CCA) to reduce the dimension of our data. We found that, taken together, these improved the performance of the detectors in the low false alarm rate regime of operation.
In this work we demonstrate that generative adversarial networks (GANs) can be used to generate realistic pervasive changes in RGB remote sensing imagery, even in an unpaired training setting. We investigate some transformation quality metrics based on deep embedding of the generated and real images; these metrics enable visualization and understanding of the training dynamics of the GAN, and provide a useful measure of how distinguishable the generated images are from real images. We also identify some artifacts introduced by the GAN in the generated images, which likely contribute to the differences seen between real and generated samples in the deep embedding feature space, even in cases where the samples appear perceptually similar.
Forest destruction is a major contributor to carbon emissions and loss of biodiversity, making it a matter of global importance. Due to the large global footprint and often inaccessibility of forested areas, remote sensing is one of the most valuable techniques for monitoring deforestation. Spectral imaging is typically favored for material classification of forested areas and identification of broad swaths of deforestation. However, spectral data can fall short in detecting more subtle destruction beneath the forest canopy. Radar remote sensing can help fill this gap, as it has the ability to penetrate through tree canopies such that pixels capture backscatter information from both the canopy and the material beneath it. Synthetic aperture radar in particular can capture this information at fine spatial resolution, and techniques such as polarimetry and interferometry can be used to measure biomass and detect deforestation. In this study, we compare synthetic aperture radar data with multispectral data to improve characterization and identification of source signatures captured within a pixel, with specific attention to detecting areas where thinning is occurring beneath the forest canopy. We focus on identifying different types of forest thinning in the Valles Caldera, located in the Jemez Mountains of northern New Mexico. We apply anomalous change detection to a combination of data products derived from multispectral imagery and synthetic aperture radar to determine which combinations are most effective at identifying anomalous features of interest in thinning regions. We find that comparing phase change measured by synthetic aperture radar interferometry to differenced vegetation indices highlights anomalous relationships in the thinning region. When comparing multispectral reflectance to backscatter intensity measured by synthetic aperture radar, the most successful temporal comparisons contained synthetic aperture radar data during the thinning period. This suggests that synthetic aperture radar enhances detection of thinning practices via remote sensing, especially with regard to changes taking place beneath the tree canopy. These results were improved even further by segmenting the images according to vegetation coverage prior to applying anomalous change detection techniques.
Combining multiple satellite remote sensing sources provides a far richer, more frequent view of the earth than that of any single source; the challenge is in distilling these petabytes of heterogeneous sensor imagery into meaningful characterizations of the imaged areas. Meeting this challenge requires effective algorithms for combining heterogeneous data to identify subtle but important changes among the intrinsic data variation. The major obstacle to using heterogeneous satellite data to monitor anomalous changes across time is this: subtle but real changes on the ground can be overwhelmed by artifacts that are simply due to the change in modality. Here, we implement a joint-distribution framework for anomalous change detection that can effectively "normalize" for these changes in modality, and does not require any phenomenological resampling of the pixel signal. This flexibility enables the use of satellite imagery from different sensor platforms and modalities. We use multi-year construction of the Los Angeles Stadium at Hollywood Park (in Inglewood, CA) as our testbed, and exploit synthetic aperture radar (SAR) imagery from Sentinel-1 and multispectral imagery from both Sentinel-2 and Landsat 8. We explore results for anomalous change detection between Sentinel-2 and Landsat 8 over time, and also show results for anomalous change detection between Sentinel-1 SAR imagery and Sentinel-2 multispectral imagery.
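The joint-distribution framework admits a compact Gaussian instance: a pixel pair is an anomalous change when it is unlikely under the joint distribution of the two images relative to its marginals. The sketch below is that simplest instance, under our assumption of Gaussian statistics (the framework itself also admits non-Gaussian distributions); names are illustrative.

```python
# Minimal Gaussian sketch of joint-distribution anomalous change detection:
# anomalousness = Mahalanobis distance under the joint distribution minus the
# Mahalanobis distances under each marginal. Because the two images are simply
# stacked per pixel, the sensors may differ in band count (no resampling).
import numpy as np

def acd_scores(X, Y):
    """X: (N, dx) pixels from image 1; Y: (N, dy) co-located pixels from image 2."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Z = np.hstack([X, Y])                    # joint pixel-pair representation

    def mahal(D):
        C = np.cov(D, rowvar=False) + 1e-6 * np.eye(D.shape[1])
        return np.einsum('nd,de,ne->n', D, np.linalg.inv(C), D)

    # High score: jointly unlikely, yet ordinary in each image on its own.
    return mahal(Z) - mahal(X) - mahal(Y)
```

Pervasive differences due to modality inflate the joint covariance itself and are thereby "normalized" away, leaving rare pixel-pair relationships (the actual changes on the ground) with high scores.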
Using data for the states of Brazil, we construct a polynomial distributed lag model under different truncation lag criteria to predict reported dengue cases. Accurately predicting dengue cases provides the framework to develop forecasting models, which would give public health professionals time to create targeted interventions for areas at high risk of dengue outbreaks. Others have shown that variables of interest such as temperature and vegetation can be used to predict dengue cases. Those studies did not detail how truncation lag criteria were chosen when polynomial distributed lag models were used. We explore truncation lag selection methods used widely in the literature (marginal and minimized AIC) and determine which of these methods works best for our given data set. While minimized AIC truncation lag selection produced the best fit to our data, this method used substantially more data to inform its prediction compared to the marginal truncation lag selection method. Finally, the following variables were found to be significant predictors of dengue in this region: normalized difference vegetation index (NDVI), green-based normalized difference water index (NDWI), normalized burn ratio (NBR), and temperature. These best predictors were derived from multispectral remote sensing imagery as well as temperature data.
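For readers unfamiliar with the model class, here is one common construction of a polynomial (Almon) distributed lag regression: the lag coefficients are constrained to lie on a low-degree polynomial in the lag index, so the model is fit on transformed regressors, and the truncation lag L can then be chosen by minimizing AIC over candidates. The degree, lag range, and names below are illustrative assumptions, not the paper's exact specification.

```python
# Sketch of an Almon polynomial distributed lag fit: beta_l = sum_j gamma_j * l**j,
# so y_t = c + sum_j gamma_j * (sum_l l**j * x_{t-l}) + noise. Truncation lag L
# would be selected by comparing AIC across candidate values of L.
import numpy as np

def almon_design(x, L, p):
    """x: (T,) predictor series. Returns (T-L, p+1) polynomial-lag regressors."""
    T = len(x)
    lags = np.stack([x[L - l : T - l] for l in range(L + 1)], axis=1)   # (T-L, L+1)
    basis = np.vander(np.arange(L + 1), p + 1, increasing=True)         # basis[l, j] = l**j
    return lags @ basis

def fit_pdl(y, x, L, p):
    """OLS fit of y on the polynomial-lag transform of x; returns (AIC, lag betas)."""
    Z = np.column_stack([np.ones(len(x) - L), almon_design(x, L, p)])
    yy = y[L:]
    gamma, *_ = np.linalg.lstsq(Z, yy, rcond=None)
    resid = yy - Z @ gamma
    n, k = len(yy), Z.shape[1]
    aic = n * np.log(resid @ resid / n) + 2 * k                         # Gaussian AIC (up to a constant)
    betas = np.vander(np.arange(L + 1), p + 1, increasing=True) @ gamma[1:]
    return aic, betas
```

The trade-off the abstract notes falls out of this construction: larger candidate L values discard more leading observations (`y[L:]`), so AIC-minimizing selection can consume substantially more data than a marginal selection rule.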
We describe and compare two approaches to solid subpixel target detection in hyperspectral imagery. The first approach requires explicit models for both the target and the background, and employs a generalized likelihood ratio in order to obtain a detector that is optimized to those specific models. When this approach is most successful, a closed-form solution is obtained that permits the detector to be efficiently applied. A specific example of this approach is outlined in some detail, leading to the elliptically-contoured finite-target matched filter (EC-FTMF), a variant of the classical FTMF algorithm that uses a multivariate t-distribution instead of a Gaussian as the model for the background. The second approach also requires an explicit model of the target, but does not need a model for the background. In this second approach, matched pairs of data samples are created: for each pixel in the original hyperspectral image, a corresponding pixel is generated by implanting the target into the original pixel. These matched pairs are used as training data for a machine learning algorithm to classify pixels as either non-target or target. Here we use a support vector machine, but the matched pair machine learning (MPML) framework does not restrict the choice of classifier type. Detectors using both approaches are applied both to simulated data (with Gaussian and with multivariate t distributed backgrounds) and to real hyperspectral data with known, referenced targets.
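The MPML construction is straightforward to sketch: each background pixel is paired with a copy into which the target has been implanted (via the replacement model), and a classifier is trained on the resulting labeled pairs. The sketch below uses an SVM as in the text; the fill fraction, the stand-in data, and the names are our illustrative assumptions.

```python
# Minimal sketch of matched pair machine learning (MPML): implant the target
# into every pixel to form matched (non-target, target) pairs, then train a
# classifier. An SVM is used here, but MPML allows any classifier type.
import numpy as np
from sklearn.svm import SVC

def make_matched_pairs(pixels, target, alpha=0.1):
    """pixels: (N, B) background spectra; target: (B,) target signature."""
    implanted = (1 - alpha) * pixels + alpha * target   # replacement-model implant
    X = np.vstack([pixels, implanted])
    y = np.concatenate([np.zeros(len(pixels)), np.ones(len(implanted))])
    return X, y

# Stand-in data for illustration only; real use would draw pixels from an image.
pixels = np.random.randn(500, 100) + 5.0
target = np.random.rand(100)
X, y = make_matched_pairs(pixels, target, alpha=0.1)
clf = SVC(kernel='rbf').fit(X, y)                       # target vs. non-target
scores = clf.decision_function(pixels)                  # per-pixel detection scores
```

Note that no background model is ever specified: the pairing construction lets the classifier learn the background implicitly from the image itself, which is exactly the contrast the abstract draws with the EC-FTMF approach.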
In hyperspectral target detection, one must contend with variability in both target materials and background clutter. While most algorithms focus on the background clutter, there are some materials for which there is substantial variability in the signatures of the target. When multiple signatures can be used to describe a target material, subspace detectors are often the detection algorithm of choice. However, as the number of variable target spectra increases, so does the size of the target subspace spanned by these spectra, which in turn increases the number of false alarms. Here, we propose a modification to this approach, wherein the target subspace is instead a constrained subspace, or a simplex without the sum-to-one constraint. We derive the simplex adaptive matched filter (simplex AMF) and the simplex adaptive cosine estimator (simplex ACE), which are constrained basis adaptations of the traditional subspace AMF and subspace ACE detectors. We present results using simplex AMF and simplex ACE for variable targets, and compare their performances against their subspace counterparts. Our primary interest is in the simplex ACE detector, and as such, the experiments herein seek to evaluate the robustness of simplex ACE, with simplex AMF included for comparison. Results are shown on hyperspectral images using both implanted and ground-truthed targets, and demonstrate the robustness of simplex ACE to target variability.
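The constrained-subspace idea can be sketched as a cone projection: after whitening by the background covariance, each pixel is projected onto the set of positive combinations of the target spectra (the simplex without the sum-to-one constraint) rather than onto their full linear span. The exact statistics derived in the paper may differ from this sketch, which assumes a nonnegative least squares projection and follows the usual AMF/ACE forms.

```python
# Minimal sketch of simplex AMF / simplex ACE: project each whitened pixel
# onto the cone of nonnegative combinations of the whitened target spectra,
# then form AMF- and ACE-style statistics from that constrained projection.
import numpy as np
from scipy.optimize import nnls
from scipy.linalg import sqrtm

def simplex_detectors(X, S, mu, cov):
    """X: (N, B) pixels; S: (K, B) target spectra; mu, cov: background stats."""
    W = np.linalg.inv(np.real(sqrtm(cov)))     # whitening transform
    Xw = (X - mu) @ W
    Sw = S @ W
    amf, ace = np.zeros(len(X)), np.zeros(len(X))
    for i, xw in enumerate(Xw):
        a, _ = nnls(Sw.T, xw)                  # nonnegative abundances (the constraint)
        xhat = Sw.T @ a                        # projection onto the simplex cone
        amf[i] = xw @ xhat
        ace[i] = (xw @ xhat) / (np.linalg.norm(xw) * np.linalg.norm(xhat) + 1e-12)
    return amf, ace
```

The positivity constraint is what keeps the false alarm rate under control: unlike an unconstrained subspace, the cone does not grow to fill the spectral space as more target spectra are added.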
We investigate the detection of opaque targets in cluttered multi/hyper-spectral imagery, using a local background estimation model. Unlike transparent "additive-model" targets (like gas-phase plumes), these are solid "replacement-model" targets, which means that the observed spectrum is a linear combination of the target signature and the background signature. Pixels with stronger targets are associated with correspondingly weaker backgrounds, and background estimators can over-estimate the background in a target pixel. In particular, "subtracting the background" (which generalizes the usual notion of subtracting the mean) to produce a residual image can actually have a deleterious effect. We examine an adaptive partial background subtraction scheme, and evaluate its utility for the detection of replacement-model targets.
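For concreteness, the two signal models contrasted above can be written as follows, with b the background spectrum, t the target signature, ε a plume strength, and α the (unknown) target fill fraction; this formalization follows the standard finite-target matched filter literature.

```latex
% Additive model (transparent targets such as gas-phase plumes):
x_{\mathrm{add}} = b + \epsilon\, t
% Replacement model (solid targets): the target displaces part of the background,
% so stronger targets imply correspondingly weaker backgrounds.
x_{\mathrm{rep}} = (1 - \alpha)\, b + \alpha\, t, \qquad 0 \le \alpha \le 1
```

The (1 - α) factor is the source of the difficulty the abstract describes: subtracting a full background estimate from a replacement-model pixel removes too much, which motivates subtracting only a partial, adaptively scaled background instead.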
We investigate a constrained subspace detector that models the target spectrum as a positive linear combination of multiple reference spectra. This construction permits the input of a large number of target reference spectra, which enables us to go after even highly variable targets without being overwhelmed by false alarms. This constrained basis approach led to the derivation of both the simplex adaptive matched filter (Simplex AMF) and simplex adaptive cosine estimator (Simplex ACE) detectors. Our primary interest is in Simplex ACE, and as such, the experiments in this paper focus on evaluating the robustness of Simplex ACE (with Simplex AMF included for comparison). We present results using large spectral libraries implanted into real hyperspectral data, and compare the performance of our simplex detectors against their traditional subspace detector counterparts. In addition to a large (i.e., several hundred spectra) target library, we induce further target variability by implanting subpixel targets with both added noise and scaled illumination. As a corollary, we also show that in the limit as the target subspace approaches the image space, Subspace AMF becomes the RX anomaly detector.
In discriminating target materials from background clutter in hyperspectral imagery, one must contend with variability in both. Most algorithms focus on the clutter variability, but for some materials there is considerable variability in the spectral signatures of the target. This is especially the case for solid target materials, whose signatures depend on morphological properties (particle size, packing density, etc.) that are rarely known a priori. In this paper, we investigate detection algorithms that explicitly take into account the diversity of signatures for a given target. In particular, we investigate variable target detectors when applied to new representations of the hyperspectral data: a manifold learning based approach, and a residual based approach. The graph theory and manifold learning based approach incorporates multiple spectral signatures of the target material of interest; this is built upon previous work that used a single target spectrum. In this approach, we first build an adaptive nearest neighbors (ANN) graph on the data and target spectra, and use a biased locally linear embedding (LLE) transformation to perform nonlinear dimensionality reduction. This biased transformation results in a lower-dimensional representation of the data that better separates the targets from the background. The residual approach uses an annulus based computation to represent each pixel after an estimate of the local background is removed, which suppresses local backgrounds and emphasizes the target-containing pixels. We will show detection results in the original spectral space, the dimensionality-reduced space, and the residual space, all using subspace detectors: ranked spectral angle mapper (rSAM), subspace adaptive matched filter (ssAMF), and subspace adaptive cosine/coherence estimator (ssACE). Results of this exploratory study will be shown on a ground-truthed hyperspectral image with variable target spectra and both full and mixed pixel targets.
Algorithms for spectral analysis commonly use parametric or linear models of the data. Research has shown, however, that hyperspectral data -- particularly in materially cluttered scenes -- are not always well-modeled by statistical or linear methods. Here, we propose an approach to hyperspectral target detection that is based on a graph theory model of the data and a manifold learning transformation. An adaptive nearest neighbor (ANN) graph is built on the data, and then used to implement an adaptive version of locally linear embedding (LLE). We artificially induce a target manifold and incorporate it into the adaptive LLE transformation. The artificial target manifold helps to guide the separation of the target data from the background data in the new, transformed manifold coordinates. Then, target detection is performed in the manifold space using Spectral Angle Mapper. This methodology is an improvement over previous iterations of this approach due to the incorporation of ANN, the artificial target manifold, and the choice of detector in the transformed space. We implement our approach in a spatially local way: the image is delineated into square tiles, and the detection maps are normalized across the entire image. Target detection results will be shown using laboratory-measured and scene-derived target data from the SHARE 2012 collect.
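The pipeline above can be sketched end to end, using a standard LLE in place of the paper's adaptive, target-biased variant: embed the image pixels together with the target spectra, then run the Spectral Angle Mapper in the manifold coordinates. The parameter values and names below are illustrative assumptions.

```python
# Minimal sketch of manifold-space target detection: jointly embed pixels and
# target spectra with locally linear embedding (LLE), then score pixels with
# the Spectral Angle Mapper (SAM) in the embedded coordinates. The paper's
# adaptive nearest neighbors graph is replaced here by standard k-NN LLE.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

def sam_scores(X, t):
    """Spectral angle between each row of X and vector t (smaller = more similar)."""
    num = X @ t
    den = np.linalg.norm(X, axis=1) * np.linalg.norm(t) + 1e-12
    return np.arccos(np.clip(num / den, -1.0, 1.0))

def manifold_sam(pixels, targets, n_neighbors=10, n_components=5):
    """pixels: (N, B) image spectra; targets: (K, B) target spectra embedded jointly."""
    Z = np.vstack([pixels, targets])
    emb = LocallyLinearEmbedding(n_neighbors=n_neighbors,
                                 n_components=n_components).fit_transform(Z)
    pix_emb, tgt_emb = emb[:len(pixels)], emb[len(pixels):]
    # Score each pixel against its closest embedded target spectrum.
    return np.min(np.stack([sam_scores(pix_emb, t) for t in tgt_emb]), axis=0)
```

Embedding the target spectra jointly with the data is the step that corresponds to the "artificial target manifold" above: it biases the learned coordinates so that target-like pixels separate from the background before detection is attempted.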
Hyperspectral images comprise, by design, high dimensional image data. However, research has shown that for a d-dimensional hyperspectral image, it is typical for the data to inherently occupy an m-dimensional space, with m << d. In the remote sensing community, this has led to a recent increase in the use of non-linear manifold learning, which aims to characterize the embedded lower-dimensional, non-linear manifold upon which the hyperspectral data inherently lie. Classic hyperspectral data models include statistical, linear subspace, and linear mixture models, but these can place restrictive assumptions on the distribution of the data. With graph theory and manifold learning based models, the only assumption is that the data reside on an underlying manifold. In previous publications, we have shown that manifold coordinate approximation using locally linear embedding (LLE) is a viable pre-processing step for target detection with the Adaptive Cosine/Coherence Estimator (ACE) algorithm. Here, we improve upon that methodology using a more rigorous, data-driven implementation of LLE that incorporates the injection of a "cloud" of target pixels and the Spectral Angle Mapper (SAM) detector. The LLE algorithm, which assumes that the data are locally linear, is typically governed by a user-defined parameter k, indicating the number of nearest neighbors to use in the initial graph model. We use an adaptive approach to building the graph that is governed by the data itself and does not rely upon user input. This implementation of LLE can yield greater separation between the target pixels and the background pixels in the manifold space. We present an analysis of target detection performance in the manifold coordinates using scene-derived target spectra and laboratory-measured target spectra across two different data sets.
In high dimensional data, manifold learning seeks to identify the embedded lower-dimensional, non-linear manifold upon which the data lie. This is particularly useful in hyperspectral imagery, where inherently m-dimensional data are often sparsely distributed throughout the d-dimensional spectral space, with m << d. By recovering the manifold, inherent structures and relationships within the data, which are not typically apparent otherwise, may be identified and exploited. The sparsity of data within the spectral space can prove challenging for many types of analysis, and in particular for target detection. In this paper, we propose using manifold recovery as a preprocessing step for spectral target detection algorithms. A graph structure is first built upon the data, and the transformation into the manifold space is based upon that graph structure. Then, the Adaptive Cosine/Coherence Estimator (ACE) algorithm is applied. We present an analysis of target detection performance in the manifold space using scene-derived target spectra from two different hyperspectral images.
The Topological Anomaly Detection (TAD) algorithm has been used as an anomaly detector in hyperspectral and multispectral images. TAD is a graph theory based algorithm that constructs a topological model of the background in a scene and computes an anomalousness ranking for all of the pixels in the image with respect to that background, in order to identify pixels with uncommon or strange spectral signatures. The pixels modeled as background are clustered into groups, or connected components, which can be representative of the spectral signatures of materials present in the background. In this paper, we explore using the background components given by TAD for target detection. These connected components are characterized in three different approaches: the mean signature and the endmembers of each component are calculated and used as background basis vectors in Orthogonal Subspace Projection (OSP) and the Adaptive Subspace Detector (ASD), and the covariance matrix of those connected components is estimated and used in two further detectors, Constrained Energy Minimization (CEM) and the Adaptive Coherence Estimator (ACE). The performance of these approaches and the different detectors is compared with a global approach, where the background characterization is derived directly from the image. Experiments and results using the self-test data set provided as part of the RIT blind test target detection project are shown.
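The basis-vector route above maps directly onto the OSP detector: project each pixel onto the orthogonal complement of the background basis and correlate with the target. A minimal sketch, assuming the background matrix B holds the TAD component mean signatures or endmembers as columns; names are illustrative.

```python
# Minimal sketch of Orthogonal Subspace Projection (OSP) with the background
# basis B taken from TAD background components (mean signatures or endmembers).
import numpy as np

def osp_scores(X, s, B):
    """X: (N, d) pixels; s: (d,) target signature; B: (d, k) background basis."""
    # P annihilates everything in the span of B: P = I - B (B^T B)^{-1} B^T.
    P = np.eye(len(s)) - B @ np.linalg.pinv(B)
    return (X @ P @ s) / (s @ P @ s)        # OSP detector output per pixel
```

The appeal of the TAD-derived basis is that it is built per scene and per component, so background materials that would contaminate a single global covariance estimate are instead represented (and projected out) explicitly.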
Anomaly detection algorithms have historically been applied to hyperspectral imagery in order to identify pixels whose material content is incongruous with the background material in the scene. Typically, the application involves extracting man-made objects from natural and agricultural surroundings. A large challenge in designing these algorithms is determining which pixels initially constitute the background material within an image. The topological anomaly detection (TAD) algorithm constructs a graph theory-based, fully non-parametric topological model of the background in the image scene, and uses codensity to measure deviation from this background. In TAD, the initial graph theory structure of the image data is created by connecting an edge between any two pixel vertices x and y if the Euclidean distance between them is less than some resolution r. While this type of proximity graph is among the most well-known approaches to building a geometric graph based on a given set of data, there is a wide variety of different geometrically-based techniques. In this paper, we present a comparative test of the performance of TAD across four different constructs of the initial graph: the mutual k-nearest neighbor graph, the sigma-local graph for two different values of σ > 1, and the proximity graph originally implemented in TAD.
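Two of the graph constructions compared above are easy to sketch: the proximity graph originally used in TAD (edge whenever the distance is below the resolution r) and the mutual k-nearest neighbor graph (edge only when each vertex is among the other's k nearest neighbors). The sigma-local construction is omitted, since its definition is not spelled out above; function names are illustrative.

```python
# Minimal sketches of two initial-graph constructions for TAD, returned as
# boolean adjacency matrices over the pixel vertices.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def proximity_graph(X, r):
    """Edge between pixels closer than resolution r (the original TAD graph)."""
    D = squareform(pdist(X))                  # pairwise Euclidean distances
    return (D < r) & ~np.eye(len(X), dtype=bool)

def mutual_knn_graph(X, k):
    """Edge only when each vertex is among the other's k nearest neighbors."""
    D = squareform(pdist(X))
    np.fill_diagonal(D, np.inf)
    nn = np.argsort(D, axis=1)[:, :k]         # each point's k nearest neighbors
    A = np.zeros_like(D, dtype=bool)
    A[np.repeat(np.arange(len(X)), k), nn.ravel()] = True
    return A & A.T                            # keep only the mutual edges
```

The choice matters because the connected components of this initial graph define what TAD treats as background, and hence what codensity measures deviation from.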
Spectral image complexity is an ill-defined term that has been addressed previously in terms of dimensionality, multivariate normality, and other approaches. Here, we apply the concept of the linear mixture model to the question of spectral image complexity at spatially local scales. Essentially, the "complexity" of an image region is related to the volume of a convex set enclosing the data in the spectral space. The volume is estimated as a function of increasing dimensionality (through the use of a set of endmembers describing the data cloud) using the Gram Matrix approach. It is hypothesized that more complex regions of the image are composed of multiple, diverse materials and will thus occupy a larger volume in the hyperspace. The ultimate application here is large area image search without a priori information regarding the target signature. Instead, image cues will be provided based on local, relative estimates of the image complexity. The technique used to estimate the spectral image complexity is described and results are shown for representative image chips and a large area flightline of reflective hyperspectral imagery. The extension to the problem of large area search will then be described and results are shown for a 4-band multispectral image.
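The Gram matrix volume estimate described above has a short closed form: the volume of the simplex spanned by m+1 endmembers is sqrt(det(G)) / m!, where G is the Gram matrix of the edge vectors from one vertex. A minimal sketch follows; the endmember selection step itself (e.g., a max-volume or pixel purity method) is omitted, and names are illustrative.

```python
# Minimal sketch of simplex volume estimation via the Gram matrix, evaluated
# as a function of increasing dimensionality by passing more endmembers.
import numpy as np
from math import factorial

def simplex_volume(endmembers):
    """endmembers: (m+1, B) simplex vertices in spectral space. Returns the m-volume."""
    E = endmembers[1:] - endmembers[0]        # (m, B) edge vectors from one vertex
    G = E @ E.T                               # Gram matrix of the edges
    return np.sqrt(max(np.linalg.det(G), 0.0)) / factorial(len(E))
```

Evaluating this volume for 2, 3, ..., m+1 endmembers per image region yields the volume-versus-dimensionality function used above: materially diverse ("complex") regions sustain large volumes into higher dimensions, while simple regions collapse toward zero quickly.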
Historically, change detection has relied on statistically based methods. However, as the spatial resolution of spectral images improves, the data no longer maintain a Gaussian distribution, and some assumptions about the data, and subsequently all algorithms based upon those statistical assumptions, fail. Here we present the Simplex Volume Estimation (SVE) algorithm, which avoids these potential hindrances by taking a geometrical approach. In particular, we employ the linear mixture model to approximate the convex hull enclosing the data through identification of the simplex vertices (known as endmembers). SVE begins by tiling an image into squares. It then iterates through the tiles, and for each set of pixels it identifies the corners (as vectors) that define the simplex of that set of data. For each tile, it then iterates through increasing dimensionality, or number of endmembers, computing at each step the volume of the simplex defined by that number of endmembers. When the volume is calculated in a dimension higher than the inherent dimensionality of the data, the volume will theoretically drop to zero; this value is indicative of the inherent dimensionality of the data as represented by the convex hull. Further, the volume of the simplex will fluctuate when a new material is introduced to the dataset, indicating a change in the image. The algorithm then analyzes the volume function associated with each tile and assigns the tile a metric value based on that function. We compare the values of these metrics using hyperspectral imagery collected from different platforms over experimental setups with known changes between flights. Results from these tests are presented along with a path forward for future research.