This PDF file contains the front matter associated with SPIE Proceedings Volume 10427 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Whilst recent years have witnessed the development and exploitation of operational Earth Observation (EO) satellite constellation data, the valorisation of historical archives has remained a challenge. The European Space Agency (ESA) Landsat Multi Spectral Scanner (MSS) products, covering Greenland, Iceland, continental Europe and North Africa, represent an archive of over 600,000 processed Level 1 (L1) scenes that will accompany the roughly 1 million ESA Landsat Thematic Mapper (TM) and Enhanced Thematic Mapper Plus (ETM+) products already available. ESA began acquiring MSS data in 1975, and it is well known that this dataset can be degraded by missing data and a loss in accuracy. For these reasons, the content of the product format has been reviewed and the ESA Landsat processing baseline significantly updated to ensure products are fit for user purposes. This paper presents the new MSS product format, including the updated metadata parameters for error traceability and the specification of the Quality Assurance Band (BQA), engineered to allow best-pixel selection as well as the application of image restoration techniques. The paper also discusses major improvements applied to the radiometric and geometric processing. For the benefit of the community, ESA is now able to maximize the number of L1 MSS products that can potentially be generated from the raw Level 0 (L0) data and to ensure the highest possible data quality is reached. Moreover, by improving the product format and processing and adding a pixel-based quality band, the MSS archive becomes interoperable with recently reprocessed Landsat data and with data from live missions, by way of assuring product quality on a pixel basis.
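The abstract does not specify the bit layout of the BQA band, so the sketch below assumes a purely hypothetical per-pixel flag assignment, only to illustrate how a bit-packed quality band supports best-pixel selection:

```python
import numpy as np

# Hypothetical bit layout for a per-pixel quality band (the real ESA MSS
# BQA specification may differ): bit 0 = fill, bit 1 = saturated,
# bit 2 = cloud, bit 3 = missing scan line.
FILL, SATURATED, CLOUD, MISSING = 1 << 0, 1 << 1, 1 << 2, 1 << 3

def usable_mask(bqa: np.ndarray) -> np.ndarray:
    """Return True where a pixel carries none of the degradation flags."""
    bad = FILL | SATURATED | CLOUD | MISSING
    return (bqa & bad) == 0

bqa = np.array([[0, 1], [4, 8]], dtype=np.uint16)
print(usable_mask(bqa))  # [[ True False], [False False]]
```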
Restoration of Very High Resolution (VHR) optical Remote Sensing Images (RSIs) is critical and leads to the problem of removing instrumental noise while preserving the integrity of relevant information. Improving denoising in an image processing chain increases image quality and improves the performance of all subsequent tasks performed by experts (photo-interpretation, cartography, etc.) or by algorithms (land cover mapping, change detection, 3D reconstruction, etc.). In a context of large industrial VHR image production, the selected denoising method should optimize accuracy and robustness, preserve relevant information and saliency, and be fast, given the huge amount of data acquired and/or archived. Very recent research in image processing has produced a fast and accurate algorithm called Non-Local Bayes (NLB), which we propose to adapt and optimize for VHR RSIs. This method is well suited for mass production thanks to its favourable trade-off between accuracy and computational complexity compared to other state-of-the-art methods. NLB is based on a simple principle: similar structures in an image have similar noise distributions and thus can be denoised with the same noise estimation. In this paper, we describe the algorithm's operation and performance in detail, and analyze its parameter sensitivity on various typical real areas observed in VHR RSIs.
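A minimal sketch of the core NLB estimator may help fix ideas: given a group of patches judged similar, their empirical mean and covariance yield a Wiener-like shrinkage toward the group mean. The patch search, second iteration, and aggregation stages of the full algorithm are omitted, and the noise level sigma is assumed known:

```python
import numpy as np

def nlb_denoise_group(patches: np.ndarray, sigma: float) -> np.ndarray:
    """First-step Non-Local Bayes estimate for one group of similar patches.

    patches: (n, d) array of flattened noisy patches assumed to share the
    same underlying distribution; sigma: known noise standard deviation.
    """
    mean = patches.mean(axis=0)
    centered = patches - mean
    c_noisy = centered.T @ centered / max(len(patches) - 1, 1)
    # MAP/Wiener-like shrinkage: C_clean is approximated by C_noisy - sigma^2 I.
    d = patches.shape[1]
    c_clean = c_noisy - sigma**2 * np.eye(d)
    filt = c_clean @ np.linalg.pinv(c_noisy)
    return mean + centered @ filt.T
```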
In the frame of the Copernicus programme, ESA has developed and launched the Sentinel-2 optical imaging mission, which delivers optical data products designed to feed downstream services mainly related to land monitoring, emergency management and security. The Sentinel-2 mission is a constellation of two polar-orbiting satellites, Sentinel-2A and Sentinel-2B, each equipped with an optical imaging sensor, the Multi-Spectral Instrument (MSI). Sentinel-2A was launched on June 23rd, 2015 and Sentinel-2B followed on March 7th, 2017. With the beginning of the operational phase, the two-satellite constellation enables image acquisition over the same area every 5 days or less. To exploit the unique potential of Sentinel-2 data for land applications and ensure the highest quality of scientific exploitation, accurate correction of satellite images for atmospheric effects is required. Therefore, the atmospheric correction processor Sen2Cor was developed by Telespazio VEGA Deutschland GmbH on behalf of ESA. Sen2Cor is a Level-2A processor whose main purpose is to correct single-date Sentinel-2 Level-1C Top-Of-Atmosphere (TOA) products for the effects of the atmosphere in order to deliver a Level-2A Bottom-Of-Atmosphere (BOA) reflectance product. Additional outputs are an Aerosol Optical Thickness (AOT) map, a Water Vapour (WV) map and a Scene Classification (SCL) map with quality indicators for cloud and snow probabilities. Telespazio France and DLR have teamed up to provide the calibration and validation of the Sen2Cor processor. Here we provide an overview of the Sentinel-2 data, the processor and its products, present some processing examples of Sen2Cor applied to Sentinel-2 data, and give up-to-date information about the Sen2Cor release status and recent validation results as of SPIE Remote Sensing 2017.
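Sen2Cor itself inverts a full radiative-transfer model via look-up tables; the snippet below is only the textbook Lambertian TOA-to-BOA inversion, shown to illustrate what "correcting for the effects of the atmosphere" amounts to once the atmospheric terms are known (all parameter values are placeholders, not Sen2Cor internals):

```python
import numpy as np

def toa_to_boa(rho_toa, rho_path, t_down, t_up, s_atm):
    """Simplified Lambertian TOA-to-BOA inversion (illustrative only;
    Sen2Cor derives these terms from radiative-transfer look-up tables).

    rho_toa: TOA reflectance; rho_path: atmospheric path reflectance;
    t_down/t_up: downward/upward transmittances; s_atm: spherical albedo.
    """
    y = (rho_toa - rho_path) / (t_down * t_up)
    return y / (1.0 + s_atm * y)

print(toa_to_boa(np.float64(0.18), 0.05, 0.85, 0.90, 0.10))
```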
Generation of high-resolution Digital Elevation Models (DEMs) is essential for space mission tasks such as investigating topographic features, selecting landing sites, and planning rover paths. Fusing images, which contain high-frequency information on the terrain surface, with depth data, which provide sparse information, is an important topic in DEM generation. In this paper, the photometric stereo method is used to generate surface normal information for the shadowed region in a crater. The surface normals are then fused with a DEM interpolated from the ground-truth DEM. The fusion enhances the lateral resolution of the interpolated DEM and recovers details of the ground-truth DEM that are not visible in the interpolated DEM. The method can be adapted to enhance DEMs of near-polar regions, which show large variations in shadowing and are covered by many image datasets owing to orbiters' flight paths.
In this letter, we show that pansharpening of visible/near-infrared (VNIR) bands benefits from a correction of the atmospheric path-radiance term during the fusion process. This holds whenever the fusion mechanism emulates the radiative transfer model ruling the acquisition of the Earth's surface from space, that is, for methods exploiting a contrast-based injection model of spatial details extracted from the panchromatic (Pan) image into the interpolated multispectral (MS) bands. Such methods are high-pass modulation (HPM), the Brovey transform (BT), synthetic variable ratio (SVR), UNB pansharp, smoothing filter-based intensity modulation (SFIM) and spectral distortion minimization (SDM). The path radiance should be estimated and subtracted from each band before the multiplication by Pan is performed, and added back afterwards. Both empirical and model-based estimation techniques for MS path radiances are compared within the framework of optimized SVR and HPM algorithms. Simulations carried out on QuickBird and IKONOS data highlight that the atmospheric correction of MS data before fusion is always beneficial, especially on vegetated areas and in terms of spectral quality.
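The described subtract-modulate-restore order is easy to state in code; the sketch below applies it to the HPM injection model (the array shapes and the epsilon guard are implementation choices, not from the paper):

```python
import numpy as np

def hpm_pansharpen(ms_interp, pan, pan_low, path_radiance):
    """High-pass modulation fusion with path-radiance correction (sketch).

    ms_interp: (bands, H, W) MS bands interpolated to the Pan grid;
    pan: (H, W) panchromatic image; pan_low: (H, W) Pan lowpass-filtered
    to the MS resolution; path_radiance: per-band path-radiance estimates.
    """
    eps = 1e-6                          # guards against division by zero
    gain = pan / (pan_low + eps)
    lp = np.asarray(path_radiance)[:, None, None]
    # Subtract the path radiance before the modulation, add it back after.
    return (ms_interp - lp) * gain + lp
```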
Hyperspectral images (HSIs) are usually affected by different types of noise, both Gaussian and non-Gaussian. This noise can directly affect classification, unmixing and superresolution analyses. In this paper, the effect of denoising on the superresolution of HSIs is investigated. First, a denoising method based on the shearlet transform is applied to the low-resolution HSI in order to reduce the effect of noise; then, a superresolution method based on Bayesian sparse representation is used. The proposed method is applied to a real HSI dataset. The results, compared with some state-of-the-art superresolution methods, show that the proposed method significantly increases the spatial resolution and efficiently reduces the noise effects.
In this paper, the application of super-resolution (SR; restoring a high-spatial-resolution image from a series of low-resolution images of the same scene) techniques to remote sensing images from GaoFen (GF)-4, China's most advanced geostationary-orbit Earth-observing satellite, is investigated and tested. SR has been a hot research area for decades, but one of the barriers to applying SR in the remote sensing community is the time gap between the acquisitions of the low-resolution (LR) images. In general, the longer the time gap, the less reliable the reconstruction. GF-4 has the unique advantage of capturing a sequence of LR images of the same region within minutes, i.e., working as a staring camera from the point of view of SR. This is the first experiment applying super-resolution to a sequence of low-resolution images captured by GF-4 within a short time period. In this paper, we use a Maximum a Posteriori (MAP) formulation to solve the ill-conditioned SR problem. Both the wavelet transform and the curvelet transform are used to set up a sparse prior for remote sensing images. By combining several images of the BeiJing and DunHuang regions captured by GF-4, our method improves spatial resolution both visually and numerically. Experimental tests show that many details that cannot be observed in the captured LR images can be seen in the super-resolved high-resolution (HR) images. To aid the evaluation, Google Earth imagery can also be referenced. Moreover, our experimental tests show that the higher the temporal resolution, the better the HR images can be resolved. The study illustrates that applying SR to geostationary-orbit Earth observation data is feasible and worthwhile, and holds potential for all other geostationary-orbit Earth-observing systems.
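As a sketch of the MAP machinery, one gradient step of the data term plus a prior is shown below. Note that the paper's wavelet/curvelet sparse prior is replaced here by a simple smoothness term for brevity, and `downsample`, `upsample`, and `blur` are caller-supplied operators (a symmetric blur is assumed so that it is self-adjoint):

```python
import numpy as np

def map_sr_step(x, lr_images, downsample, upsample, blur, lam, step):
    """One gradient step of MAP super-resolution (sketch).

    Minimizes sum_k ||D(B(x)) - y_k||^2 + lam * smoothness prior, where D
    is decimation and B a (symmetric) blur; the paper's sparse
    wavelet/curvelet prior is swapped for a smoothness term here.
    """
    grad = np.zeros_like(x)
    for y in lr_images:
        residual = downsample(blur(x)) - y
        grad += blur(upsample(residual))          # adjoint of D after B
    gy, gx = np.gradient(x)
    laplacian = np.gradient(gy, axis=0) + np.gradient(gx, axis=1)
    grad -= lam * laplacian                       # gradient of 0.5*||grad x||^2
    return x - step * grad
```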
The general aim of this work was to develop an efficient and reliable aggregation method that could be used for creating a land cover map at a global scale from multitemporal satellite imagery. The study described in this paper presents methods for combining the results of land cover/land use classifications performed on single-date Sentinel-2 images acquired at different time periods. For that purpose, different aggregation methods were proposed and tested on study sites spread across different continents. The initial classifications were performed with the Random Forest classifier on individual Sentinel-2 images from a time series. In the following step, the resulting land cover maps were aggregated pixel by pixel using three different combinations of information on the number of occurrences of a certain land cover class within the time series and the posterior probability of particular classes resulting from the Random Forest classification. Two of the proposed methods proved superior and in most cases matched or exceeded the accuracy of the best individual classifications of single-date images. Moreover, the aggregation results are very stable when used on data with varying cloudiness. They also considerably reduce the number of cloudy pixels in the resulting land cover map, which is a significant advantage for mapping areas with frequent cloud coverage.
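The paper's three exact combination rules are not spelled out in the abstract; the sketch below shows one plausible rule of that kind, weighting per-class occurrence counts by the mean Random Forest posterior:

```python
import numpy as np

def aggregate(prob_stack):
    """Aggregate per-date Random Forest posteriors (sketch of one rule of
    the kind the paper tests; the actual three combinations may differ).

    prob_stack: (dates, classes, H, W) posteriors; cloudy observations
    are encoded as all-zero probability vectors and thus never counted.
    """
    labels = prob_stack.argmax(axis=1)                     # (dates, H, W)
    valid = prob_stack.sum(axis=1) > 0                     # (dates, H, W)
    n_classes = prob_stack.shape[1]
    counts = np.stack([((labels == c) & valid).sum(axis=0)
                       for c in range(n_classes)])         # occurrences
    mean_prob = prob_stack.mean(axis=0)                    # (classes, H, W)
    return np.argmax(counts * mean_prob, axis=0)           # fused map
```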
High spatial and temporal resolution data are vital for crop monitoring and phenology change detection. Owing to limitations of current satellite architectures and frequent cloud cover, the availability of daily high-spatial-resolution data is still far from reality. Generating remote sensing time series of high spatial and temporal resolution by data fusion seems to be a practical alternative. However, it is not an easy process, since it involves multiple steps and also requires multiple tools. In this paper, a Geographic Information System (GIS) based tool framework is presented for semi-autonomous time series generation. This tool eliminates the difficulties by automating all the steps and enables users to generate synthetic time series data with ease. First, all the steps required for the time series generation process are identified and grouped into blocks based on their functionalities. Then, two main frameworks are created: one to perform all the pre-processing steps on various satellite data, and the other to perform data fusion to generate the time series. The two frameworks can be used individually to perform specific tasks, or they can be combined to perform both processes in one go. The tool can handle most of the known geo data formats currently available, which makes it a generic tool for time series generation from various remote sensing satellite data. It is developed as a common platform with a clean interface that provides many functions to enable further development of additional remote sensing applications. A detailed description of the capabilities and advantages of the frameworks is given in this paper.
Exploitation of temporal series of hyperspectral images is a relatively new discipline that has a wide variety of possible applications in fields like remote sensing, area surveillance, defense and security, and search and rescue. In this work, we discuss how images taken at two different times can be processed to detect changes caused by the insertion, deletion or displacement of small objects in the monitored scene. This problem is known in the literature as anomalous change detection (ACD) and can be viewed as the extension, to the multitemporal case, of the well-known anomaly detection problem in a single image. In fact, in both cases, the hyperspectral images are processed blindly in an unsupervised manner and without a-priori knowledge of the target spectrum. We introduce the ACD problem using an approach based on statistical decision theory and derive a common framework encompassing different ACD approaches. In particular, we clearly define the observation space, the statistical distribution of the data conditioned on the two competing hypotheses, and the procedure followed to arrive at the solution. The proposed overview places emphasis on techniques based on the multivariate Gaussian model, which allows a formal presentation of the ACD problem and the rigorous derivation of the possible solutions in a way that is both mathematically more tractable and easier to interpret. We also discuss practical problems related to the application of the detectors in the real world and present affordable solutions. Namely, we describe the ACD processing chain, including the strategies that are commonly adopted to compensate for pervasive radiometric changes, caused by different illumination/atmospheric conditions, and to mitigate residual geometric image co-registration errors. Results obtained on real, freely available data are discussed in order to test and compare the methods within the proposed general framework.
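Under the multivariate Gaussian model, the simplest member of the ACD family is a Mahalanobis test on the stacked two-date pixel; the sketch below implements that baseline (the paper's framework covers more refined detectors than this one):

```python
import numpy as np

def gaussian_acd(x, y):
    """Baseline anomalous change detection under a joint Gaussian model.

    x, y: (pixels, bands) images of the same scene at two dates. The
    statistic is the Mahalanobis distance of the stacked pixel under the
    global joint distribution: large values flag changes that deviate
    from the pervasive (scene-wide) behaviour. Assumes enough pixels for
    a well-conditioned covariance estimate.
    """
    z = np.hstack([x - x.mean(axis=0), y - y.mean(axis=0)])
    cov = np.cov(z, rowvar=False)
    w = np.linalg.solve(cov, z.T)
    return np.einsum('ij,ji->i', z, w)      # per-pixel quadratic form
```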
Hyperspectral imaging instrument performance, especially the spectral response parameters, may change when the sensors operate in flight, owing to vibrations and to temperature and pressure changes relative to laboratory conditions. In order to derive valid information from imaging data, accurate spectral calibration of the data, accompanied by an uncertainty analysis, must be performed. The purpose of this work is to present a process to estimate the uncertainties of in-flight spectral calibration parameters by analyzing the sources of uncertainty and calculating their sensitivity coefficients. In the in-flight spectral calibration method, the band-center and bandwidth determinations are made by correlating the in-flight sensor-measured radiance with a reference radiance. In this procedure, the uncertainty analysis is conducted separately for three factors: (a) the radiance calculated from the imaging data; (b) the reference data; (c) the matching process between the above two items. To obtain the final uncertainty, the contributions of every impact factor must be propagated through this process. Analyses have been made using the above process for Hyperion data. The results show that the shift of the band-center in the oxygen absorption (about 762 nm), compared with the value measured in the lab, is less than 0.9 nm, with uncertainties ranging from 0.063 nm to 0.183 nm depending on the spatial position along the across-track direction of the image; the change of bandwidth is less than 1 nm, with uncertainties ranging from 0.066 nm to 0.166 nm. These results verify the validity of the in-flight spectral calibration process.
Hyperspectral imagery is formed by several narrow, contiguous bands covering different regions of the electromagnetic spectrum, such as the visible, near infrared and far infrared. Hyperspectral imagery provides far higher spectral resolution than high-spatial-resolution multispectral imagery, improving the detection capability for terrestrial objects. The greatest difficulty in hyperspectral processing is the high dimensionality of the data, which brings out the 'Hughes' phenomenon: the training set size required for a given classification increases exponentially with the number of spectral bands. Therefore, the dimensionality of hyperspectral data is an important drawback when applying traditional classification or pattern recognition approaches to this imagery. In our context, dimensionality reduction is necessary to obtain accurate thematic maps of natural protected areas. Dimensionality reduction methods can be divided into feature-selection algorithms and feature-extraction algorithms. We focus the study on feature-extraction algorithms such as Principal Component Analysis (PCA), Minimum Noise Fraction (MNF) and Independent Component Analysis (ICA). A review of the state of the art revealed a lack of comparative studies of the techniques used for hyperspectral dimensionality reduction. In this context, our objective was to perform a comparative study of the traditional dimensionality reduction techniques (PCA, MNF and ICA) to evaluate their performance in the classification of high-spatial-resolution imagery from the CASI (Compact Airborne Spectrographic Imager) sensor.
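For reference, PCA and ICA are directly available in scikit-learn; the toy sketch below runs both on a random stand-in for a CASI cube (MNF, which has no scikit-learn implementation, is summarized in a comment):

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

# Toy cube standing in for CASI data: 100x100 pixels, 64 bands,
# unfolded to a (pixels, bands) matrix.
cube = np.random.rand(100, 100, 64)
X = cube.reshape(-1, cube.shape[-1])

pca = PCA(n_components=10).fit(X)
X_pca = pca.transform(X)
print(pca.explained_variance_ratio_.cumsum()[-1])   # variance retained

X_ica = FastICA(n_components=10, max_iter=500, random_state=0).fit_transform(X)

# MNF has no scikit-learn implementation; conceptually it is a
# noise-whitening of the data (noise covariance often estimated from
# differences of neighboring pixels) followed by PCA on the whitened cube.
```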
In this paper, we introduce a new unsupervised classifier for hyperspectral images (HSIs) using image segmentation and spectral unmixing. In the proposed method, first, the number of classes is set equal to the number of endmembers. Second, the endmember matrix is defined. Third, the abundance fraction maps are extracted. Fourth, an initial ground truth is constructed by assigning each pixel to the class with the maximum absolute abundance fraction. Fifth, each pixel whose eight neighbors (vertical, horizontal and diagonal) share its class is a good candidate for training data; some of these candidate pixels are then randomly selected as final training data, and the remaining pixels are used for testing. Finally, a support vector machine is applied to the HSI, and the construction of the ground truth is iteratively repeated. In order to validate the efficiency of the proposed algorithm, two real HSI datasets are used. The obtained classification results are compared with several state-of-the-art algorithms, and the classification accuracy of the proposed method is close to that of supervised algorithms.
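The neighbor-consistency rule in the fifth step is straightforward to express; a minimal sketch follows (border pixels are simply excluded here, an implementation choice not stated in the abstract):

```python
import numpy as np

def training_candidates(labels: np.ndarray) -> np.ndarray:
    """Mark pixels whose 8 neighbors all share their label (sketch)."""
    ok = np.ones(labels.shape, dtype=bool)
    ok[0, :] = ok[-1, :] = False          # exclude the image border
    ok[:, 0] = ok[:, -1] = False
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy or dx:
                shifted = np.roll(np.roll(labels, dy, axis=0), dx, axis=1)
                ok &= (shifted == labels)
    return ok
```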
We address the problem of hyperspectral image (HSI) pixel partitioning using nearest-neighbor density-based (NN-DB) clustering methods. NN-DB methods are able to cluster objects without specifying the number of clusters to be found. Within the NN-DB approach, we focus on deterministic methods, e.g. ModeSeek, knnClust, and GWENN (standing for Graph WatershEd using Nearest Neighbors). These methods only require the availability of a k-nearest neighbor (kNN) graph based on a given distance metric. Recently, a new DB clustering method, called Density Peak Clustering (DPC), has received much attention, and kNN versions of it have quickly followed and shown their efficiency. However, NN-DB methods still suffer from the difficulty of obtaining the kNN graph, due to the quadratic complexity with respect to the number of pixels. This is why GWENN was embedded into a multiresolution (MR) scheme to bypass the computation of the full kNN graph over the image pixels. In this communication, we propose to extend the MR-GWENN scheme in three respects. Firstly, similarly to knnClust, the original labeling rule of GWENN is modified to account for local density values, in addition to the labels of previously processed objects. Secondly, we set up a modified NN search procedure within the MR scheme, in order to stabilize the number of clusters found from the coarsest to the finest spatial resolution. Finally, we show that these extensions can be easily adapted to the three other NN-DB methods (ModeSeek, knnClust, knnDPC) for pixel clustering in large HSIs. Experiments are conducted to compare the four NN-DB methods for pixel clustering in HSIs. We show that NN-DB methods can outperform a classical clustering method such as fuzzy c-means (FCM) in terms of classification accuracy, relevance of the found clusters, and clustering speed. Finally, we demonstrate the feasibility and evaluate the performance of NN-DB methods on a very large image acquired by our AISA Eagle hyperspectral imaging sensor.
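To make the NN-DB idea concrete, the sketch below implements a ModeSeek-style rule on a plain kNN graph: every point links to the densest point in its neighborhood, and following the links to their fixed points yields the cluster modes, with no preset number of clusters. Taking density as the inverse kNN radius is one common choice, assumed here:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_mode_seek(X, k=10):
    """ModeSeek-style kNN clustering (sketch)."""
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    dist, idx = nn.kneighbors(X)            # idx includes the point itself
    density = 1.0 / (dist[:, -1] + 1e-12)   # inverse kNN radius
    # Each point links to the densest point among its k neighbors.
    parent = idx[np.arange(len(X)), np.argmax(density[idx], axis=1)]
    # Follow the links until every point reaches a fixed point (a mode).
    for _ in range(len(X)):
        new = parent[parent]
        if np.array_equal(new, parent):
            break
        parent = new
    return parent                            # cluster id = index of its mode
```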
In signal and image processing, principal component analysis (PCA) is often used for dimensionality reduction and feature extraction in pre-processing steps prior to, for example, classification. In remote sensing image analysis, PCA is often replaced by maximum autocorrelation factor (MAF) or minimum noise fraction (MNF) analysis, because MAF and MNF analyses incorporate spatial information in the orthogonalization of the multivariate data, which is conceptually more satisfactory and typically gives better results. In this contribution, the autocorrelation between the multivariate data and a spatially shifted version of the same data used in MAF analysis is replaced by mutual information, an information-theoretical measure based on entropy and the Kullback-Leibler divergence. This potentially gives a more detailed decomposition of the data. Also, the orthogonality between already found components and higher-order components required in MAF analysis is replaced by a requirement of minimum mutual information between components. The sketched methods are applied to the well-known AVIRIS (https://aviris.jpl.nasa.gov/) Indian Pines data.
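Since the contribution rests on replacing autocorrelation with mutual information between an image and a shifted copy of itself, a small histogram-based MI estimator may be a useful reference:

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Histogram estimate of the mutual information (in nats) between two
    images, e.g. a band and a spatially shifted copy of it."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                      # skip empty histogram cells
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

band = np.random.rand(128, 128)
shifted = np.roll(band, 1, axis=0)    # one-pixel vertical shift
print(mutual_information(band, shifted))
```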
In this paper, a new supervised classification method for hyperspectral images is introduced. In the proposed method, first, the 2D non-subsampled shearlet transform is applied to each spectral band of the hyperspectral image. After that, the minimum noise fraction transform reduces the dimension of the shearlet coefficient sub-bands. Finally, a support vector machine is used to classify the hyperspectral images based on the extracted features. In order to validate the efficiency of the proposed algorithm, two real hyperspectral image datasets are selected. The obtained classification results are compared with some state-of-the-art classification algorithms, and the proposed method reaches the highest classification accuracy.
Hyperspectral images acquired by remote sensing systems are generally degraded by noise and can sometimes be more severely degraded by blur. When no knowledge is available about the degradations present in the original image, only blind restoration methods can be considered. By blind, we mean no knowledge of the blur point spread function (PSF), the original latent channel, or the noise level. In this study, we address the blind restoration of the degraded channels component-wise, according to a sequential scheme: for each degraded channel, the blur PSF is estimated in a first stage, and the degraded channel is deconvolved in a second and final stage using the previously estimated PSF. We propose a new component-wise blind method for effective and accurate estimation of the blur PSF. This method follows recent approaches suggesting the detection, selection and use of sufficiently salient edges in the currently processed channel to support the regularized blur PSF estimation. Several modifications are beneficially introduced in our work: a new selection of salient edges obtained by adequately thresholding the cumulative distribution of their gradient magnitudes, and quasi-automatic, spatially adaptive tuning of the involved regularization parameters. To prove the applicability and higher efficiency of the proposed method, we compare it against the method it originates from and four representative edge-sparsifying regularized methods of the literature already assessed in a previous work. Our attention is mainly paid to the objective analysis (via the l1-norm) of the blur PSF estimation accuracy. The tests are performed on a synthetic hyperspectral image, built from samples of classified areas of a real-life hyperspectral image in order to benefit from a realistic spatial distribution of reference spectral signatures to recover after synthetic degradation. The synthetic hyperspectral image has been successively degraded with eight real blurs taken from the literature, each with a different support size. Conclusions, practical recommendations and perspectives are drawn from the results obtained experimentally.
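The salient-edge selection step can be sketched compactly: threshold the cumulative distribution of gradient magnitudes and keep only the strongest fraction (the retained fraction below is an arbitrary placeholder, not the paper's tuned value):

```python
import numpy as np

def salient_edge_mask(channel, keep=0.02):
    """Select salient edges by thresholding the cumulative distribution
    of gradient magnitudes (sketch): keep the strongest `keep` fraction."""
    gy, gx = np.gradient(channel.astype(float))
    mag = np.hypot(gx, gy)
    thresh = np.quantile(mag, 1.0 - keep)    # CDF threshold
    return mag >= thresh
```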
The very high spectral resolution of Hyperspectral Images (HSIs) enables the identification of materials with subtle spectral differences and the extraction of subpixel information. However, increasing the spectral resolution often implies an increase in the noise linked to the image formation process. This degradation mechanism limits the quality of the extracted information and its potential applications. Since HSIs represent natural scenes and their spectral channels are highly correlated, they are characterized by a high level of self-similarity and are well approximated by low-rank representations. These characteristics underlie the state of the art in HSI denoising. However, in the presence of rare pixels, the denoising performance of those methods is not optimal, and, in addition, it may compromise the future detection of those pixels. To address these hurdles, we introduce RhyDe (Robust hyperspectral Denoising), a powerful HSI denoiser which implements an explicit low-rank representation, promotes self-similarity, and, by using a form of collaborative sparsity, preserves rare pixels. The denoising and detection effectiveness of the proposed robust HSI denoiser is illustrated using semi-real data.
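As a point of reference for the low-rank idea, the plain baseline that such denoisers build upon can be sketched in a few lines (this is not RhyDe itself, which adds self-similarity and collaborative sparsity on top):

```python
import numpy as np

def lowrank_denoise(cube, rank=8):
    """Plain low-rank HSI denoising baseline (sketch): unfold the cube to
    (pixels, bands), keep the top singular components, and refold."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return X_hat.reshape(h, w, b)
```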
During the past years, several compressive spectral imaging techniques have been developed. With these techniques, an optically compressed version of the spectral datacube is captured, so the information about the object and targets is captured in a lower-dimensional space. A question that arises is whether the reduction of the captured space affects target detection performance. The answer depends on the compressive spectral imaging technique employed: with most techniques, target detection performance deteriorates. We show that our recently introduced technique, dubbed Compressive Sensing Miniature Ultra-Spectral Imaging (CSMUSI), yields target detection and false detection rates similar to those of conventional hyperspectral cameras.
Efficient water management in agriculture requires an accurate estimation of evapotranspiration (ET). Several surface energy balance models are available that provide daily ET estimates (ETd), spatially and temporally distributed, for different crops over wide areas. These models need a thermal infrared spectral band (gathered by remote sensors) to estimate the sensible heat flux from the surface temperature. However, this spectral band is not available for most current operational remote sensors. Despite the good results provided by machine learning (ML) methods in many different areas, few works have applied these approaches to forecasting spatially and temporally distributed ETd when the aforementioned information is missing. Moreover, such methods do not exploit the land surface characteristics and the relationships among land covers, which produces estimation errors. In this work, we have developed and evaluated a methodology that provides spatially distributed estimates of ETd without thermal information by means of Convolutional Neural Networks.
Individual-tree-level inventory performed using high-density, multi-return airborne Light Detection and Ranging (LiDAR) systems provides both internal and external geometric details of individual tree crowns. Among these, parameters such as the stem location and the Diameter at Breast Height (DBH) of the stem are very relevant for accurate biomass and forest growth estimation. However, methods that can accurately estimate these parameters along the vertical canopy are lacking in the state of the art. Thus, we propose a method to locate and model the stem by analyzing the empty volume that appears, due to the stem, within the 3D high-density LiDAR point cloud of a conifer. In high-density LiDAR data, the points most proximal to the stem location in the upper half of the crown are very likely due to laser reflections from the stem and/or the branch-stem junctions. By accurately locating these points, we can define the lattice of points representing branch-stem junctions and use it to model the empty volume associated with the stem location. We identify these points using a state-of-the-art internal crown structure modelling technique that models individual conifer branches in high-density LiDAR data. Under the assumption that a conifer stem can be closely modelled by a cone, we fit a geometric shape to the lattice of branch-stem junction points by regression. The parameters of the geometric shape are used to accurately estimate the diameter at breast height and the height of the tree. The experiments were performed on a set of one hundred conifers from six dominant European conifer species, for which the height and the DBH were known. The results prove the method to be accurate.
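Under the cone assumption, the regression reduces to a linear least-squares fit of stem radius against height; the sketch below further assumes a vertical stem axis through the centroid of the junction points and breast height at 1.3 m (assumptions of this sketch, not statements from the paper):

```python
import numpy as np

def fit_stem_cone(junctions):
    """Fit a vertical cone radius(z) = r0 - taper*z to branch-stem
    junction points by least squares (sketch; z = 0 at the ground)."""
    xyz = np.asarray(junctions, dtype=float)      # (n, 3) points
    axis_xy = xyz[:, :2].mean(axis=0)             # assumed stem axis
    r = np.linalg.norm(xyz[:, :2] - axis_xy, axis=1)
    A = np.column_stack([np.ones_like(r), -xyz[:, 2]])
    (r0, taper), *_ = np.linalg.lstsq(A, r, rcond=None)
    dbh = 2.0 * (r0 - taper * 1.3)                # diameter at 1.3 m
    return r0, taper, dbh
```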
Building area calculation from LiDAR points is still a difficult task with no clear solution. The varied characteristics of buildings, such as shape or size, have made the process too complex to automate. However, several algorithms and techniques have been used to obtain an approximate hull. 3D building reconstruction and urban planning are examples of important applications that benefit from accurate building footprint estimations. In this paper, we have carried out a study of the accuracy of building footprint estimation from LiDAR points. The analysis focuses on the processing steps following object recognition and classification, assuming that the labeling of building points has been performed beforehand. We then perform an in-depth analysis of the influence of point density on the accuracy of the building area estimation. In addition, a set of buildings of different sizes and shapes was manually classified so that it can be used as a benchmark.
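A minimal baseline for turning labeled building points into an area estimate is a hull computation; the convex hull below (via SciPy) overestimates non-convex footprints, which is precisely why concave-hull/alpha-shape variants are usually preferred:

```python
import numpy as np
from scipy.spatial import ConvexHull

def footprint_area(points_xy):
    """Convex-hull footprint area from labeled building points (sketch).
    A crude upper bound for non-convex buildings; alpha shapes follow
    re-entrant outlines more closely."""
    hull = ConvexHull(np.asarray(points_xy))
    return hull.volume    # for 2-D input, .volume is the enclosed area
```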
Recently, there has been a noteworthy increase in the use of images acquired by unmanned aerial vehicles (UAVs) in different remote sensing applications. Sensors carried on UAVs have lower operational costs and complexity than other remote sensing platforms, quicker turnaround times, and higher spatial resolution. Concerning this last aspect, particular attention has to be paid to the limitations of classical pixel-based algorithms when they are applied to high-resolution images. The objective of this study is to investigate the capability of an OBIA methodology developed for the automatic generation of a digital terrain model of an agricultural area from a Digital Elevation Model (DEM) and multispectral images acquired by a Parrot Sequoia multispectral sensor on board an eBee SQ agricultural drone. The proposed methodology uses a superpixel approach to obtain context and elevation information, which is used to merge superpixels and at the same time eliminate objects such as trees, in order to generate a Digital Terrain Model (DTM) of the analyzed area. The obtained results show the potential of the approach, in terms of accuracy, when compared with a DTM generated by manually eliminating objects.
Cadasters, together with the land registry, form a core ingredient of any land administration system. Cadastral maps comprise the extent, ownership and value of land, which are essential for recording and updating land records. Traditional methods for cadastral surveying and mapping often prove to be labor-, cost- and time-intensive, so alternative approaches are being researched for creating such maps. With the advent of very high resolution (VHR) imagery, satellite remote sensing offers a tremendous opportunity for the (semi-)automated detection of cadastral boundaries. In this paper, we explore the potential of the object-based image analysis (OBIA) approach for this purpose by applying two segmentation methods, i.e. MRS (multi-resolution segmentation) and ESP (estimation of scale parameter), to identify visible cadastral boundaries. Results show that a balance between a high percentage of completeness and correctness is hard to achieve: a low error of commission often comes with a high error of omission. However, we conclude that the resulting segments/land use polygons can potentially be used as a base for further aggregation into tenure polygons using participatory mapping.
Crop maps are essential inputs for the agricultural planning done at various governmental and agribusiness agencies. Remote sensing offers timely and cost-efficient technologies to identify and map crop types over large areas. Among the plethora of classification methods, the Support Vector Machine (SVM) and Random Forest (RF) are widely used because of their proven performance. In this work, we study the synergic use of both methods by introducing a random forest kernel (RFK) into an SVM classifier. A time series of multispectral WorldView-2 images acquired over Mali (West Africa) in 2014 was used to develop our case study. Ground truth containing five common crop classes (cotton, maize, millet, peanut, and sorghum) was collected at 45 farms and used to train and test the classifiers. An SVM with the standard Radial Basis Function (RBF) kernel, an RF, and an SVM-RFK were trained and tested over 10 random training and test subsets generated from the ground data. Results show that the newly proposed SVM-RFK classifier can compete with both RF and SVM-RBF: the overall accuracies based on the spectral bands alone are 83%, 82% and 83%, respectively. Adding vegetation indices to the analysis results in classification accuracies of 82%, 81% and 84% for SVM-RFK, RF, and SVM-RBF, respectively. Overall, the newly tested RFK can compete with the SVM-RBF and RF classifiers in terms of classification accuracy.
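One common way to build such a kernel is to define K(x, y) as the fraction of trees in which x and y fall in the same leaf, plugged into an SVM as a precomputed kernel. The paper's exact RFK construction may differ; the sketch below uses toy data in place of the WorldView-2 spectra:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def rf_kernel(leaves_a, leaves_b):
    """Random forest kernel (one common definition): K(x, y) = fraction
    of trees in which x and y land in the same leaf."""
    return (leaves_a[:, None, :] == leaves_b[None, :, :]).mean(axis=2)

# Toy data standing in for the pixel spectra and five crop classes.
X_tr, y_tr = np.random.rand(100, 8), np.random.randint(0, 5, 100)
X_te = np.random.rand(20, 8)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
L_tr, L_te = rf.apply(X_tr), rf.apply(X_te)   # (samples, trees) leaf ids

svm = SVC(kernel='precomputed').fit(rf_kernel(L_tr, L_tr), y_tr)
pred = svm.predict(rf_kernel(L_te, L_tr))
```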
MODIS (MODerate resolution Imaging Spectroradiometer) daily surface reflectance data are distributed with one of the most complete sets of quality ancillary data. Such quality information is essential for automatically selecting the highest-quality MODIS daily images, for example using geostatistical analysis of the image spatial pattern. However, the success of this automatic selection may well depend on the spectral information of each MODIS band. This work studies the influence of MODIS spectral bands on the automatic identification of high-quality daily images by analyzing their variograms, aiming to identify the most suitable spectral band (or band combination) for the spatial characterization of a given geographical region. The analysis tests the influence of each reflectance band of the 2009 MOD09GA Daily Surface Reflectance product, and of the first component of its Principal Component Analysis, over an area of 32,000 km2 in Catalonia (northeast of the Iberian Peninsula). Specifically, the combination of quality data and variogram analysis allows the detection of different anomalies through the correspondence between the variability among the pixels and the fitted variogram parameters: nugget, sill and range. The variogram analysis is reaffirmed as an extremely useful approach for the automatic selection of high-quality images, while highlighting the need for high-performance computing techniques for such huge processing tasks. Finally, it reveals that it is crucial to select the appropriate spectral band in order to not only optimize, but substantially improve, the automatic selection of remote sensing images using geostatistical analysis based on variogram tools.
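For readers unfamiliar with the tool, an empirical variogram is simply half the mean squared difference between pixels at each separation lag; the one-directional sketch below produces the values to which nugget, sill and range models are then fitted:

```python
import numpy as np

def empirical_variogram(img, max_lag=20):
    """1-D empirical variogram along image rows (sketch): half the mean
    squared difference between pixels separated by each lag h."""
    gamma = np.empty(max_lag)
    for h in range(1, max_lag + 1):
        d = img[:, h:].astype(float) - img[:, :-h]
        gamma[h - 1] = 0.5 * np.mean(d ** 2)
    return gamma   # fit nugget, sill and range to these values downstream
```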
The processing chain of Sentinel-2 MultiSpectral Instrument (MSI) data involves filtering and compression stages that modify the MSI sensor noise. As a result, the noise in the Sentinel-2 Level-1C data distributed to users is processed noise. We demonstrate that the processed-noise variance model is bivariate: noise variance depends on image intensity (caused by the signal-dependency of photon-counting detectors) and on the signal-to-noise ratio (SNR; caused by filtering/compression). To provide information on the processed-noise parameters, which is missing from the Sentinel-2 metadata, we propose to use a blind noise parameter estimation approach. Existing methods are restricted to univariate noise models. Therefore, we propose an extension of the existing vcNI+fBm blind noise parameter estimation method to a multivariate noise model, mvcNI+fBm, and apply it to each band of Sentinel-2A data. The obtained results clearly demonstrate that the noise variance is affected by filtering/compression for SNR below about 15. The processed noise variance is reduced by a factor of 2-5 in homogeneous areas compared to the noise variance at high SNR values. Estimates of the noise variance model parameters are provided for each Sentinel-2A band. The Sentinel-2A MSI Level-1C noise models obtained in this paper could be useful for end users and researchers working in a variety of remote sensing applications.
The detection of ships from remote sensing data has become an essential task for maritime security. The variety of application scenarios includes piracy, illegal fishery, ocean dumping and ships carrying refugees. While techniques using data from SAR sensors for ship detection are widely common, there is little literature discussing algorithms based on imagery from optical camera systems. We have developed a ship detection algorithm for optical pushbroom data. It takes advantage of the special detector assembly of most such scanners, which, apart from the detection of a ship, also allows the calculation of its heading from a single acquisition. The proposed algorithm for the detection of moving ships was developed with RapidEye imagery. It consists mainly of three steps: the creation of a land-water mask, object extraction, and the deeper examination of each single object. The latter step is built from several spectral and geometric filters, making heavy use of the inter-channel displacement typical of pushbroom sensors with multiple CCD lines, finally yielding a set of ships and their directions of movement. The working principle of time-shifted pushbroom sensors and the developed algorithm are explained in detail. Furthermore, we present our first results and give an outlook on future improvements.
Wide area motion imagery (WAMI) acquired by an airborne multi-camera sensor enables continuous monitoring of large urban areas. Each image can cover regions of several square kilometers and contain thousands of vehicles. Reliable vehicle tracking in this imagery is an important prerequisite for surveillance tasks, but remains challenging due to the low frame rate and small object size. Most WAMI tracking approaches rely on moving-object detections generated by frame differencing or background subtraction. These detection methods fail when objects slow down or stop. Recent approaches for persistent tracking compensate for missing motion detections by combining a detection-based tracker with a second tracker based on appearance or local context. In order to avoid the additional complexity introduced by combining two trackers, we employ an alternative single-tracker framework that is based on multiple hypothesis tracking and recovers missing motion detections with a classifier-based detector. We integrate an appearance-based similarity measure, merge handling, vehicle-collision tests, and clutter handling to adapt the approach to the specific context of WAMI tracking. We apply the tracking framework to a region of interest of the publicly available WPAFB 2009 dataset for quantitative evaluation; a comparison to other persistent WAMI trackers demonstrates state-of-the-art performance of the proposed approach. Furthermore, we analyze in detail the impact of different object detection methods and detector settings on the quality of the output tracking results. For this purpose, we choose four different motion-based detection methods that vary in detection performance and computation time to generate the input detections. As detector parameters can be adjusted to achieve different precision and recall performance, we combine each detection method with different detector settings that yield (1) high precision and low recall, (2) high recall and low precision, and (3) the best f-score. Comparing the tracking performance achieved with all generated sets of input detections allows us to quantify the sensitivity of the tracker to different types of detector errors and to derive recommendations for detector and parameter choice.
Interpretation of high-resolution satellite images has been difficult enough that skilled interpreters have had to check the images manually, because of the following issues. One is the requirement for a high detection accuracy rate. The other is the variety of targets: taking ships as an example, there are many kinds, such as boats, cruise ships, cargo ships, aircraft carriers, and so on. Furthermore, there are objects of similar appearance throughout an image, so it is often difficult, even for skilled interpreters, to distinguish what object a group of pixels really composes. In this paper, we explore the feasibility of object extraction from high-resolution satellite images leveraging deep learning, focusing especially on ship detection. We calculated the detection accuracy using WorldView-2 images. First, we collected training images labelled as "ship" and "not ship". After preparing the training data, we defined a deep neural network model to judge whether ships are present, and trained it with about 50,000 training images per label. Subsequently, we scanned the evaluation image with windows of different resolutions and extracted the "ship" images. The experimental results show the effectiveness of deep learning based object detection.
This paper presents a novel class-sensitive hashing technique in the framework of large-scale content-based remote sensing (RS) image retrieval. The proposed technique aims at representing each image with multi-hash codes, each of which corresponds to a primitive (i.e., land cover class) present in the image. To this end, the proposed method consists of a three-step algorithm. The first step is devoted to characterizing each image by primitive-class descriptors. These descriptors are obtained through a supervised approach, which initially extracts the image regions and their descriptors, which are then associated with the primitives present in the images. This step requires a set of annotated training regions to define the primitive classes. A correspondence between the regions of an image and the primitive classes is built based on the probability of each primitive class being present at each region. All the regions belonging to a specific primitive class with a probability higher than a given threshold are highly representative of that class; thus, the average value of the descriptors of these regions is used to characterize that primitive. In the second step, the descriptors of the primitive classes are transformed into multi-hash codes to represent each image. This is achieved by adapting the kernel-based supervised locality-sensitive hashing method to multi-code hashing problems. The first two steps of the proposed technique, unlike standard hashing methods, allow one to represent each image by a set of primitive-class-sensitive descriptors and their hash codes. Then, in the last step, the images in the archive that are very similar to a query image are retrieved based on a multi-hash-code-matching scheme. Experimental results obtained on an archive of aerial images confirm the effectiveness of the proposed technique in terms of retrieval accuracy when compared to standard hashing methods.
We present a filtering framework based on a new statistical model for Single-Look Complex (SLC) Synthetic Aperture Radar (SAR) data, the Scaled Normal-Inverse Gaussian (SNIG). The real and imaginary parts of the SLC image are modeled as mixtures of SNIGs, and the clustering of the mixture components is conducted using a Stochastic Expectation-Maximization (SEM) algorithm. Model parameters are associated with each pixel according to its class, thus producing parametric images of the entire scene. A closed-form Maximum A Posteriori (MAP) filter then delivers a despeckled estimate of the image. The method is tested on RADARSAT-2 data, HV polarization, representing images of icebergs surrounded by ocean water off the coast of Hopen Island (Svalbard archipelago). After processing, the iceberg Contrast-to-Noise Ratio (CNR), defined relative to the open-water clutter, is improved compared to the Single-Look Intensity (SLI) image, increasing from 11 to 30 for one of the targets, and the Coefficient of Variation (CV) of the clutter is reduced to a fraction of 0.01 relative to the same reference. We conclude that the proposed method shows potential for improving iceberg detection in open water.
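For reference, the two evaluation metrics can be computed as below. These are common definitions, stated here as an assumption; the paper's exact formulas may differ.

    import numpy as np

    def cnr(image, target_mask, clutter_mask):
        # Contrast-to-Noise Ratio of a target relative to clutter:
        # (mean target - mean clutter) / std of clutter
        clutter = image[clutter_mask]
        return (image[target_mask].mean() - clutter.mean()) / clutter.std()

    def coefficient_of_variation(image, clutter_mask):
        # CV of the clutter: std / mean
        clutter = image[clutter_mask]
        return clutter.std() / clutter.mean()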
Coastline detection in synthetic aperture radar (SAR) images is crucial in many application fields, from coastal erosion monitoring to navigation, from damage assessment to security planning for port facilities. The backscattering difference between land and sea is not always evident in SAR imagery, due to severe speckle noise, especially in 1-look data with high spatial resolution, high sea state, or complex coastal environments. This paper presents an unsupervised, computationally efficient solution to extract the coastline from a single single-polarization 1-look SAR image. Extensive tests on Spotlight COSMO-SkyMed images of complex coastal environments and an objective assessment demonstrate the validity of the proposed procedure, which is compared to state-of-the-art methods through visual results and through an objective evaluation of the distance between the detected coastline and the true coastline provided by regional authorities.
This paper presents the use of a general-purpose electromagnetic simulator, CST, to simulate realistic synthetic aperture radar (SAR) raw data of three-dimensional objects. The raw data are later focused in MATLAB using the range-Doppler algorithm. Within CST Microwave Studio, a replica of the TerraSAR-X chirp signal is incident upon a modeled Corner Reflector (CR) whose design and material properties are identical to those of the real one. After defining the mesh and other appropriate settings, the reflected wave is measured at several distant points along a line parallel to the viewing direction. This is analogous to an array antenna and is synthesized to create a long aperture for SAR processing. The time-domain solver in CST is based on the solution of the differential form of Maxwell's equations. Data exported from CST are arranged into a 2-D matrix with range and azimuth axes. The Hilbert transform is applied to convert the real signal to complex data with phase information. Range compression, range cell migration correction (RCMC), and azimuth compression are applied in the time domain to obtain the final SAR image. This simulation can provide valuable information to clarify which real-world objects produce images suitable for high-accuracy identification in SAR images.
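The processing chain described here (analytic-signal conversion, range compression, azimuth compression) can be summarized in the bare-bones range-Doppler sketch below. RCMC is omitted for brevity, the reference functions are assumed given, and the matched filtering is done via FFT for compactness, whereas the paper applies the compressions in the time domain.

    import numpy as np
    from scipy.signal import hilbert

    def focus(raw_real, chirp_ref, az_ref):
        # raw_real: real-valued 2-D matrix (azimuth x range) exported from
        # the simulator; the Hilbert transform recovers the complex
        # (analytic) signal with phase information.
        raw = hilbert(raw_real, axis=1)
        # Range compression: matched filter along the range axis
        R = np.fft.fft(raw, axis=1)
        Hr = np.conj(np.fft.fft(chirp_ref, raw.shape[1]))
        rc = np.fft.ifft(R * Hr, axis=1)
        # Azimuth compression (RCMC omitted in this sketch)
        A = np.fft.fft(rc, axis=0)
        Ha = np.conj(np.fft.fft(az_ref, raw.shape[0]))[:, None]
        return np.fft.ifft(A * Ha, axis=0)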
This paper presents an interferometric synthetic aperture radar (InSAR) imaging method based on an L1 regularization reconstruction model for SAR complex images and raw data via complex approximated message passing (CAMP) with a joint reconstruction model. As an iterative recovery algorithm for L1 regularization, CAMP can obtain not only a sparse estimate of the considered scene, as other regularization recovery algorithms do, but also a non-sparse solution with preserved background information, and thus can be used for InSAR processing. The contributions of the proposed method are as follows. On the one hand, as multiple SAR complex images are strongly correlated, single-channel independent reconstruction via Lq regularization cannot preserve the interferometric phase information, whereas the proposed mixed-norm-based L1 regularization joint reconstruction model via the CAMP algorithm ensures the preservation of interferometric phase information among multiple channels. On the other hand, the interferogram reconstructed by the proposed CAMP-based InSAR imaging with the joint reconstruction model reduces noise more efficiently than conventional matched filtering (MF). Experiments carried out on simulated and real data confirm the feasibility of the L1 regularization joint reconstruction model via CAMP for InSAR processing, with preserved interferometric phase information and better noise reduction performance.
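As orientation for the recovery loop, the sketch below shows complex L1 regularization via iterative soft thresholding (ISTA). This is a simplified stand-in, not CAMP itself: the actual CAMP iteration adds an Onsager correction term and retains a non-sparse estimate, which the abstract identifies as the key to preserving background and phase information.

    import numpy as np

    def soft_complex(x, t):
        # Complex soft threshold: shrink the magnitude, keep the phase
        mag = np.abs(x)
        scale = np.maximum(mag - t, 0.0) / np.where(mag > 0, mag, 1.0)
        return scale * x

    def l1_recover(y, A, lam=0.1, n_iter=200):
        # min_x ||y - A x||^2 + lam ||x||_1 for complex data (ISTA)
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1], dtype=complex)
        for _ in range(n_iter):
            x = soft_complex(x + step * (A.conj().T @ (y - A @ x)),
                             lam * step)
        return x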
Subglacial lakes decouple the ice sheet from the underlying bedrock, thus facilitating the sliding of the ice masses towards the borders of the continents and consequently raising the sea level. This has motivated increasing attention to the detection of subglacial lakes. So far, about 70% of the total number of subglacial lakes in Antarctica have been detected by analysing radargrams acquired by radar sounder (RS) instruments. Although the amount of radargrams is expected to increase drastically, from both airborne campaigns and possible future Earth observation RS missions, currently the main approach to the detection of subglacial lakes in radargrams is visual interpretation. This approach is subjective and extremely time consuming, and thus difficult to apply to a large amount of radargrams. In order to address the limitations of visual interpretation and to assist glaciologists in better understanding the relationship between the subglacial environment and the climate system, in this paper we propose a technique for the automatic detection of subglacial lakes. The main contribution of the proposed technique is the extraction of features for discriminating between lake and non-lake basal interfaces. In particular, we propose the extraction of features that locally capture the topography of the basal interface, and the shape and correlation of the basal waveforms. The extracted features are then given as input to a supervised binary classifier based on a Support Vector Machine to perform the automatic subglacial lake detection. The effectiveness of the proposed method is proven both quantitatively and qualitatively by applying it to a large dataset acquired in East Antarctica by the MultiChannel Coherent Radar Depth Sounder.
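The final classification stage can be sketched with scikit-learn as below; the feature matrix is a random placeholder standing in for the topography-, shape-, and correlation-based features the paper extracts.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Placeholder data: one row per basal-interface sample, columns
    # standing in for the paper's features; labels: 1 = lake, 0 = non-lake.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))
    y = rng.integers(0, 2, size=500)

    clf = make_pipeline(StandardScaler(),
                        SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(X, y)
    lake_pred = clf.predict(X)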
The amount of radar sounder data, which are used to analyze the subsurface of icy environments (e.g., the Poles of Earth and Mars), is dramatically increasing from both airborne campaigns at the ice sheets and satellite missions to other planetary bodies. However, the main approach to the investigation of such data is visual interpretation, which is subjective and time consuming. Moreover, the few available automatic techniques have been developed for analyzing highly reflective subsurface targets, e.g., ice layers and the basal interface. Besides highly reflective targets, glaciologists have also shown great interest in the analysis of non-reflective targets, such as the echo-free zone in ice sheets and the reflection-free zone in the subsurface of the South Pole of Mars. However, in the literature there is no dedicated automatic technique for the analysis of non-reflective targets. To address this limitation, we propose an automatic classification technique for the identification of non-reflective targets in radar sounder data. The method is made up of two steps: i) feature extraction, which is the core of the method, and ii) automatic classification of subsurface targets. We initially show that the features commonly employed for the analysis of the radar signal (e.g., statistical and texture-based features) are ineffective for the identification of non-reflective targets. Thus, for feature extraction, we propose to exploit structural information based on the morphological closing profile, and we show the effectiveness of such features in discriminating non-reflective targets from the other ice subsurface targets. In the second step, a random forest classifier is used to perform the automatic classification. Our experimental results, conducted using two data sets from Central Antarctica and the South Pole of Mars, point out the effectiveness of the proposed technique for the accurate identification of non-reflective targets.
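A sketch of both steps under stated assumptions: the closing profile is built from grey-scale morphological closings at increasing window sizes (the exact structuring elements are the paper's choice, not reproduced here), and a random forest classifies each pixel. The radargram and labels are random placeholders only to make the sketch runnable.

    import numpy as np
    from scipy.ndimage import grey_closing
    from sklearn.ensemble import RandomForestClassifier

    def closing_profile(radargram, sizes=(3, 5, 9, 15)):
        # Morphological closings at increasing structuring-element sizes;
        # the per-pixel profile carries structural, not textural, information.
        return np.stack([grey_closing(radargram, size=(s, s))
                         for s in sizes], axis=-1)

    rng = np.random.default_rng(0)
    radargram = rng.random((128, 256))
    labels = rng.integers(0, 2, radargram.size)  # 1 = non-reflective target

    feats = closing_profile(radargram).reshape(-1, 4)
    rf = RandomForestClassifier(n_estimators=100).fit(feats, labels)
    target_map = rf.predict(feats).reshape(radargram.shape)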
We present the implementation of a procedure to adapt an Asymmetric Wiener Filtering (AWF) methodology, aimed at detecting and discarding ghost signals due to azimuth ambiguities in SAR images, to the case of X-band COSMO-SkyMed (CSK) images, in the framework of the SEASAFE (Slick Emissions And Ship Automatic Features Extraction) project developed at the Department of Science and Technology Innovation of the University of Piemonte Orientale, Alessandria, Italy. SAR is a useful tool for day-and-night monitoring of the sea surface in all weather conditions. The SEASAFE project is a software platform, developed in the IDL language, able to process C- and X-band SAR images with enhanced algorithm modules for land masking, sea pollution (oil spill) detection, and ship detection; wind and wave evaluation are also available. In this context, the need to identify and discard false alarms is a critical requirement, and azimuth ambiguity is one of the main causes of false alarms in the ship detection procedure. Many methods to deal with this problem have been proposed in the recent literature. After a review of different approaches to this problem, we describe the procedure to adapt the AWF approach presented in [1,2] to the case of X-band CSK images by implementing a selective-blocks approach.
Several approaches have been used in remote sensing to integrate images with different spectral and spatial resolutions in order to obtain fused, enhanced images. The objective of this research is threefold: to implement three image fusion techniques (High Pass Filter, Principal Component Analysis and Gram-Schmidt) in R; to apply these techniques to merging the multispectral and panchromatic bands of five images with different spatial resolutions; and to evaluate the results using the universal image quality index (Q index) and the ERGAS index. As regards the qualitative analysis, Landsat-7 and Landsat-8 show greater colour distortion with the three pansharpening methods, although the results for the other images were better. The Q index revealed that HPF fusion performs better for the QuickBird, IKONOS and Landsat-7 images, followed by GS fusion, whereas for the Landsat-8 and Natmur-08 images the results were more even. Regarding the ERGAS spatial index, the PCA algorithm performed better for the QuickBird, IKONOS, Landsat-7 and Natmur-08 images, followed closely by the GS algorithm; only for the Landsat-8 image did GS fusion give the best result. In the evaluation of the spectral components, HPF results tended to be better and PCA results worse, while the opposite was the case for the spatial components. Better quantitative results are obtained for the Landsat-7 and Landsat-8 images with the three fusion methods than for the QuickBird, IKONOS and Natmur-08 images. This contrasts with the qualitative evaluation, reflecting the importance of separating the two evaluation approaches (qualitative and quantitative). Significant disagreement may arise when different methodologies are used to assess the quality of an image fusion. Moreover, it is not possible to designate a given algorithm as the best a priori, not only because of the different characteristics of the sensors, but also because of different atmospheric conditions or peculiarities of the study areas, among other reasons.
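Of the two quality measures, ERGAS has a compact closed form, sketched below with its standard definition (an assumption here; implementations differ in whether the reference is the original MS image or a degraded product). The document's fusion code is in R, but for consistency with the other sketches this one is in Python.

    import numpy as np

    def ergas(reference, fused, ratio):
        # ERGAS = 100 * (h/l) * sqrt(mean_k(RMSE_k^2 / mu_k^2)), where
        # h/l is the pan-to-MS pixel-size ratio (e.g., 0.25 for 4:1)
        # and k runs over the spectral bands (last axis).
        terms = []
        for k in range(reference.shape[-1]):
            rmse2 = np.mean((reference[..., k] - fused[..., k]) ** 2)
            terms.append(rmse2 / np.mean(reference[..., k]) ** 2)
        return 100.0 * ratio * float(np.sqrt(np.mean(terms)))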
The microbolometer belongs to the group of thermal detectors and consists of a temperature-sensitive resistor exposed to the measured radiation flux. A bolometer array employs a pixel structure fabricated in silicon technology, with the detecting area defined by the size of a thin membrane, usually made of amorphous silicon (a-Si) or vanadium oxide (VOx). FPAs are made of a multitude of detector elements (for example, 384 × 288), where each individual detector has a different sensitivity and offset due to detector-to-detector spread in the FPA fabrication process; sensitivity and offset can additionally change with sensor operating temperature, bias voltage variation, or the temperature of the observed scene. The difference in sensitivity and offset among detectors (called non-uniformity), combined with the high sensitivity of the detectors, produces fixed-pattern noise (FPN) in the image. Fixed-pattern noise degrades parameters of infrared cameras such as sensitivity and NETD; it also degrades image quality, radiometric accuracy and temperature resolution. In order to objectively compare two infrared cameras, one must measure and compare their parameters on a laboratory test stand. One of the basic parameters for the evaluation of a designed camera is NETD. In order to examine the NETD, parameters such as sensitivity and pixel noise must be measured; to do so, one records the output signal from the camera in response to the radiation of blackbodies at two different temperatures. The article presents an application and a measuring stand for determining the parameters of microbolometer cameras. The measurements were compared with measurements performed at the Institute of Optoelectronics, MUT, on a METS test stand by CI Systems, which consists of an IR collimator, an IR standard source, a rotating wheel with test patterns, and a computer with a video grabber card and specialized software. The parameters of the thermal cameras were measured according to the norms and methods described in the literature.
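The two-blackbody measurement translates into code roughly as below. This assumes stacks of frames recorded at each blackbody temperature and uses one common definition of NETD (temporal noise divided by responsivity), which may differ in detail from the norms the authors cite.

    import numpy as np

    def netd(frames_t1, frames_t2, T1, T2):
        # frames_*: arrays of shape (n_frames, rows, cols) recorded while
        # viewing blackbodies at temperatures T1 and T2.
        resp = (frames_t2.mean(axis=0) - frames_t1.mean(axis=0)) / (T2 - T1)
        noise = frames_t1.std(axis=0, ddof=1)   # per-pixel temporal noise
        return float(np.median(noise / resp))   # summary of per-pixel NETD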
Synthetic aperture interferometric radiometers (SAIR) have been introduced for threat detection because of their high spatial resolution and harmlessness to the human body. Usually, the SAIR security instrument is about 3 meters away from the human body, which means that the SAIR works in the near field and the relationship between the visibility and the brightness temperature is no longer a Fourier transform. The contours and details of prohibited items are blurry in images rebuilt by traditional inversion methods, such as the Moore-Penrose method and Tikhonov regularization, so it is difficult to identify prohibited items. In this study, a regularization model based on gradient L1 norm minimization is proposed. In image processing, a contour can be regarded as a line along which the brightness temperature changes sharply in the perpendicular direction, so the gradient field of the brightness temperature map is appropriate for quantifying contours, and the L1 norm minimization model is able to guarantee inversion accuracy while enhancing the contours. A simulation for SAIR is performed to validate the contour enhancement imaging method: a complex scene is created, consisting of many small regions with different shapes and brightness temperature values corresponding to different prohibited items. The image reconstructed by the proposed method is compared with the results of the Moore-Penrose method and Tikhonov regularization, and shows better reconstructed image quality.
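The regularizer itself is simple to state: the L1 norm of the discrete gradient field of the brightness-temperature map. The snippet below computes only this term (anisotropic-TV style, a plausible reading of the abstract); the full inversion, which balances it against data fidelity, is not reproduced here.

    import numpy as np

    def gradient_l1(T):
        # L1 norm of the discrete gradient of brightness-temperature map T
        gx = np.diff(T, axis=1)   # horizontal differences
        gy = np.diff(T, axis=0)   # vertical differences
        return float(np.abs(gx).sum() + np.abs(gy).sum())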
This paper presents a coherent inverse synthetic-aperture imaging ladar (ISAL) system to obtain high-resolution images. A balanced coherent optical system was built in the laboratory with a binary phase-coded modulation transmit waveform, which differs from the conventional chirp. A fully digital signal processing solution is proposed, including both the quality phase gradient autofocus (QPGA) algorithm and the cubic phase function (CPF) algorithm. Several high-resolution, well-focused ISAL images of retro-reflecting targets are shown to validate the concepts. It is shown that high-resolution images can be achieved, and that the influences of vibrations of the platform, targets and radar can be automatically compensated by the distinctive laboratory system and digital signal processing.
Post-launch vicarious calibration is an important method that not only can be used to evaluate onboard calibrators but also allows a traceable knowledge of the absolute accuracy, although it has the drawback of low-frequency data collection due to high personnel and equipment costs. To overcome these problems, the CEOS Working Group on Calibration and Validation (WGCV) Infrared Visible Optical Sensors (IVOS) subgroup has proposed the Automated Radiative Calibration Network (RadCalNet) project. The Baotou site is one of the four demonstration sites of RadCalNet; its distinguishing characteristic is the combination of various natural scenes and artificial targets. At each artificial target and desert area, an automated spectrum measurement instrument obtains the surface reflected radiance spectra every 2 minutes with a spectral resolution of 2 nm. The aerosol optical thickness and column water vapour content are measured by an automatic sun photometer. To meet the requirements of RadCalNet, a surface reflectance spectrum retrieval method is used to generate the standard input files, with the support of the surface and atmospheric measurements. The top-of-atmosphere reflectance spectra are then derived from the input files. The results for the demonstration satellites, including Landsat 8 and Sentinel-2A, show good agreement between observed and calculated values.
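For orientation, converting at-sensor radiance to top-of-atmosphere reflectance follows the standard relation below. RadCalNet's actual processing propagates the measured surface reflectance and atmosphere upward, so this is only the textbook formula, not the network's algorithm.

    import numpy as np

    def toa_reflectance(L, esun, d_au, sza_deg):
        # rho = pi * L * d^2 / (ESUN * cos(theta_s)), with L the at-sensor
        # radiance, d the Earth-Sun distance in AU, ESUN the band solar
        # irradiance, and theta_s the solar zenith angle in degrees.
        return np.pi * L * d_au ** 2 / (esun * np.cos(np.radians(sza_deg)))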
The aim of this work is to evaluate the performance of three Bayesian networks widely used for supervised image classification. The developed structures are constructed using Kruskal's algorithm, which determines the maximum-weight spanning tree from the mutual information between the attributes. We started with the Bayesian naïve classifier (BNC), which assumes that there is no dependency between the attributes to classify. In order to relax this strong assumption, we tested the tree-augmented naïve Bayes classifier (TANC), where each feature has at most one other variable as a parent, and the forest-augmented naïve Bayes classifier (FANC), where the dependencies among attributes form a forest of trees rather than a single tree. These classifiers are evaluated using a multispectral image and a hyperspectral image in order to analyze the structural classifier complexity according to the number of attributes (4 and 10 spectral bands for the two images, respectively). The results obtained are compared with a state-of-the-art competitor, namely the SVM classifier. Images classified by TANC and FANC achieved higher accuracies than the other classifiers, including SVM. It is concluded that the choice of attribute dependencies contributes significantly to the discrimination of subjects on the ground. Thus, Bayesian networks appear to be a powerful tool for multispectral and hyperspectral image classification.
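The structure-learning backbone shared by TANC and FANC is a maximum-weight spanning tree over pairwise mutual information, which can be sketched as below; features are assumed already discretized, and thresholding the edge weights before Kruskal's algorithm would yield a forest instead of a tree.

    import networkx as nx
    import numpy as np
    from sklearn.metrics import mutual_info_score

    def tan_structure(X):
        # X: (n_samples, n_features) of discretized attribute values.
        # Build a complete graph weighted by mutual information and
        # extract the maximum-weight spanning tree with Kruskal.
        n = X.shape[1]
        G = nx.Graph()
        for i in range(n):
            for j in range(i + 1, n):
                G.add_edge(i, j,
                           weight=mutual_info_score(X[:, i], X[:, j]))
        return nx.maximum_spanning_tree(G, algorithm="kruskal")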
Band selection (BS) is one of the most important topics in hyperspectral image (HSI) processing. The objective of BS is to find a set of representative bands that can represent the whole image with low inter-band redundancy. Many types of BS algorithms have been proposed in the past. However, most of them can only be carried out in an off-line manner, meaning they can only be applied to pre-collected data. Such off-line methods are of little use for time-critical applications, particularly in disaster prevention and target detection. To tackle this issue, a new concept, called progressive sample processing (PSP), was proposed recently. PSP is an "on-line" framework in which a specific type of algorithm can process the currently collected data during data transmission under the band-interleaved-by-sample/pixel (BIS/BIP) protocol. This paper proposes an online BS method that integrates sparse-based BS into the PSP framework, called PSP-BS. In PSP-BS, BS is carried out by updating the BS result recursively, pixel by pixel, in much the same way that a Kalman filter updates data information in a recursive fashion. The sparse regression is solved by the orthogonal matching pursuit (OMP) algorithm, and the recursive equations of PSP-BS are derived using matrix decomposition. Experiments conducted on a real hyperspectral image show that PSP-BS can progressively output the BS status with very low computing time. Convergence of the BS results during transmission can be achieved quickly by using a rearranged pixel transmission sequence. This significant advantage allows BS to be implemented in real time as the HSI data are transmitted pixel by pixel.
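As a rough off-line counterpart of the sparse regression inside PSP-BS, the sketch below scores bands by sparse self-representation with scikit-learn's OMP. This is an illustrative stand-in only: PSP-BS itself updates this kind of result recursively, pixel by pixel, rather than solving it in batch.

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def select_bands(pixels, n_bands):
        # pixels: (n_pixels, n_total_bands). Express each band as a sparse
        # combination of the others; keep the most frequently used bands.
        n_total = pixels.shape[1]
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_bands)
        scores = np.zeros(n_total)
        for b in range(n_total):
            others = np.delete(np.arange(n_total), b)
            omp.fit(pixels[:, others], pixels[:, b])
            scores[others] += np.abs(omp.coef_)
        return np.argsort(scores)[::-1][:n_bands]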
The FAST-SIFT corner detector and descriptor extractor combination was used to automatically georeference DIWATA-1 Spaceborne Multispectral Imager (SMI) images. The Features from Accelerated Segment Test (FAST) algorithm detects corners or keypoints in an image, and these robustly detected keypoints have well-defined positions. Descriptors were computed using the Scale-Invariant Feature Transform (SIFT) extractor. The FAST-SIFT method effectively matched SMI same-subscene images detected by the NIR sensor, and the method was also tested in stitching NIR images with varying subscenes swept by the camera. The slave images were matched to the master image, with the keypoints serving as ground control points. Keypoints are matched based on their descriptor vectors: nearest-neighbor matching is employed based on a metric distance between the descriptors, such as the Euclidean or city-block distance. Rough matching outputs not only correct matches but also faulty ones. A previous work in automatic georeferencing incorporates a geometric restriction; in this work, we applied a simplified version of that learning method. Random sample consensus (RANSAC) was used to eliminate fall-out matches and ensure the accuracy of the feature points from which the transformation parameters were derived; it identifies whether a point fits the transformation function and returns the inlier matches. The transformation matrix was solved using affine, projective, and polynomial models. The accuracy of the automatic georeferencing method was determined by calculating the RMSE of randomly selected interest points between the master image and the transformed slave image.
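The detect-describe-match-RANSAC pipeline maps naturally onto OpenCV, as sketched below for the affine case; grayscale inputs and default parameters are assumed, and the paper's projective and polynomial models would replace the final estimation step.

    import cv2
    import numpy as np

    def georeference(master, slave):
        # FAST corners + SIFT descriptors on both grayscale images
        fast = cv2.FastFeatureDetector_create()
        sift = cv2.SIFT_create()
        kp_m, des_m = sift.compute(master, fast.detect(master, None))
        kp_s, des_s = sift.compute(slave, fast.detect(slave, None))
        # Nearest-neighbour matching on descriptor (Euclidean) distance
        matches = cv2.BFMatcher(cv2.NORM_L2).match(des_s, des_m)
        src = np.float32([kp_s[m.queryIdx].pt for m in matches])
        dst = np.float32([kp_m[m.trainIdx].pt for m in matches])
        # RANSAC rejects fall-out matches while fitting an affine model
        A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
        h, w = master.shape
        return cv2.warpAffine(slave, A, (w, h))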
Polarimetric synthetic aperture radar (PolSAR) measures the polarimetric scattering of targets. The scattering properties are usually considered invariant in azimuth. In some new SAR modes, however, such as wide-angle SAR and circular SAR (CSAR), targets are illuminated for a longer time and the look angle changes considerably; moreover, some targets have different physical shapes at different look angles. Thus the scattering properties can no longer be considered invariant in azimuth, and the variations across azimuth should be treated as useful information and an important part of a target's scattering properties. In this paper, polarimetric data are cut into subapertures in order to obtain the scattering properties at different look angles. The target vector and coherency matrix are defined for the multi-aperture situation. Polarimetric entropy for the multi-aperture situation is then defined and named multi-aperture polarimetric entropy (MAPE). MAPE is calculated from the eigenvalues of the multi-aperture coherency matrix and describes the variation of scattering properties across subapertures. When MAPE is low, the scattering properties change considerably across subapertures, which corresponds to anisotropic targets; when MAPE is high, there is little variation across subapertures, which corresponds to isotropic targets. Thus anisotropic and isotropic targets can be distinguished by MAPE. The effectiveness of MAPE is demonstrated on polarimetric CSAR (Pol-CSAR) data acquired by the Institute of Electronics airborne CSAR system at P-band.
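Once the multi-aperture coherency matrix is formed, the entropy computation follows the usual eigenvalue recipe, sketched below. How the target vectors from the subapertures are stacked into the matrix follows the paper's definitions and is not reproduced here.

    import numpy as np

    def polarimetric_entropy(T):
        # Entropy of an n x n Hermitian coherency matrix T:
        # H = -sum_i p_i log_n(p_i), with p_i the normalized eigenvalues.
        lam = np.maximum(np.linalg.eigvalsh(T), 0.0)
        p = lam / lam.sum()
        p = p[p > 0]
        return float(-(p * np.log(p)).sum() / np.log(T.shape[0]))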
Hyperspectral images are now used in the fields of agriculture, cosmetics, and space exploration, a development driven by efforts toward miniaturization and cost reduction. This paper describes a low-cost, small Hyperspectral Camera (HSC) under development and a method of utilizing it. In the envisioned real-time detection system for MDA (Maritime Domain Awareness), government agencies place such cameras on small satellites. We target the early detection of unidentified floating objects in order to find disguised fishing ships and submarines.
LIDAR (light detection and ranging) systems use sensors to detect reflected signals, and the performance of the sensors significantly affects the specification of the LIDAR system. In particular, the number and size of the sensors determine the FOV (field of view) and resolution of the system, regardless of which sensors are used. The resolution of an array-type sensor normally depends on the number of pixels in the array, and there are several limitations to increasing the number of pixels for higher resolution, specifically complexity, cost, and size. Another type of sensor uses multiple pairs of transmitter and receiver channels, where each channel detects different points along the directions indicated by the laser points of that channel. In this case, increasing the resolution requires increasing the number of channels, resulting in a bigger sensor head and deteriorated reliability due to the heavy rotating head module containing all the pairs. In this paper, we present a method to overcome these limitations and improve the performance of the LIDAR system. ETRI developed a type of scanning LIDAR system called the STUD (static unitary detector) LIDAR system, designed to solve the problems associated with the aforementioned sensors. The STUD LIDAR system can use a variety of sensors without any limitation on the size or number of sensors, unlike other LIDAR systems. Since it provides optimal performance in terms of range and resolution, a detailed analysis was conducted on the STUD LIDAR system, applying different sensor types to achieve improved sensing performance.
The key to successful spectral unmixing is determining the number of endmembers and their corresponding spectral signatures. Nevertheless, correctly estimating the number of endmembers without a priori knowledge is a very hard task, because pixels in a hyperspectral image always contain a mixture of several reflected spectra. Currently, Noise-Whitened Harsanyi, Farrand, and Chang (NWHFC) and hyperspectral signal subspace identification by minimum error (HySime) are two well-known methods for estimating the number of endmembers. However, in practice, because NWHFC requires fixing the false-alarm probability and HySime needs to estimate the noise of each spectral band, these two methods may not only ignore small objects but also fail to identify endmembers. In this paper, assuming the endmembers in a hyperspectral image can be modeled by convex geometry, we propose a three-stage process to estimate the number of endmembers. In the first stage, principal component analysis (PCA) transforms the original image into low-dimensional components to speed up execution. In the second stage, successive volume maximization (SVMAX) is used to obtain vertices using convexity properties. In the third stage, the spectral angle mapper (SAM) is used to compute similarity measures among the vertices, with the minimum SAM value representing the vertex separation. The second and third stages are repeated with increasing transformed component dimensions until a predefined criterion is reached, and the number of endmembers is taken as the number of vertices at which the vertex separation is maximized. Finally, the proposed method is applied to synthetic and real AVIRIS and HYDICE hyperspectral data sets. The results demonstrate that the proposed method estimates the number of endmembers more reasonably and precisely than the two published methods.
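The vertex-separation measure of the third stage reduces to pairwise spectral angles, as sketched below; the surrounding loop over increasing component dimensions is omitted.

    import numpy as np

    def spectral_angle(a, b):
        # Spectral Angle Mapper (SAM) between two spectra, in radians
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return float(np.arccos(np.clip(cos, -1.0, 1.0)))

    def vertex_separation(vertices):
        # Minimum pairwise SAM among the current set of vertices
        n = len(vertices)
        return min(spectral_angle(vertices[i], vertices[j])
                   for i in range(n) for j in range(i + 1, n))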
In this paper, generalized Haar wavelet functions are applied to the problem of ecological monitoring by remote sensing of the Earth. We study generalized Haar wavelet series and suggest the use of Tikhonov's regularization method for investigating their correctness. In the solution of this problem, an important role is played by the classes of functions that were introduced and described in detail by I. M. Sobol for studying multidimensional quadrature formulas; they contain functions with rapidly convergent Haar wavelet series. A theorem on the stability and uniform convergence of the regularized summation function of the generalized Haar wavelet series of a function from this class with approximate coefficients is proved. The article also examines the use of orthogonal transformations in Earth remote sensing technologies for environmental monitoring. Remote sensing of the Earth makes it possible to receive medium- and high-spatial-resolution information from spacecraft and to conduct hyperspectral measurements, with spacecraft carrying tens or hundreds of spectral channels. To process the images, discrete orthogonal transforms, namely wavelet transforms, were used. The aim of the work is to apply the regularization method to one of the problems associated with remote sensing of the Earth and subsequently to process the satellite images through discrete orthogonal transformations, in particular generalized Haar wavelet transforms. Methods: Tikhonov's regularization method, elements of mathematical analysis, the theory of discrete orthogonal transformations, and methods for decoding satellite images. Scientific novelty: the processing of archival satellite images, in particular signal filtering, is investigated from the point of view of an ill-posed problem, and the regularization parameters for discrete orthogonal transformations are determined.
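For readers unfamiliar with the underlying transform, the classical (non-generalized) orthonormal Haar decomposition is sketched below; the paper's generalized Haar series and the Tikhonov-regularized summation are developed in the text and are not reproduced here.

    import numpy as np

    def haar_step(x):
        # One level: pairwise averages (approximation) and differences
        # (detail), with orthonormal scaling by 1/sqrt(2)
        x = x.reshape(-1, 2)
        return ((x[:, 0] + x[:, 1]) / np.sqrt(2),
                (x[:, 0] - x[:, 1]) / np.sqrt(2))

    def haar(signal):
        # Full decomposition; assumes the length is a power of two
        a = np.asarray(signal, dtype=float)
        details = []
        while a.size > 1:
            a, d = haar_step(a)
            details.append(d)
        return a, details[::-1]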
The quality of archaeological site documentation is of great importance for preserving and investigating cultural heritage. Progress in developing new techniques and systems for data acquisition and processing creates an excellent basis for achieving a new quality of archaeological site documentation and visualization. Archaeological data have some specific features which have to be taken into account during acquisition, processing and management. First of all, it is necessary to gather information about findings that is as complete as possible, with no loss of information and no damage to artifacts. Remote sensing technologies are the most adequate and powerful means of satisfying this requirement. An approach to archaeological data acquisition and fusion based on remote sensing is proposed. It combines a set of photogrammetric techniques for obtaining geometrical and visual information at different scales and levels of detail with a pipeline for archaeological data documentation, structuring, fusion, and analysis. The proposed approach is applied to the documentation work of the Bosporus archaeological expedition of the Russian State Historical Museum.
With the rapid development of remote sensing imaging technology, the microsatellite, a kind of tiny spacecraft, has appeared during the past few years, and a good many studies have contributed to miniaturizing satellites for imaging purposes. Generally speaking, microsatellites weigh less than 100 kilograms, sometimes less than 50 kilograms, making them slightly larger or smaller than common miniature refrigerators. However, the optical system design can hardly be perfect due to satellite volume and weight limitations, and in most cases the unprocessed data captured by the imager on the microsatellite cannot meet application needs. Spatial resolution is the key problem: the higher the spatial resolution of the images we obtain, the wider the fields in which we can apply them. Consequently, how to utilize super-resolution (SR) and image fusion to enhance image quality deserves study. Our team, the Key Laboratory of Computational Optical Imaging Technology, Academy of Opto-Electronics, is devoted to designing high-performance microsat-borne imagers and high-efficiency image processing algorithms. This paper presents a multispectral image enhancement framework for space-borne imagery that combines pan-sharpening and super-resolution techniques to address the spatial resolution shortcomings of microsatellites. We test the framework on remote sensing images acquired by the CX6-02 satellite and report the SR performance. The experiments illustrate that the proposed approach provides high-quality images.
A large amount of data is one of the most obvious features of satellite-based remote sensing systems, and is also a burden for data processing and transmission. The theory of compressive sensing (CS) was proposed almost a decade ago, and extensive experiments show that CS performs favourably in data compression and recovery, so we apply CS theory to remote sensing image acquisition. In CS, the construction of a classical sensing matrix valid for all sparse signals has to satisfy the Restricted Isometry Property (RIP) strictly, which limits the practical application of CS to image compression. For remote sensing images, however, we know some inherent characteristics, such as non-negativity and smoothness. Therefore, the goal of this paper is to present a novel measurement matrix that is not bound by the RIP. The new sensing matrix consists of two parts: a standard Nyquist sampling matrix producing thumbnails and a conventional CS sampling matrix. Since most sun-synchronous satellites orbit the Earth in about 90 minutes and the revisit cycle is short, many previously captured remote sensing images of the same place are available in advance. This drives us to reconstruct remote sensing images through a deep learning approach from the measurements produced by the new framework. We therefore propose a deep convolutional neural network (CNN) architecture which takes the undersampled measurements as input and outputs an intermediate reconstructed image. Although the training procedure of the network takes a long time, it needs to be done only once, which makes the approach attractive for a host of sparse recovery problems.
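The two-part measurement operator can be sketched as below. The decimation rows are a crude stand-in for the thumbnail operator (a real implementation would low-pass filter before subsampling), and the Gaussian rows stand in for the conventional CS part.

    import numpy as np

    def sensing_matrix(n, m_cs, thumb_factor, seed=0):
        # Part 1: Nyquist-style decimation producing an n/thumb_factor
        # thumbnail of the (vectorized) signal
        n_thumb = n // thumb_factor
        P = np.zeros((n_thumb, n))
        P[np.arange(n_thumb), np.arange(n_thumb) * thumb_factor] = 1.0
        # Part 2: conventional random CS measurements
        rng = np.random.default_rng(seed)
        Phi = rng.normal(size=(m_cs, n)) / np.sqrt(m_cs)
        return np.vstack([P, Phi])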
The amount and size of remote sensing (RS) images acquired by modern systems are so large that the data have to be compressed in order to transfer, save and disseminate them. Lossy compression is becoming more popular in such situations, but it has to be applied carefully, keeping the introduced distortions at an acceptable level so as not to lose the valuable information contained in the data. The introduced losses therefore have to be controlled and predicted, which is problematic for many coders. In this paper, we analyze the possibility of predicting the mean square error or, equivalently, the PSNR for coders based on the discrete cosine transform (DCT), applied either to compressing single-channel RS images or to multichannel data in a component-wise manner. The proposed approach is based on the direct dependence between the distortions introduced by DCT coefficient quantization and the losses in the compressed data. A further innovation is the possibility of employing only a limited number (percentage) of blocks for which the DCT coefficients have to be calculated. This accelerates prediction and makes it considerably faster than the compression itself. There are two other advantages of the proposed approach. First, it is applicable to both uniform and non-uniform quantization of DCT coefficients. Second, the approach is quite general, since it works for several analyzed DCT-based coders. The simulation results are obtained for standard test images and then verified on real-life RS data.
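A minimal sketch of the prediction idea under stated assumptions: for a small random fraction of 8x8 blocks, compute the DCT, quantize uniformly, and use Parseval's relation (the orthonormal DCT preserves energy) to read the coefficient-domain quantization error as a spatial-domain MSE estimate. Non-uniform quantization would replace the rounding step.

    import numpy as np
    from scipy.fft import dctn

    def predict_mse(image, q_step, block=8, fraction=0.05, seed=0):
        h, w = image.shape
        n = max(1, int(fraction * h * w / block ** 2))
        rng = np.random.default_rng(seed)
        ys = rng.integers(0, h - block + 1, n)
        xs = rng.integers(0, w - block + 1, n)
        err = 0.0
        for y, x in zip(ys, xs):
            C = dctn(image[y:y + block, x:x + block].astype(float),
                     norm="ortho")
            Cq = q_step * np.round(C / q_step)   # uniform quantization
            err += np.mean((C - Cq) ** 2)        # = spatial MSE (Parseval)
        return err / n

A PSNR prediction then follows as 10*log10(255**2 / predicted_mse) for 8-bit data.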