Recent advances in multiframe blind deconvolution of imagery from ground-based telescopes are presented. The paper focuses on practical aspects of the software and algorithm. (1) A computer simulation that models atmospheric turbulence, noise, and other effects, used for testing and evaluating the deconvolution system, is explained. (2) A post-processing algorithm that corrects for glint due to specular and other bright reflections is presented. This glint correction is automated by a spatially adaptive scheme that calculates statistics of brightness levels. (3) Efforts are underway to achieve computational speed such that processing happens on the fly at streaming frame rates, using the massively parallel processing of graphics processing units (GPUs) and the Compute Unified Device Architecture (CUDA) language.
Recent advances are presented in multiframe blind deconvolution (MFBD) of ground-based telescope imagery of low-earth-orbit objects. The iterative algorithm uses a maximum-likelihood estimation optimization criterion and is modeled on the well-known expectation-maximization (EM) algorithm. New renditions of the algorithm simplify the phase reconstruction, thereby reducing the complexity of the original EM algorithm. Examples are shown with and without adaptive optics (AO). The system is being designed for on-the-fly streaming-video operation.
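The EM-style MFBD iteration described above can be illustrated with a minimal sketch. This is not the paper's algorithm: it shows only a Richardson-Lucy-style multiplicative object update for several frames with current PSF estimates held fixed, and all names here are hypothetical.

```python
import numpy as np

def em_mfbd_step(obj, psfs, frames, eps=1e-12):
    """One EM (Richardson-Lucy-style) multiplicative update of the object
    estimate, averaging back-projected data ratios over all frames."""
    update = np.zeros_like(obj)
    for psf, frame in zip(psfs, frames):
        # forward model: object convolved with this frame's PSF
        model = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))
        ratio = frame / np.maximum(model, eps)
        # back-project: correlate the ratio with the PSF
        update += np.real(np.fft.ifft2(np.fft.fft2(ratio)
                                       * np.conj(np.fft.fft2(psf))))
    return obj * update / len(frames)
```

In a full MFBD loop this object update would alternate with a per-frame PSF (or phase) update until the likelihood converges.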
This paper presents a Lobster Eye (LE) X-ray telescope developed for the Water Recovery X-ray Rocket (WRX-R) experiment. The primary payload of the rocket experiment is a soft X-ray spectroscope developed by the Pennsylvania State University (PSU), USA. The Czech team contributes a hard LE X-ray telescope as a secondary payload. The astrophysical objective of the rocket experiment is the Vela supernova remnant, about 8 deg x 8 deg in size. At the center of the nebula is a neutron star with a strong magnetic field, roughly the mass of the Sun and a diameter of about 20 km, forming the Vela pulsar.
The primary objective of WRX-R is a spectral measurement of the outer part of the nebula in soft X-rays with a FOV of 3.25 deg x 3.25 deg. The secondary objective (the hard LE X-ray telescope) is observation of the Vela neutron star. The hard LE telescope consists of two X-ray telescopes with Timepix detectors. The first telescope uses 2D LE Schmidt optics (2D-LE-REX) with a focal length over 1 m and four Timepix detectors in a 2x2 matrix; its FOV is 1.5 deg x 1.5 deg with a spectral range from 3 keV to 60 keV. The second telescope uses 1D LE Schmidt optics (1D-LE-REX) with a focal length of 25 cm and one Timepix detector; it is built as a wide-field instrument with a FOV of 4.5 deg x 3.5 deg and a spectral range from 3 keV to 40 keV. The rocket experiment serves as a technology demonstration mission for the payloads. The LE X-ray telescopes could in the future be used as an all-sky monitor/surveyor. Astrophysical observations could include hard X-ray time-domain studies of astrophysical sources, GRB surveys, or the exploration of gravitational-wave sources.
Detection of improvised explosive devices (IEDs) presents a significant challenge for stand-off sensors. IEDs can be constructed from a wide variety of materials and take many different forms. Forward-looking and side-looking sensor systems attempt to detect targets at significant stand-off but typically observe a potential threat area over a wide range of aspect angles before detection and classification is attempted. Furthermore, in practical detection scenarios IEDs are obscured, preventing unhindered line-of-sight interrogation. Both of these characteristics can lead to target signatures which become incoherent over time. In this paper we investigate an advanced imaging technique designed to improve image quality of obscured targets utilizing a wide-band RF array. Specifically, we consider buried targets and obscured roadside targets. Results are presented in terms of improved signal-to-clutter ratio of image reconstructions based on simulated and experimental GPR collections.
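The signal-to-clutter ratio used above as a figure of merit can be computed in several ways; the sketch below uses one common definition (peak target power over mean clutter power), which may differ in detail from the paper's metric. The function and mask convention are assumptions for illustration.

```python
import numpy as np

def signal_to_clutter_db(image, target_mask):
    """SCR in dB: peak power inside the target mask divided by the
    mean power over the remaining (clutter) pixels."""
    signal = np.max(np.abs(image[target_mask]) ** 2)
    clutter = np.mean(np.abs(image[~target_mask]) ** 2)
    return 10.0 * np.log10(signal / clutter)
```

Comparing this value before and after reconstruction quantifies the improvement reported for obscured targets.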
KEYWORDS: Point spread functions, Radar, Target detection, Radio propagation, Electromagnetism, Sensors, Doppler effect, General packet radio service, L band, Independent component analysis
The operational constraints associated with a forward-looking ground-penetrating radar (GPR) limit the ability of the radar to resolve targets in the dimension orthogonal to the ground. As such, detection performance for buried targets is greatly inhibited by the relatively large response due to surface clutter. The response of buried targets differs from that of surface targets due to the interaction at the boundary and propagation through the ground media. The electromagnetic properties of the media, the interrogation frequency, the depth of the buried target, and the location of the target with respect to the sensing platform all contribute to the shape, position, and magnitude of the point spread function (PSF). The standard FLGPR scenario produces a wide-band data set collected over a fixed set of observation points. By observing the shape, position, and amplitude behavior of the PSF as a function of frequency and sensor position (time), energy resulting from surface clutter can be separated from energy resulting from buried targets. There are many possible ways beyond conventional image resolution that might be exploited to improve the distinction between buried targets and surface clutter. This investigation exploits the frequency dependence of buried targets compared to surface targets using a set of sub-banded images.
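Sub-banded imagery of the kind described above can be sketched minimally: the wide-band frequency-domain data are split into contiguous sub-bands and an image is formed per band, so frequency-dependent behavior can be compared across the stack. This is an illustrative assumption about the processing chain, not the paper's exact implementation.

```python
import numpy as np

def subband_images(freq_data, n_bands):
    """Split wide-band frequency-domain data (frequency on the last axis)
    into contiguous sub-bands and form a coarse image per band via IFFT."""
    bands = np.array_split(freq_data, n_bands, axis=-1)
    return [np.fft.ifft(band, axis=-1) for band in bands]
```

Each sub-band image trades resolution for a narrower interrogation bandwidth, which is what exposes the differing frequency dependence of buried versus surface responses.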
Buried explosive hazards are one of the many deadly threats facing our Soldiers, and thus the U.S. Army is interested in the detection and neutralization of these hazards. One method of buried target detection uses forward-looking ground-penetrating radar (FLGPR), which has grown in popularity due to its ability to detect buried targets at a standoff distance. FLGPR approaches often use machine learning techniques to improve the accuracy of detection. We investigate an approach to explosive hazard detection that exploits multi-instance features to discriminate between hazardous and non-hazardous returns in FLGPR data. One challenge this problem presents is a high number of clutter and non-target objects relative to the number of targets present. Our approach learns a bag-of-words model of the multi-instance signatures of potential targets and confuser objects in order to classify alarms as either targets or false alarms. We demonstrate our method on test data collected at a U.S. Army test site.
KEYWORDS: Point spread functions, L band, Sensors, Image processing, Target detection, Image acquisition, General packet radio service, Near field, Radar, Antennas
A forward-looking and -moving ground-penetrating radar (GPR) acquires data that can be used for buried target detection. As the platform moves forward the sensor can acquire and form a sequence of images for a common spatial region. Due to the near-field nature of relevant collection scenarios, the point-spread function (PSF) varies significantly as a function of the spatial position, both within the scene and relative to the sensor platform. This variability of the PSF causes computational difficulties for matched-filter and related processing of the full video sequence. One approach to circumventing this difficulty is to coherently or incoherently integrate the video frames, and then perform detection processing on the integrated image. Here, averaging over the space- and motion-variant nature of the PSFs for each frame causes the PSF for the integrated image to appear less space-variant. Another alternative, and the one we investigate in this paper, is to transform each image from the conventional (range, cross-range) coordinate system to a (range, sine-angle) coordinate system in which the PSF is approximated as spatially invariant. The advantage of the (range, sine-angle) coordinate space is that methods that require space-invariance can be directly applied. Here we develop a multi-apodization approach, which results in a significantly improved image. To evaluate the relative advantages of this procedure, we will empirically measure the integrated side-lobe ratio, which represents the reduction in the side-lobes before and after applying the algorithm.
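The integrated side-lobe ratio used above as the evaluation metric can be sketched as the ratio of total side-lobe energy to main-lobe energy of the PSF. The function name and the mask-based main-lobe definition are illustrative assumptions.

```python
import numpy as np

def islr_db(psf, mainlobe_mask):
    """Integrated side-lobe ratio in dB: energy outside the main lobe
    divided by energy inside it (lower is better)."""
    power = np.abs(psf) ** 2
    main = power[mainlobe_mask].sum()
    side = power[~mainlobe_mask].sum()
    return 10.0 * np.log10(side / main)
```

Measuring this quantity before and after apodization gives the side-lobe reduction the paper reports.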
Explosive hazards are one of the most deadly threats in modern conflicts. The U.S. Army is interested in a reliable way to detect these hazards at range. A promising way of accomplishing this task is using a forward-looking ground-penetrating radar (FLGPR) system. Recently, the Army has been testing a system that utilizes both L-band and X-band radar arrays on a vehicle mounted platform. Using data from this system, we sought to improve the performance of a constant false-alarm-rate (CFAR) prescreener through the use of a deep belief network (DBN). DBNs have also been shown to perform exceptionally well at generalized anomaly detection. They combine unsupervised pre-training with supervised fine-tuning to generate low-dimensional representations of high-dimensional input data. We seek to take advantage of these two properties by training a DBN on the features of the CFAR prescreener’s false alarms (FAs) and then use that DBN to separate FAs from true positives. Our analysis shows that this method improves the detection statistics significantly. By training the DBN on a combination of image features, we were able to significantly increase the probability of detection while maintaining a nominal number of false alarms per square meter. Our research shows that DBNs are a good candidate for improving detection rates in FLGPR systems.
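The CFAR prescreener that feeds the DBN above can be illustrated with the simplest cell-averaging variant: a cell is declared an alarm when it exceeds a scaled average of nearby training cells, with guard cells excluded. This is a generic 1-D sketch, not the Army system's prescreener; all names are hypothetical.

```python
import numpy as np

def ca_cfar_1d(x, guard, train, scale):
    """Cell-averaging CFAR along a 1-D profile: flag cell i when
    x[i] > scale * mean(training cells), skipping guard cells."""
    hits = []
    for i in range(len(x)):
        lo = x[max(0, i - guard - train): max(0, i - guard)]
        hi = x[i + guard + 1: i + guard + 1 + train]
        ref = np.concatenate([lo, hi])
        if ref.size and x[i] > scale * ref.mean():
            hits.append(i)
    return hits
```

In the paper's pipeline, features around each such alarm would then be passed to the DBN to separate false alarms from true positives.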
Explosive hazard detection and remediation is a pertinent area of interest for the U.S. Army. There are many types of detection methods that the Army has or is currently investigating, including ground-penetrating radar, thermal and visible spectrum cameras, acoustic arrays, laser vibrometers, etc. Since standoff range is an important characteristic for sensor performance, forward-looking ground-penetrating radar has been investigated for some time. Recently, the Army has begun testing a forward-looking system that combines L-band and X-band radar arrays. Our work focuses on developing imaging and detection methods for this sensor-fused system. In this paper, we investigate approaches that fuse L-band radar and X-band radar for explosive hazard detection and false alarm rejection. We use multiple kernel learning with support vector machines as the classification method and histogram of gradients (HOG) and local statistics as the main feature descriptors. We also perform preliminary testing on a context aware approach for detection. Results on government furnished data show that our false alarm rejection method improves area-under-ROC by up to 158%.
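The kernel-fusion idea above (L-band and X-band features entering one SVM) can be sketched in its simplest form: a fixed-weight convex combination of per-sensor kernel matrices. The paper learns the weights via multiple kernel learning; this stand-in only shows the combination step, and the names are hypothetical.

```python
import numpy as np

def combine_kernels(kernels, weights):
    """Fixed-weight kernel combination: K = sum_m w_m * K_m with the
    weights normalized to sum to one. A convex combination of valid
    kernel matrices is itself a valid kernel."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * Ki for wi, Ki in zip(w, kernels))
```

The combined matrix can then be passed to any kernel classifier that accepts a precomputed kernel.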
There is a strong need for the ability to terrestrially image resident space objects (RSOs) and other low-earth-orbit (LEO) objects for Space Situational Awareness (SSA) applications. The Synthetic Aperture Imaging Polarimeter (SAIP) investigates an alternative means for imaging an object in LEO illuminated by laser radiation. A prototype array consisting of 36 division-of-amplitude polarimeters was built and tested. The design, assembly procedure, calibration data, and test results are presented. All 36 polarimeters were calibrated to a high degree of accuracy. Pupil-plane imaging tests were performed using a cross-correlation image-reconstruction algorithm to determine the prototype's functionality.
There is a significant fixed aberration in some commercial off-the-shelf liquid crystal spatial light modulators (SLMs). In a recent experiment we conducted to simulate the effects of atmospheric turbulence and correction schemes in a laboratory setting using such an SLM, this aberration was too strong to neglect. We then tried to characterize and correct the observed aberration. Our method of characterizing the device uses a measurement of the far-field intensity pattern caused by the aberration and processing based on a parameterized version of the phase retrieval algorithm. This approach uses simple and widely available hardware and does not require expensive aberration sensing equipment. The phase aberrations were characterized and compared with the manufacturer's published measurements for a similar device, with excellent agreement. To test the quality of our aberration estimate, a correction phase was computed and applied to the SLM, and the resulting far-field patterns were measured and compared to the theoretical patterns with excellent results. Experiments show that when the correction is applied to the SLM, nearly diffraction-limited far-field intensity patterns are observed.
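The paper's characterization uses a parameterized phase retrieval algorithm; the classic unparameterized Gerchberg-Saxton iteration below illustrates the underlying idea of recovering a pupil phase consistent with measured pupil and far-field amplitudes. This is a generic sketch, not the authors' method.

```python
import numpy as np

def gerchberg_saxton(pupil_amp, farfield_amp, n_iter=50, seed=0):
    """Iterate between pupil and far-field planes, enforcing the
    measured amplitude in each plane while keeping the current phase."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, pupil_amp.shape)
    for _ in range(n_iter):
        field = pupil_amp * np.exp(1j * phase)
        ff = np.fft.fft2(field)
        ff = farfield_amp * np.exp(1j * np.angle(ff))  # impose far-field amplitude
        phase = np.angle(np.fft.ifft2(ff))             # keep back-propagated phase
    return phase
```

A correction phase for the SLM can then be taken as the negative of the recovered aberration estimate.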
The task of delivering a sufficient level of airborne laser energy to ground-based targets is of high interest. To overcome the degradation in beam quality induced by atmospheric turbulence, it is necessary to measure and compensate for the phase distortions in the wavefront. Since, in general, there will not be a cooperative beacon present, an artificial laser beacon is used for this purpose. In many cases of practical interest, beacons created by scattering light from a surface in the scene are anisoplanatic, and as a result provide poor beam compensation results when conventional adaptive optics systems are used. In this paper we present three approaches for beacon creation in a down-looking scenario. In the first approach, we probe the whole volume of the atmosphere between the transmitter and the target: the beacon is created by scattering an initially focused beam from the surface of the target. The second approach generates an uncompensated Rayleigh beacon at some intermediate distance between the transmitter and the target. This method compensates for only part of the atmospheric path, which in some cases provides sufficient performance. Lastly, we present a novel technique of "bootstrap" beacon generation that achieves dynamic wavefront compensation. In this approach, a series of compensated beacons is created along the optical path, with the goal of providing a physically smaller beacon at the target plane. The performance of these techniques is evaluated using the average Strehl ratio and the radially averaged intensity of the beam falling on the target plane. Simulation results show that under most turbulence conditions of practical interest the novel "bootstrap" technique provides better power in the bucket than the other two techniques.
The purpose of space surveillance is to classify and, if possible, assess the mission and performance capabilities of space objects. Historically, imaging techniques have obtained useful results. However, with the advances achieved in microtechnologies, small but highly functional satellites (largest dimension < 1 m) are emerging that are hard to identify by imaging with large ground-based telescopes. The concept of using nonimaging measurements to obtain information is relatively new. In this paper, we present and discuss the performance of two techniques for classifying satellites based on spectral measurements. A distance-based classifier and a neural-net-based classifier are used to process both calibrated spectral data and features computed from these data. Neural networks are found to give better recognition results than the distance-based classifier, and, once trained, this method is also faster. The average error rates for the distance-based method are greater than 30% when the inputs are the calibrated spectra, and 70% when using the central moments and the K-nearest-neighbors method. The best results are obtained for the neural network design, with the lowest class error rate of 0% for some satellites, the highest error rate of 30%, and an average error rate of 16%.
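The distance-based classifier referenced above can be sketched as a minimum-distance match against a labeled spectral library; the neural-net classifier replaces this rule with a learned mapping. Names and the Euclidean metric are illustrative assumptions.

```python
import numpy as np

def nearest_class(spectrum, library, labels):
    """Minimum-Euclidean-distance classifier: return the label of the
    library spectrum closest to the measured spectrum."""
    d = np.linalg.norm(np.asarray(library) - np.asarray(spectrum), axis=1)
    return labels[int(np.argmin(d))]
```

Feeding either raw calibrated spectra or derived features (e.g. central moments) into such a rule corresponds to the two input choices compared in the paper.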
KEYWORDS: Point spread functions, Expectation maximization algorithms, Signal to noise ratio, Adaptive optics, Reconstruction algorithms, Deconvolution, Image restoration, Mathematical modeling, Inverse problems, Monte Carlo methods
Atmospheric turbulence corrupts astronomical images formed by ground-based telescopes. Adaptive optics (AO) systems allow the effects of turbulence-induced aberrations to be reduced for a narrow field of view (FOV) corresponding approximately to the isoplanatic angle θ0. For field angles larger than θ0, the point spread function (PSF) gradually degrades as the field angle increases. In this paper, we present a technique to predict the PSF as a function of the field angle. The predicted PSF is compared to the simulated PSF, and the mean-square (MS) error between the predicted and the simulated PSF never exceeds 2.7%. Simulated anisoplanatic intensity images of a star field are reconstructed by means of a block-processing method using the predicted PSF. Two methods for image recovery are used: Tikhonov regularization and the expectation-maximization (EM) algorithm. The deconvolution results using the space-varying predicted PSF are compared to results using the space-invariant on-axis PSF. The reconstruction technique using the predicted PSF improves the MS error between the reconstructed image and the object by 7.2% to 84.8% compared to the on-axis PSF reconstruction.
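The Tikhonov-regularized recovery named above has a standard closed form in the Fourier domain, sketched below for a single block with a space-invariant PSF (the paper applies it block-wise with the predicted, space-varying PSF). The centered-PSF convention is an assumption.

```python
import numpy as np

def tikhonov_deconv(image, psf, alpha):
    """Tikhonov-regularized inverse filter:
    O = conj(H) * G / (|H|^2 + alpha), with H the OTF of a PSF
    centered in its array (hence the ifftshift)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(image)
    O = np.conj(H) * G / (np.abs(H) ** 2 + alpha)
    return np.real(np.fft.ifft2(O))
```

The regularization weight alpha trades noise amplification against sharpness; the EM alternative replaces this linear filter with an iterative positivity-preserving update.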
In this paper we demonstrate that the amplitude and phase of the principal eigenfunction of the pupil-plane mutual intensity can be used to specify a beam that concentrates its intensity at the brightest point on the target in a target-in-the-loop system with a spatially incoherent reflected field. In addition, we discuss two methods for beam control: a method in which the beam amplitude and phase are determined as the principal eigenfunction of the measured (but incomplete) pupil-plane mutual intensity; and a method in which the beam amplitude and phase are determined to maximize a window-based image-plane sharpness measure. We demonstrate that the two methods are similar, and that both result in beams that correspond to the principal eigenfunction of an apodized mutual-intensity function.
There is strong interest in developing adaptive optics solutions for extreme conditions, such as laser beam projection over long, horizontal paths. In most realistic operational scenarios there is no suitable beacon readily available for tracking and wavefront sensing. In these situations it is necessary to create a beacon artificially. In this paper we explore two strategies for creating a beacon: (1) scattering an initially focused beam from a surface in the scene, and (2) generating a Rayleigh beacon at an intermediate distance to compensate for part of the path. In many cases of practical interest, beacons created by scattering light from a surface in the scene are anisoplanatic and hence provide poor beam compensation results. Partial-path compensation based on a Rayleigh beacon provides comparable performance in some cases.
In this paper we consider the optimal coherence for beam propagation through random media. First, we demonstrate that a beam that maximizes the average receiver intensity is fully coherent, and that the upper bounds on received intensity are nearly attained by a beam that is focused for clear air. Second, we demonstrate that a beam that minimizes the scintillation index (along with other criteria that trade off the mean and standard deviation of the received intensity) is, in general, partially coherent. We conclude with an example in which modal intensities are optimized for a beam that is constructed from Hermite-Gaussian modes.
In this paper, performance bounds are computed for the estimation of the degree of polarization for reflected fields with active laser illumination. Bounds are compared for three situations: (i) measurement and processing of the complex amplitude of two orthogonal field components; (ii) measurement and processing of the intensity of two orthogonal field components; and (iii) measurement and processing of the total intensity of the field.
Atmospheric turbulence corrupts astronomical images formed by ground-based telescopes. Adaptive optics (AO) systems allow the effects of turbulence-induced aberrations to be reduced for a narrow field of view (FOV) corresponding approximately to the isoplanatic angle θ0. For field angles larger than θ0, the point spread function (PSF) gradually degrades as the field angle increases. Knowledge of the space-varying PSF is essential for image reconstruction. In this paper, we present a technique to predict the PSF as a function of the field angle. The results are validated by means of simulations. The predicted PSF is compared to the simulated PSF, and we obtain a mean-square (MS) error of 4.3% between the predicted and the simulated PSF in the worst case.
The proliferation of small, lightweight 'micro-' and 'nanosatellites' (largest dimension < 1 m) has presented new challenges to the space surveillance community. The small size of these satellites makes them unresolvable by ground-based imaging systems. The core concept of using Non-Imaging Measurements (NIM) to gather information about these objects comes from the fact that, after reflection from a satellite surface, the reflected light contains information about the surface materials of the satellite. This approach of using NIM for satellite evaluation is relatively new. In this paper, we discuss the accuracy of using these spectral measurements to match an unknown spectrum to a database containing known spectra. Several approaches have been developed and are presented in this paper. The first method is an artificial neural network designed to process central moments of real measured spectra. The spectrum database is the Spica database provided by the Maui Space Surveillance Site (MSSS), Hawaii, USA, and consists of spectra from more than 100 different satellites. The average rate of correct identification is 84%. The second approach is based on the ability of spectral signal processing to estimate relative abundances of materials from the measurement of a single spectrum; this method is called spectral unmixing. Material spectra were provided by the NASA Johnson Space Center (JSC) to create synthetic spectra. An approach based on the Expectation-Maximization (EM) algorithm was used to estimate the relative abundances and presence of materials in a synthetic spectrum. The results for material identification and abundance estimation are presented as a function of signal-to-noise ratio. For the EM method, the overall correct estimation rate is 95.1% and the average error in the fractional-composition estimate is 19.7%.
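The linear-mixing model behind the spectral unmixing described above can be sketched with an unconstrained least-squares abundance estimate: solve min_a ||E a - s|| for endmember matrix E and measured spectrum s. This stand-in omits the EM machinery (noise modeling, non-negativity, material-presence decisions) that the paper uses; names are hypothetical.

```python
import numpy as np

def unmix_lstsq(spectrum, endmembers):
    """Least-squares abundance estimate under a linear mixing model:
    each column of `endmembers` is one material's spectrum."""
    abundances, *_ = np.linalg.lstsq(endmembers, spectrum, rcond=None)
    return abundances
```

In the noise-free case the true fractional composition is recovered exactly whenever the endmember spectra are linearly independent.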
We describe a new approach to controlling the deformable mirror in beam projection systems operating in conditions of strong scintillation. Under the conditions of interest, two-way propagation is required to create the light used for wavefront sensing. In this situation, the beacon can subtend an angle that is many times larger than the isoplanatic angle. Our approach uses a nonlinear optimization-based technique to determine the deformable mirror (DM) figure that optimizes an image sharpness metric. This correction is applied to the outgoing laser beam with the goal of concentrating most of the laser's power on a small area of the target. The optimization algorithm chosen for this purpose is the simultaneous perturbation stochastic approximation (SPSA). Our results show that using phase-only conjugation with nonlinear optimization of an image sharpness metric can provide an improvement in encircled-energy performance compared to phase-only conjugation with only linear Hartmann wavefront sensor processing.
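The SPSA algorithm named above estimates a gradient from only two evaluations of the metric per step, regardless of the number of DM actuators. The sketch below is a generic minimizer (the paper maximizes sharpness, equivalently minimizing its negative) using Spall's standard gain schedules; all names are hypothetical.

```python
import numpy as np

def spsa_minimize(f, x0, n_iter=200, a=0.1, c=0.1, seed=0):
    """SPSA: perturb all coordinates at once with a random +/-1 vector
    and form a two-point gradient estimate."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for k in range(1, n_iter + 1):
        ak = a / k ** 0.602          # step-size gain
        ck = c / k ** 0.101          # perturbation gain
        delta = rng.choice([-1.0, 1.0], size=x.shape)
        # for +/-1 perturbations, 1/delta_i equals delta_i
        ghat = (f(x + ck * delta) - f(x - ck * delta)) / (2.0 * ck) * delta
        x -= ak * ghat
    return x
```

In the DM-control setting, x would hold actuator commands and f a (negated) window-based sharpness metric evaluated from the camera image.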
An application of a phase retrieval method for correcting strong scintillation effects in laser beam projection through turbulence with a multimirror adaptive optics system is reported. In this approach, two deformable mirrors are used in the system, and the phase applied to the deformable mirrors is obtained with the phase retrieval method. An extended random beacon is used to provide input signals for the wavefront sensor. Computer simulations are used to evaluate the performance of the system in the presence of strong turbulence. Our results show that this phase retrieval method with two deformable mirrors yields better performance than conventional adaptive optics systems.
The spatial resolution of undersampled or diffraction-limited images can be improved through microscanning and super-resolution technologies. The objective of this Air Force Phase II Small Business Innovative Research effort was to develop and demonstrate real-time or near-real-time microscanning and super-resolution algorithms using passive millimeter-wave imagery. A new super-resolution algorithm based on expectation-maximization was developed which is insensitive to missing data, incorporates both positivity and smoothness constraints, and converges rapidly in 15 to 20 iterations. Analysis using measured data shows that the practical resolution gain that can be expected using this algorithm is less than a factor of two. A new microscanning algorithm was developed and demonstrated that can reliably detect less than one fifth of an IFOV of displacement using field-test data. The iteration of the super-resolution and microscanning algorithms was demonstrated, and resolution gains of four to six times can be achieved if the image is undersampled by a factor of two or three. Consequently, it makes sense to use a wide, undersampled-FOV sensor in which high spatial resolution can be obtained as desired using microscanning and super-resolution techniques.
In this paper we examine the accuracy with which Zernike coefficients for turbulence-induced wavefront aberrations can be estimated from conventional and Hartmann-sensor images. The performance limit for the estimation of the first 30 Zernike coefficients with a conventional image is shown to be significantly better than the performance limit of an 8 x 8 Hartmann sensor array.
In this paper we report experimental results for a new technique for estimating aberrations that extends the range of aberration strengths that can be sensed using Hartmann sensor technology, by means of an algorithm that processes both a Hartmann sensor image and a conventional image formed with the same aberration. We find that theory and experiment match well within the experimental error, and that very strong defocus aberrations can be accurately sensed with this technique.
Hartmann sensors and shearing interferometers have dynamic range limitations which bound the strength of the aberration which can be sensed. The largest aberration which can be reliably sensed in a Hartmann sensor must have a local gradient small enough so that the spot formed by each lenslet is confined to the area behind the lenslet -- if the local gradient is larger, spots appear under nearby lenslets, causing a form of cross talk between the wave front sensor channels. Similarly, the effectiveness of shearing interferometer-based aberration sensing can be reduced by strong phase gradients which cause unresolved 2π phase jumps in the measured fringe pattern. In this paper we describe a wave front reconstruction algorithm which processes the whole image measured by either a Hartmann sensor or a shearing interferometer, and a conventional image formed using the incident aberration. We show that this algorithm can accurately estimate aberrations for cases where the aberration is strong enough to cause many of the images formed by individual Hartmann sensor lenslets to fall outside the local region of the Hartmann sensor detector plane defined by the edges of a lenslet.
We present preliminary results from a comparison of image estimation and recovery algorithms developed for use with advanced telescope instrumentation and adaptive optics systems. Our study will quantitatively compare the potential of these techniques to boost the resolution of imagery obtained with undersampled or low-bandwidth adaptive optics; example applications are optical observations with IR-optimized AO, AO observations in severe turbulence, and AO observations with dim guidestars. We will compare the algorithms in terms of morphological and relative radiometric accuracy as well as computational efficiency. Here, we present qualitative comments on image results for two levels each of seeing, object brightness, and AO compensation/wavefront sensing.
To recover spatial information from bandlimited images using maximum likelihood (ML) and constrained least squares techniques, it is necessary that the image plane be oversampled. Specifically, oversampling allows the blur component induced by spatial integration of the signal over the finite size of the detector element(s) to be reduced. However, if oversampling in the image plane is achieved with a fixed array, the field of view (FOV) is proportionately reduced. Conversely, if the FOV is to be preserved, then proportionately more samples are required, implying the requirement for additional detector elements. An effective solution for obtaining oversampling in the image plane while preserving the FOV is to use either controlled or uncontrolled microscanning. There are a number of methods to achieve microscanning, including translation of the sensor array in the image plane and exploitation of airframe jitter. Three unique sixteen-times-Nyquist-oversampled passive millimeter-wave (PMMW) images, of a point source, an extended source, and an M48 tank, were carefully obtained. Both ML and constrained least squares (CLS) algorithms were used for restoration of spatial information in the images. Restoration of known extended-source object functions (contained in the extended-source image) resulted in resolution gains of 1.47 and 3.43 using the CLS and ML methods, respectively, as measured by the increase in effective aperture.
The multi-frame blind deconvolution algorithm is considered for processing astronomical speckle images when only a few frames of data are collected. It has been noted that when the speckle data contain even moderate amounts of shot noise, the algorithm often converges to the trivial point solution. In this paper we consider a 'penalized' blind deconvolution algorithm in which the penalty function is based on the Knox-Thompson algorithm.
KEYWORDS: Speckle, Critical dimension metrology, Speckle pattern, Image restoration, Signal to noise ratio, Information operations, Backscatter, Image acquisition, Digital image correlation, Digital imaging
Correlations of coherent backscatter intensities have been used to form Fourier spectra of coherently illuminated objects. Both second-order and some fourth-order correlations have been studied. We add to this by describing a new method that uses fourth-order field correlations for imaging. If one knows the actual object speckle field, or certain sheared versions of it, then it is possible through fourth-order correlations to obtain the incoherent brightness function of the object. The technique allows simultaneous averaging-out of field phase distortions, certain field-measurement noises, and laser speckle.
Deconvolution from wavefront sensing (or self-referenced speckle holography) has previously been proposed as a post-detection processing technique for correcting turbulence-induced wavefront phase-errors in incoherent imaging systems. In this paper, a new methodology is considered for processing the image and wavefront-sensor data in which the method of maximum-likelihood estimation is used to simultaneously estimate the object intensity and phase errors directly from the detected images and wavefront-sensor data. This technique is demonstrated to work well in a situation for which the wavefront sensor's lenslet diameters are such that their images are not simply spots of light translated according to the local slope of the phase errors, but are instead an array of small, interfering speckle patterns.
A laser radar using an array of heterodyne detectors offers the possibility of fine resolution angle-angle imaging. The heterodyne measurements, however, are subject to phase errors due to atmospheric turbulence and mechanical misalignment. A method is described that employs digital shearing of the heterodyne measurements as a means to remove phase errors. By this method large phase errors can be corrected without requiring a beacon or a glint. This digital shearing laser interferometry method was investigated theoretically and demonstrated via computer simulations which included photon noise and various types of phase errors. The method was also successfully applied to data collected in a simple laboratory experiment.
New algorithms are summarized for recovering an object's intensity distribution from the second- or third-order autocorrelation function, or equivalently, the Fourier magnitude or bispectrum, of the intensity.