Lawrence Livermore National Laboratory is a large, multidisciplinary institution that conducts fundamental
and applied research in the physical sciences. Research programs at the Laboratory run the
gamut from theoretical investigations, to modeling and simulation, to validation through experiment.
Over the years, the Laboratory has developed a substantial research component in the areas of signal
and image processing to support these activities. This paper surveys some of the current research in
signal and image processing at the Laboratory. Of necessity, the paper does not delve deeply into any
one research area, but an extensive citation list is provided for further study of the topics presented.
Overhead persistent surveillance systems are becoming more capable of acquiring wide-field image sequences over long
time spans, and the need to exploit these data is growing accordingly. The ability to track a single vehicle of interest, or
all observable vehicles (which may number in the thousands), across large, cluttered regions for as long as they persist in
the imagery, either in real time or quickly on demand, is highly desirable. With this ability we can begin to answer a
number of interesting questions, such as: what are the normal traffic patterns in a particular region, or where did that truck
come from? There are many challenges associated with processing this type of data, some of which we address in
the paper. Wide-field image sequences are very large with many thousands of pixels on a side and are characterized by
lower resolutions (e.g. worse than 0.5 meters/pixel) and lower frame rates (e.g. a few Hz or less). The objects in the
scenery can vary in size, density, and contrast with respect to the background. At the same time the background scenery
provides a number of clutter sources, both man-made and natural. We describe our current implementation of an
ultrascale-capable multiple-vehicle tracking algorithm for overhead persistent surveillance imagery, and we discuss the
tracking and timing performance of the current implementation, which relies on grayscale electro-optical
image sequences alone for track-segment generation.
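The abstract does not spell out the tracker internals, but the sketch below illustrates the kind of frame-to-frame association step any multiple-vehicle tracker must perform once per-frame detections are available. The greedy nearest-neighbor matching, gating distance, and toy detection lists are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def associate_detections(tracks, detections, max_gate=15.0):
    """Greedy nearest-neighbor association of per-frame vehicle detections to
    existing track segments.  `tracks` maps track id -> last (x, y) position;
    `detections` is an (N, 2) array of detection centroids from the current
    frame.  A real wide-area tracker would add motion prediction, appearance
    features, and explicit track initiation/termination logic."""
    assignments = {}
    unused = set(range(len(detections)))
    for track_id, last_pos in tracks.items():
        if not unused:
            break
        best = min(unused, key=lambda i: np.linalg.norm(detections[i] - last_pos))
        if np.linalg.norm(detections[best] - last_pos) <= max_gate:
            assignments[track_id] = best
            unused.remove(best)
    return assignments, unused          # unmatched detections can seed new tracks

# Toy usage (positions in pixels):
tracks = {0: np.array([100.0, 200.0]), 1: np.array([400.0, 50.0])}
detections = np.array([[103.0, 198.0], [405.0, 53.0], [900.0, 900.0]])
matched, unmatched = associate_detections(tracks, detections)
```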
Imaging over long distances is crucial to a number of defense and security applications, such as homeland security and
launch tracking. However, the image quality obtained from current long-range optical systems can be severely degraded
by the turbulent atmosphere in the path between the region under observation and the imager. While this obscured
image information can be recovered using post-processing techniques, the computational complexity of such approaches
has prohibited deployment in real-time scenarios. To overcome this limitation, we have coupled a state-of-the-art
atmospheric compensation algorithm, the average-bispectrum speckle method, with a powerful FPGA-based embedded
processing board. The end result is a lightweight, low-power image processing system that improves the quality of
long-range imagery in real-time, and uses modular video I/O to provide a flexible interface to most common digital and
analog video transport methods. By leveraging the custom, reconfigurable nature of the FPGA, a 20x speed increase
over a modern desktop PC was achieved in a form-factor that is compact, low-power, and field-deployable.
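As a rough illustration of what the average-bispectrum speckle method accumulates per frame, the sketch below averages the power spectrum and a small subset of bispectrum elements over a stack of short-exposure frames. The frame data, grid size, and chosen frequency pairs are placeholders, and the recursive phase recovery and final Fourier inversion steps of the full algorithm are omitted.

```python
import numpy as np

def accumulate_speckle_statistics(frames, pairs):
    """Accumulate the frame-averaged power spectrum and a chosen subset of
    frame-averaged bispectrum elements over a stack of short-exposure images.
    In bispectral speckle imaging the object's Fourier amplitudes follow from
    the power spectrum and its Fourier phases are recovered recursively from
    the bispectrum phases; those later steps are omitted here."""
    power = np.zeros(frames[0].shape)
    bispec = np.zeros(len(pairs), dtype=complex)
    for frame in frames:
        F = np.fft.fft2(frame)
        power += np.abs(F) ** 2
        for k, (u, v) in enumerate(pairs):
            uv = (u[0] + v[0], u[1] + v[1])
            # Bispectrum element B(u, v) = F(u) * F(v) * conj(F(u + v)).
            bispec[k] += F[u] * F[v] * np.conj(F[uv])
    return power / len(frames), bispec / len(frames)

# Toy usage: random frames and a few low-frequency pairs along one axis.
frames = [np.random.rand(64, 64) for _ in range(100)]
pairs = [((0, 1), (0, 1)), ((0, 1), (0, 2)), ((0, 2), (0, 2))]
avg_power, avg_bispectrum = accumulate_speckle_statistics(frames, pairs)
```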
While imaging over long distances is critical to a number of security and defense applications, such as homeland security
and launch tracking, current optical systems are limited in resolving power. This is largely a result of the turbulent
atmosphere in the path between the region under observation and the imaging system, which can severely degrade
captured imagery. There are a variety of post-processing techniques capable of recovering this obscured image
information; however, the computational complexity of such approaches has prohibited real-time deployment and
hampers the usability of these technologies in many scenarios. To overcome this limitation, we have designed and
manufactured an embedded image processing system based on commodity hardware which can compensate for these
atmospheric disturbances in real-time. Our system consists of a reformulation of the average bispectrum speckle method
coupled with a high-end FPGA processing board, and employs modular I/O capable of interfacing with most common
digital and analog video transport methods (composite, component, VGA, DVI, SDI, HD-SDI, etc.). By leveraging the
custom, reconfigurable nature of the FPGA, we have achieved performance twenty times faster than a modern desktop
PC, in a form-factor that is compact, low-power, and field-deployable.
KEYWORDS: Field programmable gate arrays, Speckle, Video, Digital signal processing, Prototyping, Signal processing, Video processing, Speckle imaging, Algorithm development, Video acceleration
In this paper, we discuss the real-time compensation of air turbulence in imaging through long atmospheric paths. We
propose the use of a reconfigurable hardware platform, specifically field-programmable gate arrays (FPGAs), to reduce
costs and development time, as well as increase flexibility and reusability. We present the results of our acceleration
efforts to date (40x speedup) and our strategy to achieve a real-time, atmospheric compensation solver for high-definition video signals.
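To put the real-time goal in perspective, a back-of-the-envelope pixel-rate calculation for high-definition video is shown below; the frame size and frame rate are illustrative assumptions rather than figures from the paper.

```python
# Illustrative throughput budget for real-time HD processing (assumed numbers).
width, height = 1920, 1080          # HD frame size
fps = 30                            # assumed frame rate
pixels_per_second = width * height * fps
ns_per_pixel = 1e9 / pixels_per_second

print(f"{pixels_per_second / 1e6:.1f} Mpixel/s, {ns_per_pixel:.1f} ns per pixel")
# Roughly 62 Mpixel/s leaves about 16 ns per pixel, which is why a deeply
# pipelined FPGA datapath is attractive compared to a sequential CPU
# implementation of an FFT-heavy speckle algorithm.
```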
Obtaining a high-resolution image of an object or scene from a long distance away can be very problematic, even with
the best optical system. This is because atmospheric blurring and distortion will limit the resolution and contrast of
high-resolution imaging systems with apertures of substantial size over horizontal and slant paths. Much of the
horizontal and slant-path surveillance imagery we have previously collected and successfully enhanced has been
collected at visible wavelengths, where atmospheric effects are the strongest. Imaging at longer wavelengths has the
benefit of seeing through obscurants and even at night; although the atmospheric effects are noticeably reduced at these
wavelengths, they are nevertheless present, especially near the ground. This paper will describe our recent work on enhanced infrared
(IR) surveillance using bispectral speckle imaging. Bispectral speckle imaging in this context is an image post-processing
algorithm that aims to solve the atmospheric blurring and distortion problem of imaging through horizontal
or slant path turbulence. A review of the algorithm as well as descriptions of the IR camera and optical systems used in
our data collections will be given. Examples of horizontal and slant-path imagery before and after speckle processing
will also be presented to demonstrate the resolution improvement gained by the processing. Comparisons of IR imagery
to visible wavelength imagery of the same target under the same conditions will be shown to demonstrate the tradeoffs
of going to longer wavelengths.
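The visible-versus-IR tradeoff mentioned above can be made concrete with standard turbulence scaling relations. The sketch below compares diffraction-limited and turbulence-limited angular resolution using the usual Fried-parameter scaling r0 ∝ λ^(6/5); the aperture diameter and reference r0 are chosen purely for illustration and are not values from the paper.

```python
import numpy as np

def angular_resolutions(wavelength, aperture, r0_ref, wavelength_ref=0.5e-6):
    """Compare diffraction-limited and turbulence-limited angular resolution.
    Uses the standard Fried-parameter scaling r0(lambda) ~ lambda**(6/5).
    Both resolutions are returned in microradians."""
    r0 = r0_ref * (wavelength / wavelength_ref) ** (6.0 / 5.0)
    diffraction = 1.22 * wavelength / aperture
    seeing = wavelength / r0          # turbulence-limited (long exposure)
    return diffraction * 1e6, seeing * 1e6

# Illustrative horizontal-path case: 20 cm aperture, r0 = 2 cm at 0.5 um.
for lam, label in [(0.5e-6, "visible"), (4.0e-6, "MWIR")]:
    d, s = angular_resolutions(lam, aperture=0.2, r0_ref=0.02)
    print(f"{label}: diffraction-limited {d:.1f} urad, turbulence-limited {s:.1f} urad")
# At the longer wavelength the turbulence limit relaxes faster than the
# diffraction limit degrades, which is the qualitative tradeoff discussed above.
```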
High-resolution surveillance imaging with apertures greater than a few inches over horizontal or slant paths at optical or infrared wavelengths will typically be limited by atmospheric aberrations. With static targets and static platforms, we have previously demonstrated near-diffraction limited imaging of various targets including personnel and vehicles over horizontal and slant paths ranging from less than a kilometer to many tens of kilometers using adaptations to bispectral speckle imaging techniques. Nominally, these image processing methods require the target to be static with respect to its background during the data acquisition since multiple frames are required. To obtain a sufficient number of frames and also to allow the atmosphere to decorrelate between frames, data acquisition times on the order of one second are needed. Modifications to the original imaging algorithm will be needed to deal with situations where there is relative target to background motion. In this paper, we present an extension of these imaging techniques to accommodate mobile platforms and moving targets.
We have previously demonstrated and reported on the use of sub-field speckle processing for the enhancement of both near and far-range surveillance imagery of people and vehicles that have been degraded by atmospheric turbulence. We have obtained near diffraction-limited imagery in many cases and have shown dramatic image quality improvement in other cases. As it is possible to perform only a limited number of experiments in a limited number of conditions, we have developed a computer simulation capability to aid in the prediction of imaging performance in a wider variety of conditions. Our simulation capability includes the ability to model
extended scenes in distributed turbulence. Of great interest is the effect of the isoplanatic angle on speckle imaging performance as well as on single deformable mirror and multiconjugate adaptive optics system performance. These angles are typically quite small over horizontal and slant paths. This paper will begin to explore these issues which are important for predicting the performance of both passive and active horizontal and slant-path imaging systems.
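For context, the isoplanatic angle referred to above is set by a path-weighted integral of the refractive-index structure constant. The sketch below evaluates the standard plane-wave expressions for r0 and θ0 numerically; the constant Cn² value, path length, and wavelength are assumed for illustration and are not taken from the paper.

```python
import numpy as np

def r0_and_theta0(cn2, path_length, wavelength=0.5e-6, samples=1000):
    """Fried parameter r0 and isoplanatic angle theta0 from a Cn^2 profile,
    using the standard plane-wave integrals with distance z measured from
    the aperture.  `cn2` may be a constant or a function of z [m^-2/3]."""
    k = 2.0 * np.pi / wavelength
    z = np.linspace(0.0, path_length, samples)
    dz = z[1] - z[0]
    cn2_z = cn2(z) if callable(cn2) else np.full_like(z, cn2)
    r0 = (0.423 * k**2 * np.sum(cn2_z) * dz) ** (-3.0 / 5.0)
    theta0 = (2.914 * k**2 * np.sum(cn2_z * z ** (5.0 / 3.0)) * dz) ** (-3.0 / 5.0)
    return r0, theta0

# Illustrative horizontal path: constant Cn^2 = 1e-14 m^-2/3 over 2 km.
r0, theta0 = r0_and_theta0(cn2=1e-14, path_length=2000.0)
print(f"r0 = {r0 * 100:.1f} cm, theta0 = {theta0 * 1e6:.1f} urad")
# Values of a few centimeters and a few microradians are typical of the
# "quite small" isoplanatic angles noted above for horizontal paths.
```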
The difficulty in terrestrial imaging over long horizontal or slant paths is that atmospheric aberrations and distortions reduce the resolution and contrast in images recorded at high resolution. This paper will describe the problem of horizontal-path imaging, briefly cover various methods for imaging over horizontal paths and then describe the speckle imaging method actively being pursued at LLNL.
We will review some closer range (1-3 km range) imagery of people we have already published, as well as show new results of vehicles we have obtained over longer slant-range paths greater than 20 km.
Atmospheric aberrations reduce the resolution and contrast in surveillance images recorded over horizontal or slant paths. This paper describes our recent horizontal and slant-path imaging experiments of extended scenes as well as the results obtained using speckle imaging. The experiments were performed with an 8-inch diameter telescope placed on either a rooftop or hillside and cover ranges of interest from 0.5 km up to 10 km. The scenery includes resolution targets, people, vehicles, and other structures. The improvement in image quality using speckle imaging is dramatic in many cases, and depends significantly upon the atmospheric conditions. We quantify resolution improvement through modulation transfer function measurement comparisons.
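The MTF comparison mentioned above can be approximated from an edge target in the imagery. The sketch below shows the generic edge-spread to line-spread to MTF chain on a one-dimensional profile; it is not the specific measurement procedure used in the experiments, and the synthetic edge is a placeholder.

```python
import numpy as np

def mtf_from_edge(edge_profile):
    """Estimate a 1-D MTF from an edge-spread function (ESF): differentiate
    to get the line-spread function (LSF), then take the magnitude of its
    Fourier transform and normalize at zero frequency."""
    lsf = np.gradient(edge_profile.astype(float))
    lsf *= np.hanning(lsf.size)              # taper to reduce noise leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

# Toy example: a blurred step edge.
x = np.linspace(-5, 5, 256)
esf = 0.5 * (1 + np.tanh(x / 0.8))
mtf = mtf_from_edge(esf)
```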
Direct detection of photons emitted or reflected by an extrasolar planet is an extremely difficult but extremely exciting application of adaptive optics. Typical contrast levels for an extrasolar planet would be 10^9: Jupiter is a billion times fainter than the Sun. Current adaptive optics systems can only achieve contrast levels of 10^6, but so-called extreme adaptive optics (ExAO) systems with 10^4-10^5 degrees of freedom could potentially detect extrasolar planets. We explore the scaling laws defining the performance of these systems, first set out by Angel (1994), and derive a different definition of an optimal system. Our sensitivity predictions are somewhat more pessimistic than the original paper, due largely to slow decorrelation timescales for some noise sources, though choosing to site an ExAO system at a location with exceptional r0 (e.g. Mauna Kea) can offset this. We also explore the effects of segment aberrations in a Keck-like telescope on ExAO; although the effects are significant, they can be mitigated through Lyot coronagraphy.
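To relate the contrast ratios quoted above to astronomical magnitudes, a one-line conversion is shown below; the specific ratios are simply the figures quoted in the abstract.

```python
import math

# Delta-magnitude for a flux contrast ratio C: dm = 2.5 * log10(C).
for contrast in (1e9, 1e6):
    print(f"contrast {contrast:.0e} -> {2.5 * math.log10(contrast):.1f} magnitudes")
# A 10^9 (Jupiter-Sun) contrast corresponds to 22.5 magnitudes; the 10^6
# level of current AO systems corresponds to 15 magnitudes.
```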
We have developed a high-resolution wavefront control system based on an optically addressed nematic liquid crystal spatial light modulator with several hundred thousand phase control points, a Shack-Hartmann wavefront sensor with two thousand subapertures, and an efficient reconstruction algorithm using Fourier transform techniques. We present quantitative results of experiments to characterize the performance of this system.
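The Fourier-transform reconstruction referred to above solves the least-squares phase-from-slopes problem with FFTs. The sketch below is a periodic-boundary idealization with simple first-derivative filters, not the specific filters or boundary handling used in the system described; the test wavefront is an assumed toy input.

```python
import numpy as np

def fourier_reconstruct(sx, sy, spacing=1.0):
    """Least-squares wavefront reconstruction from x- and y-slope maps using
    FFTs: a periodic-boundary idealization of the Fourier-transform
    reconstructors used with Shack-Hartmann sensors."""
    n = sx.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=spacing)
    kx, ky = np.meshgrid(k, k, indexing="xy")   # kx varies along columns (x)
    Sx, Sy = np.fft.fft2(sx), np.fft.fft2(sy)
    denom = kx**2 + ky**2
    denom[0, 0] = 1.0                           # avoid divide-by-zero at piston
    phi_hat = (-1j * kx * Sx - 1j * ky * Sy) / denom
    phi_hat[0, 0] = 0.0                         # piston is unobservable from slopes
    return np.real(np.fft.ifft2(phi_hat))

# Toy usage: a periodic test wavefront and its finite-difference gradients.
n = 64
y, x = np.mgrid[0:n, 0:n]
phase = np.sin(2 * np.pi * x / n) + 0.5 * np.cos(4 * np.pi * y / n)
sy, sx = np.gradient(phase)                     # axis 0 -> y, axis 1 -> x
recovered = fourier_reconstruct(sx, sy)         # approximately recovers phase (up to piston)
```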
The wavefront controller for the Keck Observatory AO system consists of two separate real-time control loops: a tip-tilt control loop to remove tilt from the incoming wavefront, and a deformable mirror control loop to remove higher-order aberrations. In this paper, we describe these control loops and analyze their performance using diagnostic data acquired during the integration and testing of the AO system on the telescope. Disturbance rejection curves for the controllers are calculated from the experimental data and compared to theory. The residual wavefront errors due to control loop bandwidth are also calculated from the data, and possible improvements to the controller performance are discussed.
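The disturbance rejection curves mentioned above follow directly from the loop transfer function. The sketch below evaluates the rejection of a textbook discrete integrator with one frame of pure delay; the gain and frame rate are illustrative values, and the model omits the WFS integration and DAC dynamics of the actual Keck controller.

```python
import numpy as np

def rejection_curve(freqs, gain, fs):
    """Disturbance rejection |E/D| of an idealized AO integrator loop:
    controller C(z) = g / (1 - z^-1) with one frame of pure delay.
    This is the standard textbook model, not the exact Keck controller."""
    z = np.exp(2j * np.pi * freqs / fs)
    open_loop = gain * z**-1 / (1.0 - z**-1)
    return np.abs(1.0 / (1.0 + open_loop))

fs = 500.0                                # assumed frame rate [Hz]
freqs = np.linspace(1.0, fs / 2, 500)
atten = rejection_curve(freqs, gain=0.5, fs=fs)
# Low frequencies are strongly rejected; rejection crosses unity near the
# loop bandwidth and shows mild amplification approaching the Nyquist rate.
```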
Any adaptive optics system must be calibrated with respect to internal aberrations in order for it to properly correct the starlight before it enters the science camera. Typical internal calibration consists of using a point source stimulus at the input to the AO system and recording the wavefront at the output. Two methods for such calibration have been implemented on the adaptive optics system at Lick Observatory. The first technique, Phase Diversity, consists of taking out-of-focus images with the science camera and using an iterative algorithm to estimate the system wavefront. A second technique uses a newly installed instrument, the Phase-Shifting Diffraction Interferometer, which has the promise of providing very high accuracy wavefront measurements. During observing campaigns in 1998, both of these methods were used for initial calibrations. In this paper we present results and compare the two methods in regard to accuracy and their practical aspects.
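Phase Diversity as described above uses defocused science-camera images and an iterative estimator. As a loose illustration of the alternating-projection structure underlying such estimators, the sketch below shows classic single-image Gerchberg-Saxton phase retrieval; the pupil mask and measured PSF amplitude are assumed inputs, and the actual calibration uses known defocus diversity and a more sophisticated estimator.

```python
import numpy as np

def gerchberg_saxton(psf_amplitude, pupil_mask, iterations=200):
    """Classic Gerchberg-Saxton phase retrieval: alternate between enforcing
    the measured focal-plane amplitude and the known pupil support.  Note the
    usual sign/twin-image ambiguity of single-image retrieval."""
    phase = np.zeros_like(pupil_mask, dtype=float)
    for _ in range(iterations):
        pupil_field = pupil_mask * np.exp(1j * phase)
        focal_field = np.fft.fft2(pupil_field)
        # Impose the measured focal-plane amplitude, keep the current phase.
        focal_field = psf_amplitude * np.exp(1j * np.angle(focal_field))
        pupil_field = np.fft.ifft2(focal_field)
        phase = np.angle(pupil_field) * pupil_mask
    return phase

# Toy usage: retrieve a small known aberration from its simulated PSF.
n = 128
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = (np.hypot(xx, yy) < n // 4).astype(float)
true_phase = 0.5 * pupil * (xx / (n // 4))
psf_amp = np.abs(np.fft.fft2(pupil * np.exp(1j * true_phase)))
estimate = gerchberg_saxton(psf_amp, pupil)
```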
Results of experiments with the laser guide star adaptive optics system on the 3-meter Shane telescope at Lick Observatory have demonstrated a factor of 4 performance improvement over previous results. Stellar images recorded at a wavelength of 2 micrometers were corrected to over 40 percent of the theoretical diffraction-limited peak intensity. For the previous two years, this sodium-layer laser guide star system has corrected stellar images at this wavelength to approximately 10 percent of the theoretical peak intensity limit. After a campaign to improve the beam quality of the laser system, and to improve calibration accuracy and stability of the adaptive optics system using new techniques for phase retrieval and phase-shifting diffraction interferometry, the system performance has been substantially increased. The next step will be to use the Lick system for astronomical science observations, and to demonstrate this level of performance with the new system being installed on the 10-meter Keck II telescope.
We have developed and tested a method for minimizing static aberrations in adaptive optics systems. In order to correct the static phase aberrations, we need to measure the aberrations through the entire system. We have an experimental setup for demonstrating that phase retrieval can improve the static aberrations to below the 20 nm rms level, with the limiting factor being local turbulence in the AO system. Experimentally thus far, we have improved the static aberrations down to the 50 nm level, with the limiting factor being the ability to adjust the deformable mirror. This should be improved with the better control algorithms now being implemented.