Long-range surveillance imaging requires telescopic optics and fast electro-optic sensors. The intervening air distorts the imagery and its spatial frequency content, and does so unevenly across the field: different regions of the image suffer dissimilar distortion, visible first as a time-varying geometric warp and then as region-specific blurring or "speckle". The severity of this distortion, and hence the reduction in size of the regions exhibiting similar distortion, depends on the field of view of the telescope, the height of the imaging path above ground, the range to the target, and climatic conditions.
Image processing algorithms must be run on the image sequence to correct these distortions, on the assumption that the exposure time has effectively "frozen" the turbulence, and they must operate without knowledge of the actual scene under investigation. Successful algorithms correct the apparent warping and, in doing so, both yield information on the bulk turbulent medium and allow reconstruction of spatial frequency content of the scene that would have been beyond the capability of the optics had there been no turbulence. This is known as turbulence-induced super-resolution.
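As a concrete illustration of the first of these steps, the sketch below removes the time-varying geometric warp by registering each frame against a temporal mean of the sequence and then averaging the dewarped frames. It is a minimal, generic example rather than the algorithm reported here: the use of OpenCV's Farneback dense optical flow, the window sizes, and the assumption of grayscale frames with values in the 0-255 range are all illustrative choices.

```python
import numpy as np
import cv2


def dewarp_sequence(frames):
    """frames: list of same-size grayscale arrays with values in [0, 255]."""
    # Temporal mean as geometric reference: random warps tend to average out.
    reference = np.mean(frames, axis=0).astype(np.float32)
    ref8 = cv2.convertScaleAbs(reference)  # Farneback flow needs 8-bit input
    h, w = reference.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    corrected = []
    for frame in frames:
        frame32 = np.asarray(frame, dtype=np.float32)
        # Dense optical flow from the reference to the distorted frame
        # approximates the turbulence-induced warp field.
        flow = cv2.calcOpticalFlowFarneback(
            ref8, cv2.convertScaleAbs(frame32), None,
            0.5, 3, 21, 3, 7, 1.5, 0)
        # Resample the distorted frame back onto the reference geometry.
        map_x = grid_x + flow[..., 0]
        map_y = grid_y + flow[..., 1]
        corrected.append(cv2.remap(frame32, map_x, map_y, cv2.INTER_LINEAR))
    # Averaging the dewarped frames suppresses the residual time-varying blur.
    return np.mean(corrected, axis=0)
```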
To confirm the success of algorithms in both correcting the distortion and reconstructing such super-resolution, we have devised a field experiment in which the truth image is known and in which independent methods are used to evaluate the turbulence for corroboration of the results. We report here a new algorithm, which has proved successful in satellite remote sensing, for restoring this imagery to a quality beyond the diffraction limit set by the optics.
In this paper, a new image denoising method based on the uHMT (universal Hidden Markov Tree) model in the wavelet domain is proposed. The MAP (maximum a posteriori) estimate is adopted to deal with ill-posed problems such as image denoising in the wavelet domain, and the uHMT model is used to construct the prior for the MAP estimate. Using the conjugate gradient optimization method, the closest approximation to the true image is obtained. The results show that images restored by our method are much better and sharper than those restored by other methods, both visually and quantitatively.
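A simplified sketch of the pipeline described above is given below. The full uHMT prior is a two-state Gaussian mixture whose variances follow a power law across scales; here it is collapsed to a single scale-dependent Gaussian variance so that the MAP objective stays quadratic and can be minimized with conjugate gradient. The wavelet library (PyWavelets), the decay parameters, and the noise level are illustrative assumptions, not values from the paper.

```python
import numpy as np
import pywt
from scipy.optimize import minimize


def map_denoise(noisy, wavelet="db4", levels=4, sigma_n=10.0,
                c1=500.0, alpha=2.5):
    """Wavelet-domain MAP denoising with a scale-dependent Gaussian prior."""
    # Wavelet decomposition of the noisy image, flattened to one array.
    coeffs = pywt.wavedec2(noisy, wavelet, level=levels)
    flat, slices = pywt.coeffs_to_array(coeffs)

    # uHMT-style prior: detail-coefficient variance decays as c1 * 2**(-alpha*j),
    # so coarse scales get large variance and fine scales get small variance.
    prior_var = np.full(flat.shape, 1e8)  # approximation coefficients: flat prior
    for j, level_slices in enumerate(slices[1:], start=1):
        for sl in level_slices.values():
            prior_var[sl] = c1 * 2.0 ** (-alpha * j)

    y = flat.ravel()
    pv = prior_var.ravel()

    def objective(w):
        # Gaussian data term (orthogonal wavelet preserves the L2 norm) + prior.
        r = w - y
        return 0.5 * np.dot(r, r) / sigma_n**2 + 0.5 * np.sum(w**2 / pv)

    def gradient(w):
        return (w - y) / sigma_n**2 + w / pv

    # Conjugate gradient minimization of the MAP objective.
    res = minimize(objective, y, jac=gradient, method="CG")
    w_hat = res.x.reshape(flat.shape)
    return pywt.waverec2(
        pywt.array_to_coeffs(w_hat, slices, output_format="wavedec2"), wavelet)
```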
Super-resolution (SR) recovery has been an important research area for remote sensing images ever since T.S. Huang first published his frequency-domain method in 1984. With the development of computing technology, more and more efficient algorithms have been put forward in recent years, and the Iterative Back-Projection (IBP) method is one of the most popular SR methods. In this paper, a modified IBP is proposed for Advanced Land Observing Satellite (ALOS) imagery. ALOS is a Japanese satellite launched in January 2006 that carries three sensors: the Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM), the Advanced Visible and Near Infrared Radiometer type 2 (AVNIR-2), and the Phased Array type L-band Synthetic Aperture Radar (PALSAR). PRISM has three independent optical systems for viewing nadir, forward, and backward, so as to produce a stereoscopic image along the satellite's track. While PRISM is mainly used to construct a 3-D scene, here we use the three panchromatic low-resolution (LR) images captured by the nadir, backward, and forward sensors to reconstruct one SR image.
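For reference, the sketch below implements classical Iterative Back-Projection, the baseline on which the modified IBP builds. It assumes the three PRISM views have already been co-registered to a common grid; the Gaussian blur model, the scale factor, and the iteration count are illustrative choices rather than the paper's parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom


def ibp_super_resolve(lr_images, scale=2, iterations=30, blur_sigma=1.0,
                      step=1.0):
    """lr_images: list of co-registered low-resolution 2-D arrays."""
    # Initial HR estimate: upsampled average of the LR inputs.
    hr = zoom(np.mean(lr_images, axis=0), scale, order=3)
    for _ in range(iterations):
        correction = np.zeros_like(hr)
        for lr in lr_images:
            # Simulate the imaging model: blur the HR estimate, then decimate.
            simulated_lr = gaussian_filter(hr, blur_sigma)[::scale, ::scale]
            # Back-project the residual: upsample the LR error and spread it
            # with an (approximate) transpose of the blur.
            error = lr - simulated_lr
            correction += gaussian_filter(zoom(error, scale, order=1), blur_sigma)
        hr += step * correction / len(lr_images)
    return hr
```

In practice, the back-projection kernel and the per-view point spread functions would be tuned to the actual PRISM optics and sampling geometry.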