Fusion of multispectral and panchromatic satellite images based on contourlet transform and local average gradient

Haohao Song, Songyu Yu, Li Song, Xiaokang Yang
Abstract
Recent studies show that wavelet-based image fusion methods provide high spectral quality in fused satellite images. However, images fused by most wavelet-based methods have lower spatial resolution because critical downsampling is included in the wavelet transform. We propose a fusion method for multispectral and panchromatic satellite images based on the contourlet transform and the local average gradient (LAG). Contourlet represents edges and texture better than wavelet. Because edges and texture are fundamental to image representation, enhancing them is an effective means of enhancing spatial resolution. Based on the LAG, the proposed fusion method further reduces the spectral distortion of the fused image. Experimental results show that the proposed method increases the spatial resolution and reduces the spectral distortion of the fused image at the same time.

1. Introduction

In many remote sensing and mapping applications that require both high spatial and high spectral resolution, the fusion of panchromatic (Pan) images, which have high spatial but low spectral resolution, with multispectral (MS) images, which have low spatial but high spectral resolution, is an important issue.

To date, various image fusion techniques have been developed. The well-known methods include the intensity-hue-saturation (IHS) transform, the Brovey transform, and principal component analysis (PCA). However, a limitation of these methods is that some distortion occurs in the spectral characteristics of the original MS images. Recently, developments in wavelet analysis have provided a potential solution to this problem. Wavelet-based methods extract from a Pan image the detailed spatial information that is not present in the corresponding MS image; this information is then injected into the MS image in a multiresolution framework.1, 2

Wavelet-based image fusion provides high spectral quality in fused satellite images. However, images fused by wavelets have much lower spatial resolution because critical downsampling is included in the wavelet transform. In remote sensing applications such as mapping, photogrammetric measurement, and interpretation, the spatial information of a fused image is just as important as the spectral information. We therefore need an advanced image fusion method that gives the fused image the high spectral resolution of the MS image and the high spatial resolution of the Pan image at the same time.

Recently, other multiresolution analyses have been developed, including the contourlet. The contourlet transform (CT) expands images using basic elements shaped like contour segments, and was developed to overcome the inefficiency of wavelet at representing directional image structures. The resulting transform retains the multiscale and time-frequency localization properties of wavelet, but also offers a high degree of directionality and anisotropy. Specifically, CT involves basis functions oriented along any power-of-two number of directions with flexible aspect ratios. With such a rich set of basis functions, contourlet can represent edges and textures in images better than wavelet.3, 4

In this paper, we propose a new image fusion method based on CT and the local average gradient (LAG). To the best of our knowledge, this is the first time contourlet has been applied to this field. By exploiting the advantages of contourlet, the proposed method achieves higher spatial resolution than wavelet-based fusion methods. Based on the LAG, it reduces the spectral distortion of the fused image by conditionally selecting either the high-frequency (HF) coefficients of the MS image or the corresponding HF coefficients of the Pan image. The method is thus able to increase the spatial resolution and reduce the spectral distortion of the fused image at the same time.

2. The Proposed Image Fusion Method

2.1. LAG

In the proposed fusion method, the LAG is used to represent perceptually meaningful image structures (edges and texture) in the HF subbands of the MS and Pan images. The LAG is defined as

Eq. 1

$$\mathrm{LAG}(i,j)=\left\{\left[\frac{\partial I_W(i,j)}{\partial i}\right]^2+\left[\frac{\partial I_W(i,j)}{\partial j}\right]^2\right\}^{1/2}.$$

Here $I_W(i,j)=I(i,j)$ for $(i,j)\in W$, where $W$ represents a window with an adaptive width; $\partial I_W(i,j)/\partial i$ and $\partial I_W(i,j)/\partial j$ are the first-order differentials of the image $I_W(i,j)$ in the $i$ and $j$ directions, respectively. The window $W$ localizes the average gradient at position $(i,j)$. The window size varies with the size of the HF subband so as to represent the local variation of the subband exactly.
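As a minimal NumPy/SciPy sketch of Eq. 1, one can compute per-pixel gradient magnitudes and average them over the window; the box-filter averaging and the boundary handling here are assumptions, since the paper does not prescribe them:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_average_gradient(subband, window):
    """Sketch of Eq. 1: local average gradient (LAG) of one HF subband."""
    # First-order differentials of the subband along the i and j directions.
    di, dj = np.gradient(subband.astype(float))
    # Gradient magnitude {[dI/di]^2 + [dI/dj]^2}^(1/2) at each (i, j).
    magnitude = np.sqrt(di ** 2 + dj ** 2)
    # Averaging over the window W localizes the gradient at position (i, j).
    return uniform_filter(magnitude, size=window)
```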

A Pan image contains abundant edges and texture and has high spatial resolution, whereas an MS image has low spatial resolution. Accordingly, the Pan image generally has larger LAGs than the MS image. To give the fused image high spatial resolution, we should select the HF coefficients with the larger LAGs. Conversely, when the LAGs of the HF coefficients in the Pan image are close to those in the MS image, we should select the HF coefficients of the MS image to reduce spectral distortion as much as possible.

2.2. Modified Contourlet Transform

Downsampling is included in the directional filter bank (DFB) of CT: at each scale, each directional subband is a quarter of the size of the original HF image, so the spatial resolution of each HF subband is reduced by the downsampling. To calculate the LAGs of the HF coefficients at as high a spatial resolution as possible, it is undesirable for downsampling to be included in the DFB.

To obtain HF subbands with high spatial resolution while modifying CT as little as possible, we transfer the downsampling operator from the decomposition end to the reconstruction end, yielding a modified contourlet transform (MCT). In other words, at the decomposition end no downsampling is performed, so the LAGs of the HF coefficients can be calculated at high spatial resolution; at the reconstruction end, the downsampling is performed before the HF subbands are fed into the DFB. In this way, HF subbands with high spatial resolution are obtained at the decomposition end through a simple modification of CT.
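The sketch below illustrates only where the downsampling operator moves; it is not a perfect-reconstruction filter bank. The oriented kernels, function names, and the factor-2 decimation per axis are illustrative assumptions standing in for the actual DFB fan filters:

```python
import numpy as np
from scipy.ndimage import convolve

# Toy oriented kernels standing in for the DFB fan filters (the actual
# transform uses the 5/3 biorthogonal filters mentioned in Sec. 3).
DIRECTIONAL_KERNELS = [
    np.array([[-1.0, 0.0, 1.0]]),                                     # ~0 deg
    np.array([[-1.0], [0.0], [1.0]]),                                 # ~90 deg
    np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 0.0], [-1.0, 0.0, 0.0]]),   # ~45 deg
    np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, -1.0]]),   # ~135 deg
]

def mct_directional_analysis(highpass):
    """Decomposition end of the MCT: directional filtering WITHOUT the
    downsampling of the standard DFB, so every subband keeps the full
    resolution of the highpass image (needed to compute the LAGs)."""
    return [convolve(highpass.astype(float), k) for k in DIRECTIONAL_KERNELS]

def defer_downsampling(undecimated_subbands):
    """Reconstruction end: the deferred downsampling is applied here, before
    the subbands enter the standard (decimated) inverse pipeline; 4 directions
    imply quarter-size subbands, i.e., a factor of 2 per axis."""
    return [s[::2, ::2] for s in undecimated_subbands]
```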

2.3. Conditionally Selecting HF Coefficients in either the MS or Pan Images

For the fused image to have the high spectral resolution of the MS image and the high spatial resolution of the Pan image at the same time, we should (1) select the low-frequency (LF) coefficients of the MS image, which embody its main spectral information; (2) select the HF coefficients with large LAGs, which embody the main changes in the images (e.g., edges and texture); and (3) select the HF coefficients of the MS image when the LAGs of the HF coefficients in the two images are close.

The first two selection conditions are straightforward; the last is analyzed below. In general, the LAG of an HF coefficient in the Pan image is larger than that in the MS image because of the former's higher spatial resolution. However, in regions that are smooth in both images, the LAGs of the HF coefficients in the Pan image are close to those in the MS image. There, selecting the HF coefficients of the MS image keeps the spectral content of the fused image closer to that of the MS image.

Based on the above analysis, we select the LF and HF coefficients for the fused image in the modified contourlet domain according to Eqs. 2 and 3, respectively.

Eq. 2

$$F_{LF}(i,j)=MS_{LF}(i,j),$$

Eq. 3

$$F_{HF}(i,j)=\begin{cases}Pan_{HF}(i,j), & \text{if } \mathrm{LAG}_{Pan}(i,j)-\mathrm{LAG}_{MS}(i,j)>T_{s,d},\\ MS_{HF}(i,j), & \text{if } \mathrm{LAG}_{Pan}(i,j)-\mathrm{LAG}_{MS}(i,j)\le T_{s,d}.\end{cases}$$

Here, $F_{LF}(i,j)$, $F_{HF}(i,j)$, $MS_{LF}(i,j)$, $MS_{HF}(i,j)$, $Pan_{LF}(i,j)$, and $Pan_{HF}(i,j)$ are the LF and HF coefficients of the fused, MS, and Pan images, respectively; $\mathrm{LAG}_{MS}(i,j)$ and $\mathrm{LAG}_{Pan}(i,j)$ are the LAGs of the MS and Pan images, respectively; and $T_{s,d}$ is the threshold for the $s$-scale, $d$-direction subband, by which we control the tradeoff between spatial and spectral resolution in the fused image.

If $T_{s,d}$ is set to a larger value, the fused image has higher spectral resolution but lower spatial resolution; conversely, if $T_{s,d}$ is set to a smaller value, the fused image has higher spatial resolution but lower spectral resolution.
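As a concrete illustration, here is a minimal NumPy sketch of the selection rules of Eqs. 2 and 3, assuming the subbands and LAG maps are aligned arrays of equal size:

```python
import numpy as np

def fuse_hf(pan_hf, ms_hf, lag_pan, lag_ms, threshold):
    """Eq. 3: per-coefficient selection between Pan and MS HF subbands.
    Where the Pan LAG exceeds the MS LAG by more than T_{s,d}, the Pan
    coefficient carries genuine spatial detail and is kept; otherwise the
    MS coefficient is kept to limit spectral distortion."""
    return np.where(lag_pan - lag_ms > threshold, pan_hf, ms_hf)

# Eq. 2: the LF subband of the fused image is taken wholly from the MS image:
# fused_lf = ms_lf
```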

2.4. Outline of Proposed Image Fusion Method

The flowchart of the proposed image fusion method is presented in Fig. 1.

Fig. 1

Flowchart of the proposed image fusion method.


3. Experimental Results

To validate the efficiency of the proposed image fusion method, we conducted tests on QuickBird images. The Pan image has a typical resolution of 0.7 m, while the MS image has a resolution of 2.8 m. Figure 2a shows the Pan image, and Fig. 2b shows the original MS image composed of the blue, green, and red bands. The MS image was registered to the Pan image and resampled to 0.7 m.

Fig. 2

QuickBird images: (a) Pan image, (b) MS image.


In our fusion method, the Pan and MS images are transformed by a 5-scale, 4-direction MCT. The filters should be short to keep the computational burden low and to avoid smearing image details; therefore, the 5/3 biorthogonal filters are used for both the Laplacian pyramid (LP) and the directional decomposition. Because the size of the HF subbands decreases as the scale increases, the window W used to compute the LAG is 8×8 at the first scale, 4×4 at the second scale, and 2×2 at the remaining scales. In addition, for simplicity, $T_{s,d}=100$ for every HF subband in our experiments.
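These settings can be summarized in a small per-subband driver; the sketch below assumes the `local_average_gradient` and `fuse_hf` functions given earlier, with the scale-to-window mapping beyond the second scale following the text:

```python
# Settings from Sec. 3: the window W shrinks with scale because the HF
# subbands get smaller, and a single threshold T_{s,d} = 100 is used for
# every subband.
WINDOW_BY_SCALE = {1: 8, 2: 4, 3: 2, 4: 2, 5: 2}
THRESHOLD = 100

def fuse_subband(pan_hf, ms_hf, scale):
    """Fuse one HF subband at the given scale using the LAG-based rule."""
    w = WINDOW_BY_SCALE[scale]
    lag_pan = local_average_gradient(pan_hf, w)
    lag_ms = local_average_gradient(ms_hf, w)
    return fuse_hf(pan_hf, ms_hf, lag_pan, lag_ms, THRESHOLD)
```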

As the results in Fig. 3b show, the image fused by the proposed method contains the structural details of the higher-resolution Pan image together with the rich spectral information of the MS image. Moreover, compared with the wavelet-based fusion result shown in Fig. 3a, the result of the proposed method has a better visual effect.

Fig. 3

Comparison of experimental results: (a) wavelet-based fusion method, (b) proposed fusion method.


In addition to the visual analysis, we conducted a quantitative analysis based on two factors: the standard deviation (SD) and the relative average spectral error (RASE).5
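For reference, a sketch of the two factors under standard definitions follows; the paper does not spell out its normalization, so treat the details as assumptions (RASE follows Wald et al., Ref. 5: 100/M times the RMS of the per-band RMSEs, with M the mean radiance of the original MS bands):

```python
import numpy as np

def sd(band):
    """Standard deviation of one fused band."""
    return float(np.std(band))

def rase(ms_bands, fused_bands):
    """Relative average spectral error (Ref. 5), in percent."""
    # M: mean radiance of the N original MS bands.
    mean_radiance = np.mean([b.mean() for b in ms_bands])
    # Per-band mean squared error between original MS and fused bands.
    band_mse = [np.mean((m.astype(float) - f.astype(float)) ** 2)
                for m, f in zip(ms_bands, fused_bands)]
    # sqrt of the mean MSE equals the RMS of the per-band RMSEs.
    return 100.0 / mean_radiance * float(np.sqrt(np.mean(band_mse)))
```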

Using these two factors, Table 1 compares the image fusion results of the proposed method, the wavelet-based method, and the IHS method.

Table 1

Comparison of image fusion methods by IHS, wavelet, and contourlet.

Fusion Method    SD (R band)    SD (G band)    SD (B band)    RASE
IHS              0.7923         0.6411         0.3907         0.9870
Wavelet          0.7346         0.5992         0.3617         0.7550
Contourlet       0.7322         0.5924         0.3568         0.7483

The experimental results show that the proposed method provides more detailed spatial information than the wavelet-based method and, at the same time, preserves the spectral content of the MS image better than both the IHS and wavelet-based methods.

4. Conclusions

We have presented a new image fusion method based on the concept of the LAG using the contourlet transform. Because contourlet represents edges and texture better than wavelet, it is well suited to image fusion that aims at a fused image with both high spatial and high spectral resolution. Experimental results show that the proposed method provides better visual and quantitative results for remote sensing image fusion.

Acknowledgments

We would like to thank the anonymous reviewers for their valuable comments. This work is partially supported by the Research Fund for the Doctoral Program of Higher Education (RFDP) under Grant No. 20040248047 and by the National Natural Science Foundation of China (NSFC) under Grants No. 60332030 and No. 60502034.

References

1. M. Gonzalez-Audicana, J. Saleta, R. Catalan, and R. Garcia, "Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition," IEEE Trans. Geosci. Remote Sens. 42, 1291–1299 (2004).

2. X. Otazu, M. Gonzalez-Audicana, O. Fors, and J. Nunez, "Introduction of sensor spectral response into image fusion methods. Application to wavelet-based methods," IEEE Trans. Geosci. Remote Sens. 43, 2376–2385 (2005).

3. D. Po and M. Do, "Directional multiscale modeling of images using the contourlet transform," IEEE Trans. Image Process. 15, 1610–1620 (2006). https://doi.org/10.1109/TIP.2006.873450

4. M. Do and M. Vetterli, "The contourlet transform: An efficient directional multiresolution image representation," IEEE Trans. Image Process. 14, 2091–2106 (2005). https://doi.org/10.1109/TIP.2005.859376

5. L. Wald, T. Ranchin, and M. Mangolini, "Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images," Photogramm. Eng. Remote Sens. 63, 691–699 (1997).
© 2007 Society of Photo-Optical Instrumentation Engineers (SPIE)

Haohao Song, Songyu Yu, Li Song, and Xiaokang Yang, "Fusion of multispectral and panchromatic satellite images based on contourlet transform and local average gradient," Optical Engineering 46(2), 020502 (1 February 2007). https://doi.org/10.1117/1.2437125
Keywords: image fusion, spatial resolution, wavelets, Earth observing sensors, satellite imaging, satellites, spectral resolution