In recent years, high-resolution remote sensing image segmentation has become a key task in many fields, used to accurately extract target information from remote sensing images and widely applied to land monitoring, land-cover classification, and related tasks. These images are characterized by high resolution and large scale, so whole-image segmentation places heavy demands on hardware; the downsampling commonly used to cope tends to degrade segmentation quality, while cropping the image into slices loses edge information, and category homogenization, changes in complex scenes, and noise and occlusion make segmentation further challenging. In this paper, we therefore propose an iterative attention context fusion network (IACFNet) based on the attention mechanism and the fusion of information at different scales. It iteratively computes the weights of an attention module connecting low-level and high-level features to mitigate the loss of spatial information, uses multiscale segmentation for information complementation, and applies an improved boundary loss function to delineate boundary instances precisely. Our proposed method improves the mean intersection over union (mIoU) metric by about 2.8% and 1.5%, respectively, over current state-of-the-art methods.
The development of computer vision technologies has been widely used to raise the level of agricultural intelligence. Crop counting, an application of image counting, plays a fundamental role in agricultural information automation. However, the complex cotton field environment is likely to lead to incorrect detection of the target position or fragmentation of the segmentation results, reducing counting accuracy. Despite this, computer vision technologies have shown great potential to solve this task effectively. To address multimode cotton boll counting in a complicated environment, an in-field cotton boll counting algorithm based on density classification is proposed. First, the algorithm encodes global context information with a density level classification estimator. Then, the input images are converted into high-dimensional feature maps by a density map estimator with a multicolumn structure. Finally, a feature fusion neural network combines the classification information with the high-dimensional feature maps to generate a high-quality density map, from which the cotton bolls are counted. In particular, we collected and labeled a cotton boll counting dataset of 758 high-resolution images for experiments and comparison, which can be divided by environmental conditions and observation sites. In addition, the relationship between cotton yield and the cotton boll counts can be assessed with the proposed algorithm. Experimental results demonstrate that the proposed algorithm achieves a lower counting error and better effectiveness and robustness than the comparative algorithms.
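The density-map formulation above follows a common pattern in object counting: each annotated point is spread into a unit-mass Gaussian, so the integral of the resulting map equals the object count. A minimal sketch of that ground-truth construction (the kernel size and sigma here are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """2-D Gaussian kernel normalized to sum to 1 (unit mass)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def density_map(points, shape, ksize=15, sigma=3.0):
    """Place one unit-mass Gaussian per annotated (y, x) point.
    The integral of the map then equals the object count
    (up to kernel clipping at the image borders)."""
    h, w = shape
    dmap = np.zeros((h, w))
    k = gaussian_kernel(ksize, sigma)
    r = ksize // 2
    for y, x in points:
        y0, y1 = max(y - r, 0), min(y + r + 1, h)
        x0, x1 = max(x - r, 0), min(x + r + 1, w)
        # crop the kernel to the part that falls inside the image
        dmap[y0:y1, x0:x1] += k[y0 - (y - r):y1 - (y - r),
                                x0 - (x - r):x1 - (x - r)]
    return dmap
```

A network trained to regress such maps yields a count simply as `prediction.sum()`, which is why density estimation is robust to the fragmentation problems that break detection-based counting.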
To solve the problems of difficult target matching and low matching efficiency in binocular measurement, this paper proposes a real-time target feature matching algorithm for binocular stereo vision based on absolute-error window minimization (CAEW, Calculate the Absolute Error Window) to improve the speed and accuracy of measurement. First, the camera is calibrated using Zhang's calibration method, and the Bouguet algorithm is used for stereo rectification of the calibration data. Then, the AdaBoost iterative algorithm is used to train a target detector for target recognition. The CAEW algorithm is compared with the commonly used SURF (Speeded-Up Robust Features) algorithm. The experimental evaluation shows that the CAEW algorithm achieves an evaluation score of more than 90%, a significant improvement over the SURF algorithm, and meets the needs of binocular real-time target matching.
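The abstract does not spell out the CAEW matching rule, but window-based absolute-error minimization along rectified epipolar lines can be sketched as follows (the window size and disparity range are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def sad_match(left, right, y, x, win=5, max_disp=32):
    """For a pixel (y, x) in the rectified left image, slide a
    window along the same row of the right image and return the
    disparity minimizing the sum of absolute differences (SAD)."""
    r = win // 2
    patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    best_d, best_err = 0, np.inf
    for d in range(max_disp + 1):
        xs = x - d                      # candidate column in the right image
        if xs - r < 0:                  # window would leave the image
            break
        cand = right[y - r:y + r + 1, xs - r:xs + r + 1].astype(float)
        err = np.abs(patch - cand).sum()
        if err < best_err:
            best_d, best_err = d, err
    return best_d
```

Because rectification constrains the search to a single row, the per-pixel cost is linear in the disparity range, which is what makes this family of window methods fast enough for real-time use.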
Automatic recognition based on image fusion techniques is widely used to integrate a lower-spatial-resolution multispectral image with a higher-spatial-resolution panchromatic image. Research on earthquake events conducted after the 1999 Kocaeli earthquake showed that spatial imagery from various satellites could be exploited, and remote sensing, in terms of spatial resolution and data processing, opens new possibilities for natural hazard assessment. However, the existing techniques either cannot avoid distorting the spectral properties of the image or involve complicated and time-consuming frequency decomposition and reconstruction processing. To address these problems, we present our study of an HIS transform and intensity modulation algorithm. The algorithm is further optimized with the proposed objective of minimizing the error rate. Experiments on recognizing building damage caused by earthquakes show that the algorithm provides better recognition accuracy than the alternatives. Although some environmental issues, such as the influence of sunshine, need further research, the proposed method can benefit further study of this application.
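As a rough illustration of fusion by intensity modulation (a generic sketch, not the paper's exact algorithm), each multispectral band can be scaled by the ratio of the panchromatic image to the intensity component of the HIS transform, which injects spatial detail while keeping the band ratios, and hence the spectral character, unchanged:

```python
import numpy as np

def intensity_modulation_fusion(ms, pan, eps=1e-6):
    """Pan-sharpen a 3-band multispectral image (already resampled
    to the panchromatic grid) by modulating every band with the
    ratio pan / intensity. Band ratios are preserved, so spectral
    distortion is limited; spatial detail comes from pan."""
    intensity = ms.mean(axis=2)            # I component of the transform
    ratio = pan / (intensity + eps)
    return ms * ratio[..., None]
```

A useful sanity check of the design: when `pan` equals the intensity component, the ratio is 1 everywhere and the multispectral image passes through unchanged.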
This paper takes advantage of the imagery produced by QuickBird for damage identification in urban areas, focusing on buildings that collapsed in the Bam earthquake. We present the results of our study on remote sensing image fusion for identifying earthquake-caused building damage with the HIS transform and intensity modulation. The analysis begins with an inventory of buildings, treated as objects within high-resolution QuickBird satellite imagery captured before the event; the number of collapsed buildings is then computed from the distinctive statistical characteristics of these buildings within the 'after' scene. The promising results of this analysis show that improved spatial detail combined with spectral information could serve as a methodology for automated identification of building damage.
Automatic image registration is important for many multiframe-based image analysis applications. With an increasing number of images collected every day from different sensors, automated registration of multi-sensor/multi-spectral images has become an important issue. A wide range of registration techniques exists for different types of applications and data sources; however, no known algorithm can accurately and consistently register multi-source images. This research addresses the problem by developing a fully automatic registration system for remote sensing images. The new method is based on the extraction and matching of common features that are visible in both images. The algorithm involves the following five steps: noise removal, edge extraction, edge linking, pattern extraction, and pattern matching.
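A minimal sketch of the feature-matching idea behind such a pipeline, assuming gradient-magnitude edges and a pure-translation model (the edge linking and pattern extraction steps of the actual system are omitted here):

```python
import numpy as np

def edge_map(img):
    """Simple gradient-magnitude edge extraction (central differences)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def register_translation(ref, mov):
    """Estimate the integer (dy, dx) shift that maps ref onto mov by
    cross-correlating their edge maps in the Fourier domain; edges
    are more stable across sensors than raw intensities."""
    f = np.conj(np.fft.fft2(edge_map(ref))) * np.fft.fft2(edge_map(mov))
    corr = np.fft.ifft2(f).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:                 # unwrap circular shifts to signed offsets
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

Matching edge maps rather than intensities is what lets a single method work across sensors whose radiometry differs, which is the core difficulty the paragraph above describes.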
This study analyzed texture features in multispectral image data. Recent developments in the mathematical theory of the wavelet transform have received overwhelming attention from image analysts. We evaluated the ability of the wavelet transform and other texture analysis algorithms in feature extraction and classification. The algorithms examined were the wavelet transform, the spatial co-occurrence matrix, fractal analysis, and spatial autocorrelation, and their performance with different features was investigated. The wavelet transform was found to be far more efficient than the other advanced spatial methods.
In this paper, we present a new method using the GHM discrete multiwavelet transform for image denoising. Developments in wavelet theory gave rise to the wavelet thresholding method for extracting a signal from noisy data, and signal denoising via wavelet thresholding has since been popularized. Multiwavelets have recently been introduced; they offer simultaneous orthogonality, symmetry, and short support, which makes them particularly suitable for various image processing applications, especially denoising. The method is based on thresholding the multiwavelet coefficients, analogous to thresholding in the standard scalar orthogonal wavelet transform, and it takes into account the covariance structure of the transform. Denoising is carried out by thresholding the multiwavelet coefficients obtained from preprocessing and the discrete multiwavelet transform. The form of the threshold is carefully formulated and is key to the excellent results obtained in extensive numerical simulations of image denoising. We apply the multiwavelet-based method to remote sensing image denoising; the multiwavelet transform is a relatively new technique whose main advantage over the alternatives is that it distorts the spectral characteristics of the image less. The experimental results show that multiwavelet-based image denoising schemes outperform wavelet-based methods both subjectively and objectively.
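As a scalar-wavelet stand-in for the multiwavelet machinery (a sketch, not the paper's GHM transform), thresholding-based denoising can be illustrated with a single-level 2-D Haar transform and soft thresholding of the three detail subbands:

```python
import numpy as np

def haar2d(x):
    """One level of the 2-D Haar wavelet transform."""
    a = (x[0::2] + x[1::2]) / 2.0          # row averages
    d = (x[0::2] - x[1::2]) / 2.0          # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0   # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0   # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0   # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def soft(c, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(img, t):
    """Threshold the detail subbands, keep the approximation."""
    ll, lh, hl, hh = haar2d(img)
    return ihaar2d(ll, soft(lh, t), soft(hl, t), soft(hh, t))
```

The multiwavelet version differs in that the coefficients are vector-valued, so the threshold must account for the covariance between channels, as the abstract notes; the thresholding-and-invert structure is the same.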
Detection and recognition of dim, small moving targets in infrared image sequences containing cloud clutter is an important research area, especially for Infrared Search and Track surveillance applications. In this paper, we propose a new, high-performance algorithm for detecting small moving targets in infrared image sequences containing cloud clutter. The novelty of the algorithm is that it fuses the features of the moving small targets in both the spatial and temporal domains, and the computation can be realized by two independent units. Another advantage of the method is that it achieves better detection precision than several other methods. We also present an algorithm based on image fusion and Kalman tracking that can track a number of very small, low-contrast objects through an image sequence taken from a static camera.
An essential determinant of the value of digital images is their quality. Over the past years, there have been many attempts to develop models or metrics for image quality that incorporate elements of human visual sensitivity; however, there is no current standard, objective definition of spectral image quality. This paper proposes a reliable automatic method for objective image quality measurement based on the wavelet transform and the human visual system. The proposed measure thus differentiates between random and signal-dependent distortion, which have different effects on a human observer. The performance of the proposed quality measure is illustrated with examples involving images with different types of degradation. The technique relates the quality of an image to its interpretation and quantification throughout the frequency range, within which the noise level is estimated for quality evaluation. The experimental results of using this method for image quality measurement exhibit good correlation with subjective visual quality assessments.
In this paper, we propose to generalize the saccade target method and argue that perceptual stability in general arises by learning the effects one's actions have on sensor responses. The apparent stability of color percepts across saccadic eye movements can be explained by positing that perception involves observing how sensory input changes in response to motor activities. The changes related to self-motion can be learned and, once learned, used to form stable percepts. The variation of sensor data in response to a motor act is therefore a requirement for stable perception, rather than something that has to be compensated for in order to perceive a stable world. We provide a simple implementation of this sensory-motor contingency view of perceptual stability and show how a straightforward application of temporal-difference reinforcement learning yields color percepts that are stable across saccadic eye movements, even though the raw sensor input may change radically.
In this paper, a new technique for improving the spatial resolution of hyperspectral image data is presented. The technique combines a high-resolution panchromatic image with a lower-spatial-resolution hyperspectral image to produce a product that has the spectral properties of the hyperspectral image at a spatial resolution approaching that of the panchromatic image. Hyperspectral imaging systems are assuming greater importance in a wide variety of commercial and military systems, and there have been several approaches to using a single higher-spatial-resolution band to improve the spatial resolution of hyperspectral data. The algorithm presented here offers a new approach to combining hyperspectral data with high-resolution images and generally shows lower levels of error than the statistically based algorithms.
The purpose of image fusion is to merge information from multiple sensors and to improve the abilities of information analysis and feature extraction. In this paper, a new image fusion algorithm based on the discrete multiwavelet transform for fusing multi-sensor images is presented. The discussion focuses on the CL (Chui-Lian) multiwavelet, a multiwavelet with two wavelet functions and two scaling functions, which is used to accomplish the image fusion processing. CL multiwavelets have several advantages over scalar wavelets, so they are employed to decompose and reconstruct the images in this algorithm. When images are merged in multiwavelet space, different frequency ranges are processed differently, which allows the information from the original images to be merged adequately and improves information analysis and feature extraction in remote sensing. Experiments including the fusion of registered Visible (VIS)/Infrared (IR) images are presented. Compared with other image fusion methods, this method obtains satisfactory results on both objective and subjective performance measures.
Image fusion refers to techniques that integrate complementary information from multiple image sensors such that the new images are more suitable for human visual perception and computer-processing tasks. In this paper, a new image fusion algorithm based on the multiple-wavelet (multiwavelet) transform for fusing multispectral images is presented. Multiwavelets are extensions of scalar wavelets and have several unique advantages in comparison with them, so the multiwavelet transform is employed to decompose and reconstruct the images in this algorithm. The fusion is performed at the pixel level; other schemes, such as feature- or decision-level fusion, are not considered. A feature-based fusion rule is used to combine the original subimages and to form a pyramid for the fused image. When images are merged in multiwavelet space, different frequency ranges are processed differently, which merges information from the original images adequately and improves information analysis and feature extraction. An experiment on the fusion of registered SPOT Panchromatic and XS3 band images is presented, and the results show that this multiwavelet-based fusion algorithm is an effective approach to image fusion.
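The abstract does not specify its feature-based fusion rule. A common choice in wavelet-domain pixel-level fusion, shown here as an assumption rather than the paper's actual rule, is to keep the larger-magnitude detail coefficient (stronger edges and textures) and to average the approximation coefficients:

```python
import numpy as np

def fuse_detail(c1, c2):
    """Detail-subband rule: at each position, keep the coefficient
    with the larger magnitude, since stronger coefficients carry
    the salient features of that source image."""
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

def fuse_approx(a1, a2):
    """Approximation-subband rule: average the low-frequency
    content of the two sources."""
    return (a1 + a2) / 2.0
```

Applying these rules subband by subband and then inverting the transform is what the phrase "different frequency ranges are processed differently" amounts to in practice.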
In this paper, we present a new image denoising method using the 2-D discrete multiwavelet transform. Developments in wavelet theory gave rise to the wavelet thresholding method for extracting a signal from noisy data, and signal denoising via wavelet thresholding has since been popularized. Multiwavelets have recently been introduced; they offer simultaneous orthogonality, symmetry, and short support, which makes them particularly suitable for various image processing applications, especially denoising. The method is based on thresholding the multiwavelet coefficients, analogous to thresholding in the standard scalar orthogonal wavelet transform, and it takes into account the covariance structure of the transform. Images are denoised by thresholding the multiwavelet coefficients obtained from preprocessing and the discrete multiwavelet transform. The form of the threshold is carefully formulated and is key to the excellent results obtained in extensive numerical simulations of image denoising. The performance of multiwavelets is compared with that of scalar wavelets; simulations reveal that multiwavelet-based image denoising schemes outperform wavelet-based methods both subjectively and objectively.
The purpose of multispectral image fusion is to merge information from multiple sensors and to improve the abilities of information analysis and feature extraction. The discrete wavelet transform offers a more precise tool for image analysis than other multi-resolution analyses: it decomposes an image into low-frequency and high-frequency bands at different levels, and the image can also be reconstructed gradually level by level. However, this method decomposes only the low-frequency band at higher scales, so it omits some useful details of the images. In this paper, we study an improved discrete wavelet transform that also decomposes the high-frequency band at higher scales, which standard wavelet analysis does not. We apply it to image data and give a pixel-level fusion method. By merging remote sensing images of different wavebands, taken by multiple sensors of the same object, with the improved wavelet analysis, we obtain a fused picture. The method successfully fuses the details of the input images and preserves the information of each input image. Compared with other image fusion methods, this method obtains satisfactory results on both objective and subjective performance measures.