With the development of geospatial and remote sensing technology, large numbers of remote sensing images have come into application with the assistance of computers. Although convolutional networks achieve great performance in computer vision, the features they extract are not rotation invariant, which means current neural network methods cannot adapt to rotated objects. Considering the multi-angle characteristics of remote sensing images, we propose a Rotation Invariant Spatial Transformer Network (RI-ST-NET) to extract rotation-invariant object features. RI-ST-NET combines convolutional neural networks with the Spatial Transformer Network (STN), which rotates the object to an angle that is easier to identify, and is trained as a Siamese network in which the two branches share the same weights. Thus RI-ST-NET can adapt to object features under different rotation patterns, which effectively improves the accuracy of remote sensing image retrieval. The network combines the advantages of the STN and tuple training, so it can capture rotations of the same object when used in the image retrieval task. A series of comparative evaluation experiments on the chosen dataset demonstrates the performance of the proposed method.
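The abstract's STN-based canonicalization cannot be reproduced in a few lines, but the invariance goal it pursues can be illustrated with a toy sketch (an assumed illustration, not the paper's implementation): pooling a feature over the four 90° rotations of an input produces a descriptor that is identical for any 90°-rotated copy of the same image.

```python
import numpy as np

def descriptor(img):
    """Toy feature extractor: the flattened pixel values.
    A CNN feature map would take this role in the real pipeline."""
    return img.flatten()

def rotation_pooled_descriptor(img):
    """Pool (element-wise max) the descriptor over all four
    90-degree rotations; the result no longer depends on which
    rotation of the image was presented."""
    feats = np.stack([descriptor(np.rot90(img, k)) for k in range(4)])
    return feats.max(axis=0)

img = np.arange(16, dtype=float).reshape(4, 4)
d_original = rotation_pooled_descriptor(img)
d_rotated = rotation_pooled_descriptor(np.rot90(img))
print(np.allclose(d_original, d_rotated))  # True
```

The rotated copy visits the same set of four orientations, so the max-pooled descriptors coincide exactly; the STN approach in the abstract instead learns to predict a canonical angle, which handles arbitrary (not just 90°) rotations.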
With the increasing amount of high-resolution remote sensing images, large-scale remote sensing image retrieval (RSIR) becomes more and more significant and has attracted great attention. Traditional image retrieval methods generally use hand-crafted features, which are not only time-consuming but also often perform poorly. Deep learning has recently achieved remarkable performance due to its powerful ability to learn high-level semantic features, so researchers have attempted to exploit features derived from Convolutional Neural Networks (CNNs) in RSIR. However, a remote sensing image differs from a natural scene image: its background is more complicated, with a lot of noise, and existing deep learning methods do not handle this well, achieving unsatisfactory speed and accuracy. In this paper, we propose a rotation-invariant hashing network that represents an image as a binary hash code to retrieve images faster while accounting for the rotation invariance of the same target. Experiments on several available remote sensing datasets show that our method is effective and outperforms other features commonly used in RSIR.
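The learned hashing network itself is not reproduced here; a minimal sketch of the general idea behind binary-code retrieval (assumed for illustration, using classical random-hyperplane hashing rather than the paper's learned codes) maps real-valued features to short bit strings and ranks candidates by Hamming distance, which is far cheaper than comparing float descriptors.

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_code(feature, planes):
    """Binarize a real-valued feature: one bit per hyperplane,
    set when the feature falls on the plane's positive side."""
    return (feature @ planes.T > 0).astype(np.uint8)

def hamming(a, b):
    """Number of differing bits between two binary codes."""
    return int(np.count_nonzero(a != b))

dim, bits = 64, 16
planes = rng.standard_normal((bits, dim))  # random projection directions

query = rng.standard_normal(dim)
near = query + 0.01 * rng.standard_normal(dim)  # near-duplicate feature
far = rng.standard_normal(dim)                  # unrelated feature

q, n, f = (hash_code(v, planes) for v in (query, near, far))
print(hamming(q, n), hamming(q, f))
```

Nearby features tend to fall on the same side of most hyperplanes, so near-duplicates get small Hamming distances; a learned hashing network replaces the random planes with projections trained so that semantically similar (and, in the paper, rotated) images share codes.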
Object detection is one of the most important issues in the field of remote sensing analysis. The lack of semantic information about objects makes it difficult for traditional methods to find effective features for object discrimination. Owing to their capability for feature extraction, a series of region-based convolutional neural networks (R-CNN) have recently been widely and successfully applied to object detection in natural images. However, most of them suffer from poor detection performance on small-sized targets, which means that few of them can be introduced directly for small-sized object detection in remote sensing images. This paper proposes a modified method based on Faster R-CNN, which is composed of a feature extraction network, a region proposal network, and an object detection network. Compared to Faster R-CNN, the proposed method removes the fourth pooling layer in the feature extraction network and employs dilated convolutions in all subsequent convolutional layers to enhance the resolution of the final feature maps, which provides more detailed and semantic feature information to help detect objects, especially small-sized ones. In the object detection network, contextual features around the region proposals are added as complementary feature information to help distinguish objects accurately. Experiments conducted on two datasets verify that our proposal obtains superior performance on small-sized object detection in remote sensing images.
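The resolution argument can be illustrated with a toy 1-D sketch (an assumed illustration; the paper operates on 2-D CNN feature maps): a stride-2 pooling halves the map, while a dilated convolution widens the receptive field from k taps to (k-1)·d + 1 samples without shrinking the output at all.

```python
import numpy as np

def dilated_conv1d(signal, kernel, dilation):
    """'Same'-padded 1-D dilated convolution (correlation form).
    Kernel taps are spaced `dilation` samples apart, so the
    receptive field spans (len(kernel) - 1) * dilation + 1 samples
    while the output keeps the input's length."""
    k = len(kernel)
    span = (k - 1) * dilation + 1
    pad = span // 2
    padded = np.pad(signal, pad)
    out = np.zeros(len(signal), dtype=float)
    for i in range(len(signal)):
        for j in range(k):
            out[i] += kernel[j] * padded[i + j * dilation]
    return out

signal = np.ones(16)
kernel = np.array([1.0, 1.0, 1.0])

y1 = dilated_conv1d(signal, kernel, dilation=1)  # receptive field: 3 samples
y2 = dilated_conv1d(signal, kernel, dilation=2)  # receptive field: 5 samples
print(len(y1), len(y2))  # 16 16 -- resolution preserved in both cases
```

This is why swapping the removed pooling layer for dilated convolutions keeps small objects resolvable: the later layers still see a wide context, but the feature map is never downsampled past the size at which a small target occupies at least a few cells.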