In this paper, we propose an efficient approach to moving object detection that adapts to changes in object size caused by changes in aircraft altitude. The proposed algorithm can effectively detect moving objects at pixel scales from 8x8 to 100x100 in full-HD motion imagery. It consists of two stages: detection and fusion. In the first stage, two algorithms run simultaneously: a one-stage object detection network detects large objects, while an optical flow method detects small moving objects. In the second stage, the results of the first stage are fused using the Ground Sample Distance (GSD) of the imagery. We conducted experiments on aerial imagery taken at altitudes between 130 and 400 meters and evaluated detection performance in terms of precision, recall, and normalized multiple object detection accuracy (N-MODA). The experiments demonstrate the superiority of the proposed method.
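The GSD-based fusion step can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the camera parameters, the 32-pixel routing threshold, and the function names are all assumptions. The idea is to keep network detections for objects large enough for the detector and optical-flow detections for the rest.

```python
# Illustrative sketch of GSD-based fusion of two detectors.
# All parameter values and names are assumptions, not from the paper.

def gsd_m_per_px(altitude_m, focal_mm, pixel_pitch_um):
    """Ground Sample Distance: meters of ground covered by one pixel
    (nadir approximation: altitude * pixel pitch / focal length)."""
    return altitude_m * (pixel_pitch_um * 1e-3) / focal_mm

def fuse(cnn_boxes, flow_boxes, min_cnn_px=32):
    """Keep CNN detections for large objects, optical-flow detections
    for small ones. Each box is (x, y, w, h) in pixels; objects smaller
    than `min_cnn_px` on their longer side are assumed to be unreliable
    for the detection network."""
    large = [b for b in cnn_boxes if max(b[2], b[3]) >= min_cnn_px]
    small = [b for b in flow_boxes if max(b[2], b[3]) < min_cnn_px]
    return large + small

# Example: at 200 m with a 35 mm lens and 3.5 um pixels,
# one pixel covers 0.02 m of ground, so a 4 m vehicle spans ~200 px.
gsd = gsd_m_per_px(altitude_m=200, focal_mm=35, pixel_pitch_um=3.5)
fused = fuse([(0, 0, 40, 40), (0, 0, 10, 10)],
             [(5, 5, 8, 8), (0, 0, 50, 50)])
```

In this sketch the size threshold would itself be derived from the GSD (a fixed physical object size maps to a different pixel size at each altitude), which is what makes the fusion altitude-adaptive.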
Moving object detection from UAV/aerial images is an essential task in surveillance systems. However, most previous works do not take the characteristics of oblique images into account. Moreover, many methods use future frames to detect moving objects in the current frame, which delays detection. In this paper, we propose a deep-learning-based moving object detection method for oblique images that does not use future frames. Our network has a CNN (Convolutional Neural Network) architecture whose first and second layers contain sublayers with different kernel sizes. These sublayers detect objects of different sizes or speeds, which is important because objects closer to the camera appear bigger and faster in oblique images. The network takes the past five frames, registered with respect to the last frame, and produces a heatmap prediction for moving objects. Finally, we apply a threshold to distinguish object pixels from non-object pixels. We present experimental results on our dataset, which contains about 15,000 training images and about 6,000 test images with ground-truth annotations for moving objects. We demonstrate that our method outperforms several previous works.
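The final threshold step described above can be sketched minimally. The threshold value and array shapes here are assumptions for illustration, not values from the paper.

```python
# Minimal sketch of binarizing the network's moving-object heatmap.
# The threshold value (0.5) is an assumption, not from the paper.
import numpy as np

def heatmap_to_mask(heatmap, thresh=0.5):
    """Turn a per-pixel heatmap in [0, 1] into a binary mask:
    1 = moving-object pixel, 0 = non-object pixel."""
    return (heatmap >= thresh).astype(np.uint8)

hm = np.array([[0.1, 0.7],
               [0.9, 0.2]])
mask = heatmap_to_mask(hm)
# Only the two high-confidence pixels are marked as object pixels.
```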
Detecting targets in aerial imagery plays an important role in military reconnaissance and defense. One of the main difficulties in detection over a wide range of altitudes is instability: detection performs well only on test data captured in the same altitude range as the training data. To solve this problem, we use the sensor metadata to calculate the GSD (Ground Sample Distance) and the pixel size of vehicles in our test images, both of which depend on altitude. Based on this information, we estimate the optimal resize ratio for image preprocessing and apply it to the test images. As a result, our method detects vehicles captured at altitudes of 100 m to 300 m with a higher F1-score than an approach that does not consider the metadata.
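A metadata-driven preprocessing step of this kind can be sketched as below. This is a sketch under assumed values: the camera intrinsics and the reference training GSD are illustrative, and the exact metadata fields the paper uses are not specified here.

```python
# Sketch of metadata-driven resize-ratio estimation (illustrative;
# the intrinsics and the training-set GSD of 0.05 m/px are assumptions).

def gsd_m_per_px(altitude_m, focal_mm, pixel_pitch_um):
    """GSD from altitude and camera intrinsics (nadir approximation)."""
    return altitude_m * (pixel_pitch_um * 1e-3) / focal_mm

def resize_ratio(gsd, train_gsd=0.05):
    """Scale factor that brings a test image's GSD to the training GSD,
    so vehicle pixel sizes match what the detector was trained on."""
    return gsd / train_gsd

# At 300 m with a 50 mm lens and 5 um pixels: GSD = 0.03 m/px.
gsd = gsd_m_per_px(altitude_m=300, focal_mm=50, pixel_pitch_um=5.0)
ratio = resize_ratio(gsd)
# ratio = 0.6 < 1: downsample the test image so objects shrink to the
# pixel scale seen during training.
```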