Homography estimation is often an indispensable step in computer vision tasks that require multi-frame temporal information. However, when estimating the traditional 3x3 homography matrix, the rotational and translational terms are difficult to balance. In this paper, based on the 4-point homography parameterization, we reproduce the Synthetic COCO (S-COCO) and Photometrically Distorted Synthetic COCO (PDS-COCO) datasets. We then use the Darknet backbone from YOLOv3 to design a deep network for 4-point homography estimation. Experiments show that, compared with existing mainstream one-stage methods, our network achieves the best performance on the S-COCO dataset and excellent performance on the PDS-COCO dataset.
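To make the 4-point parameterization concrete, here is a minimal NumPy sketch (not the paper's implementation; the function name and the example offsets are illustrative) of how the four corner offsets a network predicts are converted back into a full 3x3 homography by solving the standard 8-equation linear system:

```python
import numpy as np

def homography_from_4pt(src, dst):
    """Solve for the 8-DoF homography H (with h33 = 1) that maps four
    source corners to four destination corners via a linear system."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h1*x + h2*y + h3) / (h7*x + h8*y + 1), similarly for v
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# 4-point parameterization: fixed patch corners plus predicted offsets
corners = np.array([[0, 0], [127, 0], [127, 127], [0, 127]], float)
offsets = np.array([[3, -2], [-4, 1], [2, 5], [-1, -3]], float)  # stand-in for a network output
H = homography_from_4pt(corners, corners + offsets)
```

Predicting the 8 offset values instead of the 9 matrix entries is exactly what sidesteps the rotation/translation scale imbalance the abstract mentions: all outputs are in commensurate pixel units.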
In recent years, with the development of Earth-observation technology, satellite video captured by optical sensors on moving satellite platforms provides continuous imagery, offering new data for detecting and tracking moving targets over large areas. Moving-target detection algorithms are already widely used in ground surveillance video; however, applying them directly to satellite video is challenging because of its low resolution, small targets with few appearance or texture features, low signal-to-noise ratio, and non-stationary camera platform. We therefore propose a new moving-target detection and tracking framework for this new type of computer vision task. First, we utilize a tensor data structure to exploit the inner spatial and temporal correlation and extract regions of interest for target motion. Then, we design a recognition strategy based on multi-morphology and motion cues to separate true moving targets from noise. Finally, we associate the detection results of each frame to achieve multi-target tracking. We manually annotated video data from the Jilin-1 satellite, tested the algorithm under different evaluation criteria, and compared the results with state-of-the-art benchmarks, demonstrating the advantages of our framework. The dataset can be downloaded from https://github.com/QingyongHu/VISO.
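The first stage above, stacking frames into a tensor and exploiting temporal correlation to flag candidate motion regions, can be sketched as follows. This is a hedged toy example, not the paper's method: it uses a simple per-pixel median background and a threshold, and the function name, threshold value, and synthetic sequence are all illustrative.

```python
import numpy as np

def motion_roi(frames, thresh=25):
    """Stack grayscale frames into a (T, H, W) tensor and flag pixels
    whose deviation from the temporal median background exceeds thresh."""
    tensor = np.stack(frames).astype(np.float32)  # (T, H, W) spatio-temporal tensor
    background = np.median(tensor, axis=0)        # per-pixel static-scene estimate
    residual = np.abs(tensor - background)        # motion energy per frame
    return residual > thresh                      # boolean ROI mask per frame

# toy sequence: a bright 2x2 blob sweeping across a dark 16x16 scene
frames = []
for t in range(5):
    f = np.zeros((16, 16), np.uint8)
    f[4:6, 2 * t:2 * t + 2] = 200
    frames.append(f)
mask = motion_roi(frames)  # True where motion candidates are detected
```

In practice this candidate map is noisy, which is why the abstract's second stage applies morphology and motion-cue checks before the per-frame detections are linked into tracks.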