This paper presents an overview of the Full Motion Video-Geopositioning Test Bed (FMV-GTB) developed to
investigate algorithm performance and issues related to the registration of motion imagery and the subsequent
extraction of feature locations along with their predicted accuracy. A case study is included, corresponding to video taken from a
quadcopter. Registration of the corresponding video frames is performed without the benefit of a priori sensor attitude
(pointing) information. In particular, tie points are automatically measured between adjacent frames using standard
optical flow matching techniques from computer vision; an a priori estimate of sensor attitude is then computed from
the GPS sensor positions supplied in the video metadata using a photogrammetric, search-based structure-from-motion
algorithm; and finally a Weighted Least Squares adjustment of all a priori metadata across the frames is performed.
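As an illustration of the tie-point measurement step, the following minimal sketch detects corner features in one frame and tracks them into the adjacent frame with pyramidal Lucas-Kanade optical flow. It assumes OpenCV; the function and parameter choices are illustrative rather than those of the FMV-GTB itself.

import cv2

def measure_tie_points(prev_frame, next_frame, max_corners=500):
    # Convert both frames to grayscale for feature detection and tracking.
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)

    # Shi-Tomasi corner features in the first frame.
    pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                   qualityLevel=0.01, minDistance=10)

    # Track the corners into the adjacent frame (pyramidal Lucas-Kanade).
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts0, None)

    # Keep only successfully tracked points as tie-point pairs.
    good = status.ravel() == 1
    return pts0[good].reshape(-1, 2), pts1[good].reshape(-1, 2)

Raw matches of this kind would typically be screened for outliers before entering the adjustment; that screening is omitted here for brevity.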
Extraction of absolute 3D feature locations, including their predicted accuracy based on the principles of rigorous error
propagation, is then performed using a subset of the registered frames. Results are compared to known locations (check
points) over a test site. Throughout this entire process, no external control information (e.g. surveyed points) is used
other than for evaluation of solution errors and corresponding accuracy.
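For the geopositioning step, the sketch below illustrates the underlying idea of multi-ray intersection across registered frames followed by first-order error propagation of the resulting ground point. It assumes NumPy, a linear (DLT) triangulation, and unit image weights, and is a simplified illustration rather than the rigorous adjustment used in the test bed.

import numpy as np

def triangulate_point(projections, image_points):
    # Linear (DLT) multi-ray intersection of one feature observed in several
    # frames. projections: list of 3x4 camera matrices (one per registered
    # frame); image_points: list of (u, v) measurements of the same feature.
    rows = []
    for P, (u, v) in zip(projections, image_points):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)
    # Homogeneous solution: right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

def predicted_covariance(projections, ground_point, sigma_px=1.0):
    # First-order error propagation: approximate the 3x3 ground covariance as
    # sigma^2 * (J^T J)^-1, where J stacks each frame's Jacobian of projected
    # image coordinates with respect to the ground coordinates (here obtained
    # by finite differences for brevity).
    def project(P, X):
        x = P @ np.append(X, 1.0)
        return x[:2] / x[2]

    eps = 1e-4
    blocks = []
    for P in projections:
        base = project(P, ground_point)
        Jf = np.zeros((2, 3))
        for k in range(3):
            dX = np.zeros(3)
            dX[k] = eps
            Jf[:, k] = (project(P, ground_point + dX) - base) / eps
        blocks.append(Jf)
    J = np.vstack(blocks)
    return sigma_px**2 * np.linalg.inv(J.T @ J)

With per-observation weights, the normal matrix J^T J would become J^T W J; comparing such predicted covariances against check-point errors is the kind of accuracy evaluation described above.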