KEYWORDS: Orthophoto maps, 3D modeling, Unmanned aerial vehicles, Data modeling, Buildings, RGB color model, Image processing, Education and training, Image quality, Point clouds
True orthophotos have become one of the most important foundational geographic information products owing to their accurate geometric information and realistic texture. The traditional method of producing true orthophotos is limited by the accuracy of the digital elevation model (DEM), digital surface model (DSM), or three-dimensional (3D) model, and inevitably suffers from non-orthographic artifacts such as facade distortions and edge deformations, especially in built-up areas. To address these challenges, we propose an innovative method that leverages a neural radiance field integrated with multi-resolution hash encoding. This method generates orthophotos directly from multi-view images and requires no additional data such as a DEM, DSM, or 3D model. Compared with existing methods, our experiments yield high-quality orthophotos free of building facade distortions, building edge deformations, tree-area deformations, and moving-object artifacts. In addition, evaluating an orthophoto generation method requires a ground-truth dataset. We therefore offer an open-source dataset containing multi-view images of unmanned aerial vehicle scenes and the corresponding ground-truth orthophotos of each area. The dataset comprises both real-world and synthetic data covering diverse scenes such as cities, trees, and lakes, and supports a comprehensive assessment of the quality of orthophotos produced by different methods. Our proposed method has been thoroughly evaluated on a wide range of challenging datasets. The experimental results demonstrate that it outperforms both traditional algorithms and a recent state-of-the-art method.
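The core idea of rendering an orthophoto directly from a radiance field can be sketched with nadir ray casting: instead of perspective rays through a camera center, one parallel vertical ray is cast per output pixel and alpha-composited. The sketch below is a minimal illustration under that assumption, with a toy stand-in field in place of a trained NeRF (all function names are hypothetical, and the hash-encoded network itself is not reproduced):

```python
import numpy as np

def render_orthophoto(field, xs, ys, z_top, z_bottom, n_samples=64):
    """Render an orthophoto by casting one vertical (nadir) ray per pixel
    through a radiance field and alpha-compositing the samples.

    `field(points) -> (rgb, sigma)` stands in for a trained NeRF; any
    callable returning colors and densities works. Unlike perspective
    rendering there is no camera center: every ray is parallel to the
    z-axis, which is what makes the result orthographic.
    """
    zs = np.linspace(z_top, z_bottom, n_samples)
    dz = abs(zs[1] - zs[0])
    gx, gy = np.meshgrid(xs, ys, indexing="xy")
    image = np.zeros((len(ys), len(xs), 3))
    transmittance = np.ones((len(ys), len(xs)))
    for z in zs:  # march top-down along each vertical ray
        pts = np.stack([gx, gy, np.full_like(gx, z)], axis=-1)
        rgb, sigma = field(pts)
        alpha = 1.0 - np.exp(-sigma * dz)
        image += (transmittance * alpha)[..., None] * rgb
        transmittance *= 1.0 - alpha
    return image

def toy_field(pts):
    """Toy scene: opaque red 'ground' filling the half-space z <= 0."""
    rgb = np.zeros(pts.shape[:-1] + (3,))
    rgb[..., 0] = 1.0                                 # red everywhere
    sigma = np.where(pts[..., 2] <= 0.0, 50.0, 0.0)   # dense only below z = 0
    return rgb, sigma

ortho = render_orthophoto(toy_field, np.linspace(0, 1, 8),
                          np.linspace(0, 1, 8), 1.0, -0.5)
```

Because the rays are parallel, relief displacement cannot occur by construction, which is why a well-trained field yields a true orthophoto without an explicit DSM.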
In this study, a general transformation-based framework for unmanned aerial vehicle image stitching is proposed that resists global distortion while improving local registration accuracy. In the first step, with tie points as constraints, the global transformation of each image is obtained by optimization, and no reference image is needed. In the second step, to reduce data redundancy, an image selection algorithm based on information entropy is proposed: the optimal image combination covering the entire scene is selected from the original image set. In the third step, a local optimization algorithm based on mesh deformation further refines the registration accuracy of the selected images. Finally, all of the images are combined into a high-resolution panorama. On challenging datasets, the proposed algorithm not only reduces the global distortions caused by error accumulation during stitching, but also reduces redundant data, which benefits postprocessing. The local mesh optimization greatly improves registration accuracy and eliminates obvious misalignments. Tested on a large number of challenging datasets, the proposed method substantially outperforms several state-of-the-art image-stitching methods and commercial software.
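The entropy-based selection in the second step relies on Shannon entropy of the intensity histogram as a per-image information score. A minimal sketch of that score (the function name is hypothetical; the paper's exact scoring and scene-coverage constraints are not reproduced):

```python
import numpy as np

def image_entropy(gray, bins=256):
    """Shannon entropy (bits) of an image's intensity histogram.

    Higher entropy roughly indicates richer texture/information, which
    entropy-based selection schemes use to rank candidate images.
    """
    hist, _ = np.histogram(gray.ravel(), bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

# A flat image carries no information; uniform noise approaches the 8-bit maximum.
flat = np.full((64, 64), 128, dtype=np.uint8)
noisy = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
print(image_entropy(flat))   # 0.0
print(image_entropy(noisy))  # close to 8
```

Ranking overlapping candidates by such a score, subject to full scene coverage, is one straightforward way to realize the redundancy reduction the abstract describes.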
KEYWORDS: Orthophoto maps, 3D modeling, RGB color model, Solid modeling, Cameras, 3D image processing, Geographic information systems, Visibility, Data modeling, Error analysis
Orthophotos with refined detail and high accuracy are important for urban geographic information systems. Traditional differential rectification does not consider the height of buildings when dealing with imagery over urban areas, so buildings exhibit relief displacement and cannot be located at their true geographic positions. In this study, a digital building model (DBM)-based procedure for automatic true orthophoto generation is proposed to solve this problem. The procedure includes three major steps: (1) traditional orthophoto generation, (2) building relief correction, and (3) occlusion detection and compensation. In our method, the relief displacements of buildings are corrected and occlusions are detected using backprojection and intersection based on vector DBM surface polygons. True orthophotos are obtained after compensating for the occlusions. Experimental results show that the generated true orthophotos achieve root-mean-square errors of 0.149 and 0.061 m on the X- and Y-axes, respectively. The planimetric positioning accuracy of the true orthophoto is around 1 pixel. This indicates that the proposed method can correctly remove the displacement caused by terrain and tall buildings, and that occluded areas can be detected and compensated effectively, yielding high-quality true orthophotos.
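The relief correction in step (2) rests on the classical vertical-photo relation d = r·h/H between relief displacement, building height, and flying height. A minimal sketch of that relation (the function name is hypothetical; the paper's DBM-based backprojection and occlusion handling are considerably more involved):

```python
def relief_displacement(radial_dist, height, flying_height):
    """Relief displacement d = r * h / H for a vertical aerial photo.

    radial_dist   -- distance r from the photo nadir to the imaged roof point
    height        -- building height h above the datum
    flying_height -- flying height H above the datum
    A DBM-based correction effectively undoes this shift, moving each
    roof point back to its true planimetric position.
    """
    return radial_dist * height / flying_height

# e.g. a 30 m building imaged 80 mm from nadir at 1500 m flying height
d = relief_displacement(0.080, 30.0, 1500.0)  # ~0.0016 m, i.e. 1.6 mm on the image
```

The displacement grows with radial distance and building height, which is why tall buildings near the image edges are the hardest cases for conventional rectification.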
This paper describes an algorithmic framework for registering airborne laser scanning data (LIDAR) with optical images using multiple types of geometric features. The technique exploits 2D/3D correspondences between points and lines, and it can easily be extended to more general features. In generalized point photogrammetry, lines and curves are treated as sets of points, each of which can be described by the collinearity equations, so all kinds of homogeneous features can be represented in a uniform framework. For the many overlapping images in a block, the images are registered to the laser data by a hybrid block adjustment based on an integrated optimization procedure. In addition to the theoretical method, the paper presents an experimental analysis of the sensitivity and robustness of this approach.
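The collinearity equations referred to above take the standard photogrammetric form, with f the focal length, (X_s, Y_s, Z_s) the projection center, and a_i, b_i, c_i the elements of the rotation matrix:

```latex
x = -f\,\frac{a_1(X - X_s) + b_1(Y - Y_s) + c_1(Z - Z_s)}
             {a_3(X - X_s) + b_3(Y - Y_s) + c_3(Z - Z_s)}, \qquad
y = -f\,\frac{a_2(X - X_s) + b_2(Y - Y_s) + c_2(Z - Z_s)}
             {a_3(X - X_s) + b_3(Y - Y_s) + c_3(Z - Z_s)}
```

In generalized point photogrammetry, every sampled point of a line or curve contributes one such pair of equations, which is how heterogeneous features reduce to a uniform point-based adjustment.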
This paper describes an algorithmic framework for fusing airborne laser scanning data (LIDAR) with optical images. An efficient and reliable intensity-based registration framework is used to determine the spatial transform from the LIDAR data to the optical images. Building on segmented airborne images, the paper presents an algorithm and workflow for merging multiple data sources and performing classification using multi-echo returns, the spatial discreteness of the points, and statistical spectral characteristics. In addition to the theoretical method, the paper presents an experimental analysis of the sensitivity and robustness of this approach to assess the effectiveness of the proposed algorithm.
This paper describes an algorithmic framework for automatic registration of airborne laser scanning data (LIDAR) and optical images using mutual information. The methodology covers pre-processing of the images, intensity value interpolation, the optimization strategy, adaptations to the mutual information measure, and a progressive registration procedure. In addition to the theoretical method, the paper presents an experimental analysis based on the quality of fit of the final alignment between the LIDAR data and the digital imagery.
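Mutual information between a LIDAR-derived intensity image and the optical image can be estimated from their joint intensity histogram via MI(A;B) = H(A) + H(B) − H(A,B); registration then maximizes this score over candidate transforms. A minimal sketch (the function name is hypothetical; the paper's adaptations of the measure are not reproduced):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (bits) between two equally sized images,
    estimated from the joint intensity histogram. A registration
    procedure maximizes this score over candidate transforms."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)   # marginal of a
    py = pxy.sum(axis=0)   # marginal of b
    nz = pxy > 0           # skip empty cells to avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64))
noise = rng.integers(0, 256, (64, 64))
print(mutual_information(img, img))    # high: an image fully predicts itself
print(mutual_information(img, noise))  # near 0: independent images share little
```

Because mutual information does not assume a linear intensity relationship between the modalities, it is a natural similarity measure for LIDAR-to-optical alignment.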