To meet the requirement of autonomous recognition of space targets from far to near during on-orbit missions, this paper studies autonomous space-target recognition technology based on artificial-intelligence edge devices that can be deployed on satellites. According to the imaging characteristics of the target in visible-light images during the approach, a long-range point-target recognition algorithm and a close-range target-component recognition algorithm based on visible-light images are studied in detail. Embedded software for autonomous recognition of space targets is then developed on an artificial-intelligence chip, and on this basis a target-recognition embedded system is built. The system provides visible-light image acquisition, autonomous recognition of non-cooperative space targets, and target-information output, assisting the mission satellite in completing the approach mission.
In this paper, a monocular vision measurement method based on feature detection and extraction and the PnP algorithm is used to solve the relative pose of a space target, so that the positional relationship between two targets in space can be determined quickly and accurately. Based on the structure of the space target, the pixel coordinates of four coplanar feature points and two non-coplanar feature points on the target are extracted from the image using image-processing methods. The resulting six-point PnP problem is then solved by combining these coordinates with the dimensional information of the space target, yielding the relative position of the camera and the target. Experimental analysis shows that the target position calculated by this method is close to the ground truth, verifying the effectiveness of the method.
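The abstract does not specify which PnP solver is used; as a minimal sketch, six non-degenerate 3D-2D correspondences are exactly enough for a direct linear transform (DLT) solution in normalized camera coordinates. All point coordinates and intrinsic values below are illustrative, not taken from the paper:

```python
import numpy as np

def dlt_pnp(object_pts, image_pts, K):
    """Recover camera pose (R, t) from >= 6 3D-2D correspondences via DLT.

    object_pts: (N, 3) feature points in the target frame.
    image_pts:  (N, 2) pixel coordinates.
    K:          (3, 3) camera intrinsic matrix.
    """
    n = len(object_pts)
    # Remove the intrinsics: work with normalized image coordinates.
    uv1 = np.hstack([image_pts, np.ones((n, 1))])
    xn = (np.linalg.inv(K) @ uv1.T).T          # rows ~ (x, y, 1)

    # Build the 2n x 12 homogeneous system A p = 0 for the flattened [R|t].
    A = np.zeros((2 * n, 12))
    for i, ((X, Y, Z), (x, y, _)) in enumerate(zip(object_pts, xn)):
        Xh = np.array([X, Y, Z, 1.0])
        A[2 * i, 0:4] = Xh
        A[2 * i, 8:12] = -x * Xh
        A[2 * i + 1, 4:8] = Xh
        A[2 * i + 1, 8:12] = -y * Xh

    # Null-space solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    P = Vt[-1].reshape(3, 4)

    # Fix the overall sign so the first point has positive depth,
    # then the scale (rows of a rotation matrix have unit norm).
    if P[2] @ np.append(object_pts[0], 1.0) < 0:
        P = -P
    P /= np.linalg.norm(P[2, :3])

    # Project the 3x3 block onto the nearest proper rotation matrix.
    U, _, Vt2 = np.linalg.svd(P[:, :3])
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt2)]) @ Vt2
    return R, P[:, 3]
```

Mirroring the feature layout described in the abstract, the six object points can be chosen as four coplanar points plus two off the plane, which keeps the DLT system well conditioned.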
Monocular pose estimation is a basic element of computer vision, and point-feature-based monocular pose estimation, as one of its important branches, is widely used in fields such as robot positioning, virtual reality, and precision image measurement. When solving monocular pose from point features, the number of feature points and the image noise strongly affect estimation accuracy. This paper therefore proposes a robust orthogonal-iteration pose-measurement algorithm that introduces an intermediate coordinate system between the camera coordinate system and the world coordinate system to strengthen the estimation constraints. A least-squares formulation is then applied to solve for the distance between each feature point and the camera's optical center, and the intermediate coordinate system simplifies the calculation of the initial camera pose. Finally, the camera pose is refined by orthogonal iteration. Experiments show that, compared with existing algorithms, the proposed algorithm is more robust to the number of feature points and to image noise, and its overall accuracy is better.
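The orthogonal-iteration step the abstract builds on is, in its classic form (Lu, Hager, and Mjolsness), an alternation between a closed-form optimal translation and an absolute-orientation (Kabsch) update of the rotation, minimizing the object-space collinearity error. A minimal sketch of that classic scheme, without the paper's intermediate-coordinate-system initialization or robustness additions, might look like:

```python
import numpy as np

def orthogonal_iteration(p, v, R0, n_iter=200):
    """Classic orthogonal iteration for camera pose.

    p  : (N, 3) feature points in the world (target) frame.
    v  : (N, 2) normalized image coordinates (x/z, y/z).
    R0 : (3, 3) initial rotation guess.
    Returns (R, t) minimizing the object-space collinearity error
    E = sum_i || (I - V_i)(R p_i + t) ||^2.
    """
    n = len(p)
    # Line-of-sight projection matrix V_i = u u^T / (u^T u) for each ray u.
    vh = np.hstack([v, np.ones((n, 1))])
    V = np.stack([np.outer(u, u) / (u @ u) for u in vh])
    # Precompute the constant factor of the closed-form optimal translation.
    T_fac = np.linalg.inv(np.eye(3) - V.mean(axis=0)) / n

    def optimal_t(R):
        return T_fac @ np.sum([(V[i] - np.eye(3)) @ (R @ p[i])
                               for i in range(n)], axis=0)

    p_bar = p.mean(axis=0)
    R = R0
    for _ in range(n_iter):
        t = optimal_t(R)
        # Project the transformed points onto their lines of sight.
        q = np.stack([V[i] @ (R @ p[i] + t) for i in range(n)])
        q_bar = q.mean(axis=0)
        # Absolute-orientation (Kabsch) update of the rotation.
        M = (q - q_bar).T @ (p - p_bar)
        U, _, Vt = np.linalg.svd(M)
        R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    return R, optimal_t(R)
```

Because each half-step solves its subproblem exactly, the object-space error decreases monotonically; the quality of `R0` mainly affects how many iterations are needed, which is where an initialization scheme such as the paper's intermediate coordinate system helps.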