Light Detection and Ranging (LiDAR) sensors often encounter ambient light interference in outdoor settings, leading to depth measurement distortions. Traditional solutions involve additional hardware, which introduces cost, size, and optimization issues. This paper introduces a hardware-free approach: a multi-tap parallel-phase demodulation method for MEMS scanning LiDAR. Using on-off pulse-waveform modulation based on the amplitude-modulated continuous wave (AMCW) principle, the proposed method suppresses ambient light by multiplying the sections of the received laser signal corresponding to off-modulation clock timing by zero. Controlling the duty ratio of the modulation signal drastically reduces ambient light, ideally to between 1/10 and 1/100 of its original level. To address harmonic error, this paper utilizes 20 taps to raise the dominant harmonic order, naturally reducing depth errors. Experimental results demonstrate the efficacy of the method, reducing the depth standard deviation from the original 8 mm to less than 5 mm at 2.5 m. The cm-scale harmonic error is also reduced to the mm scale within the depth range of 1 m to 4 m. This method proves effective for suppressing ambient light interference in outdoor LiDAR applications.
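The core of multi-tap AMCW demodulation is recovering the modulation phase from N tap samples and converting it to depth; a constant ambient offset cancels in the first DFT bin. The sketch below illustrates this idea only; the tap count, modulation frequency, and signal levels are illustrative assumptions, not the authors' hardware pipeline.

```python
import math

C = 299_792_458.0  # speed of light, m/s


def demodulate_phase(samples):
    """Estimate the modulation phase from N equally spaced tap samples
    using the first DFT bin; a constant ambient offset sums to zero."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * k / n) for k, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * k / n) for k, s in enumerate(samples))
    return math.atan2(im, re) % (2 * math.pi)


def phase_to_depth(phase, f_mod):
    """Round-trip phase delay to one-way depth for modulation frequency f_mod."""
    return C * phase / (4 * math.pi * f_mod)


# Simulate 20 taps of a 10 MHz AMCW return from a 2.5 m target
# with a strong constant ambient offset (all values hypothetical).
f_mod = 10e6
true_depth = 2.5
true_phase = 4 * math.pi * f_mod * true_depth / C
taps = 20
ambient = 50.0
samples = [ambient + 100.0 * math.cos(2 * math.pi * k / taps - true_phase)
           for k in range(taps)]
est_depth = phase_to_depth(demodulate_phase(samples), f_mod)
```

Because the ambient term is constant across taps, it contributes nothing to either DFT sum, so the recovered depth is unbiased despite the offset.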
In this paper, a novel 6-dimensional (3 positions and 3 orientations) pose estimation system using indirect Time-of-Flight (ToF) region-scanning LiDAR is proposed for long-distance space object recognition. Specifically, a targeted space object detection algorithm using an IR amplitude image and a 6D pose estimation algorithm using high-resolution 3D data are performed with the region-scanning LiDAR. The proposed system is verified with a self-collected dataset of space objects in space simulation environments. The proposed pose estimation algorithm shows a maximum position error of 1% and a maximum orientation error of 5° under the tested experimental conditions. Moreover, the proposed system outperformed conventional space object 6D pose estimation systems built on previous depth sensors in terms of detection rate and 6D pose accuracy at long distances.
In this paper, a fast and robust infrared (IR) remote target detection network based on deep learning is proposed. Furthermore, we construct our own IR image database, imitating humans in remote maritime rescue situations, using a FLIR M232 IR camera. First, the IR image is preprocessed with contrast enhancement for data augmentation and to increase the signal-to-noise ratio (SNR). Second, multi-scale feature extraction is performed by combining fixed weighted kernels with convolutional neural network layers. Lastly, the feature map is mapped into a likelihood map indicating the potential locations of targets. Experimental results reveal that the proposed method can detect remote targets even under complex backgrounds, surpassing previous methods by a significant margin of +0.62 in terms of mIoU.
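The "fixed weighted kernels" step above can be pictured as convolving the IR image with hand-designed filters that respond strongly to small, bright, point-like targets. The sketch below is a minimal stdlib-only illustration with a hypothetical center-surround kernel, not the network's actual filters.

```python
def conv2d(img, kernel):
    """Valid-range 2D convolution with zero padding at the borders."""
    kh, kw = len(kernel), len(kernel[0])
    ph, pw = kh // 2, kw // 2
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    iy, ix = y + ky - ph, x + kx - pw
                    if 0 <= iy < h and 0 <= ix < w:
                        s += img[iy][ix] * kernel[ky][kx]
            out[y][x] = s
    return out


# Fixed center-surround kernel: isolated hot pixels score high,
# flat background scores zero (illustrative, not the paper's kernels).
CS = [[-1, -1, -1],
      [-1,  8, -1],
      [-1, -1, -1]]

img = [[0.0] * 7 for _ in range(7)]
img[3][3] = 1.0                 # a single hot pixel standing in for a remote target
resp = conv2d(img, CS)          # response map peaks at the target location
```

In the paper such fixed responses are combined with learned convolutional layers before being mapped to a likelihood map; this sketch shows only the fixed-kernel half.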
In this paper, a novel 3D face recognition system utilizing MEMS-based indirect Time-of-Flight (ToF) region-scanning LiDAR is proposed for long-distance person identification. Specifically, the face recognition system consists of two parts: (1) detection of the targeted face region from the IR amplitude image and (2) 3D face recognition using the high-resolution face data from region scanning. The proposed system is evaluated on a self-collected dataset and achieves a maximum Rank-1 recognition rate of 95% under various distance and illumination conditions. Moreover, the proposed system outperformed other 3D face recognition systems built on conventional ToF sensors in terms of Rank-1 recognition rate at long distances of more than 3 m.
This paper presents a MEMS-based indirect time-of-flight (ToF) scanning light detection and ranging (LiDAR) system with parallel-phase demodulation. Because parallel-phase demodulation greatly reduces the integration time while maintaining high demodulation contrast, the proposed LiDAR can acquire accurate depth images with a mean absolute error (MAE) of about 1.5 cm at a distance of 1.85 m using 20 mW of laser power. Meanwhile, the MAE due to multipath interference (MPI), originally about 1.5 cm, could be further reduced to less than 8 mm using support vector regression (SVR).
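The SVR step above learns a smooth mapping from measured depth to MPI-induced error, which is then subtracted. SVR itself needs a QP solver, so the stdlib-only sketch below substitutes kernel ridge regression with an RBF kernel, which captures the same smooth nonlinear-correction idea; the calibration data, kernel width, and regularization are all hypothetical.

```python
import math


def rbf(x, z, gamma=10.0):
    """Gaussian (RBF) kernel on scalar depths."""
    return math.exp(-gamma * (x - z) ** 2)


def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x


def fit_krr(xs, ys, lam=1e-3, gamma=10.0):
    """Kernel ridge regression: solve (K + lam*I) alpha = y."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j], gamma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return solve(K, ys)


def predict(alpha, xs, x, gamma=10.0):
    return sum(a * rbf(x, xi, gamma) for a, xi in zip(alpha, xs))


# Hypothetical calibration pairs: measured depth (m) -> MPI error (m)
xs = [1.0, 1.5, 2.0, 2.5, 3.0]
err = [0.010, 0.015, 0.012, 0.018, 0.014]
alpha = fit_krr(xs, err)
corrected = 2.0 - predict(alpha, xs, 2.0)  # subtract predicted error at 2 m
```

In practice the paper trains on many scene configurations; the point here is only the regress-then-subtract structure of the correction.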
KEYWORDS: 3D image processing, Cameras, 3D-TOF imaging, Time of flight cameras, Super resolution, Image resolution, Image processing, 3D metrology, Image sensors
Time-of-flight (ToF) sensors are widely used to measure 3D depth. However, conventional ToF cameras have relatively low resolution compared to RGB cameras. To utilize such low-resolution depth images effectively in various research fields, the resolution of the ToF depth image should be increased. Meanwhile, ToF sensors also suffer from saturated and missing pixels. A novel depth completion algorithm is proposed in this paper to improve the 3D depth image of a ToF camera in terms of both image resolution and abnormal pixels. Specifically, low-resolution depth images and relatively high-resolution RGB images are fused in a machine learning architecture. The performance of the proposed depth completion algorithm is demonstrated under various experimental conditions.
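The depth-RGB fusion idea above can be illustrated at its simplest: fill an invalid depth pixel with a weighted average of valid neighbors, where the weights come from RGB similarity, so depth does not bleed across color edges. This joint-bilateral-style sketch is a stdlib-only stand-in for the paper's learned architecture; the color weighting and window size are assumptions.

```python
import math


def fill_missing_depth(depth, rgb, sigma_c=25.0, radius=1):
    """Fill invalid (None) depth pixels with an RGB-guided weighted
    average of valid neighbours within a small window."""
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for y in range(h):
        for x in range(w):
            if depth[y][x] is not None:
                continue
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and depth[ny][nx] is not None:
                        # weight by RGB similarity between the hole and its neighbour
                        dc = sum((a - b) ** 2
                                 for a, b in zip(rgb[y][x], rgb[ny][nx]))
                        wgt = math.exp(-dc / (2 * sigma_c ** 2))
                        num += wgt * depth[ny][nx]
                        den += wgt
            if den > 0:
                out[y][x] = num / den
    return out


# Toy example: a 3x3 depth patch with one missing pixel on a uniform surface
depth = [[1.0, 1.0, 1.0],
         [1.0, None, 1.0],
         [1.0, 1.0, 1.0]]
rgb = [[(128, 128, 128)] * 3 for _ in range(3)]
filled = fill_missing_depth(depth, rgb)
```

On a uniform-color patch all neighbor weights are equal, so the hole is filled with the plain average; across a color edge the weights suppress neighbors from the other surface.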
The amplitude-modulated continuous wave (AMCW) time-of-flight (ToF) sensor is widely used to capture 3D information about objects because of its relatively high measurement precision at short range. However, the measurement accuracy of the AMCW ToF method is generally sensitive to object reflectivity, internal stray light, modulation instability, and external light. Consequently, distance measurement errors inevitably occur even under indoor measurement conditions. To compensate for such errors, a post-processing method based on machine learning is proposed in this paper. This data-driven correction method is validated under indoor measurement conditions. According to the experimental results, the proposed distance error correction method achieves the highest accuracy among the related published results.
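A data-driven distance correction of this kind boils down to fitting the systematic error as a function of measured distance on calibration data, then subtracting the predicted error at run time. As a minimal illustration (far simpler than the paper's learned model), the sketch below fits a linear error model by least squares; all calibration values are hypothetical.

```python
def fit_line(xs, ys):
    """Closed-form least-squares fit of y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx


# Hypothetical indoor calibration: measured vs. ground-truth distance (m)
measured = [1.02, 1.53, 2.05, 2.54, 3.06]
truth = [1.00, 1.50, 2.00, 2.50, 3.00]
errors = [m - t for m, t in zip(measured, truth)]
a, b = fit_line(measured, errors)


def correct(d):
    """Subtract the predicted systematic error from a raw measurement."""
    return d - (a * d + b)
```

The same subtract-the-predicted-error structure carries over when the linear model is replaced by a richer machine learning regressor, as in the paper.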
With the increasing demand for 3D depth information in various industrial applications, light detection and ranging (LiDAR) has emerged as one of the solutions for measuring the distance of objects. However, existing AMCW-based indirect ToF sensors suffer from limited measurement accuracy because the measured depth is sensitive to unwanted error sources such as ambient light, wide-band random noise, and stray light. In this paper, the effects of stray light, a systematic error source, are thoroughly analyzed in a cause-and-effect manner in terms of the signal's amplitude and measured phase changes. Furthermore, a pre-compensation method that removes the effects of stray light is validated under various practical experimental conditions. According to the experimental results, the proposed pre-compensation method improves the measurement accuracy to mm-level depth error.
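In the AMCW phasor picture, stray light adds a fixed, scene-independent complex phasor to the true return phasor, biasing the measured phase and hence the depth. A common pre-compensation is therefore to calibrate that offset once and subtract it before phase extraction. The sketch below demonstrates the bias and its removal with illustrative amplitudes and phases, not the paper's calibration procedure.

```python
import cmath
import math

C = 299_792_458.0   # speed of light, m/s
F_MOD = 20e6        # modulation frequency (illustrative)


def phase_to_depth(phi):
    """One-way depth from round-trip modulation phase."""
    return C * (phi % (2 * math.pi)) / (4 * math.pi * F_MOD)


# True return phasor for a 2.0 m target
true_phi = 4 * math.pi * F_MOD * 2.0 / C
signal = 1.0 * cmath.exp(1j * true_phi)

# Fixed stray-light phasor, assumed calibrated once (hypothetical values)
stray = 0.2 * cmath.exp(1j * 0.8)

measured = signal + stray
naive = phase_to_depth(cmath.phase(measured))                 # biased depth
compensated = phase_to_depth(cmath.phase(measured - stray))   # stray removed
```

Because the stray phasor does not depend on the scene, one calibration suffices to de-bias every subsequent measurement, which is what makes this a pre-compensation rather than a per-frame correction.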