Non-line-of-sight (NLOS) imaging has attracted considerable research interest in recent years. Time-of-flight (ToF)-based active algorithms are one of the foundations of NLOS imaging and are the focus of this paper. In preliminary experiments, the filtered back-projection (FBP), light-cone transform (LCT), and F-K migration algorithms each showed shortcomings. For instance, FBP performs poorly on datasets with low spatial resolution. For objects dominated by specular reflections, LCT generates a significant amount of noise. Similarly, F-K migration produces noisy results on low-spatial-resolution data. To overcome these limitations, we study the windowed Fourier transform for NLOS imaging and use experiments to analyze the performance of different windowing techniques. From 2D to 3D, and from the time domain to the frequency domain, we apply Hanning windows with the FBP, LCT, and F-K algorithms. The results demonstrate that, compared with windowing in the time domain, windowing in the frequency domain significantly enhances an algorithm's performance: the reconstructions become noticeably clearer, previously unrecoverable contours are revealed, and image noise is greatly reduced. We then employ a set of 3D Kaiser windows with various coefficients in the frequency domain for reconstruction, as a comparison to Hanning windows. We find that the Hanning window and Kaiser windows with β in the range 4 to 9 best suit the NLOS imaging problem.
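The frequency-domain windowing described above can be sketched as follows. This is an illustrative NumPy construction only: the separable (outer-product) 3D window and the FFT-taper-inverse-FFT pipeline are assumptions for demonstration, not the paper's actual implementation.

```python
import numpy as np

def window_3d(shape, beta=None):
    """Separable 3D window: Hanning by default, Kaiser when beta is given.

    The separable outer-product construction is an assumption made for
    illustration; the paper does not specify how its 3D windows are built.
    """
    wx, wy, wz = (np.kaiser(n, beta) if beta is not None else np.hanning(n)
                  for n in shape)
    return wx[:, None, None] * wy[None, :, None] * wz[None, None, :]

def filter_in_frequency_domain(volume, beta=None):
    # FFT the measurement volume, taper the centered spectrum with the
    # window, and transform back: the frequency-domain windowing step.
    spectrum = np.fft.fftshift(np.fft.fftn(volume))
    spectrum *= window_3d(volume.shape, beta)
    return np.real(np.fft.ifftn(np.fft.ifftshift(spectrum)))
```

In a full pipeline this taper would be applied to the transient measurement volume before (or inside) the FBP, LCT, or F-K reconstruction operator; `beta` in roughly the 4 to 9 range corresponds to the Kaiser windows the abstract reports as best suited.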
Full-waveform LIDAR records the entire backscattered signal of each laser pulse and can therefore obtain detailed information about the illuminated surface. In full-waveform LIDAR, system resolution is limited by the source pulse width and the bandwidth of the data acquisition device. To improve the system's ranging resolution, we discuss a temporal super-resolution system based on a deep learning network in this paper. In a full-waveform LIDAR system, each time the emitted laser beam encounters a target, it separates into a reflected echo signal and a transmitted beam, which continues in the same direction as the emitted laser. When the transmitted beam reaches the ground, part of it is absorbed by the ground and the remainder becomes the final echo signal. Each beam travels a different distance, and the backscattered signals are collected and digitized using low-bandwidth detectors and A/D converters. To reconstruct a super-resolution backscatter signal, we designed a deep-learning framework for obtaining higher-resolution LIDAR data. Inspired by the excellent performance of convolutional neural networks (CNNs) and residual networks (ResNets) in image classification and image super-resolution, and considering that both images and LIDAR data can be regarded as binary sequences that a machine can read and process in a similar manner, we propose a deep-learning architecture specially designed for super-resolution full-waveform LIDAR. After tuning the hyperparameters and training the network, we find that the deep-learning method is a feasible and suitable approach to super-resolution full-waveform LIDAR.
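The multi-return signal model the abstract walks through can be sketched numerically. The Gaussian pulse shape, the block-averaging acquisition model, and the specific delays and amplitudes below are illustrative assumptions, not the paper's data or method:

```python
import numpy as np

def received_waveform(t, pulse_fwhm, echoes):
    """Sum of delayed, attenuated copies of the emitted pulse.

    Each surface the beam hits contributes one (delay, amplitude) echo;
    the last entry plays the role of the final ground return. A Gaussian
    pulse is an assumed, common approximation of the emitted laser pulse.
    """
    sigma = pulse_fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    w = np.zeros_like(t, dtype=float)
    for delay, amplitude in echoes:
        w += amplitude * np.exp(-0.5 * ((t - delay) / sigma) ** 2)
    return w

def digitize(waveform, factor):
    # Low-bandwidth acquisition modeled crudely as block averaging:
    # the detector/ADC chain smears and subsamples the true waveform.
    n = (len(waveform) // factor) * factor
    return waveform[:n].reshape(-1, factor).mean(axis=1)
```

Pairs of (`digitize` output, dense `received_waveform`) generated this way are the kind of (low-resolution input, high-resolution target) training examples a temporal super-resolution network would learn from.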
In subpixel-shift super-resolution (SR) imaging, accurate sub-pixel image registration is a key issue. Traditional SR reconstruction methods use a motion estimation algorithm to estimate the shift and then adopt different methods for SR image reconstruction. In this paper, we focus on designing an SR imaging system in which, instead of moving the camera alone, the imaging lens in front of the camera is moved as well. By doing so, we relax the shifting-resolution requirement: when the camera and lens move 13 μm, the image moves 1 μm. A set of 16 or 9 low-resolution (LR) images of a scene is captured with the system; the sub-pixel shifts between these LR images are 1 μm and 2 μm, respectively. The projection onto convex sets (POCS) algorithm is then used to reconstruct the SR image. The results show much higher spatial resolution compared with the LR images.
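A single POCS data-consistency projection, the core step of the reconstruction the abstract cites, can be sketched as follows. The block-mean observation model, integer HR-grid shifts, and uniform residual redistribution are simplifying assumptions for illustration; real implementations typically weight the correction by the sensor's point-spread function.

```python
import numpy as np

def pocs_projection(hr_estimate, lr_image, shift, scale):
    """Project the HR estimate onto the set consistent with one LR image.

    Assumed observation model: each LR pixel is the mean of a
    `scale x scale` HR block, offset by a non-negative integer sub-pixel
    `shift` (in HR-grid pixels). The residual is spread evenly over the block.
    """
    est = hr_estimate.copy()
    dy, dx = shift
    rows, cols = lr_image.shape
    for i in range(rows):
        for j in range(cols):
            block = est[i * scale + dy:(i + 1) * scale + dy,
                        j * scale + dx:(j + 1) * scale + dx]
            if block.size == 0:  # block shifted off the HR grid
                continue
            block += lr_image[i, j] - block.mean()  # enforce consistency
    return est
```

Cycling this projection over all 16 (or 9) LR frames, each with its own registered shift, and repeating until the estimate stabilizes yields the POCS SR reconstruction.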