This PDF file contains the front matter associated with SPIE Proceedings Volume 13501, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
This paper introduces a snapshot spectral volumetric imaging approach based on light field image slicing and encoding. The light field information is sliced and encoded, spectrally dispersed, and acquired as aliased data through an array of reimaging lenses; a four-dimensional data hypercube is then reconstructed using deep learning-based algorithms. This hypercube contains the three-dimensional spatial information and one-dimensional spectral information of the scene. The proposed approach uses the Snapshot Compressed Imaging Mapping Spectrometer (SCIMS) principle for initial light field spectral data acquisition. Reconstruction of this data employs traditional algorithms such as the Alternating Direction Method of Multipliers (ADMM) and Generalized Alternating Projection (GAP), as well as deep learning methods such as LRSDN and PnP-DIP. Simulation experiments reveal that classical compressive sensing-based spectral reconstruction algorithms perform poorly, especially for digital refocusing of individual spectral bands in light field images. In contrast, deep learning algorithms show significant improvements, effectively extracting and preserving the spatial distribution characteristics of the light field data and thus robustly recovering the light field information. This validates the effectiveness of the proposed spectral volumetric imaging approach and of the deep learning-based reconstruction methods. In future research, we will refine the mathematical model, integrate the spatial and spectral correlations of light field imaging, develop specialized deep neural network algorithms, and further enhance the reconstruction of light field spectral data.
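As a rough illustration of how a GAP-type solver alternates between measurement consistency and a prior, the following minimal sketch recovers a sparse vector from compressive measurements. The soft-threshold step is a stand-in assumption for the TV or deep denoisers (LRSDN, PnP-DIP) used in practice, and all names and sizes are illustrative, not the paper's implementation.

```python
import numpy as np

def gap_reconstruct(y, Phi, n_iter=200, tau=0.01):
    """Toy GAP-style solver for y = Phi @ x with a sparsity prior.

    Alternates a Euclidean projection onto {x : Phi x = y} with a
    soft-threshold 'denoising' step (stand-in for TV or a deep prior).
    """
    PPt_inv = np.linalg.inv(Phi @ Phi.T)   # for the projection step
    x = Phi.T @ y                          # crude initial estimate
    for _ in range(n_iter):
        x = x + Phi.T @ (PPt_inv @ (y - Phi @ x))          # data consistency
        x = np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)  # shrinkage prior
    return x

# Demo: recover a 3-sparse vector from 40 random measurements.
rng = np.random.default_rng(0)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
Phi = rng.standard_normal((40, 100)) / np.sqrt(40)
y = Phi @ x_true
x_hat = gap_reconstruct(y, Phi)
support = set(np.argsort(np.abs(x_hat))[-3:].tolist())
print(sorted(support))
```

Replacing the shrinkage line with a learned denoiser is exactly the plug-and-play idea the abstract refers to.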
Image dehazing is a hot topic in image processing and computer vision; it aims to recover the details and texture features of the original scene from foggy images and thereby obtain clear, fog-free images. Most existing methods are suited to low-fog scenarios: as the fog concentration increases, their reconstruction quality drops significantly, accompanied by detail loss and distortion. In addition, most existing algorithms require large foggy-image datasets and long training times, which reduces their practicality. To address these issues, this paper proposes an image dehazing model based on a small-sample multi-attention mechanism and multi-frequency branch fusion (MFBF-Net). The model effectively extracts high-frequency and low-frequency detail information from the image and reconstructs the real scene as faithfully as possible. Experimental results show that the proposed model exhibits good dehazing performance on small-sample datasets and performs well across fog scenes of different concentrations.
The vortex phase-shifting digital holography method, which uses a spiral phase plate for phase modulation, can achieve precise vortex phase modulation while avoiding the cumbersome control and calibration of a phase shifter. However, it still requires mechanical rotation of the spiral phase plate to introduce the phase shift. To improve the quality of holographic reconstruction and achieve real-time online imaging, a vortex simultaneous phase-shifting digital holographic microscopy measurement method is proposed. A two-dimensional phase grating splits the object beam into four diffracted waves of the object under test. The reference beam is modulated by a spiral phase plate into vortex reference light with a spiral phase distribution, which is then divided into four regions. When the four diffracted object waves interfere with the four matched regions of the vortex reference light, the different interference regions produce different phase-shifted holograms, so that simultaneous random phase-shift holograms are collected in a single shot. Four sub-phase-shifted holograms are extracted from the single digital hologram, combined with the vortex phase structure, and the pixel-wise phase shifts in the sub-holograms are calculated. The least-squares generalized phase-shifting algorithm is then applied at each pixel of the hologram for phase reconstruction, ultimately achieving high-precision measurement of the surface topography of the object under test. This method avoids the positioning errors and vortex singularities caused by repeated movements of the phase shifter, improves the robustness of the imaging device, and opens new possibilities for measurements in dynamic scenes.
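The least-squares generalized phase-shifting step described above amounts to a per-pixel linear fit: with known (possibly random) shifts delta_k, each intensity obeys I_k = a + b*cos(phi + delta_k), which is linear in (a, b*cos(phi), b*sin(phi)). A minimal sketch on synthetic data, with function names and sizes as illustrative assumptions:

```python
import numpy as np

def lsq_phase(intensities, deltas):
    """Least-squares generalized phase-shifting phase recovery.

    intensities: (K, H, W) stack of phase-shifted holograms.
    deltas:      (K,) known phase shifts per frame.
    Solves I_k = a + b*cos(phi + delta_k) per pixel and returns phi.
    """
    K = len(deltas)
    # I_k = a + c*cos(delta_k) - s*sin(delta_k), c = b*cos(phi), s = b*sin(phi)
    A = np.stack([np.ones(K), np.cos(deltas), -np.sin(deltas)], axis=1)  # (K, 3)
    I = intensities.reshape(K, -1)                                        # (K, N)
    coeffs, *_ = np.linalg.lstsq(A, I, rcond=None)                        # (3, N)
    a, c, s = coeffs
    return np.arctan2(s, c).reshape(intensities.shape[1:])

# Demo: recover a synthetic phase map from four random phase shifts.
rng = np.random.default_rng(1)
H = W = 32
yy, xx = np.mgrid[0:H, 0:W]
phi_true = 0.5 * np.sin(2 * np.pi * xx / W)      # stays within (-pi, pi)
deltas = rng.uniform(0, 2 * np.pi, size=4)
frames = np.stack([1.0 + 0.8 * np.cos(phi_true + d) for d in deltas])
phi_hat = lsq_phase(frames, deltas)
err = np.max(np.abs(phi_hat - phi_true))
print(err)
```

With noise-free data and at least three well-separated shifts, the fit is exact up to floating-point error; in the paper's setting the per-pixel deltas come from the vortex phase structure rather than being globally uniform.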
Single-pixel imaging (SPI) has attracted wide interest in the X-ray, infrared, visible, and terahertz wave bands owing to the low cost, wide-band compatibility, optical simplicity, robustness to noise, and fast sampling of single-pixel detection. Self-evolving ghost imaging (SEGI), a new type of SPI, offers a new perspective by evolving the illumination patterns without postprocessing. Here, we improve the evolving efficiency of SEGI by exploiting the spatial continuity of the target: a median image filter is applied to the illumination patterns generated by the genetic algorithm. This method may help bring SEGI to practical applications such as real-time imaging, adaptive illumination, and human-robot interaction.
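The pattern-smoothing idea, applying a median filter to the GA-bred illumination patterns so that isolated pixels inconsistent with a spatially continuous target are removed, can be sketched as follows. This assumes SciPy's `median_filter` and a toy binary pattern, not the authors' actual pipeline:

```python
import numpy as np
from scipy.ndimage import median_filter

def smooth_patterns(patterns, size=3):
    """Median-filter each binary illumination pattern.

    In SEGI the patterns are bred by a genetic algorithm; a median
    filter exploits the spatial continuity of natural targets by
    removing isolated salt-and-pepper pixels from each pattern.
    """
    return np.stack([median_filter(p, size=size) for p in patterns])

# Demo: a mostly-uniform pattern with a few isolated flipped pixels.
rng = np.random.default_rng(2)
pattern = np.ones((16, 16), dtype=np.uint8)
flips = rng.choice(256, size=10, replace=False)
pattern.flat[flips] = 0                      # sparse noise pixels
smoothed = smooth_patterns(pattern[None])[0]
print(int(pattern.sum()), int(smoothed.sum()))
```

A 3x3 median only flips a pixel when the majority of its neighborhood disagrees, so sparse isolated noise is removed while contiguous structure survives.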
In low-light conditions, high-resolution remote sensing cameras usually require a long exposure time, during which the relative motion between the satellite and ground objects degrades imaging quality. To address this issue, this paper proposes a low-light staring remote sensing imaging technique based on compressive imaging theory. The technique slices the long exposure into many short temporal frames and codes each frame with a spatially variant mask; the modulated frames are collected into compressive measurements that contain spatio-temporal information. A reconstruction algorithm then recovers the short frames from the compressed measurements and masks, solving the motion blur caused by the long exposure. Using this technique, this paper simulates and analyzes factors such as mask form, target motion speed, satellite platform vibration, and encoding time error. The main experimental conclusions are as follows: 1. A random binary mask exhibits better reconstruction performance than a random uniform mask when the number of frames is less than 10. 2. For moving targets, at a pupil radiance of 1×10^-2 W/(m^2·sr), clear imaging of cars traveling at 80 km/h can be achieved (PSNR exceeding 35 dB). 3. When the maximum platform vibration amplitude is approximately 1/4 of a pixel, the PSNR can reach 40 dB, meeting the requirements of high-resolution remote sensing imaging. 4. Introducing encoding time error degrades PSNR by about 1 dB, with minimal impact on imaging quality. These results provide a theoretical basis and engineering reference for the study of low-light remote sensing imaging.
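The coded-exposure forward model described above (slice the exposure into short frames, code each frame with a mask, integrate onto the sensor) can be sketched in a few lines; the frame count, scene, and mask form below are illustrative assumptions:

```python
import numpy as np

def coded_exposure_measure(frames, masks):
    """Forward model of a coded long exposure.

    frames: (T, H, W) short-exposure temporal slices of the scene.
    masks:  (T, H, W) per-frame spatial coding masks (e.g. random binary).
    Returns a single (H, W) compressive measurement that multiplexes
    all T frames, as integrated by the long-exposure sensor.
    """
    return (frames * masks).sum(axis=0)

# Demo: 8 frames of a bright point target moving one column per frame.
T, H, W = 8, 8, 8
frames = np.zeros((T, H, W))
for t in range(T):
    frames[t, 4, t] = 1.0                # moving point target
rng = np.random.default_rng(3)
masks = rng.integers(0, 2, size=(T, H, W)).astype(float)
y = coded_exposure_measure(frames, masks)
print(y[4, :T])
```

Each frame's contribution survives in the measurement only where its mask was 1; the reconstruction stage inverts this multiplexing using the known masks.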
Single-pixel imaging (SPI) is a novel technique that captures 2D images using a programmable spatial light modulator (SLM) and a single-pixel detector instead of a conventional 2D array sensor. The image can be reconstructed from the modulation patterns and the corresponding 1D bucket measurements. Conventional object detection is performed only after a high-fidelity reconstructed image is available. In this paper, an image-free object detection method operating directly on the single-pixel measurements is proposed. We designed and trained an end-to-end convolutional neural network to encode and decode a scene for the image-free object detection task. The performance of the proposed method is demonstrated on a subset of the VOC dataset, achieving a detection accuracy of 35.41% mAP at a sampling rate below 1.6%.
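The measurement stage feeding such an image-free detector reduces to one inner product per modulation pattern. A minimal sketch, with sizes chosen so that 16 patterns on a 32x32 scene gives a ~1.56% sampling rate, in the spirit of the sub-1.6% rate quoted (the scene and patterns are synthetic assumptions):

```python
import numpy as np

def bucket_measurements(scene, patterns):
    """Single-pixel (bucket) measurements: one inner product per pattern.

    scene:    (H, W) intensity image.
    patterns: (M, H, W) modulation patterns shown on the SLM.
    Returns the (M,) vector of 1-D detector readings that an
    image-free detection network would consume directly.
    """
    return np.tensordot(patterns, scene, axes=([1, 2], [0, 1]))

rng = np.random.default_rng(4)
scene = rng.random((32, 32))
patterns = rng.integers(0, 2, size=(16, 32, 32)).astype(float)
y = bucket_measurements(scene, patterns)
sampling_rate = len(patterns) / scene.size
print(y.shape, sampling_rate)
```

In the paper's end-to-end setting, the patterns themselves are a learned encoding layer, and the decoder maps the vector y straight to detection outputs without ever forming an image.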
Single-pixel imaging (SPI) uses modulated illumination light fields and the corresponding single-pixel detection values to reconstruct an image, offering advantages in remote sensing, low-light-level detection, and other applications. To extend the detection range, fiber laser arrays are used as the light source owing to their high output power and rapid refresh rate. In this work, we designed a Fermat spiral fiber laser array with 32 sub-apertures as the illumination source. Its normalized second-order intensity correlation function has no spatial periodicity, so it yields better SPI image quality than regular arrays such as hexagonal arrays. Furthermore, we incorporated LiNbO3 modulators into the array for enhanced high-speed phase modulation and achieved a random illumination light field modulation frequency of at least 22 kHz. At a 64×64 pixel resolution, we achieved a 100 fps frame rate at a corresponding sampling rate of 4.88%. The reconstruction algorithms are Differential Ghost Imaging (DGI) and compressed sensing (Total Variation, TV). The proposed method greatly improves the imaging speed and illumination power of SPI and has great application potential in SPI-based remote sensing.
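The DGI reconstruction mentioned above has a closed form, O = <B·I> - (<B>/<R>)·<R·I>, where B is the bucket signal and R the per-pattern total power. A minimal sketch with synthetic uniform-random speckles (the Fermat-spiral speckle statistics are not modeled here):

```python
import numpy as np

def dgi_reconstruct(patterns, bucket):
    """Differential ghost imaging (DGI) reconstruction.

    patterns: (M, H, W) illumination speckle fields.
    bucket:   (M,) single-pixel detector values.
    Implements O = <B*I> - (<B>/<R>) * <R*I>, R = per-pattern power.
    """
    M = len(bucket)
    R = patterns.sum(axis=(1, 2))                          # reference signal
    BI = np.tensordot(bucket, patterns, axes=(0, 0)) / M   # <B*I>
    RI = np.tensordot(R, patterns, axes=(0, 0)) / M        # <R*I>
    return BI - (bucket.mean() / R.mean()) * RI

# Demo: noise-free measurements of a simple binary object.
rng = np.random.default_rng(5)
obj = np.zeros((16, 16))
obj[4:12, 4:12] = 1.0
patterns = rng.random((4000, 16, 16))
bucket = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))
img = dgi_reconstruct(patterns, bucket)
corr = np.corrcoef(img.ravel(), obj.ravel())[0, 1]
print(round(corr, 2))
```

The differential term subtracts the fluctuating total illumination power, which is what makes DGI more robust than plain intensity-correlation ghost imaging.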
Non-Line-of-Sight (NLOS) imaging is an advanced technique designed to capture scenes that are not directly visible to the camera, using transient sensors to collect time-resolved signals from which hidden scenes are reconstructed. Traditional NLOS imaging techniques rely solely on surface normals rather than depth information to represent implicit surfaces, which limits how accurately the surface features of reconstructed objects can be refined. To address this issue, our research introduces a novel neural implicit learning approach that incorporates depth information into the optimization. By integrating depth data, we achieve more precise surface reconstruction in NLOS settings. Depth information is extracted by fusing albedo maps obtained from different perspectives, which are generated from transient images and optical flow; this combined data enhances the accuracy and quality of the reconstructed images. Additionally, we introduce a depth loss term that encourages smoother object surfaces while simultaneously constraining the Signed Distance Function (SDF) regression, ensuring that the reconstructed surfaces are both smooth and accurately defined. Our method has been rigorously tested on both synthetic and real datasets, and the experimental results demonstrate its superiority over existing techniques, consistently delivering high-quality reconstructions of hidden objects in various scenarios and outperforming current methods in precision and detail.
As a new imaging method, single-neutron imaging based on neutron capture events can significantly improve spatial resolution. However, current imaging methods do not fully exploit its spatial resolution potential. This work studies the key processes of particle transport and fluorescence transport in 6LiF-ZnS scintillation screens, showing that the spatial resolution can still be improved. For particle transport, Monte Carlo simulation is used to quantitatively calculate the transport behavior of the secondary particles produced by nuclear reactions in the scintillation screen, providing the fluorescence-related essentials such as the range, magnitude, and shape of the energy distribution. For fluorescence transport, the transmission of fluorescence photons inside the scintillator is modeled theoretically. A new method for calculating the point spread function based on secondary particle transport is proposed. By combining the particle transport simulation with the fluorescence transport calculation, a more accurate point spread function is obtained, which can replace the traditional point-light-source point spread function. This work shows that single-neutron imaging based on a 6LiF-ZnS scintillation screen for neutron event localization is a feasible technical route. The physical factors limiting the imaging method and a credible value for the optimal resolution are given. It is further shown that this imaging approach still has the potential to improve resolution, possibly to better than 9.506 μm.
In this paper, we study sequence pattern effects on a new imaging architecture for unambiguous ranging that combines the advantages of space-coding-based single-pixel imaging and spread-spectrum time-coding-based scanning imaging. We first derive the time-space united correlation nonlinear detection model based on single-photon detection; the depth image is restored by a convex optimization inversion algorithm. We then introduce the arm probability of the steady-response model into the signal-to-noise ratio model, and study by Monte Carlo simulation the relationship between SNR and the 1-bit ratio in the bitstream under different dead times. The simulation results show that as the ratio of randomly distributed 1-bits in the transmitted sequence pattern increases, the system SNR first improves and then degrades; for each dead time, the best transmitted bit-stream pattern yields the best SNR. The theoretical model is almost consistent with the Monte Carlo simulation. Both the theoretical model and the simulation show that, compared with conventional space-coding-based single-pixel imaging, this approach enhances scene reconstruction quality, with depth accuracy improvements of 9 using 0.18 1-bit-ratio sequences with an 80 ns dead time. The proposed imaging architecture may provide a new path toward improved non-scanning lidar systems.
To address the phase information loss caused by the non-uniform transmittance of biological slices, we propose a digital holographic measurement method based on wavelet fusion. A synchronous phase-shifting digital holographic system acquires three sets of holograms at different exposure times. First, each hologram is decomposed into low-frequency and high-frequency regions by the wavelet transform. For the low-frequency region, the maximum-phase-information principle is adopted: the image carrying the most phase information serves as the reference, and the grayscale values of the holograms are evaluated and assigned accordingly. For the high-frequency regions, the regional-characteristics principle is used: depending on the matching degree of any two images, either the maximum-value method or the weighted-average method is applied. The combined regions then undergo the inverse wavelet transform to obtain the fused biological slice hologram. Finally, phase reconstruction is carried out on the fused hologram to realize phase enhancement. Experimental results show that, for biological slice samples with uneven transmittance, the proposed method increases the information entropy by 4.2% and the contrast by 3.8% compared with traditional digital holographic measurement methods. This enhances the clarity and information content of the phase images, achieving phase enhancement and improving measurement quality. The demonstrated superiority of this method indicates broad applicability to digital holographic microscopy of biological slices and other samples with complex transmittance characteristics.
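The wavelet-domain fusion of multi-exposure holograms can be sketched with PyWavelets. Note the simplifications: this sketch averages the low-frequency band (the paper instead selects the frame with maximal phase information) and uses a plain max-absolute-value rule for every high-frequency band, so it illustrates the structure of the method rather than its exact rules:

```python
import numpy as np
import pywt

def fuse_exposures(images, wavelet="haar"):
    """Toy wavelet-domain fusion of multi-exposure holograms.

    Low-frequency band: averaged across frames (simplified rule).
    High-frequency bands (cH, cV, cD): per-coefficient max-abs rule.
    """
    decomps = [pywt.dwt2(im, wavelet) for im in images]
    lows = np.stack([cA for cA, _ in decomps])
    cA_f = lows.mean(axis=0)                      # low-frequency fusion
    highs_f = []
    for band in range(3):                         # cH, cV, cD in turn
        stack = np.stack([d[1][band] for d in decomps])
        pick = np.abs(stack).argmax(axis=0)       # max-abs selection
        highs_f.append(np.take_along_axis(stack, pick[None], axis=0)[0])
    return pywt.idwt2((cA_f, tuple(highs_f)), wavelet)

# Demo: fuse a dark and a bright exposure of the same fringe pattern.
yy, xx = np.mgrid[0:32, 0:32]
base = np.sin(2 * np.pi * xx / 8.0)
fused = fuse_exposures([0.3 * base, 1.0 * base])
corr = np.corrcoef(fused.ravel(), base.ravel())[0, 1]
print(fused.shape, round(corr, 3))
```

Fusing in the wavelet domain lets each band use the rule best suited to it, which is the core of the paper's approach.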
Shape from Polarization (SFP) is a three-dimensional (3D) reconstruction technique that leverages the polarization properties of light to derive surface morphology. Its capabilities in non-destructive inspection make it particularly valuable in microscopy applications. However, SFP encounters challenges such as normal azimuth ambiguity and depth uncertainty. To address these issues, this paper proposes an optimized scheme that utilizes depth information from Fourier light field microscopy (FLFM) to assist the reconstruction of SFP. We have developed a dual-optical-path FLFM-polarization microscope system that concurrently captures high-resolution polarization images and Fourier light field images containing spatial-angular information. By employing depth information derived from FLFM, we corrected ambiguous azimuth angles and optimized the SFP results through a variational reconstruction model incorporating depth and projection constraints. Quantitative assessments using mean absolute error (MAE) and structural similarity (SSIM) metrics on FLFM-assisted SFP reconstructions of polystyrene microspheres, validated against atomic force microscopy 3D measurements, confirmed significant enhancements in reconstruction accuracy.
Depending on the measured objects and the specific detection requirements, adjusting the shear ratio appropriately is crucial for optimizing the detection performance of quadriwave lateral shearing interferometric microscopy (QWLSIM). Hence, the shear ratio in QWLSIM should be continuously variable in practical applications. In this paper, phase microscopy using a quadriwave lateral shearing interferometer with a continuously variable shear ratio is proposed. We use a globally random encoded hybrid grating (GREHG), which comprises an amplitude grating and a phase chessboard, as the beam splitter in the interferometer. The coding rule of the GREHG approximates the ideal grating across the entire grating under an optical flux constraint, effectively suppressing all diffraction orders except the necessary ±1 orders and the pixel light, thus achieving a continuously variable shear ratio. As the pixel count of the grating period increases, the shear ratio adjustment approaches the ideal of continuous adjustment more closely. Simulation and experimental results demonstrate that this approach realizes a continuously variable shear ratio without modifying the system, which is of great significance for QWLSIM with diverse measurement requirements.
The Mueller Matrix Imaging Polarimeter (MMIP) has attracted widespread attention in biomedical fields, including endoscopy, cancer diagnosis, and characterization of tissue optical clearing. Achieving fast detection and high measurement accuracy is vital for these applications of MMIP, so an automated data acquisition and processing system is essential for its effective application. We have demonstrated an MMIP setup using liquid crystal variable phase retarders (LCVRs) together with software programmed in LabVIEW for efficient data acquisition and processing. Measurement results on air show that the LCVRs-MMIP achieves high measurement accuracy (error ≤ 0.17%). Additionally, measurements on polarizers, wave plates, and vortex wave plates further demonstrate the high-performance capabilities of the LCVRs-MMIP.
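Mueller matrix recovery behind such a polarimeter is a linear inverse problem: each detected intensity is I_k = a_k^T M g_k for known generator (PSG) and analyzer (PSA) Stokes states, so the 16 entries of M can be solved by least squares. A minimal sketch measuring "air" (identity Mueller matrix), with random states standing in for actual LCVR settings:

```python
import numpy as np

def recover_mueller(generators, analyzers, intensities):
    """Recover a 4x4 Mueller matrix from polarimetric intensities.

    generators: (K, 4) Stokes vectors of the illumination (PSG) states.
    analyzers:  (K, 4) analyzer (PSA) measurement vectors.
    intensities:(K,)   detected intensities, I_k = a_k^T M g_k.
    Builds the K x 16 system (g_k outer a_k).vec(M) = I and solves it.
    """
    A = np.stack([np.outer(a, g).ravel()
                  for a, g in zip(analyzers, generators)])
    m, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return m.reshape(4, 4)

# Demo: 16 random polarization states measuring air (M = identity).
rng = np.random.default_rng(6)
M_true = np.eye(4)
G = rng.standard_normal((16, 4))
P = rng.standard_normal((16, 4))
I = np.array([a @ M_true @ g for a, g in zip(P, G)])
M_hat = recover_mueller(G, P, I)
print(np.max(np.abs(M_hat - M_true)))
```

In a real LCVRs-MMIP the 16 (or more) states are chosen to condition the system well, and extra measurements make the least-squares fit average out detector noise.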
Single-pixel imaging (SPI) is a rapidly evolving computational imaging technique that reconstructs scenes by correlating modulation patterns with measurements captured by a single-pixel detector. Recent advances suggest that integrating model-driven deep learning can significantly enhance the reconstruction quality and robustness of SPI. However, current model-driven SPI methods often rely on differential ghost imaging (DGI) with random speckles as the network input, requiring deeper reconstruction networks to extract effective features, which increases the computational cost. Additionally, random speckles can cause important image details to be obscured by noise at lower sampling rates, making it challenging for the network to produce satisfactory reconstructions. To overcome these limitations, we propose a model-driven SPI method that uses an optimized ordering of the Hadamard matrix, termed Total Change Ascending Order (TCAO), as the modulation mask, coupled with an untrained convolutional neural network (CNN) for reconstruction. TCAO is designed to extract information from scenes more effectively at lower sampling rates. The core innovation is integrating deep learning principles across the entire imaging process, assigning more of the feature extraction to the modulation stage. We refer to this approach as Deep Learning-Based Single-Pixel Imaging with Efficient Sampling (DLES). Simulation results show that DLES allows the network to focus on enhancing reconstruction performance, yielding superior results at low and even extremely low sampling rates. This method provides a novel approach to simplifying model-driven neural networks while improving the efficiency and quality of single-pixel imaging.
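One plausible reading of a "total change" ordering is to sort Hadamard rows by their number of sign transitions (their sequency), ascending, so that truncating the pattern sequence at a low sampling rate keeps the low-change, low-frequency-like patterns first. The sketch below implements that reading for 1-D rows and should be taken as an illustrative assumption, not the authors' exact 2D definition of TCAO:

```python
import numpy as np
from scipy.linalg import hadamard

def total_change(row):
    """Number of sign transitions along a +/-1 Hadamard row."""
    return int(np.count_nonzero(np.diff(row)))

def tcao_order(n):
    """Rows of the n x n Hadamard matrix sorted by ascending total change.

    Low-change rows come first, so a truncated pattern sequence at a
    low sampling rate concentrates on coarse scene structure.
    """
    H = hadamard(n)                               # n must be a power of 2
    order = np.argsort([total_change(r) for r in H], kind="stable")
    return H[order]

Hs = tcao_order(8)
changes = [total_change(r) for r in Hs]
print(changes)
```

For a Sylvester-ordered Hadamard matrix the sign-change counts of the rows are a permutation of 0..n-1, so the sorted sequence runs 0, 1, ..., n-1; this is exactly the classical sequency (Walsh) ordering.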
Traditional optical imaging systems are limited by diffraction and pixel resolution, while computational imaging techniques have overcome these constraints, emerging as an effective means for super-resolution imaging. Laser Reflection Tomography (LRT), as an active detection method, illuminates targets with laser pulses from multiple angles and reconstructs images based on the Fourier Slice Theorem. The imaging resolution of LRT is determined solely by the signal-to-noise ratio, laser pulse width, and detector bandwidth, and is not affected by detection distance or optical aperture. However, when detecting distant non-cooperative targets, the echo signal energy of LRT decays rapidly with increasing distance, significantly reducing the signal-to-noise ratio and sharply degrading image quality, making it difficult to meet the demands of super-resolution imaging. Leveraging the immense potential of single-photon detection for long-range detection and complex scene perception, this paper proposes a long-range LRT imaging algorithm based on single-photon detection echoes, which comprises: (1) the photon detection probability waveform is obtained by counting the photons in each time bin at each detection angle; (2) the waveform data over a series of detection angles are converted into a reconstructed target using algorithms such as filtered back projection; (3) the quality of the reconstructed images under different target scales and photon numbers is compared, and the factors affecting reconstruction quality are analyzed. Simulation experiments demonstrate that the algorithm achieves image reconstruction from the photon-detection echo characterization of the target model, effectively addressing the image quality degradation caused by weak echo energy, and holds significant research value and application potential in long-range detection imaging and space exploration.
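Step (1) above, forming the photon detection probability waveform, follows Poisson statistics: the probability of at least one detection in a time bin with mean signal photon number n is 1 - exp(-n). The sketch below compares that model with a Monte Carlo estimate; dead time and dark counts are deliberately ignored, and the echo profile is an illustrative assumption:

```python
import numpy as np

def detection_probability_waveform(mean_photons):
    """Per-time-bin detection probability for a Geiger-mode detector.

    mean_photons: (T,) mean signal photon number in each time bin at
    one detection angle. Under Poisson statistics the probability of
    at least one detection in a bin is p = 1 - exp(-n_mean).
    """
    return 1.0 - np.exp(-np.asarray(mean_photons))

def estimate_waveform(mean_photons, shots=20000, seed=7):
    """Monte Carlo estimate: accumulate binary detections over many shots."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(mean_photons, size=(shots, len(mean_photons)))
    return (counts > 0).mean(axis=0)

profile = np.array([0.01, 0.05, 0.5, 2.0, 0.5, 0.05, 0.01])  # echo pulse
p_model = detection_probability_waveform(profile)
p_mc = estimate_waveform(profile)
print(np.max(np.abs(p_mc - p_model)))
```

Waveforms of this form, accumulated at each detection angle, are what the filtered-back-projection stage in step (2) consumes.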
We propose an efficient single-pixel imaging scheme that utilizes a Fermat spiral laser array and an untrained neural network. The Fermat spiral laser array serves as the illumination source, generating speckle light fields with nonperiodic spatial correlation properties. By projecting random speckles onto the object, a single-pixel detector captures the light intensities for image reconstruction. We introduce a model-driven untrained neural network (UNN) into the image reconstruction process; this deep learning method eliminates the need for pre-training on datasets and automatically optimizes the reconstructed image. Through experimental demonstration, we validate the superiority of the UNN method over traditional intensity correlation and compressive sensing algorithms in single-pixel imaging schemes based on laser arrays. In particular, the proposed single-pixel imaging (SPI) scheme successfully achieves high-quality image reconstruction for both binary and grayscale objects, even at a sampling ratio as low as 6.25%. Considering the laser array's potential for high emitting power, we believe that the current SPI method opens up avenues for practical applications such as remote sensing.