We present a novel deep learning-based framework for event-based Shack-Hartmann wavefront sensing. The approach uses a convolutional neural network (CNN) to reconstruct high-resolution wavefronts directly from event-based sensor data. Traditional wavefront sensors, such as the Shack-Hartmann sensor, suffer from measurement artifacts and limited bandwidth. By integrating event-based cameras, which offer high temporal resolution and data efficiency, with CNN-based reconstruction, which can learn strong spatiotemporal priors, our method addresses these limitations while improving reconstruction quality. We evaluate the framework on simulated high-speed turbulence data, demonstrating a 73% improvement in reconstruction fidelity over existing methods. The framework is also capable of predictive wavefront sensing, reducing compensation latency and increasing overall system bandwidth.
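A minimal sketch of what such a CNN reconstructor could look like, assuming the event stream is pre-binned into a fixed number of temporal frames and the output is a dense phase map; the layer sizes, frame count, and grid resolution below are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class EventWavefrontCNN(nn.Module):
    # Encoder-decoder CNN: binned event frames in, dense wavefront phase map out
    def __init__(self, in_frames=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_frames, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),   # wavefront phase (e.g., radians)
        )

    def forward(self, event_frames):
        # event_frames: (batch, in_frames, H, W) polarity-binned event counts
        return self.decoder(self.encoder(event_frames))

# Example: 10 temporal bins of 64x64 event counts -> 64x64 wavefront estimate
model = EventWavefrontCNN()
phase = model(torch.randn(1, 10, 64, 64))   # -> (1, 1, 64, 64)
```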
In coherent imaging systems such as SAR and digital holography, speckle noise is effectively mitigated using the multilook or multishot approach. Using maximum likelihood estimation (MLE), we recently showed, theoretically and algorithmically, that a signal can be effectively recovered from multilook measurements even when each look is severely under-determined. Our method leverages the "Deep Image Prior (DIP) hypothesis," which posits that images can be effectively represented by untrained neural networks with fewer parameters than the total pixel count, using i.i.d. noise as input. We also developed a computationally efficient algorithm inspired by projected gradient descent to solve the MLE optimization, incorporating a bagged-DIP concept for the projection step. This paper explores the method's applicability to deblurring in coherent imaging, where the forward model involves a blurring kernel amidst speckle noise, a significant challenge with broad applications. We introduce a novel iterative algorithm for this problem, enabling multilook deblurring without simplifying or approximating the MLE cost function.
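The projected-gradient-descent structure described above can be sketched roughly as follows; the per-look exponential likelihood with a known blur used here is a simplified stand-in for the actual MLE cost (which the paper does not approximate), and the small untrained network, step sizes, and iteration counts are illustrative assumptions rather than the bagged-DIP construction itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def blur(x, kernel):
    # Convolution with a known blur kernel (illustrative forward model)
    return F.conv2d(x, kernel, padding=kernel.shape[-1] // 2)

def neg_log_likelihood(x, looks, kernel, eps=1e-6):
    # Stand-in cost: each look y_k | x ~ Exponential(mean = blur(x))
    mean = blur(x, kernel).clamp_min(eps)
    return (torch.log(mean) + looks / mean).sum()

class TinyDIP(nn.Module):
    # Small untrained CNN used as the DIP-style projector
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Softplus(),
        )

    def forward(self, z):
        return self.net(z)

def project_onto_dip(x_target, z, steps=200, lr=1e-3):
    # "Projection": fit an untrained network to the gradient-updated estimate
    dip = TinyDIP()
    opt = torch.optim.Adam(dip.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.mse_loss(dip(z), x_target).backward()
        opt.step()
    return dip(z).detach()

# Projected gradient descent over the (stand-in) MLE cost
looks = torch.rand(4, 1, 64, 64) + 0.1            # K looks (toy data)
kernel = torch.ones(1, 1, 5, 5) / 25.0            # known blur kernel
z = torch.randn(1, 1, 64, 64)                     # i.i.d. noise input to the DIP
x = torch.ones(1, 1, 64, 64, requires_grad=True)  # current image estimate
for _ in range(20):
    grad, = torch.autograd.grad(neg_log_likelihood(x, looks, kernel), x)
    x_target = (x - 1e-4 * grad).detach().clamp_min(1e-6)
    x = project_onto_dip(x_target, z).requires_grad_(True)
```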
Speckle noise is inherent to coherent imaging systems such as synthetic aperture radar (SAR), optical coherence tomography (OCT), and ultrasound imaging, and its multiplicative nature makes it especially challenging to remove. The most effective speckle denoising methods today average multiple identically distributed measurements, but these approaches fail to reconstruct dynamic scenes. In this work we leverage implicit neural representations (INRs) to perform unsupervised speckle denoising of time-varying sequences. We optimize a maximum likelihood-based loss function to produce high-fidelity, speckle-free reconstructions. Our approach significantly outperforms existing techniques, achieving up to a 4 dB improvement in peak signal-to-noise ratio (PSNR) on dynamic scenes with simulated speckle.
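A minimal sketch of the INR-plus-MLE idea, assuming fully developed single-look speckle (exponentially distributed intensities) and a small Fourier-feature MLP over (x, y, t) coordinates; the architecture and sampling details are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SpeckleINR(nn.Module):
    # Small Fourier-feature MLP mapping (x, y, t) coordinates to intensity
    def __init__(self, num_freqs=8, hidden=256):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 3 * 2 * num_freqs  # sin/cos features for each of (x, y, t)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # positive intensity
        )

    def forward(self, coords):
        # coords: (N, 3) normalized (x, y, t) in [-1, 1]
        freqs = torch.pi * 2.0 ** torch.arange(
            self.num_freqs, device=coords.device, dtype=coords.dtype)
        enc = coords[..., None] * freqs                      # (N, 3, num_freqs)
        feats = torch.cat([enc.sin(), enc.cos()], dim=-1).flatten(-2)
        return self.mlp(feats).squeeze(-1)

def speckle_nll(pred_intensity, speckled_obs, eps=1e-6):
    # Negative log-likelihood under single-look (exponential) speckle statistics
    s = pred_intensity.clamp_min(eps)
    return (torch.log(s) + speckled_obs / s).mean()

# One optimization step on randomly sampled space-time coordinates (toy data)
model = SpeckleINR()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
coords = torch.rand(4096, 3) * 2 - 1        # sampled (x, y, t) locations
obs = torch.rand(4096) + 0.05               # corresponding speckled intensities
loss = speckle_nll(model(coords), obs)
loss.backward()
opt.step()
```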
Fourier ptychographic microscopy (FPM) computationally overcomes the trade-off between resolution and field of view in imaging systems. Here, we present a time-efficient, physics-based algorithm for FPM image stack reconstruction using implicit neural representation and tensor low-rank approximation. The method requires no pre-training and can be easily adapted to various computational microscopes. Compared with conventional FPM image stack reconstruction, the proposed method runs several times faster on the same graphics processing unit (GPU) and significantly reduces the data volume required for storage. The proposed method has potential applications in digital pathology and its downstream data-driven tasks, and can benefit data collaboration in the biological sciences.
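For context, the sketch below shows the standard FPM forward model that a physics-based reconstruction of this kind differentiates through: each LED illumination angle shifts the object spectrum, which is cropped by the pupil and intensity-detected. Grid sizes, pupil radius, and the spectrum shift are illustrative assumptions, and the INR and tensor low-rank components of the proposed method are not shown.

```python
import torch

def fpm_forward(obj_field, pupil, shift):
    # obj_field: (H, W) complex high-resolution object field
    # pupil:     (h, w) pupil function on the low-resolution grid
    # shift:     (dy, dx) spectrum shift in pixels set by one LED's angle
    H, W = obj_field.shape
    h, w = pupil.shape
    spectrum = torch.fft.fftshift(torch.fft.fft2(obj_field))
    cy, cx = H // 2 + shift[0], W // 2 + shift[1]
    crop = spectrum[cy - h // 2: cy + h // 2, cx - w // 2: cx + w // 2]
    low_res_field = torch.fft.ifft2(torch.fft.ifftshift(crop * pupil))
    return low_res_field.abs() ** 2          # measured low-resolution intensity

# Example: simulate one low-resolution image for a single LED
obj = torch.randn(256, 256, dtype=torch.complex64)
yy, xx = torch.meshgrid(torch.arange(64), torch.arange(64), indexing="ij")
pupil = (((yy - 32) ** 2 + (xx - 32) ** 2) <= 24 ** 2).float().to(torch.complex64)
img = fpm_forward(obj, pupil, shift=(10, -5))  # -> (64, 64) intensity image
```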
This talk describes our recently developed guidestar-free approach to imaging through scattering and other optical aberrations: neural wavefront shaping (NeuWS). NeuWS integrates maximum likelihood estimation, measurement modulation, and neural signal representations to reconstruct diffraction-limited images through strong static and dynamic scattering media without guidestars, sparse targets, controlled illumination, or specialized image sensors. We experimentally demonstrate guidestar-free, wide field-of-view, high-resolution, diffraction-limited imaging of extended, nonsparse, static and dynamic scenes captured through static and dynamic aberrations.
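A simplified sketch of the kind of joint estimation NeuWS performs, assuming an incoherent imaging model with known pupil-plane phase modulations and Gaussian noise (so MLE reduces to least squares); the scene is parameterized here as a plain pixel grid for brevity rather than a neural representation, and all sizes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def incoherent_measurement(scene, aberration, modulation):
    # PSF from pupil phase = unknown aberration + known modulation;
    # measured image = scene circularly convolved with that PSF
    pupil = torch.exp(1j * (aberration + modulation))
    psf = torch.fft.ifft2(pupil).abs() ** 2
    psf = psf / psf.sum()
    return torch.fft.ifft2(torch.fft.fft2(scene) * torch.fft.fft2(psf)).real

# Toy data: K measurements of a static scene under known random modulations
K, N = 8, 64
modulations = torch.rand(K, N, N) * 2 * torch.pi
true_scene, true_aberr = torch.rand(N, N), torch.randn(N, N)
measurements = torch.stack(
    [incoherent_measurement(true_scene, true_aberr, m) for m in modulations])

# Jointly fit the scene and the aberration to the modulated measurements
scene = torch.rand(N, N, requires_grad=True)
aberr = torch.zeros(N, N, requires_grad=True)
opt = torch.optim.Adam([scene, aberr], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    pred = torch.stack(
        [incoherent_measurement(scene, aberr, m) for m in modulations])
    F.mse_loss(pred, measurements).backward()  # Gaussian-noise MLE = least squares
    opt.step()
```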
Adversarial sensing is a self-supervised, learning-based approach for solving inverse problems with stochastic forward models. The basic idea is to use a discriminator to compare the distributions of predicted and observed measurements. The discriminator's feedback allows one to reconstruct a signal from its measurements without solving for any of the forward model's unknown latent variables. While adversarial sensing requires no training data, it can be modified to incorporate pretrained deep generative models as priors. This paper highlights some of our recent work on applying adversarial sensing to imaging through turbulence and to long-range sub-diffraction-limited imaging with Fourier ptychography. For a longer and more detailed discussion of our methods, please see [1].
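A hypothetical minimal sketch of the adversarial-sensing loop: a discriminator is trained to tell observed measurements from measurements simulated by pushing the current image estimate through a stochastic forward model, and the image is updated to fool it, so the forward model's latent variables are never estimated explicitly. The toy random-gain-plus-noise forward model, discriminator architecture, and training schedule are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

def stochastic_forward(image):
    # Toy stochastic forward model: random per-measurement gain plus additive noise
    gain = torch.rand(image.shape[0], 1, 1, 1) + 0.5
    return gain * image + 0.05 * torch.randn_like(image)

disc = nn.Sequential(                              # measurement discriminator
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, 1),
)

observed = torch.rand(32, 1, 64, 64)                   # stack of real measurements
image = torch.rand(1, 1, 64, 64, requires_grad=True)   # signal being reconstructed
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
opt_g = torch.optim.Adam([image], lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for _ in range(100):
    fake = stochastic_forward(image.expand(32, -1, -1, -1))
    # Discriminator step: distinguish observed from simulated measurements
    opt_d.zero_grad()
    d_loss = bce(disc(observed), torch.ones(32, 1)) + \
             bce(disc(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()
    # Image step: make simulated measurements indistinguishable from observed ones
    opt_g.zero_grad()
    g_loss = bce(disc(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```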