Event-based camera (EBC) technology provides high-dynamic-range operation and shows promise for efficient capture of spatio-temporal information, producing a sparse data stream and enabling consideration of nontraditional data processing solutions (e.g., new algorithms, neuromorphic processors, etc.). Given the fundamental difference in camera architecture, the EBC response and noise behavior differ considerably compared to standard CCD/CMOS framing sensors. These differences necessitate the development of new characterization techniques and sensor models to evaluate hardware performance and elucidate the trade-space between the two camera architectures. Laboratory characterization techniques reported previously include noise level as a function of static scene light level (background activity) and contrast responses referred to as S-curves. Here we present further progress on development of basic characterization methods and test capabilities for commercial-off-the-shelf (COTS) visible EBCs, with a focus on measurement of pixel deadtime (refractory period), including results for the 4th-generation sensor from Prophesee and Sony. Refractory period is empirically determined from analysis of the interspike intervals (ISIs), and results are visualized using log-histograms of the minimum per-pixel ISI values for a subset of pixels activated by a controlled dynamic scene. Our tests of the Prophesee gen4 EVKv2 yield refractory period estimates ranging from 6.1 msec to 6.8 μsec going from the slowest (20) to the fastest (100) setting of the relevant bias parameter, bias_refr. We also introduce and demonstrate the concept of pixel bandwidth measurement from data captured while viewing a static scene, based on recording data at a range of refractory-period settings and then analyzing noise-event statistics. Finally, we present initial results for estimating and correcting EBC clock drift using a GPS PPS signal to generate special timing events in the event-list data streams generated by the DAVIS346 and DVXplorer EBCs from iniVation.
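A minimal sketch of the ISI analysis described above is shown below, assuming the event list is available as NumPy arrays of pixel coordinates and microsecond timestamps; the function names, array layout, and bin count are illustrative assumptions, not the authors' code.

```python
import numpy as np

def min_isi_per_pixel(x, y, t_us, width):
    """Compute the minimum interspike interval (ISI) for each pixel.

    x, y  : event pixel coordinates (integer arrays)
    t_us  : event timestamps in microseconds, assumed sorted in time
    Returns a dict mapping flat pixel index -> minimum ISI in microseconds.
    """
    flat = y.astype(np.int64) * width + x.astype(np.int64)
    last_t, min_isi = {}, {}
    for p, t in zip(flat, t_us):
        if p in last_t:
            isi = t - last_t[p]
            if p not in min_isi or isi < min_isi[p]:
                min_isi[p] = isi
        last_t[p] = t
    return min_isi

def isi_log_histogram(min_isi_values, n_bins=60):
    """Histogram of per-pixel minimum ISIs on logarithmically spaced bins."""
    v = np.asarray(list(min_isi_values), dtype=float)
    v = v[v > 0]
    bins = np.logspace(np.log10(v.min()), np.log10(v.max()), n_bins)
    return np.histogram(v, bins=bins)
```

In this kind of analysis, the left edge of the dominant low-ISI peak in the log-histogram serves as the empirical refractory-period estimate for a given bias_refr setting.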
Neuromorphic cameras, or Event-based Vision Sensors (EVS), operate in a fundamentally different way than conventional frame-based cameras. Their unique operational paradigm results in a sparse stream of high temporal resolution output events which encode pixel-level brightness changes with low latency and wide dynamic range. Recently, interest has grown in exploiting these capabilities for scientific studies; however, accurately reconstructing signals from the output event stream presents a challenge due to physical limitations of the analog circuits that implement logarithmic change detection. In this paper, we present simultaneous recordings of lightning strikes using both an event camera and a frame-based high-speed camera. To our knowledge, this is the first side-by-side recording using these two sensor types in a real-world scene with challenging dynamics that include very fast and bright illumination changes. Our goal in this work is to accurately map the illumination to EVS output in order to better inform modeling and reconstruction of events from a real scene. We first combine lab measurements of key performance metrics to inform an existing pixel model. We then use the high-speed frames as signal ground truth to simulate an event stream and refine parameter estimates to optimally match the event-based sensor response for several dozen pixels representing different regions of the scene. These results will be used to predict sensor response and develop methods to more precisely reconstruct lightning and sprite signals for Falcon ODIN, our upcoming International Space Station neuromorphic sensing mission.
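To make the logarithmic change-detection idea concrete, the sketch below generates ON/OFF events from a stack of ground-truth frames; the contrast thresholds, refractory period, and single-step level update are simplifying assumptions for illustration, not the fitted pixel-model parameters used in this work.

```python
import numpy as np

def simulate_events(frames, dt_us, theta_on=0.2, theta_off=0.2, refractory_us=100.0):
    """Simulate ON/OFF events from high-speed frames via log change detection.

    frames : (N, H, W) array of linear intensities (ground-truth signal)
    dt_us  : time between frames in microseconds
    A pixel fires when its log intensity moves past a contrast threshold
    relative to the level memorized at its last event, subject to a simple
    per-pixel refractory period.  Returns (x, y, t_us, polarity) arrays.
    """
    log_frames = np.log(frames.astype(float) + 1e-6)   # avoid log(0)
    ref_level = log_frames[0].copy()                   # level at last event
    last_event_t = np.full(frames.shape[1:], -np.inf)

    xs, ys, ts, ps = [], [], [], []
    for k in range(1, frames.shape[0]):
        t = k * dt_us
        diff = log_frames[k] - ref_level
        ready = (t - last_event_t) >= refractory_us
        for mask, pol, theta in ((diff >= theta_on) & ready, 1, theta_on), \
                                ((diff <= -theta_off) & ready, -1, theta_off):
            yy, xx = np.nonzero(mask)
            xs.append(xx); ys.append(yy)
            ts.append(np.full(xx.size, t)); ps.append(np.full(xx.size, pol))
            ref_level[mask] += pol * theta             # update memorized level
            last_event_t[mask] = t
    return (np.concatenate(xs), np.concatenate(ys),
            np.concatenate(ts), np.concatenate(ps))
```

A fuller simulator would also model analog bandwidth and emit multiple events per frame when the change exceeds several threshold steps; this version keeps only the core thresholding and refractory behavior.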
Imaging through deep turbulence is a hard, unsolved problem. There have been recent advances toward sensing and correcting moderate turbulence using digital holography (DH). With DH, we use optical heterodyne detection to sense the amplitude and phase of the light reflected from an object. This phase information allows us to digitally back propagate the measured field to estimate and correct distributed-volume aberrations. Recently, we developed a model-based iterative reconstruction (MBIR) algorithm for sensing and correcting atmospheric turbulence using multi-shot DH data (i.e., multiple holographic measurements). Using simulation, we showed the ability to correct deep-turbulence effects, loosely characterized by Rytov numbers greater than 0.75 and isoplanatic angles near the diffraction-limited viewing angle. In this work, we demonstrate the validity of our method using laboratory measurements. Our experiments utilized a combination of multiple calibrated Kolmogorov phase screens along the propagation path to emulate distributed-volume turbulence. This controlled laboratory setup allowed us to demonstrate our algorithm's performance in deep turbulence conditions using real data.
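The digital back-propagation step referenced above can be illustrated with a standard angular-spectrum propagator applied to a complex field estimated from the hologram; this is a generic sketch, not the authors' MBIR algorithm, and the grid size, wavelength, pixel pitch, and propagation distance are placeholder values.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, dz):
    """Propagate a complex field by distance dz (negative dz back-propagates)
    using the angular-spectrum transfer function.

    field      : (N, N) complex array sampled on a grid with pitch dx [m]
    wavelength : optical wavelength [m]
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # H = exp(i * 2*pi * dz * sqrt(1/lambda^2 - fx^2 - fy^2)), propagating waves only
    arg = (1.0 / wavelength**2) - FX**2 - FY**2
    H = np.exp(2j * np.pi * dz * np.sqrt(np.maximum(arg, 0.0))) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: back-propagate an estimated pupil-plane field to a phase-screen
# plane 500 m up range (placeholder field and parameters).
field = np.ones((512, 512), dtype=complex)       # stand-in for a DH field estimate
back_prop = angular_spectrum_propagate(field, wavelength=1.064e-6,
                                        dx=10e-6, dz=-500.0)
```

In a distributed-volume correction scheme, a propagator of this kind is applied plane by plane while the aberration estimate at each plane is updated, whereas the MBIR approach described above folds these steps into an iterative model-based optimization.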