Falcon ODIN is a technology-demonstration science payload being designed for delivery to the International Space Station (ISS) in late 2024. Falcon ODIN contains two event-based cameras (EBCs) and two traditional framing cameras, along with mirrors mounted on azimuth-elevation rotation stages that allow the field of regard of the EBCs to be steered. We discuss the mission design and objectives for Falcon ODIN along with ground-based testing of all four cameras.
Event-based vision sensor (EVS) technology has expanded the CMOS image sensor design space with low-SWaP sensors offering high-dynamic-range operation and the ability, under certain conditions, to efficiently capture scene information at a temporal resolution beyond that achievable by a typical sensor operating near a 1 kHz frame rate. Fundamental differences between EVS and framing sensors necessitate the development of new characterization techniques and sensor models to evaluate hardware performance and the camera-architecture trade-space. Laboratory characterization techniques reported previously include noise level as a function of static scene light level (background activity), contrast responses referred to as S-curves, refractory-period characterization using the mean minimum interspike interval, and a novel approach to pixel-bandwidth measurement using a static scene. Here we present pre-launch characterization results for the two Falcon ODIN (Optical Defense and Intelligence through Neuromorphics) event-based cameras (EBCs) scheduled for launch to the International Space Station (ISS). Falcon ODIN is a follow-on experiment to Falcon Neuro, previously installed and operated onboard the ISS. Our characterization of the two ODIN EBCs includes high-dynamic-range background activity, contrast-response S-curves, and low-light cutoff measurements. Separately, we report an evaluation of the IMX636 sensor functionality get_illumination, which gives an auxiliary measurement of on-chip illuminance (irradiance) and can provide high-dynamic-range sensing of sky brightness (background light level).
Event-based camera (EBC) technology provides high-dynamic-range operation and shows promise for efficient capture of spatio-temporal information, producing a sparse data stream and enabling consideration of nontraditional data-processing solutions (e.g., new algorithms, neuromorphic processors, etc.). Given the fundamental difference in camera architecture, the EBC response and noise behavior differ considerably from those of standard CCD/CMOS framing sensors. These differences necessitate the development of new characterization techniques and sensor models to evaluate hardware performance and elucidate the trade-space between the two camera architectures. Laboratory characterization techniques reported previously include noise level as a function of static scene light level (background activity) and contrast responses referred to as S-curves. Here we present further progress on the development of basic characterization methods and test capabilities for commercial-off-the-shelf (COTS) visible EBCs, with a focus on measurement of pixel deadtime (refractory period), including results for the 4th-generation sensor from Prophesee and Sony. The refractory period is empirically determined from analysis of the interspike intervals (ISIs), and the results are visualized using log-histograms of the minimum per-pixel ISI values for a subset of pixels activated by a controlled dynamic scene. Our tests of the Prophesee gen4 EVKv2 yield refractory-period estimates ranging from 6.1 msec to 6.8 μsec going from the slowest (20) to the fastest (100) settings of the relevant bias parameter, bias_refr. We also introduce and demonstrate the concept of pixel-bandwidth measurement from data captured while viewing a static scene, based on recording data at a range of refractory-period settings and then analyzing noise-event statistics.
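The per-pixel minimum-ISI analysis described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the flat event-list format (separate `x`, `y`, `t` arrays) is an assumption:

```python
import numpy as np

def min_isi_per_pixel(x, y, t, width, height):
    """Minimum interspike interval (ISI) for each pixel of an event list.

    x, y : per-event pixel coordinates
    t    : per-event timestamps (e.g., microseconds)
    Returns a (height, width) array; pixels with fewer than two events
    are NaN.
    """
    addr = y.astype(np.int64) * width + x.astype(np.int64)
    order = np.lexsort((t, addr))              # sort by pixel, then by time
    addr_s, t_s = addr[order], t[order]
    same = addr_s[1:] == addr_s[:-1]           # consecutive events, same pixel
    isi = (t_s[1:] - t_s[:-1])[same]
    out = np.full(width * height, np.inf)
    np.minimum.at(out, addr_s[1:][same], isi)  # running per-pixel minimum
    out[np.isinf(out)] = np.nan                # pixels that never repeated
    return out.reshape(height, width)
```

A log-histogram of the finite values of this array is then the visualization from which the refractory period is read off.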
Finally, we present initial results for estimating and correcting EBC clock drift using a GPS PPS signal to generate special timing events in the event-list data streams generated by the DAVIS346 and DVXplorer EBCs from iniVation.
Neuromorphic cameras can capture large amounts of information by asynchronously recording changes in light level at every pixel. Because of their recent insertion into the commercial market, research characterizing these cameras is only just emerging, and determining sensor capabilities outside a laboratory environment helps identify future applications of this technology. An experiment was designed to determine whether the camera could detect laser scatter within the atmosphere and extract information about the laser. Experimentation in real-world environments showed that the camera can distinguish laser scatter against environmental backdrops at varying distances, determine the repetition frequency of the laser, and provide preliminary angle-determination data.
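One way to recover a pulsed laser's repetition frequency from an event stream is to bin event counts in time and locate the dominant spectral peak. A simplified sketch (the binning approach and parameters are assumptions for illustration, not the paper's method):

```python
import numpy as np

def laser_repetition_frequency(t_us, bin_us=10, max_hz=None):
    """Estimate pulse repetition frequency (Hz) from event timestamps.

    t_us   : event timestamps in microseconds
    bin_us : histogram bin width in microseconds
    max_hz : optional upper bound on the frequency search, useful for
             suppressing harmonics and aliasing near Nyquist
    """
    t = np.asarray(t_us, float)
    t -= t.min()
    nbins = int(np.ceil(t.max() / bin_us)) + 1
    rate, _ = np.histogram(t, bins=nbins, range=(0, nbins * bin_us))
    spectrum = np.abs(np.fft.rfft(rate - rate.mean()))
    spectrum[0] = 0.0                                  # ignore DC
    freqs = np.fft.rfftfreq(nbins, d=bin_us * 1e-6)    # Hz
    if max_hz is not None:
        spectrum[freqs > max_hz] = 0.0
    return freqs[np.argmax(spectrum)]
```

The frequency resolution is the reciprocal of the recording length, so longer captures sharpen the estimate.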
Neuromorphic cameras, or Event-based Vision Sensors (EVS), operate in a fundamentally different way than conventional frame-based cameras. Their unique operational paradigm results in a sparse stream of high-temporal-resolution output events which encode pixel-level brightness changes with low latency and wide dynamic range. Recently, interest has grown in exploiting these capabilities for scientific studies; however, accurately reconstructing signals from the output event stream presents a challenge due to physical limitations of the analog circuits that implement logarithmic change detection. In this paper, we present simultaneous recordings of lightning strikes using both an event camera and a frame-based high-speed camera. To our knowledge, this is the first side-by-side recording using these two sensor types in a real-world scene with challenging dynamics that include very fast and bright illumination changes. Our goal in this work is to accurately map the illumination to EVS output in order to better inform modeling and reconstruction of events from a real scene. We first combine lab measurements of key performance metrics to inform an existing pixel model. We then use the high-speed frames as signal ground truth to simulate an event stream and refine parameter estimates to optimally match the event-based sensor response for several dozen pixels representing different regions of the scene. These results will be used to predict sensor response and develop methods to more precisely reconstruct lightning and sprite signals for Falcon ODIN, our upcoming International Space Station neuromorphic sensing mission.
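The logarithmic change detection that such a pixel model must capture can be illustrated with an idealized event simulator. This is a deliberately simplified sketch: it omits the pixel bandwidth limits, refractory period, and noise that the paper's refined model accounts for, and the thresholds are hypothetical:

```python
import numpy as np

def events_from_frames(frames, times, theta_on=0.2, theta_off=0.2, eps=1e-3):
    """Generate events for one pixel trace from ground-truth intensity
    samples using an idealized log-intensity change detector.

    frames : intensity samples for a single pixel
    times  : matching timestamps
    An ON (OFF) event fires each time log intensity has risen (fallen)
    by the contrast threshold since the last event; the reference level
    then steps by the threshold.
    """
    logI = np.log(np.asarray(frames, float) + eps)  # eps guards log(0)
    ref = logI[0]
    events = []
    for t, L in zip(times[1:], logI[1:]):
        while L - ref >= theta_on:      # brightness rose past threshold
            ref += theta_on
            events.append((t, +1))
        while ref - L >= theta_off:     # brightness fell past threshold
            ref -= theta_off
            events.append((t, -1))
    return events
```

Matching the output of such a model (with refined parameters) against the recorded event stream is the essence of the fitting procedure described above.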
KEYWORDS: Cameras, Sensors, Optical engineering, Field programmable gate arrays, Data acquisition, Space operations, Linear filtering, Imaging systems, Physics, Staring arrays
We report on the Falcon Neuro event-based sensor (EBS) instrument that is designed to acquire data from lightning and sprite phenomena and is currently operating on the International Space Station. The instrument consists of two independent, identical EBS cameras pointing in two fixed directions, toward the nominal forward direction of flight and toward the nominal nadir direction. The payload employs stock DAVIS 240C focal plane arrays along with custom-built control and readout electronics to remotely interface with the cameras. To predict the sensor’s ability to effectively record sprites and lightning, we explore temporal response characteristics of the DAVIS 240C and use lab measurements along with reported limitations to model the expected response to a characteristic sprite illumination time-series. These simulations indicate that with appropriate camera settings the instrument will be capable of capturing these transient luminous events when they occur. Finally, we include initial results from the instrument, representing the first reported EBS recordings successfully collected aboard a space-based platform and demonstrating proof of concept that a neuromorphic camera is capable of operating in the space environment.
A collaborative research effort between the Johns Hopkins Applied Physics Laboratory (JHU/APL) and the Space Physics and Atmospheric Research Center (SPARC), located at the United States Air Force Academy (USAFA), has resulted in the development of a suite of miniaturized electrostatic analyzers (ESAs). This research effort is currently focused on the development of fifth-generation charged particle detectors, which include the Flat Plasma Spectrometer (FlaPS) and Canary, which have flight heritage, and the Energetic Electrostatic Analyzer (EESA), which is currently at the laboratory prototype stage. The implementation of microelectromechanical systems (MEMS) technology to fabricate the silicon wafer sensor heads used in the ESA design has enabled the development of plasma spectrometers that are low mass and consume little power, while still maintaining the performance capability of current state-of-the-art instruments. This shift towards the aggressive miniaturization of charged particle detectors, coupled with the increasing capability of ever smaller satellite vehicles, has led to an increase in science mission opportunities. Through the course of this research program the cadets working at the SPARC have been engaged across a broad range of academic disciplines such as physics, computer science, astronautics, and electrical and mechanical engineering. This program has allowed undergraduate students to participate in advanced research as they develop into the next generation of scientists and engineers.
The US Air Force Academy Department of Physics has built FalconSAT-7, a membrane solar telescope to be deployed from a 3U CubeSat in LEO. The primary optic is a 0.2 m photon sieve, a diffractive element consisting of billions of tiny circular dimples etched into a Kapton sheet. The membrane, its support structure, secondary optics, two imaging cameras, and the associated control and recording electronics are packaged within half the CubeSat volume. Once in space, the supporting pantograph structure is deployed, extending out and pulling the membrane flat under tension. The telescope will then be directed at the Sun to gather images at H-alpha for transmission to the ground. We will present details of the optical configuration, operation, and performance of the flight telescope, which has been made ready for launch in early 2017.
FalconSAT-7 (FS-7), a 3U CubeSat solar telescope, is the first-ever on-orbit demonstration of a lightweight deployable membrane primary optic that is twice the size of the host spacecraft. The telescope payload consists of the deployment structure and the optical and electronic subsystems, occupying 1.5U, while the rest of the volume is used for the bus, including satellite power, control, communications with the ground, etc. The deployment subsystem provides membrane deployment, positioning, and tension with the high precision needed for proper imaging, while the optical subsystem includes secondary optics with a camera to record images of the Sun at H-alpha. The electronics subsystem controls primary-optic deployment, focusing, image storage, transfer to the bus, etc. We conducted an end-to-end flight optical subsystem test and a series of tests of the corrosion of the photon sieve due to atomic oxygen. The flight model build will be completed by October 2015, with a launch date set for September 2016.
We describe the imaging capabilities of a 0.2 m membrane diffractive primary (DOE) used as a key element in FalconSAT-7, a space-based solar telescope. Its mission is to take an image of the Sun at the H-alpha wavelength (656 nm) over a narrow bandwidth while in orbit. In this case the DOE is a photon sieve, which consists of billions of tiny holes whose focusing ability depends on an underlying Fresnel zone geometry. Uniform radial expansion or contraction of the substrate due to temperature or relative-humidity change will result in a shift in focal length without introducing errors in the phase of the transmitted wavefront and without a decrease in efficiency. We will also show that while ideally the DOE surface should be held flat to within 5.25 microns, an opto-mechanical analysis showed that local deformations up to 32 microns are possible without significantly degrading the image quality.
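The focal-length shift under uniform expansion follows directly from the underlying Fresnel zone geometry. A short derivation using the standard zone-plate relations (not taken from the paper itself):

```latex
% Fresnel zone radii for focal length f at wavelength \lambda:
r_n^2 \approx n \lambda f
\quad\Longrightarrow\quad
f = \frac{r_1^2}{\lambda}.
% Uniform radial expansion by a factor (1+\epsilon) scales every zone radius:
r_n \to (1+\epsilon)\, r_n
\quad\Longrightarrow\quad
f \to (1+\epsilon)^2 f \approx (1+2\epsilon)\, f .
```

Because every zone scales by the same factor, the expanded pattern is still an ideal zone geometry for the new focal length: the result is pure defocus, with no added wavefront phase error and no loss of diffraction efficiency, consistent with the claim above.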
This paper focuses on recent progress in designing FalconSAT-7, a 3U CubeSat solar telescope designed to image the Sun from low Earth orbit. The telescope system includes a deployable structure that supports a membrane photon sieve under tension, as well as secondary optics. To satisfy the mission requirement to demonstrate the diffraction-limited imaging capability of this collapsible, f/2 diffractive primary, we have completed studies of a number of effects on the membrane material that can affect system imaging quality.
We are currently constructing FalconSAT-7 for launch in mid-2014. The low-Earth-orbit, 3U CubeSat solar telescope incorporates a 0.2 m deployable membrane photon sieve with over 2.5 billion holes. The aim of the experiment is to demonstrate diffraction-limited imaging with a collapsible, diffractive primary over a narrow bandwidth. As well as being simpler to manufacture and deploy than curved, polished surfaces, the sheets do not have to be optically flat, greatly reducing many engineering issues. As such, the technology is particularly promising as a means to achieve extremely large optical primaries from compact, lightweight packages.
We are currently constructing FalconSAT-7 for launch in late 2013. The low-Earth-orbit, 3U CubeSat solar telescope incorporates a 0.2 m deployable membrane photon sieve with over 2.5 billion holes. The aim of the experiment is to demonstrate diffraction-limited imaging with a collapsible, diffractive primary over a narrow bandwidth. As well as being simpler to manufacture and deploy than curved, polished surfaces, the sheets do not have to be optically flat, greatly reducing many engineering issues. As such, the technology is particularly promising as a means to achieve extremely large optical primaries from compact, lightweight packages.
We have developed diffractive primaries in flat membranes for space-based imagery. They are an attractive approach in that they are much simpler to fabricate, launch, and deploy compared to conventional three-dimensional optical structures. In this talk we highlight the design of a photon sieve, which consists of a large number of holes in an otherwise opaque substrate. We present both theoretical and experimental results from small-scale prototypes, along with key solutions to the issues of limited bandwidth and efficiency that have been addressed. Our current efforts are directed towards an on-orbit 0.2 m solar observatory demonstration deployed from a 3U CubeSat bus.
The Canary instrument is a miniature electrostatic analyzer designed to detect positively charged ions in the energy range 0-1500 eV. The Canary concept began with the development of a Micro-Electro-Mechanical Systems (MEMS) Flat Plasma Spectrometer (FlaPS), which, integrated with electronics onto FalconSAT-3, reduced the size and mass of an ion plasma spectrometer to about 10 × 10 × 10 cm and 250 g. The successor to FlaPS was the Wafer Integrated Spectrometer (WISPERS), which expanded the same instrument to seven sensors, each with a uniquely optimized energy range and azimuth/elevation look angle. WISPERS is due to fly on the USAF Academy's FalconSAT-5 satellite, scheduled for launch in Spring 2010. FlaPS and WISPERS created a paradigm shift in the use of such instruments, delivering a highly capable but small, low-power package. The third generation, Canary (named after the "canary in the coal mine," an earlier technology used to provide low-cost, effective warning of danger to operators), will be flown on the International Space Station (ISS) and used to investigate the interaction of approaching spacecraft with the background plasma environment around the ISS.
We describe a hyper-spectral measurement technique for sprite observations which takes data at 25,000 samples per second in 32 individual color bands. The high-speed hyper-spectral design is based around a 32-channel multi-anode photometer (MAP) viewing a dispersing grating which is holographically inscribed on a spherical focusing mirror. Design and operating characteristics of the device are presented. The high-speed hyper-spectral instrument will be used to observe spectra from transient luminous events called sprites seen above meso-scale thunderstorms. Sprites occur at altitudes of 40-90 km and last from a few to tens of milliseconds. High-speed spectral measurements may give some indication of the energetic processes underlying sprite formation. We are particularly interested in the overall energy budget associated with sprites in large meso-scale thunderstorm complexes.