KEYWORDS: Sensors, Navigation systems, Safety, LIDAR, Operating systems, Open source software, Motion models, Monte Carlo methods, Robotics, Global Positioning System
Pavements used for construction and repair of airfield surfaces must be rigorously tested before use in the field. This testing is typically accomplished by trafficking simulated aircraft landing gear payloads across an experimental pavement test patch and measuring deflection, cracking, and other effects on the pavement and aggregate subbase. The landing gear payload is heavily weighted to simulate the pressures of landing and taxiing, and a large tractor pulls the landing gear repeatedly over the test patch while executing an intricate trafficking pattern to distribute the load on the patch in the desired manner. In conventional testing, a human drives the test vehicle, called a load cart, forward and backward over the experimental patch up to about 1000 times while carefully following a set of closely spaced lane markings. This is a dull job that is ripe for automation. One year ago, at this conference, we presented results of outfitting the load cart, consisting of the tractor from a Caterpillar 621G scraper and a custom trailer carrying the landing gear simulacrum, with a custom vehicle interface and bringing it under tele-operation. In this paper, we describe the results of fully automating the load cart pavement test vehicle using the Robot Operating System 2 Navigation Stack. The solution works without GPS, line following, or external tracking systems and involves minimal modifications to the vehicle. Using lidar and Adaptive Monte Carlo Localization, the team achieved better than 6" cross-track accuracy with a lumbering, 300,000-pound vehicle.
The capability to rapidly augment airbases with bio-concrete infrastructure to support parking, loading, unloading, rearming, and refueling operations is of interest to the Air Force, because it requires transportation of fewer raw materials to remote sites. Automation of the bio-cement delivery further reduces logistical requirements and mitigates hazards to personnel, especially in contested or austere environments. In this paper we discuss the full-stack development and integration of a robotic applique for a commercial tractor and present the test results for autonomous delivery of bio-cement bacteria, feed stock, and water for stabilization of a sandy test area. The tractor autonomously navigates, sprays, and avoids obstacles using robust and economical off-the-shelf components and software. For this first phase of the project, we employ GNSS for localization and automotive lidar for obstacle detection. We report on the design of the robotic applique, including the mechanical, electrical, and software components, which are mostly commercial-off-the-shelf or open source. We discuss the results of testing and calibration including tests of towing capacity, calibration of steering, measurement of liquid spray distribution, measurement of tracking errors, and determination of repeatability of positioning for refilling of the reservoir. We also examine higher order behaviors and chart a path forward for future development, which includes GNSS-denied navigation.
The Air Force’s Rapid Airfield Damage Assessment (RADA) process was conceived as a means of evaluating airfield pavement assets after attacks to inform subsequent threat mitigation and repair efforts. The classification and geolocation of small objects of interest (< 7.5cm), like unexploded ordnance, is a critical component of this assessment process. In its original form, RADA was conducted manually, exposing teams of service members to dangerous and unknown conditions for hours at a time. In an effort to both expedite and remotely automate this critical task, researchers are developing small Uncrewed Aerial Systems (sUAS) equipped with various sensor payloads to perform object detection across the compromised airfield environment. Hyperspectral imaging has been specifically targeted as a promising sensor solution due to its enhanced discriminatory power in classifying materials. This study is focused on understanding how measurements of these small objects are affected by changes in parameters that govern operation of the drone-sensor system. Radiometric precision and spatial resolution are evaluated with respect to changes in flight speed, altitude, shutter speed, gain, and frames per second, in realistic field conditions. Within the ranges evaluated for each system parameter, the drone-sensor system presented spectrally and spatially resolves objects captured by just a few pixels with sufficient accuracy and precision for the RADA application.
If a U.S. Air Force operated airfield is attacked, the current methodology for assessing its condition is a slow manual inspection process, exposing personnel to dangerous conditions. Advances in drone technology, remote sensing, deep learning, and computer vision have sparked interest in developing autonomous remote solutions. While digital image processing techniques have matured in recent decades, a lack of application-specific training data presents significant obstacles for developing reliable solutions to detect specific objects amongst rubble, debris, variations in pavement types, changing surface features, and other variable runway conditions. Consequently, near-surface hyperspectral imaging has been proposed as an alternative to RGB digital images, due to its discriminatory power in classifying materials. Spatio-spectral data acquired by hyperspectral imagers help address common challenges presented by data scarcity and scene complexity; however, raw data acquired by hyperspectral sensors must first undergo a reflectance correction process before it can be of use. This paper presents an expedient method, tailored to airfield damage assessment, for performing autonomous reflectance correction on near-surface hyperspectral data using in-scene pavement materials with a known spectral reflectance. Unlike most reflectance correction methods, this process eliminates the need for human intervention with the sensor (or its data) pre or post flight and does not require pre-staged reference targets or an additional downwelling irradiance sensor. Positive initial results from real-world flights over pavements are presented and compared to traditional methods of reflectance correction. Three separate flight tests report mean errors between 2% and 2.5% using the new method.
When fielding near-surface hyperspectral imaging systems for computer vision applications, raw data from a sensor are often corrected to reflectance before analysis. This research presents an expedient and flexible methodology for performing spectral reflectance estimation using in situ asphalt cement concrete or Portland cement concrete pavement as a reference material. Then, to evaluate this reflectance estimation method’s utility for computer vision applications, four datasets are generated to train machine learning models for material classification: (1) a raw signal dataset, (2) a normalized dataset, (3) a reflectance dataset corrected with a standard reference material (polytetrafluoroethylene), and (4) a reflectance dataset corrected with a pavement reference material. Various machine learning algorithms are trained on each of the four datasets and all converge to excellent training accuracy (>94 % ). Models trained on the raw or normalized signals, however, did not exceed 70% accuracy when tested against new data captured under different illumination conditions, while models trained using either reflectance dataset saw almost no drop between training and testing accuracy. These results quantify the importance of reflectance correction in machine learning workflows using hyperspectral data, while also confirming practical viability of the proposed reflectance correction method for computer vision applications.
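The correction described above amounts to a band-wise rescaling of the raw signal by the known reflectance of an in-scene reference material. A minimal sketch of that idea in Python follows; the function name, array shapes, and flat-field assumption (uniform illumination across the scene) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def reflectance_from_reference(raw_cube, ref_mask, ref_reflectance):
    """Estimate reflectance from a raw hyperspectral cube of shape (H, W, B)
    using pixels of an in-scene reference material (e.g. pavement).

    ref_mask: boolean array (H, W) selecting the reference-material pixels.
    ref_reflectance: known spectral reflectance of the reference, shape (B,).
    """
    # Mean raw signal over the reference pixels, per band.
    ref_signal = raw_cube[ref_mask].mean(axis=0)      # shape (B,)
    # Band-wise gain that maps raw signal to reflectance.
    gain = ref_reflectance / ref_signal               # shape (B,)
    # Apply the gain to every pixel (broadcast over H and W).
    return raw_cube * gain
```

Under these assumptions, the illumination spectrum cancels out band by band, which is why models trained on such reflectance data transfer across lighting conditions.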
The Air Force Civil Engineer Center’s C-17 Load Cart is a large, 150-ton machine based on a modified Caterpillar 621G scraper for testing experimental pavements used in airfield surface construction and repair. Long-lasting, durable, prepare-in-place, minimally resourced pavements represent a critical technology for airfield damage repair, especially in expeditionary settings, and formulations must be tested using realistic loads but without the expense and logistical challenges of using real aircraft. The Load Cart is an articulated vehicle consisting of the 621G tractor and a custom trailer carrying a weighted set of landing gear to simulate the loads exerted during aircraft landing and taxiing. During a test, a human driver repetitively traffics the vehicle hundreds of times over an experimental patch of pavement, following an intricate trafficking pattern, to evaluate wear and mechanical properties of the pavement formulation. The job of driving the Load Cart is dull, repetitive, and prone to errors and systematic variation depending on the individual driver. This paper describes the full-stack development of an autonomy kit for the Load Cart, to enable repeatable testing without a driver. Open-source code (Robot Operating System), commercial-off-the-shelf sensors, and a modular design based on open standards are exploited to achieve autonomous operation without the use of GNSS (which is challenged by operation inside a metal test building). The Vehicle Control Unit is a custom interface in PC-104 form factor allowing actuation of the Load Cart via CAN J1939. Operational modes include manual, tele-operation, and autonomous.
We present results from testing a multi-modal sensor system (consisting of a camera, LiDAR, and positioning system) for real-time object detection and geolocation. The system’s eventual purpose is to assess damage and detect foreign objects on a disrupted airfield surface to reestablish a minimum airfield operating surface. It uses an AI to detect objects and generate bounding boxes or segmentation masks in data acquired with a high-resolution area scan camera. It locates the detections in the local, sensor-centric coordinate system in real time using returns from a low-cost commercial LiDAR system. This is accomplished via an intrinsic camera calibration together with a 3D extrinsic calibration of the camera- LiDAR pair. A coordinate transform service uses data from a navigation system (comprising an inertial measurement unit and global positioning system) to transform local coordinates of the detections obtained with the AI and calibrated sensor pair into earth-centered coordinates. The entire sensor system is mounted on a pan-tilt unit to achieve 360-degree perception. All data acquisition and computation are performed on a low SWAP-C system-on-module that includes an integrated GPU. Computer vision code runs real-time on the GPU and has been accelerated using CUDA. We have chosen Robot Operating System (ROS1 at present but porting to ROS2 in the near term) as the control framework for the system. All computer vision, motion, and transform services are configured as ROS nodes.
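The transform service described above chains rigid-body transforms: detections located in the sensor-centric frame are carried through the vehicle (IMU) frame into an earth-fixed frame using the navigation solution. A minimal sketch, assuming rotation-matrix/translation pose representations (all function and variable names here are hypothetical, not the system's API):

```python
import numpy as np

def lidar_to_earth(points_lidar, R_imu_lidar, t_imu_lidar,
                   R_earth_imu, t_earth_imu):
    """Chain two rigid transforms: lidar frame -> IMU/vehicle frame -> earth frame.

    points_lidar: (N, 3) array of detection coordinates in the lidar frame.
    R_*, t_*: 3x3 rotation matrices and 3-vectors from extrinsic calibration
    (lidar-to-IMU) and the navigation solution (IMU-to-earth).
    """
    # Apply the static extrinsic calibration (sensor mounting).
    pts_imu = points_lidar @ R_imu_lidar.T + t_imu_lidar
    # Apply the time-varying vehicle pose from the navigation system.
    return pts_imu @ R_earth_imu.T + t_earth_imu
```

In a ROS deployment this bookkeeping is normally delegated to the tf2 transform library rather than hand-rolled, but the underlying composition is the same.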
Object detection AIs enable robust solutions for fast, automated detection of anomalies in operating environments such as airfields. Implementation of AI solutions requires training models on a large and diverse corpus of representative training data. To reliably detect craters and other damage on airfields, the AI must be trained on a large, varied, and realistic set of images of craters and other damage. The current method for obtaining this training data is to set explosives in the concrete surface of a test airfield to create actual damage and to record images of this real data. This approach is extremely expensive and time consuming, results in relatively little data representing just a few damage cases, and does not adequately represent the UXO and other artifacts to be detected. To address this paucity of training data, we have begun development of a training data generation and labeling pipeline that leverages Unreal Engine 4 to create realistic synthetic environments populated with realistic damage and artifacts. We have also developed a labeling system for automatic labeling of the detection segments in synthetic training images, in order to provide relief from the tedious and time-consuming process of manually labeling segments in training data and eliminate human errors incurred by manual labeling. We present comparisons of performance of two object detection AIs trained on real and synthetic data and discuss cost and schedule savings enabled by the automated labeling system used for labeling of detection segments.
KEYWORDS: Clouds, LIDAR, Detection and tracking algorithms, Data modeling, Algorithm development, Data acquisition, Sensors, Visualization, Object recognition, C++
The ability to rapidly assess damage to military infrastructure after an attack is the object of ongoing research. In the case of runways, sensor systems capable of detecting and locating craters, spall, unexploded ordnance, and debris are necessary to quickly and efficiently deploy assets to restore a minimum airfield operating surface. We describe measurements performed using two commercial, robotic scanning LiDAR systems during a round of testing at an airfield. The LiDARs were used to acquire baseline data and to conduct scans of the entire runway after two rounds of demolition and placement of artifacts. Configuration of the LiDAR systems was suboptimal because only two platforms were available, requiring placement of both sensors on the same side of the runway. Nevertheless, the results show that the spatial resolution, accuracy, and cadence of the sensors are sufficient to develop point cloud representations of the runway that distinguish craters, debris, and most UXO. Locating a complementary set of sensors on the opposite side of the runway would alleviate the observed shadowing, increase the density of the registered point cloud, and likely allow detection of smaller artifacts. Importantly, the synoptic data acquired by these static LiDAR sensors are dense enough to allow registration (fusion) with the smaller, denser, targeted point cloud data acquired at close range by unmanned aerial systems. The paper also discusses point cloud manipulation and 3D object recognition algorithms that the team is developing for automatic detection and geolocation of damage and objects of interest.
KEYWORDS: Optical filters, Sensors, Polarizers, Imaging systems, Signal to noise ratio, Staring arrays, Code v, RGB color model, Modeling and simulation, Linear polarizers
A new, economical, lenslet-array-based imaging sensor design is proposed, simulated, and analyzed. In this investigation a bare lenslet array model is first developed in Code V®. The results show that, as expected, intolerable optical crosstalk is present in this simple system. This problem has been addressed in previous systems via the inclusion of a physical image separation layer. The alternative system proposed here to alleviate crosstalk involves the introduction of both polarizers and spectral filters. As a consequence, this simple system design also provides spectro-polarimetric resolution. Simulations were developed in order to analyze the performance of two designs. The simulation results were analyzed in terms of a measure of signal-to-noise ratio (SNR) and in terms of an ensquared energy that includes all subimages. The results show that a design employing only a few spectral filters suppresses crosstalk for objects of small angular extent but does not suppress crosstalk to a tolerable level for 2π steradian illumination, as evidenced by SNR less than one. However, the inclusion of more spectral filters results in a spectro-polarimetric thin imager design that suppresses crosstalk and provides finer spectral resolution without the inclusion of a signal separation layer.
The Marshall Grazing Incidence X-ray Spectrograph (MaGIXS) is a proposed sounding rocket experiment designed to observe spatially resolved soft X-ray spectra of the solar corona for the first time. The instrument is a purely grazing-incidence design, consisting of a Wolter Type-1 sector telescope and a slit spectrograph. The telescope mirror is a monolithic Zerodur mirror containing both the parabolic and hyperbolic surfaces. The spectrograph comprises a pair of paraboloid mirrors acting as a collimator and reimaging mirror, and a planar varied-line-space grating, with all reflective surfaces operating at a graze angle of 2 degrees. This produces a flat spectrum on a detector covering a wavelength range of 6-24 Å (0.5-1.2 keV). The design achieves 20 mÅ spectral resolution (10 mÅ/pixel) and 5 arcsec spatial resolution (2.5 arcsec/pixel) over an 8-arcminute long slit. The spectrograph is currently being fabricated as a laboratory prototype. A flight candidate telescope mirror is also under development.
This paper will describe a new Extreme Ultraviolet (EUV) test facility that is being developed at the Marshall Space Flight Center (MSFC) to test EUV telescopes. Two flight programs, Hi-C, the High Resolution Coronal Imager (a sounding rocket program), and SUVI, the Solar Ultraviolet Imager (GOES-R), set the requirements for this new facility. This paper will discuss those requirements, the EUV source characteristics, the wavelength resolution that is expected, and the vacuum chambers (Stray Light Facility, X-ray Calibration Facility, and the NSSTC EUV test chamber) where this facility will be used.
KEYWORDS: Fabry–Perot interferometers, Calibration, Space telescopes, Data modeling, Transmittance, Telescopes, Spectral calibration, Solar processes, Helium neon lasers, Imaging systems
We present the methods and results for the figure testing and spectral calibration of the narrow- and wide-band etalons for the Improved Solar Observing Optical Network's (ISOON) dual-etalon tunable imaging filters. The ISOON system comprises a distributed network of ground-based patrol telescopes that gather full-disk data for the monitoring of solar activity and for the development of more reliable space weather models. The etalon figure testing consists mainly of testing the cavity flatness and coating uniformity of each etalon. For this testing, a series of exposures is taken as the etalon is tuned through a stable spectral line, and a full-aperture line profile correlation method is employed to map the variations in the effective cavity thickness. Calibration of the etalons includes absolute calibration of the cavity mean spacing change corresponding to a controller step and calibration of plate parallelism and spacing settings for each spectral region of interest. Developmental acceptance testing and calibration procedures were performed in a laboratory environment using a HeNe laser source. A calibration method that uses illumination in the telluric lines is also described. This latter method could be used to conduct calibration in the field without the use of an artificial light source.
The solar chromosphere is an important boundary, through which all of the plasma, magnetic fields, and energy in the corona and solar wind are supplied. Since the Zeeman splitting is typically smaller than the Doppler line broadening in the chromosphere and transition region, the Zeeman effect is not effective for exploring weak magnetic fields there. This is not the case, however, for the Hanle effect, given an instrument with high polarization sensitivity (~0.1%). The Chromospheric Lyman-Alpha SpectroPolarimeter (CLASP) is a sounding rocket experiment to detect linear polarization produced by the Hanle effect in the Lyman-alpha line (121.567 nm) and to make the first direct measurement of magnetic fields in the upper chromosphere and lower transition region. To achieve the high sensitivity of ~0.1% within a rocket flight (5 minutes) in the Lyman-alpha line, which is easily absorbed by materials, we design the optical system mainly with reflections. CLASP consists of a classical Cassegrain telescope, a polarimeter, and a spectrometer. The polarimeter consists of a rotating 1/2-wave plate and two reflecting polarization analyzers. One of the analyzers also works as a polarization beam splitter to give us two orthogonal linear polarizations simultaneously. CLASP is planned to be launched in summer 2014.
We use the two-dimensional Chebyshev polynomials as the basis for decomposition of test data over rectangular apertures, particularly for anamorphic optics. This includes simple optics, such as cylindrical lenses and mirrors, as well as complex optics, such as aspheric cylindrical optics. The new basis set is strictly orthogonal over rectangles of arbitrary aspect ratio, and its terms correspond well with the aberrations of systems containing such optics. An example is given that applies the new basis set to study the surface figure error of a cylindrical Schmidt corrector plate. It is not only an excellent fitting basis but can also be used to flag misalignment errors that are critical to fabrication.
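As an illustration of decomposition over a rectangular aperture, the following sketch uses NumPy's 2D Chebyshev tools to recover known coefficients from a synthetic surface sampled on a 2:1 rectangle; the degrees and the surface itself are illustrative assumptions, not the paper's test data:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sample a synthetic surface over a rectangular aperture (2:1 aspect ratio),
# with each physical axis mapped onto [-1, 1] for the Chebyshev basis.
nx, ny = 64, 32
x = np.linspace(-1.0, 1.0, nx)
y = np.linspace(-1.0, 1.0, ny)
X, Y = np.meshgrid(x, y)

# Surface built from known basis terms: 0.3*T2(x) + 0.1*T1(x)*T1(y).
surface = 0.3 * (2.0 * X**2 - 1.0) + 0.1 * X * Y

# Least-squares fit of 2D Chebyshev coefficients up to degree 3 in each axis.
V = C.chebvander2d(X.ravel(), Y.ravel(), [3, 3])   # design matrix
coeffs, *_ = np.linalg.lstsq(V, surface.ravel(), rcond=None)
coeffs = coeffs.reshape(4, 4)   # coeffs[i, j] multiplies Ti(x) * Tj(y)
```

Because the synthetic surface lies exactly in the span of the basis, the fit recovers `coeffs[2, 0] ≈ 0.3` and `coeffs[1, 1] ≈ 0.1` to machine precision; on real interferometric data, residuals and cross terms would flag misalignment.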
We describe an evolutionary algorithm for the design of an imaging triple-étalon Fabry-Pérot interferometer (MFPI), which gives a solution to the multidimensional minimization process through a stochastic search method. The interactions between design variables (the étalon reflectances, interétalon ghost attenuator transmittances, and spacing ratios) are complex, resulting in a fitness landscape that is pitted with local optima. Traditional least-squares and gradient descent algorithms are not useful in such a situation. Instead, we employ a method called evolution strategies, in which several preliminary designs are randomly generated subject to constraints. These designs are combined in pairs to produce offspring designs. The offspring population is mutated randomly, and only the fittest designs of the combined population are passed to the next iteration of the evolutionary process. We discuss the evolution strategies method itself, as well as its application to the specific problem of the design of an incoherently coupled triple-étalon interferometer intended for use as a focal plane instrument in the planned National Solar Observatory's Advanced Technology Solar Telescope (NSO's ATST). The algorithm converges quickly to a reasonable design that is well within the constraints imposed on the design variables, and which fulfills all resolution, signal-to-noise, throughput, and parasitic band suppression requirements.
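The evolution strategies loop described above (recombine parent pairs, mutate the offspring, then keep only the fittest of the combined population) can be sketched as follows. The toy merit function, population sizes, and mutation strength are assumptions for illustration, not the actual étalon design problem:

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(fitness, bounds, mu=10, lam=40, sigma=0.1, generations=200):
    """Minimal (mu + lambda) evolution strategy with intermediate recombination.

    fitness: maps a design vector to a scalar merit (lower is better).
    bounds: (n, 2) array of per-variable constraints.
    """
    lo, hi = bounds[:, 0], bounds[:, 1]
    # Randomly generate an initial population of feasible designs.
    pop = rng.uniform(lo, hi, size=(mu, len(lo)))
    for _ in range(generations):
        # Combine random parent pairs (intermediate recombination).
        parents = pop[rng.integers(mu, size=(lam, 2))]
        offspring = parents.mean(axis=1) + sigma * rng.normal(size=(lam, len(lo)))
        offspring = np.clip(offspring, lo, hi)        # enforce constraints
        # (mu + lambda) selection: fittest of the combined population survive.
        combined = np.vstack([pop, offspring])
        scores = np.array([fitness(d) for d in combined])
        pop = combined[np.argsort(scores)[:mu]]
    return pop[0]

# Toy multimodal objective standing in for the étalon merit function:
# a quadratic bowl pitted with local optima, as in the design landscape.
best = evolve(lambda d: np.sum(d**2) + 0.5 * np.sum(np.sin(8 * d)**2),
              bounds=np.array([[-1.0, 1.0]] * 3))
```

Because selection never discards the best design found, this scheme makes steady progress on pitted landscapes where gradient descent would stall in a local optimum.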
We present four preliminary designs for a telecentric optical train supporting the Advanced Technology Solar Telescope (ATST) multiple Fabry-Pérot interferometer (MFPI), which is to be used as an imaging spectrometer and imaging spectropolarimeter. The point of departure for all four designs is the F/40 telecentric image at the Coudé focus of the ATST. The first design, representing the high-spectral-resolution mode of operation, produces an intermediate F/300 telecentric image within the triple étalon system and a 34-arcsec field of view (FOV). The second design, intermediate between high- and low-spectral-resolution modes of operation, produces an intermediate F/150 telecentric image at the étalons and a 1.1-arcmin FOV. The third and fourth designs each represent a low-resolution mode of operation, producing an F/82 telecentric image at the étalons and a 2-arcmin FOV. Each design results in good telecentricity and image quality. Departures from telecentricity at the intermediate image plane cause field-dependent shifts of the bandpass peak, which are negligible compared to the bandpass FWHM. The root mean square (rms) geometric spot sizes at the final image plane fit well within the area of a camera pixel, which is itself, in accordance with the Nyquist criterion, half the width of the 28-µm-wide resolution element (as determined from the diffraction limit of the ATST). For each configuration, we also examine the impact that the Beckers effect (the pupil apodization caused by the angle-dependent amplitude transmittance of the MFPI) has on the image quality of the MFPI instrument.
This paper will describe the evolution of the Marshall Space Flight Center's (MSFC) electro-optical polarimeter with emphasis on the field-of-view characteristics of the KD*P modulator. Understanding those characteristics was essential to the success of the MSFC solar vector magnetograph. The paper will show how the field-of-view errors of KD*P look similar to the linear polarization patterns seen in simple sunspots and why the placement of the KD*P in a collimated beam was essential in separating the instrumental polarization from the solar signal. Finally, this paper will describe a modulator design which minimizes those field-of-view errors.
The successful augmentation of NASA's X-Ray Cryogenic Facility (XRCF) at the Marshall Space Flight Center (MSFC) to an optical metrology testing facility for the Sub-scale Beryllium Mirror Demonstrator (SBMD) and NGST Mirror Sub-scale Demonstrator (NMSD) programs required significant modifications and enhancements to achieve reliable data. In addition to building and integrating both a helium shroud and a rugged, stable platform to support a wavefront sensor, a custom sensor suite was assembled and integrated to meet the test requirements. The metrology suite consisted of a high-resolution Shack-Hartmann sensor, a point diffraction interferometer, a point spread function camera, and a radius of curvature measuring device.
The evolution from the SBMD and NMSD tests to the Advanced Mirror System Demonstrator (AMSD) program is less dramatic in some ways, such as the reutilization of the existing helium shroud and sensor support structure. However, significant modifications were required to meet the AMSD program's more stringent test requirements and conditions, resulting in a substantial overhaul of the sensor suite and test plan. This paper will discuss the instrumentation changes made for AMSD, including the interferometer selection and null optics. The error budget for the tests will be presented using modeling and experimental data. We will show how the facility is ready to meet the test requirements.