Object detection AIs enable robust solutions for fast, automated detection of anomalies in operating environments such as airfields. Implementing such AI solutions requires training models on a large and diverse corpus of representative data: to reliably detect craters and other damage on airfields, the AI must be trained on a large, varied, and realistic set of images of that damage. The current method for obtaining this training data is to set explosives in the concrete surface of a test airfield to create actual damage and to record images of the result. This approach is extremely expensive and time-consuming, yields relatively little data covering only a few damage cases, and does not adequately represent the UXO and other artifacts that must also be detected. To address this paucity of training data, we have begun development of a training-data generation and labeling pipeline that leverages Unreal Engine 4 to create realistic synthetic environments populated with realistic damage and artifacts. We have also developed a system for automatic labeling of the detection segments in synthetic training images, which relieves the tedious, time-consuming process of manually labeling segments and eliminates the human errors that manual labeling incurs. We present performance comparisons of two object detection AIs trained on real and synthetic data and discuss the cost and schedule savings enabled by the automated labeling system.
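As a rough illustration of how such automatic labeling could work, the sketch below assumes each synthetic frame is rendered with a companion instance-ID mask (for example via an engine stencil or object-ID pass) and converts that mask into YOLO-style bounding-box labels. The file names, mask format, and ID-to-class map are hypothetical and are not taken from the paper.

```python
# Minimal sketch of automatic label generation from synthetic renders.
# Assumption (not from the paper): each synthetic RGB frame is paired with an
# instance-ID mask in which every damage object or artifact has a unique
# integer ID and 0 is background.
import numpy as np
from PIL import Image

def masks_to_yolo_labels(mask_path: str, class_of_id: dict[int, int]) -> list[str]:
    """Convert an instance-ID mask into YOLO-style labels:
    'class x_center y_center width height', all normalized to [0, 1]."""
    mask = np.array(Image.open(mask_path))
    h, w = mask.shape[:2]
    labels = []
    for inst_id in np.unique(mask):
        if inst_id == 0:                      # background
            continue
        ys, xs = np.nonzero(mask == inst_id)  # pixels belonging to this instance
        x0, x1 = xs.min(), xs.max() + 1
        y0, y1 = ys.min(), ys.max() + 1
        cls = class_of_id.get(int(inst_id), 0)  # hypothetical ID -> class map
        labels.append(
            f"{cls} {(x0 + x1) / (2 * w):.6f} {(y0 + y1) / (2 * h):.6f} "
            f"{(x1 - x0) / w:.6f} {(y1 - y0) / h:.6f}"
        )
    return labels

# Example usage: one label file per rendered frame.
# with open("frame_000123.txt", "w") as f:
#     f.write("\n".join(masks_to_yolo_labels("frame_000123_ids.png", {1: 0, 2: 1})))
```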
We present results from testing a multi-modal sensor system (consisting of a camera, LiDAR, and positioning system) for real-time object detection and geolocation. The system's eventual purpose is to assess damage and detect foreign objects on a disrupted airfield surface in order to reestablish a minimum airfield operating surface. It uses an AI to detect objects and generate bounding boxes or segmentation masks in data acquired with a high-resolution area-scan camera. It locates the detections in the local, sensor-centric coordinate system in real time using returns from a low-cost commercial LiDAR system. This is accomplished via an intrinsic camera calibration together with a 3D extrinsic calibration of the camera-LiDAR pair. A coordinate transform service uses data from a navigation system (comprising an inertial measurement unit and global positioning system) to transform the local coordinates of the detections obtained with the AI and calibrated sensor pair into earth-centered coordinates. The entire sensor system is mounted on a pan-tilt unit to achieve 360-degree perception. All data acquisition and computation are performed on a low-SWaP-C system-on-module that includes an integrated GPU. Computer vision code runs in real time on the GPU and has been accelerated using CUDA. We have chosen the Robot Operating System (ROS1 at present, porting to ROS2 in the near term) as the control framework for the system. All computer vision, motion, and transform services are configured as ROS nodes.
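The sketch below illustrates one way the camera-LiDAR fusion step could be implemented, assuming a 3x3 intrinsic matrix and a 4x4 extrinsic LiDAR-to-camera transform produced by the calibration described above. The function and variable names are illustrative and are not taken from the system's code.

```python
# Minimal sketch: locate an AI detection in the sensor-centric frame by
# projecting LiDAR returns into the image and selecting those inside the
# detection's bounding box. K and T_cam_lidar are assumed calibration outputs.
import numpy as np

def locate_detection(points_lidar: np.ndarray,   # (N, 3) LiDAR returns, LiDAR frame
                     K: np.ndarray,               # (3, 3) camera intrinsics
                     T_cam_lidar: np.ndarray,     # (4, 4) LiDAR -> camera extrinsic
                     bbox: tuple[float, float, float, float]  # (u0, v0, u1, v1) pixels
                     ) -> np.ndarray:
    """Return the median 3D point (camera frame) whose projection falls
    inside the detection's bounding box."""
    # Transform LiDAR points into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0]          # keep points in front of the camera

    # Project into pixel coordinates.
    uvw = (K @ pts_cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]

    u0, v0, u1, v1 = bbox
    inside = (uv[:, 0] >= u0) & (uv[:, 0] <= u1) & (uv[:, 1] >= v0) & (uv[:, 1] <= v1)
    return np.median(pts_cam[inside], axis=0)     # median is robust to edge returns
```

In a setup like this, the resulting camera-frame point would then be handed to the coordinate transform service, which applies the IMU/GPS pose to produce earth-centered coordinates.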
KEYWORDS: Clouds, LIDAR, Detection and tracking algorithms, Data modeling, Algorithm development, Data acquisition, Sensors, Visualization, Object recognition, C++
The ability to rapidly assess damage to military infrastructure after an attack is the subject of ongoing research. In the case of runways, sensor systems capable of detecting and locating craters, spall, unexploded ordnance (UXO), and debris are needed to quickly and efficiently deploy assets to restore a minimum airfield operating surface. We describe measurements performed with two commercial, robotic scanning LiDAR systems during a round of testing at an airfield. The LiDARs were used to acquire baseline data of the entire runway and to conduct scans after two rounds of demolition and placement of artifacts. The configuration of the LiDAR systems was sub-optimal because only two platforms were available, which placed both sensors on the same side of the runway. Nevertheless, the results show that the spatial resolution, accuracy, and cadence of the sensors are sufficient to develop point cloud representations of the runway that distinguish craters, debris, and most UXO. Locating a complementary set of sensors on the opposite side of the runway would alleviate the observed shadowing, increase the density of the registered point cloud, and likely allow detection of smaller artifacts. Importantly, the synoptic data acquired by these static LiDAR sensors are dense enough to allow registration (fusion) with the smaller, denser, targeted point clouds acquired at close range by unmanned aerial systems. The paper also discusses the point cloud manipulation and 3D object recognition algorithms that the team is developing for automatic detection and geolocation of damage and objects of interest.
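To illustrate the kind of registration (fusion) described above, the sketch below aligns a close-range UAS point cloud to the synoptic static-LiDAR cloud with point-to-plane ICP using Open3D. The library choice, file names, and parameters are assumptions for illustration; the paper does not specify the registration method or software used.

```python
# Minimal sketch of registering (fusing) a dense close-range UAS point cloud
# with the synoptic static-LiDAR cloud using Open3D's point-to-plane ICP.
import numpy as np
import open3d as o3d

def register_uas_to_lidar(uas_path: str, lidar_path: str,
                          voxel: float = 0.05) -> np.ndarray:
    """Return the 4x4 transform aligning the UAS cloud to the static LiDAR cloud."""
    uas = o3d.io.read_point_cloud(uas_path)
    lidar = o3d.io.read_point_cloud(lidar_path)

    # Downsample and estimate normals (required for point-to-plane ICP).
    uas_d = uas.voxel_down_sample(voxel)
    lidar_d = lidar.voxel_down_sample(voxel)
    for pc in (uas_d, lidar_d):
        pc.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=4 * voxel, max_nn=30))

    # Refine from an initial guess (identity here; a GPS/IMU pose in practice).
    result = o3d.pipelines.registration.registration_icp(
        uas_d, lidar_d,
        max_correspondence_distance=2 * voxel,
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation

# Example usage (hypothetical file names):
# transform = register_uas_to_lidar("uas_crater_scan.ply", "runway_synoptic.ply")
# Applying 'transform' merges the targeted UAS scan into the runway-wide cloud.
```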