Timely response to an oil spill requires continuous areal monitoring of the ocean surface around oil facilities. Over the last several years, Polaris Sensor Technologies has tested the Pyxis uncooled microbolometer-based polarimetric camera for oil spill response at the Department of the Interior's Ohmsett test facility and has demonstrated excellent detection of multiple types of crude oil, diesel, and kerosene in still water and in waves, during the day and overnight, including strong detection of emulsified oil in waves. Further testing in the Gulf of Mexico and at the Santa Barbara seeps has also been completed. In this paper, we report these test results as well as a Pyxis-based autonomous detection system for continuous monitoring. Finally, we describe potential operational scenarios, including deployment on fixed, floating, and drone platforms, in which this technology could be exploited for spill recovery operations as well as for automated monitoring.
Manmade objects typically exhibit polarized signatures that can be utilized for target detection more effectively than thermal signatures alone. IR polarimetric imaging can also suppress clutter to highlight manmade objects in thermal equilibrium. However, quantifying the contrast improvement of polarimetric signatures over that of thermal signatures of the same scene has proved problematic due to the nature and dynamic range of the data. The Pyxis camera, a microbolometer-based imaging polarimeter that produces live polarimetric video of conventional, polarimetric, and fused image products and is small enough to be mounted on commercial drones, has collected datasets that demonstrate the improvement. In this paper, we describe the Pyxis camera, show representative data, and present the metrics used to characterize the improvement of polarimetric imaging over that of conventional thermal imaging.
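The abstract does not specify the contrast metric used; a common choice for this kind of comparison is the signal-to-clutter ratio (SCR). The sketch below, on purely synthetic data (not measured Pyxis imagery), illustrates why a target that is barely warmer than its background can still stand out strongly in a degree-of-linear-polarization (DoLP) image:

```python
import numpy as np

def signal_to_clutter(image, target_mask):
    """Signal-to-clutter ratio: |mean(target) - mean(clutter)| / std(clutter)."""
    target = image[target_mask]
    clutter = image[~target_mask]
    return abs(target.mean() - clutter.mean()) / clutter.std()

# Synthetic scene: a target barely warmer than the clutter in the thermal
# frame, but strongly polarized in the DoLP frame. All numbers are
# illustrative assumptions, not measured data.
rng = np.random.default_rng(0)
thermal = rng.normal(300.0, 2.0, (64, 64))      # cluttered thermal background
dolp = abs(rng.normal(0.01, 0.005, (64, 64)))   # weakly polarized background
mask = np.zeros((64, 64), dtype=bool)
mask[28:36, 28:36] = True
thermal[mask] += 1.0   # ~0.5-sigma thermal contrast
dolp[mask] += 0.05     # ~10-sigma polarimetric contrast

print(signal_to_clutter(thermal, mask))
print(signal_to_clutter(dolp, mask))
```

With these assumed numbers the polarimetric SCR comes out roughly an order of magnitude above the thermal SCR, which is the kind of improvement such a metric is designed to quantify.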
Infrared polarimetry is an emerging sensing modality that offers the potential for significantly enhanced contrast in situations where conventional thermal imaging falls short. Polarimetric imagery leverages the different polarization signatures that result from material differences, surface roughness quality, and geometry that are frequently different from those features that lead to thermal signatures. Imaging of the polarization in a scene can lead to enhanced understanding, particularly when materials in a scene are at thermal equilibrium. Polaris Sensor Technologies has measured the polarization signatures of oil on water in a number of different scenarios and has shown significant improvement in detection through the contrast improvement offered by polarimetry. The sensing improvement offers the promise of automated detection of oil spills and leaks for routine monitoring and accidents with the added benefit of being able to continue monitoring at night. In this paper, we describe the instrumentation and the results of several measurement exercises in both controlled and uncontrolled conditions.
The instrumentation for measuring infrared polarization signatures has seen significant advancement over the last decade.
Previous work has shown the value of polarimetric imagery for a variety of target detection scenarios, including detection
of manmade targets in clutter and detection of ground and maritime targets, while recent work has shown improvements in
contrast for aircraft detection and biometric markers. These data collection activities have generally used laboratory or
prototype systems with limitations on the allowable amount of target motion or the sensor platform and usually require an
attached computer for data acquisition and processing. Still, performance and sensitivity have been steadily improving
while size, weight, and power requirements have been shrinking, enabling polarimetric imaging for a greater range of
real-world applications.
In this paper, we describe Pyxis®, a microbolometer based imaging polarimeter that produces live polarimetric video of
conventional, polarimetric, and fused image products. A polarization microgrid array integrated in the optical system
captures all polarization states simultaneously and makes the system immune to motion artifacts of either the sensor or the
scene. The system is battery operated, rugged, and weighs about a quarter pound, and can be helmet mounted or handheld.
On board processing of polarization and fused image products enable the operator to see polarimetric signatures in real
time. Both analog and digital outputs are possible with sensor control available through a tablet interface. A top level
description of Pyxis® is given followed by performance characteristics and representative data.
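The single-frame capture described above is what makes the system immune to motion artifacts: every 2×2 superpixel carries all four analyzer states at once. A minimal sketch of the standard microgrid Stokes arithmetic follows; the analyzer ordering within the superpixel is an assumption for illustration, as the actual Pyxis layout is not given in this abstract:

```python
import numpy as np

def microgrid_stokes(raw):
    """Stokes images from a single microgrid polarimeter frame.

    Assumes 2x2 superpixels of linear analyzers at 0/45/90/135 degrees in
    the ordering below (an illustrative assumption, not the Pyxis layout).
    Because all four states come from one frame, no temporal multiplexing
    is required, so sensor or scene motion does not corrupt the products.
    """
    i0   = raw[0::2, 0::2].astype(float)
    i45  = raw[0::2, 1::2].astype(float)
    i135 = raw[1::2, 0::2].astype(float)
    i90  = raw[1::2, 1::2].astype(float)
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # 0 deg vs 90 deg
    s2 = i45 - i135                      # +45 deg vs -45 deg
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
    return s0, s1, s2, dolp
```

For example, a scene with excess flux in the 0° channel yields positive S1 and nonzero DoLP, while equal counts in all four channels yield DoLP of zero.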
The infrared polarization signature of a surface depends on its temperature, roughness, and material properties; the aspect angle to the sensor; and the sky down-welling and background radiance reflecting from the target. Often, the polarization signature of a manmade target differs from that of the surrounding background, and that difference frequently persists even when the thermal signature of the same target blends into the background. This paper will present maritime, airborne, and ground data sets of polarization signatures of several objects that allow detection when other methods fall short.
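The aspect-angle dependence noted above can be illustrated with a simple Fresnel model of polarized thermal emission from a smooth dielectric surface; the refractive index used here is an illustrative value, not a measured material property:

```python
import numpy as np

def emission_dolp(theta_deg, n=1.5):
    """Degree of linear polarization of thermal emission from a smooth
    dielectric at viewing angle theta_deg, via Fresnel reflectivities and
    Kirchhoff's law (emissivity = 1 - reflectivity, per polarization).
    n=1.5 is an illustrative refractive index, not a measured value."""
    theta = np.radians(theta_deg)
    theta_t = np.arcsin(np.sin(theta) / n)  # Snell's law, transmitted angle
    rs = (np.cos(theta) - n * np.cos(theta_t)) / (np.cos(theta) + n * np.cos(theta_t))
    rp = (n * np.cos(theta) - np.cos(theta_t)) / (n * np.cos(theta) + np.cos(theta_t))
    es, ep = 1.0 - rs**2, 1.0 - rp**2       # per-polarization emissivities
    return (ep - es) / (ep + es)
```

The model predicts zero polarization at normal incidence and increasing DoLP toward grazing angles, which is one reason the viewing geometry matters so much for these signatures.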
Improved situational awareness results not only from improved performance of imaging hardware, but also when the
operator and human factors are considered. Situational awareness for IR imaging systems frequently depends on the
contrast available. A significant improvement in effective contrast for the operator can result when depth perception is
added to the display of IR scenes. Depth perception through flat panel 3D displays is now possible due to the number of
3D displays entering the consumer market. Such displays require appropriate, human-friendly stereo IR video input in
order to be effective in the dynamic military environment. We report on a stereo IR camera that has been developed for
integration on to an unmanned ground vehicle (UGV). The camera has auto-convergence capability that significantly
reduces ill effects due to image doubling, minimizes focus-convergence mismatch, and eliminates the need for the
operator to manually adjust camera properties. Discussion of the size, weight, and power requirements as well as
integration onto the robot platform will be given, along with a description of stand-alone operation.
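The auto-convergence behavior described above reduces to simple stereo geometry: toe the cameras in so their optical axes cross at the range of interest, which drives the on-screen disparity of that object toward zero. The baseline, range, and focal-length values below are illustrative assumptions, not the actual camera parameters:

```python
import math

def convergence_angle_deg(baseline_m, range_m):
    """Per-camera toe-in angle (degrees) so both optical axes cross at the
    target range. Pure geometry; parameters here are illustrative only."""
    return math.degrees(math.atan2(baseline_m / 2.0, range_m))

def screen_disparity_px(baseline_m, range_m, focal_px):
    """Horizontal disparity (pixels) of a point at range_m for a parallel-axis
    stereo rig; auto-convergence drives this toward zero for the object of
    interest, which is what reduces image doubling for the operator."""
    return focal_px * baseline_m / range_m

# Illustrative numbers: 6.5 cm baseline, 800 px focal length.
for r in (1.0, 2.0, 5.0):
    print(r, convergence_angle_deg(0.065, r), screen_disparity_px(0.065, r, 800.0))
```

Note how both the required toe-in angle and the uncorrected disparity fall off with range, which is why convergence mismatch is mostly a close-range problem for manipulation tasks.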
KEYWORDS: 3D displays, Visualization, Video, Cameras, 3D modeling, LCDs, 3D image processing, Display technology, Sensor technology, Defense and security
In this paper, we report on the development of a high definition stereoscopic liquid crystal display for use in a variety of
applications. The display technology provides full spatial and temporal resolution on a liquid crystal display panel
consisting of 1920×1200 pixels at 60 frames per second. Applications include training, mission rehearsal and planning,
and enhanced visualization. Display content can include mixed 2D and 3D data. Source data can be 3D video from
cameras, computer generated imagery, or fused data from a variety of sensor modalities. Recent work involving
generation of 3D terrain from aerial imagery will be demonstrated. Discussion of the use of this display technology in
military and medical industries will be included.
In this paper, we report on the development of a 3D vision field upgrade kit for the TALON robot, consisting of a
replacement flat panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for
robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a
TALON IV Robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and
removed, and upgrade components coupled directly to the mounting and electrical connections. A replacement display,
replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed
focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness
as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.
In this paper, we report on the use of a 3D vision field upgrade kit for the TALON robot, consisting of a replacement flat
panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for robotic driving,
manipulation, and surveillance operations was conducted. A replacement display, replacement mast camera with zoom,
auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the
upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The
stereo gripper camera allows for improved manipulation in typical TALON missions.
In this paper, we report on the development of a high definition stereoscopic liquid crystal display for use in training
applications. The display technology provides full spatial and temporal resolution on a liquid crystal display panel
consisting of 1920×1200 pixels at 60 frames per second. Display content can include mixed 2D and 3D data. Source data
can be 3D video from cameras, computer generated imagery, or fused data from a variety of sensor modalities.
Discussion of the use of this display technology in military and medical industries will be included. Examples of use in
simulation and training for robot tele-operation, helicopter landing, surgical procedures, and vehicle repair, as well as for
DoD mission rehearsal will be presented.
There is a strong need for the ability to terrestrially image resident space objects (RSOs) and other low earth orbit (LEO)
objects for Space Situational Awareness (SSA) applications. The Synthetic Aperture Imaging Polarimeter (SAIP)
investigates an alternative means for imaging an object in LEO illuminated by laser radiation. A prototype array
consisting of 36 division of amplitude polarimeters was built and tested. The design, assembly procedure, calibration
data, and test results are presented. All 36 polarimeters were calibrated to a high degree of accuracy. Pupil plane
imaging tests were performed using a cross-correlation image reconstruction algorithm to verify the prototype's
functionality.
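As a rough illustration of the cross-correlation approach mentioned above (the actual SAIP reconstruction is considerably more involved), here is a minimal FFT-based cross-correlation that recovers the relative shift between two frames:

```python
import numpy as np

def cross_correlate_shift(ref, img):
    """Estimate the integer-pixel shift of img relative to ref via FFT-based
    cross-correlation. A simplified stand-in for the reconstruction algorithm
    referenced in the abstract, not the SAIP processing itself."""
    spectrum = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(spectrum).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak coordinates to signed shifts (FFT correlation is circular).
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

Given a frame and a circularly shifted copy, the correlation peak lands at the shift, so the function returns the displacement directly; subpixel refinement and aperture-synthesis steps would build on this basic operation.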
The use of tele-operated Unmanned Ground Vehicles (UGVs) for military applications has grown significantly in recent
years with operations in both Iraq and Afghanistan. In both cases the safety of the Soldier or technician performing the
mission is improved by the large standoff distance afforded by the UGV, but the full performance capability of the
robotic system goes unused: the standard two-dimensional video system provides insufficient depth perception, so the
operator must slow the mission to ensure the safety of the UGV given the uncertainty of the perceived scene.
To address this, Polaris Sensor Technologies has developed, in a series of developments funded by the
Leonard Wood Institute at Ft. Leonard Wood, MO, a prototype Stereo Vision Upgrade (SVU) Kit for the Foster-Miller
TALON IV robot which provides the operator with improved depth perception and situational awareness, allowing for
shorter mission times and higher success rates. Because there are multiple 2D cameras being replaced by stereo camera
systems in the SVU Kit, and because the needs of the camera systems for each phase of a mission vary, there are a
number of tradeoffs and design choices that must be made in developing such a system for robotic tele-operation.
Additionally, human factors design criteria drive optical parameters of the camera systems which must be matched to the
display system being used. The problem space for such an upgrade kit will be defined, and the choices made in the
development of this particular SVU Kit will be discussed.
KEYWORDS: 3D vision, 3D visualizations, Cameras, Robotic systems, Analytical research, Imaging systems, Situational awareness sensors, 3D displays, Visualization, 3D acquisition
In September 2009 the Fort Leonard Wood Field Element of the US Army Research Laboratory - Human Research and
Engineering Directorate, in conjunction with Polaris Sensor Technologies and Concurrent Technologies Corporation,
evaluated the objective performance benefits of Polaris' 3D vision upgrade kit for the TALON small unmanned ground
vehicle (SUGV). This upgrade kit is a field-upgradable set of two stereo-cameras and a flat panel display, using only
standard hardware, data and electrical connections existing on the TALON robot. Using both the 3D vision system and a
standard 2D camera and display, ten active-duty Army Soldiers completed seven scenarios designed to be representative
of missions performed by military SUGV operators. Mission time savings (6.5% to 32%) were found for six of the seven
scenarios when using the 3D vision system. Operators not only completed tasks more quickly but, for six of the seven
scenarios, also made fewer mistakes in task execution. Subjective Soldier feedback was overwhelmingly in support of
pursuing 3D vision systems, such as the one evaluated, for fielding to combat units.
KEYWORDS: 3D visualizations, 3D displays, Cameras, Robots, Visualization, 3D image processing, Video, Situational awareness sensors, LCDs, Stereo vision systems
The flow of information among our armed forces is greater than ever and the workload on the
warfighter is increasing. A novel, stereo-based 3D display has been developed to aid the warfighter
by displaying information in a more intuitive fashion by exploiting depth perception. The flat panel
display has a footprint consistent with current and future vehicles, unmanned systems, and aircraft
and is capable of displaying analog 3D video and OpenGL 3D imagery. A description of the display
will be given along with discussion of the applications evaluated to date.
A flat panel stereoscopic display has been developed and tested for application in unmanned ground systems.
The flat panel display has a footprint that is only slightly thicker than the same size LCD display and has been
installed in the lid of a TALON OCU. The approach uses stacked LCD displays and produces live stereo
video with passive polarized glasses but no spatial or temporal multiplexing. The analog display, which is
available in sizes from 6.4" diagonal to 17" diagonal, produces 640 × 480 stereo imagery. A comparison of
soldiers' performance using live stereo video in 3D vs. 2D will be given, along with a description of the display
and discussion of the testing.
In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data
analysis; and the resulting performance assessment of the 3D vision system are reported.