Folded-path reflective and catadioptric optics are of growing interest, especially in the long-wave infrared (LWIR), due to continuing demands for reductions in imaging system size, weight and power (SWAP). We present the optical design and laboratory data for a compact, low-f/#, 50 mm focal length folded-path LWIR imaging system. The optical design uses four concentric aspheric mirrors, each described by annular aspheric functions well suited to the folded-path design space. The four mirrors are diamond turned onto two thin air-spaced aluminum plates that can be manually focused onto the uncooled LWIR microbolometer array detector. Stray light analysis will be presented to show how specialized internal baffling can reduce stray light propagation through the folded-path optical train. The system achieves near-diffraction-limited performance across the FOV with a 15 mm long optical train and a 5 mm back focal distance. The completed system is small enough to reside within a 3-inch-diameter ball gimbal.
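The abstract does not define the annular aspheric basis, so the following is only an illustrative sketch of one common parameterization: an even-asphere sag expanded about the mid-radius r_0 of the annular zone, so the conic and polynomial terms are fit only over the used portion of the aperture rather than the obscured center:

\[
z(r) \;=\; \frac{c\,(r - r_0)^2}{1 + \sqrt{1 - (1+k)\,c^2 (r - r_0)^2}} \;+\; \sum_{i=2}^{N} A_i \,(r - r_0)^i,
\qquad r_{\min} \le r \le r_{\max},
\]

where c is the base curvature, k the conic constant, A_i the aspheric coefficients, and r_0, r_min, and r_max describe the annular zone of each mirror.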
Wide field-of-view infrared sensor, data acquisition, and exploitation systems are being developed and tested for detecting activity and threats over extended areas. Limitations on the total number of pixels available in infrared arrays drive sensor design trades between achieving the widest total field of view and maintaining a ground sample distance small enough to allow automated tracking and activity detection. To allow accurate imagery geo-location, the sensor's optical characteristics, as well as its location and orientation, must be accurately recorded with each image. This paper will discuss system considerations for infrared imaging sensors for wide area persistent surveillance. We will present some uses of an advanced day/night sensor for wide area persistent surveillance that uses large, high-quality mid-wave infrared (MWIR) staring arrays in a fast step-stare stabilized mount and a Windows-based data acquisition and exploitation system.
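As a rough illustration of the pixel-budget trade described above, the sketch below computes nadir ground sample distance and single-frame swath from generic parameters; the altitude, pixel pitch, focal length, and array width are hypothetical examples, not parameters of the sensors in this paper.

```python
# Illustrative sketch of the field-of-view vs. ground-sample-distance trade
# for a staring array; all numeric values are hypothetical.

def gsd_nadir(altitude_m: float, pixel_pitch_m: float, focal_length_m: float) -> float:
    """Nadir ground sample distance: GSD = H * p / f."""
    return altitude_m * pixel_pitch_m / focal_length_m

def swath_m(pixels_across: int, gsd_m: float) -> float:
    """Ground swath covered by one frame of the staring array."""
    return pixels_across * gsd_m

if __name__ == "__main__":
    H, p, f = 3000.0, 15e-6, 0.25      # 3 km altitude, 15 um pitch, 250 mm lens
    g = gsd_nadir(H, p, f)             # ~0.18 m
    print(f"GSD:   {g:.2f} m")
    # A 1280-pixel-wide array covers only ~230 m per frame at this GSD,
    # which is why wide-area coverage pushes toward fast step-stare mounts.
    print(f"Swath: {swath_m(1280, g):.0f} m")
```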
FEATHAR (Fusion, Exploitation, Algorithms, and Targeting for High-Altitude Reconnaissance) is an ONR-funded effort to develop and test new tactical sensor systems specifically designed for small manned and unmanned platforms (payload weight < 50 lbs). This program is being directed and executed by the Naval Research Laboratory (NRL) in conjunction with the Space Dynamics Laboratory (SDL). FEATHAR has developed and integrated EyePod, a combined long-wave infrared (LWIR) and visible to near infrared (VNIR) optical survey and inspection system, with NuSAR, a dual-band synthetic aperture radar (SAR) system. These sensors are being tested in conjunction with other ground and airborne sensor systems to demonstrate intelligent real-time cross-sensor cueing and in-air data fusion. Results from test flights of the EyePod and NuSAR sensors will be presented.
EyePod is a compact survey-and-inspection day/night imaging sensor suite for small unmanned aircraft systems (UAS). EyePod generates georeferenced image products in real time from visible near infrared (VNIR) and long-wave infrared (LWIR) imaging sensors and was developed under the ONR-funded FEATHAR (Fusion, Exploitation, Algorithms, and Targeting for High-Altitude Reconnaissance) program. FEATHAR is being directed and executed by the Naval Research Laboratory (NRL) in conjunction with the Space Dynamics Laboratory (SDL); its goal is to develop and test new tactical sensor systems specifically designed for small manned and unmanned platforms (payload weight < 50 lbs). The EyePod suite consists of two VNIR/LWIR (day/night) gimbaled sensors that, combined, provide broad-area survey and focused inspection capabilities. Each EyePod sensor pairs an HD visible EO sensor with an LWIR bolometric imager, providing precision geo-referenced, fully digital EO/IR NITFS output imagery. The LWIR sensor is mounted on a patent-pending jitter-reduction stage to correct for the high-frequency motion typically found on small aircraft and unmanned systems. Details will be presented on both the wide-area and inspection EyePod sensor systems, their modes of operation, and results from recent flight demonstrations.
NuSAR (Naval Research Laboratory Unmanned Synthetic Aperture Radar) is a sensor developed under the ONR-funded FEATHAR (Fusion, Exploitation, Algorithms, and Targeting for High-Altitude Reconnaissance) program. FEATHAR is being directed and executed by the Naval Research Laboratory (NRL) in conjunction with the Space Dynamics Laboratory (SDL). FEATHAR's goal is to develop and test new tactical sensor systems specifically designed for small manned and unmanned platforms (payload weight < 50 lbs). NuSAR is a novel dual-band (L- and X-band) SAR capable of a variety of tactically relevant operating modes and detection capabilities. Flight test results will be described for narrow- and wide-bandwidth and narrow- and wide-azimuth-aperture operating modes.
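For context, the first-order SAR resolution relations below are standard results, not NuSAR-specific parameters; they show why bandwidth and azimuth aperture are the two knobs behind these operating modes. Range resolution is set by the transmitted bandwidth B, and cross-range resolution by the processed azimuth (synthetic-aperture) angle Δθ at wavelength λ:

\[
\delta_r = \frac{c}{2B}, \qquad \delta_{az} \approx \frac{\lambda}{2\,\Delta\theta},
\]

so the wide-bandwidth and wide-azimuth-aperture modes trade collection time and coverage for finer range and cross-range resolution, respectively.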
The Naval Research Laboratory (NRL) and Space Dynamics Laboratory (SDL) are executing a joint effort, DUSTER (Deployable Unmanned System for Targeting, Exploitation, and Reconnaissance), to develop and test a new tactical sensor system specifically designed for Tier II UAVs. The system is composed of two coupled near-real-time sensors: EyePod (VNIR/LWIR ball gimbal) and NuSAR (L-band synthetic aperture radar). EyePod consists of a jitter-stabilized LWIR sensor coupled with a dual focal-length optical system and a bore-sighted high-resolution VNIR sensor. The dual focal-length design, coupled with precision pointing and step-stare capabilities, enables EyePod to conduct wide-area survey and high-resolution inspection missions from a single flight pass. NuSAR is being developed with partners Brigham Young University (BYU) and Artemis, Inc., and consists of a wideband L-band SAR capable of large-area survey and embedded real-time image formation. Both sensors employ standard Ethernet interfaces and provide geo-registered NITFS output imagery. In the fall of 2007, field tests were conducted with both sensors, the results of which will be presented.
Previous studies introduced, examined, and tested a variety of registration-free transforms, specifically the diagonal, whitening/dewhitening, and target CV (covariance) transforms. These transforms temporally evolve spectral object signatures under varying conditions using imagery of regions of similar objects and content distribution from data sets collected at two different times. The transformed object signature is then inserted into the matched filter to search for targets. Spatially registering two areas and/or finding two suitable candidate regions for the transforms is often problematic. This study examines and finds that the average correlation coefficient between the corrected histograms of multi-spectral image cubes collected at two times can assess the similarity of the areas and predict object detection performance. This metric is applied in four distinct situations and tested on three independently collected data sets. In the first data set, from an airborne long wave infrared sensor that imaged objects in Florida, the histogram correlation was tested on registered images modified by systematically eliminating opposite ends of the image set. The second data set examined images of objects in Yellowstone National Park from a visible/near-IR multi-spectral sensor. The comparison was also applied to images of objects placed at Webster Field in Maryland, collected at oblique angles (10° depression angle). Candidate heterogeneous image areas were compared to each other using the average correlation coefficient and inserted into the statistical transforms. In addition, correlations were computed between corrected histograms based on the normalized difference vegetation index (NDVI), and this analysis was similarly applied to the data collected at oblique angles (10° depression angle). The net signal-to-clutter ratio depends on the average correlation coefficient with low p-values (p < 0.05). All statistical transforms (diagonal, whitening/dewhitening, target CV) performed comparably across the various backgrounds and scenarios. Objects that are spectrally distinct from the backgrounds followed the average correlation coefficient more closely than objects whose spectral signatures contained background components. This study is the first to examine the similarity of the corrected histograms and does not exclude other approaches for comparing areas.
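A minimal sketch of the metric described above, assuming "corrected histograms" means per-band normalized intensity histograms compared with a Pearson correlation and averaged over bands; the abstract does not define the correction, so the normalization, value range, and bin count here are assumptions.

```python
import numpy as np

def band_histogram(band: np.ndarray, bins: int = 256,
                   value_range=(0.0, 1.0)) -> np.ndarray:
    """Normalized intensity histogram of a single spectral band."""
    hist, _ = np.histogram(band, bins=bins, range=value_range, density=True)
    return hist

def avg_histogram_correlation(cube_a: np.ndarray, cube_b: np.ndarray,
                              bins: int = 256) -> float:
    """Average Pearson correlation between per-band histograms of two
    multi-spectral cubes of shape (bands, rows, cols). Bands with constant
    histograms would yield NaN and are not handled in this sketch."""
    assert cube_a.shape[0] == cube_b.shape[0], "band counts must match"
    corrs = [np.corrcoef(band_histogram(a, bins), band_histogram(b, bins))[0, 1]
             for a, b in zip(cube_a, cube_b)]
    return float(np.mean(corrs))

# Example: two simulated 8-band cubes with similar content distributions.
rng = np.random.default_rng(1)
cube_t1 = rng.random((8, 64, 64))
cube_t2 = np.clip(cube_t1 + 0.05 * rng.standard_normal((8, 64, 64)), 0, 1)
print(f"average histogram correlation: "
      f"{avg_histogram_correlation(cube_t1, cube_t2):.3f}")
```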
The NRL Optical Sciences Division has developed and demonstrated ground- and airborne-based control, display, and exploitation stations for simultaneous use of multiple dissimilar unmanned aerial vehicle (UAV) Intelligence, Surveillance, and Reconnaissance (ISR) systems. The demonstrated systems operate on airborne and ground mobile platforms and allow the control and exploitation of multiple on-board airborne and/or remote unmanned sensor systems simultaneously. The sensor systems incorporated into the control and display stations include visible and mid-wave infrared (EO/MWIR) panchromatic and visible-through-short-wave-infrared (VNIR-SWIR) hyperspectral (HSI) sensors of various operational types (including step-stare, push-broom, whisk-broom, and video). Demonstrated exploitation capabilities include real-time screening, sensor control, pre-flight and real-time payload/platform mission planning, geo-referenced imagery mosaicking, change detection, stereo imaging, moving target tracking, and networked dissemination to distributed exploitation nodes (man-pack, vehicle, and command centers). Results from real-time flight tests using ATR, Finder, and TERN UAVs are described.
The NRL Optical Sciences Division has initiated a multi-year effort to develop and demonstrate an airborne net-centric suite of multi-intelligence (multi-INT) sensors and exploitation systems for real-time target detection and targeting product dissemination. The goal of this Net-centric Multi-Intelligence Fusion Targeting Initiative (NCMIFTI) is to develop an airborne real-time intelligence gathering and targeting system that can be used to detect concealed, camouflaged, and mobile targets. The multi-INT sensor suite will include high-resolution visible/infrared (EO/IR) dual-band cameras, hyperspectral imaging (HSI) sensors in the visible-to-near-infrared, short-wave infrared, and long-wave infrared (VNIR/SWIR/LWIR) bands, Synthetic Aperture Radar (SAR), electronic intelligence (ELINT) sensors, and off-board networked sensors. Other sensors are also being considered for inclusion in the suite to address unique target detection needs. Integrating a suite of multi-INT sensors on a single platform should optimize real-time fusion of the on-board sensor streams, thereby improving the detection probability and reducing the false alarms that occur in reconnaissance systems that use single-sensor types on separate platforms, or that use independent target detection algorithms on multiple sensors. In addition to the integration and fusion of the multi-INT sensors, the effort is establishing an open-systems net-centric architecture that will provide a modular “plug and play” capability for additional sensors and system components and provide distributed connectivity to multiple sites for remote system control and exploitation.
An upgraded digital reconnaissance pod payload, denoted as “Full-Capability” (F-CAP), has been developed and demonstrated as part of the F-14 TARPS-CD (Tactical Air Reconnaissance Pod - Completely Digital) effort. A key improvement is the incorporation of the NRL-developed ARIES (Airborne Real-time Image Exploitation System) circuit card into the Reconnaissance Management System for in-cockpit display, processing, and geo-location of imagery from the TARPS-CD digital framing camera system. A special cockpit control panel allows the aircrew to quickly manipulate the video images (e.g., pan, zoom, and roam) and create an image segment. The annotated image segment can then be relayed via the F-14 Fast Tactical Imagery (FTI) link to the carrier or to a strike aircraft for target prosecution. A solid-state recorder allows near-instantaneous retrieval of full-resolution imagery recorded earlier, for use by the ARIES card or for transmittal to the ground/carrier via the 274-Mbps CDL link. The F-CAP pod is being evaluated in operational exercises by F-14 squadron VF-32 aboard the carrier USS Harry S. Truman.
The prototype effort within the SHAred Reconnaissance Pod (SHARP) program successfully demonstrated real-time reconnaissance operation of the prototype SHARP system on an F/A-18F and of the prototype SHARP payload on a P-3 in coordinated flights on August 28, 2001, in Washington, DC, with each aircraft downlinking imagery to a NAVIS ground station that displayed the imagery in real time. The principal technology objectives - to verify that dual-band camera technology was sufficiently mature and that the SHARP Reconnaissance Management System (SRMS) with its operating software could control the SHARP subsystems and deliver real-time high-bandwidth reconnaissance imagery - were achieved through demonstration flights. The prototype SHARP pod system is now used as a test asset in support of the E&MD phase of the SHARP program. Further development of technology for SHARP is continuing. The Airborne Real-time Imagery Exploitation System (ARIES) has been developed for incorporation into the SRMS to provide the flight crew enhanced image exploitation capability for time-critical strike. ARIES capability is undergoing continuing development and evaluation in combination with Fast Tactical Imagery (FTI) real-time, cockpit-to-user transmission of the selected imagery.
The US Navy has developed and demonstrated rapid targeting technologies that, when combined, support a time-critical strike capability. The interaction of these existing technologies (i.e., real-time tactical manned reconnaissance, real-time image screening/geo-registration, and image-seeking smart munitions) provides an immediate rapid targeting capability and lays the foundation for future manned and fully automated time-critical strike systems. This paper describes the existing rapid targeting technologies and the concept of operations for integrating them for time-critical strike. It also projects potential time-critical strike capabilities that can be achieved and deployed in the near future.
The volume of digital imagery generated by existing and planned airborne reconnaissance systems requires the use of lossy compression techniques in order to store the real-time imagery on board or transmit it to a ground station. The government is migrating compression used for reconnaissance applications from proprietary techniques to national and international standards in order to provide a more seamless image dissemination path. This paper describes the requirements for image compression in advanced tactical reconnaissance systems and compares the performance of national and international lossy compression techniques.
This paper describes the effect of using lossy transform-based compression on IR framing sensor data with and without sensor anomaly pre-processing. Significant image degradation persists when ground-based non-uniformity correction processing is implemented after the sensor imagery has been compressed and reconstructed. Various techniques for non-uniformity correction, from low to high processing complexity, are applied before and after lossy compression at varying compression ratios. Results using both DCT and wavelet transform-based compression techniques indicate that on-board real-time compression algorithms and non-uniformity correction must be jointly optimized, and that direct application of lossy compression without pre-processing for sensor anomalies reduces not only the compression efficiency and image fidelity but also the performance of subsequent ground-based non-uniformity correction.
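A minimal numerical sketch of the compression-efficiency part of this result, using a toy uniform-quantized full-frame DCT as a stand-in for the real codecs studied (the scene, noise levels, and quantizer step are illustrative assumptions, not the paper's test conditions): broadband fixed-pattern noise spreads energy across the transform coefficients, so an uncorrected frame leaves far more significant coefficients to code than the same frame after non-uniformity correction.

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
n = 128

# Simulated IR framing-sensor image: a smooth scene plus per-pixel
# fixed-pattern gain/offset noise (the anomaly a NUC removes).
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
scene = 0.5 + 0.3 * np.sin(6 * x) * np.cos(4 * y)   # low-frequency scene
gain = 1.0 + 0.08 * rng.standard_normal((n, n))     # pixel gain non-uniformity
offset = 0.04 * rng.standard_normal((n, n))         # pixel offset non-uniformity
raw = gain * scene + offset                         # frame before correction
nuc = (raw - offset) / gain                         # ideal two-point NUC

def significant_coeffs(img: np.ndarray, q: float = 0.01) -> int:
    """Crude rate proxy for a transform codec: count the DCT coefficients
    that survive uniform quantization with step q."""
    return int(np.count_nonzero(np.round(dctn(img, norm="ortho") / q)))

print("raw (uncorrected) frame:", significant_coeffs(raw), "coefficients")
print("NUC-corrected frame:    ", significant_coeffs(nuc), "coefficients")
```

At the same quantizer step, the uncorrected frame retains an order of magnitude more significant coefficients, which is one way to see why compression and non-uniformity correction need to be jointly optimized rather than cascaded blindly.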