Triangle Orientation Discrimination (TOD), developed by TNO Human Factors, and Minimum Temperature Difference Perceived (MTDP), developed by Fraunhofer IOSB, are competing measurement methods for the assessment of well-sampled and under-sampled thermal imagers. Key differences between the two methods are the targets, bars for MTDP and equilateral triangles for TOD, and the measurement methodology: MTDP is based on a threshold measurement, whereas TOD uses a psychophysical approach. A previous study compared MTDP and TOD for standard thermal imagers and found that it is possible to convert TOD into MTDP and vice versa using a sigmoidal transition function. Whether this also holds when advanced signal processing is applied remained an open question. To examine this, the study was extended to boost-filtered thermal imagers. The equipment under test was an under-sampled MWIR imager operating without and with five different boost filters, four Laplace filters and one Wiener filter. Comparative MTDP and TOD measurements for these six configurations showed that the derived sigmoidal transition function is also valid for this type of signal processing. In conclusion, to date the comparison of MTDP and TOD gives no preference for either method. They should result in comparable performance predictions, although it is generally not possible to rule out that there is advanced signal processing for which one or both methods may fail.
Range prediction for thermal imagers applying advanced signal processing is still in its infancy. Boost filters are one such form of advanced signal processing, and here it was assessed whether the range achieved when using them corresponds to predictions based on the Johnson criteria. The equipment under test was an under-sampled MWIR imager operating with and without five different boost filters, four Laplace filters and one Wiener filter. The range of this imager with the different boost filters was estimated by perception experiments for the identification of numbers. These ranges were compared with limiting frequencies derived from Minimum Temperature Difference Perceived (MTDP) measurements that included the boost filters. The comparison showed good correspondence between the identification range and the limiting frequency derived from the MTDP. Thus, the Johnson criteria should be able to correctly predict range for thermal imagers that include boost filtering. Further work includes extending the comparison to low contrast and to real targets.
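The comparison described above rests on a Johnson-type range relation: read the limiting spatial frequency from the MTDP curve at the target's temperature difference, then find the range at which the required number of cycles still fits on the target's critical dimension. The Python sketch below illustrates that reasoning; the MTDP values, critical size, and cycle count are placeholders for illustration, not the data or exact procedure of the study.

```python
import numpy as np

# Illustrative MTDP curve: minimum perceived temperature difference (K) versus
# spatial frequency (cycles/mrad). These values are made up for the sketch.
freq_cpmrad = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
mtdp_kelvin = np.array([0.02, 0.05, 0.12, 0.30, 0.80, 2.50])

def limiting_frequency(delta_t, freq, mtdp):
    """Frequency at which the MTDP curve reaches the target temperature difference."""
    return np.interp(delta_t, mtdp, freq)  # mtdp must be monotonically increasing

def johnson_range_m(f_limit, critical_size_m, cycles_on_target):
    """Range at which the required number of cycles just fits on the critical dimension."""
    # cycles on target = f_limit [cy/mrad] * (critical_size / range) [rad] * 1000 [mrad/rad]
    return f_limit * critical_size_m * 1000.0 / cycles_on_target

f_limit = limiting_frequency(1.0, freq_cpmrad, mtdp_kelvin)  # 1 K target contrast
print(johnson_range_m(f_limit, 0.24, 5.0))                   # placeholder target values
```

With these made-up numbers the predicted identification range comes out at roughly 120 m; the ranges actually measured in the study are not reproduced here.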
KEYWORDS: Spatial frequencies, Cameras, Imaging systems, Thermography, Temperature metrology, Minimum resolvable temperature difference, Sensors, Data modeling, Modulation transfer functions
Triangle Orientation Discrimination (TOD), developed by TNO Human Factors, and Minimum Temperature Difference Perceived (MTDP), developed by Fraunhofer IOSB, are competing measurement methods for the assessment of well-sampled and under-sampled thermal imagers. Key differences between the two methods are the targets, bars for MTDP and equilateral triangles for TOD, and the measurement methodology: MTDP is based on a threshold measurement, whereas TOD uses a psychophysical approach. All advantages and disadvantages of the methods trace back to these differences. The European Computer Model for Optronic System Performance Prediction (ECOMOS) includes range performance assessments according to both methods. This triggered work at Fraunhofer IOSB on comparative TOD and MTDP measurements. The idea was to check whether TOD and MTDP curves coincide when the two target-descriptive parameters, reciprocal angular subtense (one over the triangle size expressed in angular units) and spatial frequency respectively, are converted into each other using a conversion factor or function. Surprisingly, the literature does not yet include such a measurement-based comparison. Extending IOSB's existing MTDP set-up with triangle targets and the associated turntable and shutter enabled the comparative measurements. The applied TOD measurement process follows the guidelines found in the literature, with some necessary adaptations. Both measurements used the same components (blackbody, collimator, monitor, etc.) except for the targets, and the trained MTDP observer also performed the TOD measurements, so only the methods themselves should cause differences in the results. Four thermal imagers with different magnitudes of under-sampling (MTF at the Nyquist frequency of about 8 %, 14 %, 32 %, and 73 %) form the basis of the comparison. Their measurements allowed a standard target for triangles to be derived according to the process known from target acquisition assessment; these calculations result in 1.5 ± 0.2 line pairs on target. Multiplying the reciprocal angular subtense by this factor gives corresponding MTDP and TOD curves when TOD is evaluated at 62.5 % instead of the standard 75 % probability: 62.5 %, corrected for chance, corresponds to 50 % and is thus consistent with the threshold assumption of the MTDP. Deviations occur when the reciprocal angular subtense is near the cut-off, because of unaccounted-for sampling effects. The proposed way to overcome this is to normalize spatial frequency and reciprocal angular subtense by the full width at half maximum of the camera line spread function. A sigmoidal transition function is able to describe the resulting connection. This function could be valid for all thermal imagers, as indicated by the assessment of two additional ones. However, as the assessment is based on only six thermal imagers and one observer, further comparative measurements with a larger number of observers or, alternatively, modeling are necessary.
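As a small illustration of the two conversions in this abstract, the chance correction of the TOD probability (the triangle has four possible orientations, so guessing already yields 25 %) and the factor of 1.5 line pairs on target can be written down directly. The Python sketch below uses our own function names; the factor is simply taken from the abstract.

```python
# Minimal sketch of the two conversions described above; values taken from the abstract.

def chance_corrected(p_raw, n_alternatives=4):
    """Correct a raw m-alternative forced-choice probability for guessing."""
    p_chance = 1.0 / n_alternatives
    return (p_raw - p_chance) / (1.0 - p_chance)

print(chance_corrected(0.625))  # -> 0.5, matching the MTDP threshold assumption

# Convert TOD reciprocal angular subtense (1/mrad) into an equivalent MTDP
# spatial frequency (cycles/mrad) via the derived standard target.
LINE_PAIRS_ON_TARGET = 1.5  # 1.5 +/- 0.2 line pairs on target from the comparison

def tod_to_mtdp_frequency(reciprocal_angular_subtense):
    return LINE_PAIRS_ON_TARGET * reciprocal_angular_subtense
```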
Thermal imagers (TI) and low light imagers (LLI; e.g. Night Vision Goggles, NVG) are today's technologies for nighttime applications. Each has its own advantages and disadvantages: LLI are typically ineffective in dark areas, whereas TI also operate in complete darkness; on the other hand, TI often do not provide enough scene detail, hindering situational awareness. Hence, it is natural to combine the two systems. Today such combined systems are available as so-called Fusion Goggles: an LWIR bolometer optically fused to an image-intensifier-based NVG. Future development will probably replace the NVG with solid-state night vision technologies, enabling sophisticated image fusion.
Performance assessment and modeling of fused systems is an open problem, mainly because fusion algorithms are strongly scene- and task-dependent. The idea followed here is to divide the task into detection and situational-awareness sub-tasks and analyze them separately. An experimental common-line-of-sight system consisting of an LWIR bolometer and a low-light CMOS camera was set up for this purpose. Its first use was detection performance assessment. A data collection of a human at different distances and positions was analyzed in a perception experiment with the two original bands and eight different fusion methods. Twenty observers participated in the experiment, and their average detection probability clearly depends on the imagery. Although the LWIR resolution is three times worse than the visual one, the achieved detection performance is much better. This carries over to the fused imagery as well, with all fusion algorithms giving better performance than the visual band alone; however, all fusion algorithms decrease observer performance to some degree compared with LWIR alone. This result is in good agreement with a Graph-Based Visual Saliency image analysis. Thus, it seems possible to assess fusion performance for the detection task by saliency calculations.
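The saliency analysis mentioned above used Graph-Based Visual Saliency (GBVS). A full GBVS implementation is beyond a short example, so the following Python sketch only illustrates the general idea of ranking imagery by how much a known target region stands out, using a crude center-surround (difference-of-Gaussians) map as a stand-in; the function names and parameters are our own assumptions, and this is not the analysis performed in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_saliency(image, sigma_center=2.0, sigma_surround=16.0):
    """Crude difference-of-Gaussians saliency map (a simple stand-in for GBVS)."""
    img = np.asarray(image, dtype=np.float64)
    saliency = np.abs(gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround))
    return saliency / (saliency.max() + 1e-12)

def target_saliency_score(image, target_mask):
    """Mean saliency inside a known target region relative to the background."""
    s = center_surround_saliency(image)
    return s[target_mask].mean() / (s[~target_mask].mean() + 1e-12)

# Usage idea: compute target_saliency_score for the LWIR, visual, and each fused
# image of the same scene and compare the ranking with observer detection rates.
```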
“Nightglow” is an illumination phenomenon created by luminescence processes in the upper Earth atmosphere. It covers the spectral range from the ultraviolet up to the thermal infrared, but its maximum lies in the shortwave infrared (SWIR). Although the phenomenon has been known for a long time, the advent of highly sensitive SWIR detectors in the last decade now enables its use for night vision applications. In 2013 Fraunhofer IOSB started its assessment of SWIR for night vision applications, following a twofold approach. Continuous measurements were started to gain an understanding of the highly variable illumination levels created by the nightglow under different environmental conditions; the future goal is the standardization of SWIR illumination levels corresponding to the defined visual full-moon, quarter-moon, starlight, and overcast-starlight levels. Additionally, performance assessments of SWIR detectors in comparison with visual image intensifiers and low-light focal plane array detectors were conducted in the laboratory as well as in the field. The paper gives the history and status of IOSB's assessment of SWIR for night vision applications. It explains the ideas behind the illumination characterization, the conducted measurements, and the inherent problem of artificial stray light. For sensor assessment it presents recent work on the influence of spectral coverage (e.g. broadband versus atmospheric window only) on system performance for different environmental conditions.
The short-wave infrared (SWIR) spectral range has attracted interest for daytime and nighttime military and security applications in recent years. This necessitates performance assessment of SWIR imaging equipment in comparison with equipment operating in the visual (VIS) and thermal infrared (LWIR) spectral ranges. In the military context, (nominal) range is the main performance criterion. Discriminating friend from foe is one of the main tasks in today's asymmetric scenarios, and so personnel, human activities, and handheld objects are used as targets to estimate ranges. The latter were also used for an experiment at Fraunhofer IOSB to get a first impression of how SWIR performs compared with VIS and LWIR. A human consecutively carrying one of nine different civilian or military objects was recorded from five different ranges in the three spectral bands. For the visual spectral range a 3-chip color camera was used; the SWIR range was covered by an InGaAs camera and the LWIR by an uncooled bolometer. It was ascertained that the nominal spatial resolutions of the three cameras were of the same order of magnitude in order to enable an unbiased assessment. Daytime conditions were selected for data acquisition to separate observer performance from illumination conditions and, to some extent, also from camera performance. From the recorded data, a perception experiment was prepared. It was conducted as a nine-alternative forced-choice, unlimited-observation-time test with 15 observers participating. Before the experiment, the observers were trained on close-range target data. The outcome of the experiment was the average probability of identification versus range between camera and target. The comparison of the range performance achieved in the three spectral bands gave a mixed result: on the one hand, a ranking of VIS / SWIR / LWIR in decreasing order can be seen in the data, but on the other hand only the difference between VIS and the other bands is statistically significant. Additionally, it was not possible to explain the outcome with typical contrast metrics. Probably form is more important than contrast here, as long as the contrast is generally high enough. These results were unexpected and need further exploration.
Radiation created by excitation and recombination/deactivation of atoms and molecules in the upper Earth atmosphere is called nightglow. Nightglow can be found in the spectral range from the ultraviolet up to the thermal infrared, with a maximum in the shortwave infrared (SWIR). During moonless nights the illumination in the SWIR is an order of magnitude higher than in the visual. In recent years SWIR sensor technology has improved to a level that allows the nightglow to be used for night vision applications. This necessitates an understanding of the highly variable illumination levels created by the nightglow and a performance assessment of SWIR detectors in comparison with image intensifiers and Si focal plane array detectors. Whereas night illumination levels for the visual are standardized, corresponding ones for the SWIR are missing. IOSB started measuring and comparing night illumination levels and camera performance in both spectral ranges, based on continuous illumination measurements as well as on recording imagery of reflectance reference targets and analyzing the resulting signal-to-noise ratios. To date, the number of illumination measurements is not yet statistically sufficient to standardize the levels, but it has at least allowed a first comparison of the two technologies for moonless-night, clear-sky conditions. With comparable F-number, integration time, and frame rate, the SWIR sensors available in Europe were found to be inferior to the visual technology. An improvement of at least one order of magnitude would be necessary to ensure parity between SWIR and visual technologies for all environmental conditions.
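One plausible way to obtain the signal-to-noise ratios mentioned above is to image a reflectance reference panel over many frames and relate the panel-to-background signal to the temporal noise. The Python sketch below illustrates this under our own assumptions about the frame-stack layout and region masks; it is not the procedure actually used at IOSB.

```python
import numpy as np

def panel_snr(frames, panel_mask, background_mask):
    """Signal-to-noise ratio of a reflectance panel from a frame stack.

    frames: array of shape (N, H, W); panel_mask / background_mask: boolean (H, W).
    Signal is the mean panel-minus-background level; noise is the temporal
    standard deviation of the panel pixels, averaged over the panel region.
    """
    frames = np.asarray(frames, dtype=np.float64)
    signal = frames[:, panel_mask].mean() - frames[:, background_mask].mean()
    temporal_noise = frames[:, panel_mask].std(axis=0).mean()
    return signal / (temporal_noise + 1e-12)
```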
A main criterion for comparison and selection of thermal imagers for military applications is their nominal range
performance. This nominal range performance is calculated for a defined task and standardized target and environmental conditions. The only standardization available to date is STANAG 4347, whose target is based on a main battle tank in front view. Because of changed military requirements, this target is no longer up-to-date. Today, different tasks are of interest, especially differentiating between friend and foe and identifying humans. There is no direct way to differentiate between friend and foe in asymmetric scenarios, but one clue can be that someone is carrying a weapon. This clue can be translated into the observer tasks of detection (a person is or is not carrying an object), recognition (the object is a long-, medium-, or short-range weapon or civilian equipment), and identification (the object can be named, e.g. AK-47, M-4, G36, RPG7, axe, shovel, etc.). These tasks can be assessed experimentally, and from the results of such an assessment a standard target for handheld objects may be derived. For a first assessment, a human carrying 13 different handheld objects in front of his chest was recorded at four different ranges with an IR dual-band camera. From the recorded data, a perception experiment was prepared. It was conducted with 17 observers in a 13-alternative forced-choice, unlimited-observation-time arrangement. The results of the test, together with Minimum Temperature Difference Perceived measurements of the camera and the temperature difference and critical dimension derived from the recorded imagery, allowed a first standard target for the above tasks to be defined. This standard target consists of 2.5 / 3.5 / 5 line pairs on target for detection / recognition / identification (DRI), a critical size of 0.24 m, and a temperature difference of 1 K. The values are preliminary and have to be refined in the future; different aspect angles, different ways of carrying the objects, and movement need to be considered.
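Given such a standard target, nominal DRI ranges follow from the same Johnson-type relation sketched earlier: the range at which the required number of line pairs still fits on the critical dimension. The Python lines below combine the standard-target values from this abstract with a purely hypothetical limiting frequency, just to show the arithmetic.

```python
import numpy as np

CYCLES_DRI = np.array([2.5, 3.5, 5.0])  # line pairs on target for detection / recognition / identification
CRITICAL_SIZE_M = 0.24                  # critical size of the handheld-object standard target
F_LIMIT_CPMRAD = 2.0                    # limiting frequency at 1 K contrast; hypothetical camera value

# Range [m] at which the required line pairs just fit on the critical dimension.
ranges_m = F_LIMIT_CPMRAD * CRITICAL_SIZE_M * 1000.0 / CYCLES_DRI
print(dict(zip(["detection", "recognition", "identification"], ranges_m.round())))
```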
Modern IR cameras are increasingly equipped with built-in advanced (often non-linear) image and signal processing
algorithms (like fusion, super-resolution, dynamic range compression etc.) which can tremendously influence
performance characteristics. Traditional approaches to range performance modeling are of limited use for these types of
equipment. Several groups have tried to overcome this problem by producing a variety of imagery to assess the impact of
advanced signal and image processing. Mostly, this data was taken from classified targets and/or using classified imagers
and is thus not suitable for comparison studies between different groups from government, industry and universities. To
ameliorate this situation, NATO SET-140 has undertaken a systematic measurement campaign at the DGA technical
proving ground in Angers, France, to produce an openly distributable data set suitable for the assessment of fusion,
super-resolution, local contrast enhancement, dynamic range compression and image-based NUC algorithm
performance. The imagery was recorded for different target / background settings, camera and/or object movements and
temperature contrasts. MWIR, LWIR and Dual-band cameras were used for recording and were also thoroughly
characterized in the lab. We present a selection of the data set together with examples of their use in the assessment of
super-resolution and contrast enhancement algorithms.
Dual-band thermal imagers acquire information simultaneously in both the 8-12 μm (long-wave infrared, LWIR) and the
3-5 μm (mid-wave infrared, MWIR) spectral range. Compared to single-band thermal imagers they are expected to have
several advantages in military applications. These advantages include the opportunity to use the best band for given
atmospheric conditions (e.g. cold climate: LWIR, hot and humid climate: MWIR), the potential to better detect
camouflaged targets and an improved discrimination between targets and decoys. Most of these advantages have not yet
been verified and/or quantified. It is expected that image fusion allows better exploitation of the information content
available with dual-band imagers especially with respect to detection of targets.
We have developed a method for dual-band image fusion based on the apparent temperature differences in the two
bands. This method showed promising results in laboratory tests. In order to evaluate its performance under operational
conditions, we conducted a field trial in an area with high thermal clutter. In such areas, targets are hard to detect in
single-band images because they vanish in the clutter structure. The image data collected in this field trial was used for a
perception experiment. This perception experiment showed an enhanced target detection range and reduced false alarm
rate for the fused images compared to the single-band images.
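The abstract states only that the fusion method is based on the apparent temperature differences in the two bands; the algorithm itself is not described here. The Python sketch below is therefore a guess at that general idea, encoding the per-pixel MWIR-LWIR difference on top of a mean-temperature base image so that pixels with unusual spectral behavior stand out; it is not the authors' method.

```python
import numpy as np

def difference_based_fusion(t_mwir, t_lwir):
    """Guessed sketch of difference-based dual-band fusion (not the paper's algorithm).

    Shows the mean apparent temperature as a gray base image and encodes the
    per-pixel MWIR-LWIR difference in the red (MWIR-warm) and blue (LWIR-warm)
    channels, so pixels with unusual spectral behavior stand out.
    """
    t_mwir = np.asarray(t_mwir, dtype=np.float64)
    t_lwir = np.asarray(t_lwir, dtype=np.float64)
    base = 0.5 * (t_mwir + t_lwir)
    diff = t_mwir - t_lwir
    diff_z = (diff - diff.mean()) / (diff.std() + 1e-12)  # how unusual each pixel is

    def norm(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-12)

    rgb = np.stack([norm(base)] * 3, axis=-1)
    rgb[..., 0] += 0.5 * np.clip(diff_z / 3.0, 0.0, 1.0)
    rgb[..., 2] += 0.5 * np.clip(-diff_z / 3.0, 0.0, 1.0)
    return np.clip(rgb, 0.0, 1.0)
```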
KEYWORDS: Mid-IR, Modulation transfer functions, Long wavelength infrared, Image fusion, Temperature metrology, Quantum well infrared photodetectors, Sensors, Thermography, Calibration, Cameras
The IR-Dual-Band-Camera demonstrator collects simultaneously infrared data in the 3-5 μm (mid-wave infrared,
MWIR) and 8-12 μm (long-wave infrared, LWIR) atmospheric windows. The demonstrator is based on a two-layer
QWIP focal plane array with 384 x 288 x 2 detector elements. Images are typically acquired with a frame rate of 100 Hz
at 6.8 ms integration time and are stored as 14-bit digital data. Two different IR-Dual-Band-Optics were designed and
developed: first an 86 mm and 390 mm focal length, F/2 dual field of view optics based on refractive and reflective
components and second a purely refractive 100 mm focal length, F/1.5 optics. We present the performance of this IR-Dual-Band-Camera and demonstrate fusion techniques applied to the pixel-registered dual-band images, which show promising results with respect to image improvement in laboratory tests and field trials.