A robust image dehazing algorithm based on the first-order scattering form of the image degradation model is proposed. This work makes three contributions toward image dehazing: (i) a robust method is proposed for assessing the global irradiance from the most hazy-opaque regions of the imagery; (ii) more detailed depth information of the scene is recovered by enhancing the transmission map using scene partitions and entropy-based alternating fast-weighted guided filters; and (iii) crucial model parameters are extracted from in-scene information. This paper briefly outlines the principle of the proposed technique and compares the dehazed results with four other dehazing algorithms over a variety of imagery types. The dehazed images have been assessed through a quality figure of merit, and experiments show that the proposed algorithm effectively removes haze and achieves a markedly better quality of dehazed image than all the other state-of-the-art dehazing methods employed in this work.
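The first-order scattering degradation model underlying this class of algorithm is usually written I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the observed hazy image, J the scene radiance, t the transmission map and A the global atmospheric irradiance. The sketch below shows only this final inversion step, assuming estimates of t and A are already available (the paper's own transmission-map enhancement and irradiance estimation are not reproduced here); the function name and clipping threshold are illustrative.

```python
import numpy as np

def recover_scene(I, t, A, t_min=0.1):
    """Invert the first-order scattering model I = J*t + A*(1 - t).

    I : hazy image, float array in [0, 1], shape (H, W, 3)
    t : transmission map, shape (H, W)
    A : global atmospheric irradiance, shape (3,)
    t_min clips the transmission to avoid amplifying noise where
    the haze is nearly opaque.
    """
    t = np.clip(t, t_min, 1.0)[..., None]   # (H, W, 1) for broadcasting
    J = (I - A) / t + A                     # invert the model per pixel
    return np.clip(J, 0.0, 1.0)
```

A synthetic round trip (hazing a known scene with chosen t and A, then inverting) recovers the original radiance exactly, which is a useful sanity check when wiring up any dehazing pipeline.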
Novel types of spectral sensors using coded apertures may offer various advantages over conventional designs, especially the possibility of compressive measurements that could exceed the expected spatial, temporal or spectral resolution of the system. However, the nature of the measurement process imposes certain limitations, especially on the noise performance of the sensor. This paper considers a particular type of coded-aperture spectral imager and uses analytical and numerical modelling to compare its expected noise performance with conventional hyperspectral sensors. It is shown that conventional sensors may have an advantage in conditions where signal levels are high, such as bright light or slow scanning, but that coded-aperture sensors may be advantageous in low-signal conditions.
People tracking in crowded scenes from closed-circuit television (CCTV) footage has been a popular and challenging task in computer vision. Due to the limited spatial resolution of CCTV footage, the color of people's dress may offer an alternative feature for their recognition and tracking. However, many factors, such as variable illumination conditions, viewing angles, and camera calibration, may induce illusive modification of the intrinsic color signatures of the target. Our objective is to recognize and track targets in multiple camera views using color as the detection feature, and to understand whether a color constancy (CC) approach may help to reduce these color illusions due to illumination and camera artifacts and thereby improve target recognition performance. We have tested a number of CC algorithms using various color descriptors to assess the efficiency of target recognition on a real multicamera Imagery Library for Intelligent Detection Systems (i-LIDS) data set. Various classifiers have been used for target detection, and the figure of merit used to assess the efficiency of target recognition is the area under the receiver operating characteristic (AUROC). We have proposed two modifications of luminance-based CC algorithms: one with a color transfer mechanism and the other using a pixel-wise sigmoid function for adaptive dynamic range compression, a method termed enhanced luminance reflectance CC (ELRCC). We found that both algorithms substantially improve target recognition efficiency relative to the raw data without CC treatment, and in some cases the ELRCC improves target tracking by over 100% in the AUROC assessment metric.
The performance of the ELRCC has been assessed over 10 selected targets from three different camera views of the i-LIDS footage, and the averaged target recognition efficiency over all these targets is found to improve by about 54% in AUROC after the data are processed by the proposed ELRCC algorithm. This improvement represents a reduction of the probability of false alarm by about a factor of 5 at a probability of detection of 0.5. Our study concerns mainly the detection of colored targets; issues in the recognition of white or gray targets will be addressed in a forthcoming study.
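The pixel-wise sigmoid compression that distinguishes the ELRCC variant can be sketched as follows. This is a minimal illustration of adaptive sigmoid dynamic range compression on a luminance channel, with the midpoint adapted to the frame's mean luminance; the function name, default parameters and rescaling convention are assumptions, not the paper's exact formulation.

```python
import numpy as np

def sigmoid_compress(L, mid=None, gain=8.0):
    """Pixel-wise sigmoid dynamic range compression of a luminance channel.

    L    : luminance values in [0, 1], any shape
    mid  : midpoint of the sigmoid; defaults to the mean luminance,
           which adapts the curve to the overall brightness of the frame
    gain : slope controlling how aggressively mid-tones are stretched
    """
    if mid is None:
        mid = float(L.mean())
    out = 1.0 / (1.0 + np.exp(-gain * (L - mid)))
    # rescale so the output still spans the full [0, 1] range
    lo = 1.0 / (1.0 + np.exp(gain * mid))
    hi = 1.0 / (1.0 + np.exp(-gain * (1.0 - mid)))
    return (out - lo) / (hi - lo)
```

The mapping is monotonic, so pixel ordering (and hence edges) is preserved while mid-tone contrast is boosted, which is the property that makes this kind of compression useful before color-based matching.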
A major problem in obtaining target reflectance via hyperspectral imaging systems is the presence of illumination and shadow effects. These artefacts are common, especially for a hyperspectral imaging system with sensors in the visible to near-infrared region, which is known to contain highly scattered and diffuse radiance that can modify the energy recorded by the imaging system. A shadow effect lowers the target reflectance values because only a small radiant energy impinges on the target surface; combined with illumination artefacts, such as diffuse scattering from surrounding targets, the background or the environment, the shape of the shadowed target reflectance is altered. We propose a new method to compensate for illumination and shadow effects on hyperspectral imagery by using a polarization technique. This technique, called spectro-polarimetry, estimates the direct and diffuse irradiance from two images taken with and without a polarizer. The method is then evaluated using spectral similarity measures, namely an angle metric and a distance metric. The results of indoor and outdoor tests show that the spectro-polarimetry technique can improve the spectral constancy between shadowed and fully illuminated spectra.
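The two similarity measures used in the evaluation are standard in hyperspectral analysis and can be sketched directly; the spectral angle is invariant to overall brightness (so it isolates changes in spectral shape between shadow and full illumination), while the Euclidean distance is also sensitive to magnitude. The function names here are illustrative.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra; invariant to an
    overall brightness scaling, so it isolates spectral *shape* change."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def spectral_distance(a, b):
    """Euclidean distance between spectra; sensitive to both shape and
    magnitude, complementing the angle metric."""
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))
```

A shadowed spectrum that is a pure brightness-scaled copy of the lit spectrum gives a spectral angle of zero but a non-zero distance, which is exactly why the two metrics are used together.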
People tracking in crowded scenes has been a popular, and at the same time very difficult, topic in computer vision, mainly because of the difficulty of acquiring intrinsic signatures of targets from a single view of the scene. Many factors, such as variable illumination conditions and viewing angles, induce illusive modification of the intrinsic signatures of targets. The objective of this paper is to verify whether a colour constancy (CC) approach really helps people tracking in a CCTV network system. We have tested a number of CC algorithms together with various colour descriptors to assess the efficiency of people recognition on the multi-camera i-LIDS data set via receiver operating characteristics (ROC). It is found that when CC is applied together with some form of colour restoration mechanism, such as colour transfer, it improves people recognition by at least a factor of 2. An elementary luminance-based CC coupled with a pixel-based colour transfer algorithm has been developed and is reported in this paper.
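The colour-transfer mechanism referred to above can be illustrated with the common channel-wise statistical formulation (after Reinhard et al.), which shifts and scales each channel of a source view so its mean and standard deviation match a reference view. The published pixel-based variant may differ in detail; this is a sketch of the general mechanism only.

```python
import numpy as np

def colour_transfer(src, ref):
    """Channel-wise statistical colour transfer: remap each channel of
    `src` so that its mean and standard deviation match those of `ref`.
    A common way of mapping the colour cast of one camera view onto
    another before colour-based matching.

    src, ref : float arrays of shape (H, W, 3)
    """
    out = np.empty_like(src, dtype=float)
    for c in range(src.shape[-1]):
        s, r = src[..., c], ref[..., c]
        s_std = s.std() if s.std() > 0 else 1.0   # guard flat channels
        out[..., c] = (s - s.mean()) / s_std * r.std() + r.mean()
    return out
```

After the transfer, colour descriptors computed in the two views share first- and second-order channel statistics, which is what makes cross-camera matching more reliable.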
Military aircraft face a serious threat from early-generation Man-Portable Air-Defence (MANPAD) systems, and robust countermeasures have to be used to counteract this threat. Most commonly these are used after the threat has been launched and detected. The ideal solution is to defeat the system pre-emptively, before the missile is launched. One way to achieve this is to fire pre-emptive flares, giving the MANPAD another hot source to track and lock on to. However, the use of pre-emptive flares can quickly deplete the flare magazines, limiting the mission time and the area in which the aircraft will be protected. In this paper we discuss the use of CounterSim, a missile engagement and countermeasure simulation software tool, to investigate what effect the flare output and burn time may have on the effectiveness of pre-emptive countermeasures. The first set of simulations looks at a flare of full intensity and burn time released pre-emptively at the beginning of the simulation; flares of reduced intensity and reduced burn time are then used. In a second set of simulations the pre-emptive flare release time is investigated by delaying the firing by up to one second from the beginning of the simulation.
Man-Portable Air-Defence (MANPAD) systems can employ a range of counter-countermeasures (CCM) to reject expendable IR decoys. Three hypothetical MANPAD models are based on reticle types and CCM features that may be found in first- and second-generation MANPADs. These are used in simulations to estimate the probability of escaping hit (PEH) when no IR decoys are used, when IR decoys are deployed reactively, and when decoys are deployed pre-emptively. These cases are simulated for seekers with no CCM and with a track angle bias CCM. The results confirm that the rise-rate CCM significantly reduces the PEH when IR decoys are used reactively. The use of pre-emptive flares timed to deploy at or about the time when the seeker is uncaged increases the PEH significantly. A more detailed investigation of the effects of aircraft aspect angle and flare timing on miss distance was carried out to examine the effects of the CCM compared with no CCM. With the aircraft at an altitude of 1000 m and a range of 2 km there is a critical period in which a flare needs to be released in order to achieve a significant miss distance when the CCM is in use. The conical-scan seeker used with the track angle bias CCM was the most effective combination, requiring the shortest time within which the flare had to be deployed. Further simulations at longer ranges and different aircraft azimuth angles showed that there is a range-dependent time window during which pre-emptive decoys are fully effective, independently of the aircraft azimuth or threat direction.
Hyperspectral imaging (HSI) systems have been used widely in many applications, including defence and military target acquisition. However, the effectiveness of HSI can be greatly hampered by illumination artefacts such as shadowing or bidirectional reflection differentials. This paper addresses how shadows in HSI, particularly in imagery taken in indoor scenarios, can be partially mitigated through a diffused irradiance compensation (DIC) methodology. The effectiveness of the proposed work is compared with the widely adopted pixel normalisation and band ratioing methods, and the performance of all these processing methods has been assessed using a maximum likelihood classifier. The results show an almost 70% improvement in classification accuracy after the raw DN data are translated into 'apparent' reflectance using a simple ELM-based method, whereas the classification accuracy after spectral normalisation is ~26% worse than without normalisation. When the proposed DIC is combined with other band ratioing techniques, the classification accuracy is found to improve by ~7% over that obtained with the ELM method for the entire scene. About 32% of the pixels in this data set are shadowed, so a 7% improvement represents a significant gain in shadow mitigation.
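The ELM (empirical line method) step mentioned above, which converts raw digital numbers (DN) to apparent reflectance, can be sketched as a per-band linear fit against in-scene calibration panels of known reflectance. This is the generic ELM procedure, not the paper's specific implementation, and the function name and array layout are assumptions.

```python
import numpy as np

def empirical_line(dn, panel_dn, panel_refl):
    """Empirical line method: per-band linear mapping from raw digital
    numbers (DN) to apparent reflectance, fitted on calibration panels
    of known reflectance placed in the scene.

    dn         : (..., B) raw image cube
    panel_dn   : (P, B) mean DN of P calibration panels
    panel_refl : (P, B) known reflectance of those panels
    """
    B = panel_dn.shape[1]
    gain, offset = np.empty(B), np.empty(B)
    for b in range(B):
        # least-squares line refl = gain * DN + offset for this band
        gain[b], offset[b] = np.polyfit(panel_dn[:, b], panel_refl[:, b], 1)
    return dn * gain + offset
```

With two well-separated panels (e.g. bright and dark) the fit is exact per band; more panels give a least-squares fit that is more robust to panel measurement noise.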
This paper reports on the enhancement of biologically inspired machine vision through a rotation invariance mechanism. Research over the years has suggested that rotation invariance is one of the fundamental generic elements of object constancy, a known generic visual ability of the human brain. Cortex-like vision, unlike conventional pixel-based machine vision, is achieved by mimicking neuromorphic mechanisms of the primate brain. In this preliminary study, rotation invariance is implemented through histograms of Gabor features of an object. The performance of rotation invariance in the neuromorphic algorithm is assessed by the classification accuracies on a test data set consisting of image objects in five different orientations. It is found that the neuromorphic algorithm with integrated rotation invariance achieves a much more consistent classification result over these five differently oriented data sets than the one without. In addition, the issue of varying aspect ratios of the input images is also addressed, in an attempt to create an algorithm robust against a wider variability of input data. The extension of the present work is to improve the recognition accuracies while applying the approach to a series of different real-world scenarios that would challenge it accordingly.
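One simple way to obtain rotation invariance from Gabor features, in the spirit of the histogram approach described above, is to pool filter energy into an orientation histogram and then circularly shift it so the dominant orientation comes first. The sketch below (kernel parameters, filter-bank size and the shift-to-dominant convention are all assumptions) shows the idea; it is not the paper's exact neuromorphic pipeline.

```python
import numpy as np

def gabor_kernel(theta, size=15, sigma=3.0, lam=6.0):
    """Real-valued Gabor kernel at orientation `theta` (numpy only)."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def orientation_histogram(img, n_orient=8):
    """Total response energy of the image under a bank of oriented Gabors."""
    hist = np.empty(n_orient)
    for i in range(n_orient):
        k = gabor_kernel(np.pi * i / n_orient)
        # circular 2-D filtering via FFT (both inputs are real)
        resp = np.fft.irfft2(np.fft.rfft2(img) * np.fft.rfft2(k, img.shape))
        hist[i] = np.sum(resp**2)
    return hist

def rotation_invariant(hist):
    """Circularly shift so the dominant orientation comes first: the
    descriptor is then unchanged when the object rotates by a multiple
    of the filter-bank spacing."""
    return np.roll(hist, -int(np.argmax(hist)))
```

A rotation of the object permutes the orientation bins cyclically, so anchoring the histogram at its peak removes that degree of freedom before classification.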
The proliferation of early-generation Man-Portable Air-Defence (MANPAD) weapons worldwide poses a significant threat to all aircraft. To develop successful countermeasures to the MANPAD, a more detailed understanding of the factors affecting the missile engagement is needed. This paper discusses the use of CounterSim, a missile engagement and countermeasure simulation software tool, to model such scenarios. The work starts by analysing simple engagements of a first-generation MANPAD against a fast jet with no countermeasures being employed. The engagement simulations cover typical MANPAD ranges and aircraft altitudes quoted in open-source literature. From this set of base runs, individual engagements are chosen for further analysis; these may have resulted in hits, misses or near misses. At each time interval in the simulation the aircraft and missile velocities are used to calculate a projected point of closest approach, which is then compared with the simulated impact point. The difference is defined as the Δd error, and plots are produced for hits, misses and near misses. Features of the Δd error plots are investigated to gain insights into the potential countermeasure capability. Finally, the analysis of the Δd error plots is used to investigate the possibility of replicating, in a simulation, the factors that produce a miss through a pre-emptive flare deployment.
This paper reports how objects in street scenes, such as pedestrians and cars, can be spotted, recognised and subsequently tracked against cluttered backgrounds using a cortex-like vision approach. Unlike conventional pixel-based machine vision, tracking is achieved by recognition of the target implemented in neuromorphic ways. In this preliminary study the region of interest (ROI) of the image is spotted according to the salience and relevance of the scene, and target recognition and tracking of the object in the ROI are then performed using a mixture of feed-forward cortex-like neuromorphic algorithms together with a statistical classifier and tracker. Object recognition for four categories (bike, people, car and background) using only one set of ventral-visual-like features has achieved a maximum of ~70% accuracy, and the present system is quite effective for tracking prominent objects relatively independently of background type. The extension of the present achievement to improve the recognition accuracy, as well as the identification of occluded objects in a crowd, forms the next stage of work.
Emotional or physical stresses induce a surge of adrenaline in the blood stream under the command of the sympathetic nervous system, which cannot be suppressed by training. The onset of this elevated level of adrenaline triggers a number of physiological chain reactions in the body, such as dilation of the pupil and an increased feed of blood to the muscles. This paper reports for the first time how Electro-Optics (EO) technologies such as hyperspectral [1,2] and thermal imaging [3] methods can be used for the remote detection of stress. Preliminary results using the hyperspectral imaging technique have shown a positive identification of stress through an elevation of the haemoglobin oxygenation saturation level in the facial region, and the effect is seen more prominently for the physical stressor than for the emotional one. However, all results presented so far in this work have been interpreted against baseline information as the reference point, which limits the overall usefulness of the developing technology. The present result highlights this drawback and prompts the need for a quantitative assessment of the oxygenation saturation, and for correlating it directly with the stress level, as the top priority of the next stage of research.
This paper reports how Electro-Optics (EO) technologies such as thermal and hyperspectral [1-3] imaging methods can be used for the remote detection of stress. Emotional or physical stresses induce a surge of adrenaline in the blood stream under the command of the sympathetic nervous system, which cannot be suppressed by training. The onset of this elevated level of adrenaline triggers a number of physiological chain reactions in the body, such as dilation of the pupil and an increased feed of blood to the muscles. The capture of physiological responses, specifically the increase of blood volume to the pupil, has been reported in Pavlidis's pioneering thermal imaging work [4-7], which showed a remarkable increase of skin temperature in the periorbital region at the onset of stress. Our data show that other areas such as the forehead, neck and cheek also exhibit elevated skin temperatures depending on the type of stressor. We have also observed thermal patterns due to physical exercise that are very similar to those induced by other physical stressors, apparently in contradiction to Pavlidis's work [8]. Furthermore, we have found patches of elevated-temperature regions in the forehead forming patterns characteristic of the type of stressor, depending on whether it is physical or emotional in origin. These stress-induced thermal patterns have been seen to be quite distinct from those resulting from a high fever.
This paper focuses on how objects, such as pedestrians, can be spotted, recognised and then subsequently tracked without prior information. Rather than using conventional sliding-window techniques for target/object detection, a biologically inspired bottom-up neural system, similar to that of human visual perception, has been adopted for selecting the region of interest (ROI) according to the salience and relevance of the scene. Subsequently, a cortex-like feed-forward object recognition mechanism is employed for categorising objects in the ROI into pedestrian and non-pedestrian classes. The result is demonstrated on a video track, and the flexibility and efficiency of this biological approach for surveillance applications are discussed.
Highly efficient target detection algorithms in hyperspectral remote sensing, particularly for the long-range detection of very low observable objects which exhibit extremely small detection cross-sections, are in great demand, and even more so for near- or real-time applications. This paper is concerned with global anomaly detection (GAD). Conventional methods that attempt better detection through multiple approach fusion (MAF), which fuses the outputs of various detectors either with logical operators or via a model-based estimation of the joint detection statistics of all detectors, are found to be inadequate. This work emphasises the need to integrate a more comprehensive background model into the GAD to develop a robust anomaly detector (AD); the output of this detector is then fused with other detectors via MAF for a further improvement of detection performance. The MUF2 algorithm is formulated using exactly this two-level fusion mechanism, in which mixture modelling and spectral unmixing fusion are employed. The significance of background modelling in GAD is highlighted in this work using real data: the results show a factor of 2-5 reduction in detection performance when a very small fraction of target pixels (~0.1%) is misclassified as background. This is because anomalies are defined with reference to a model of the background, and accordingly two new background classification techniques are proposed in this work. The effectiveness of MUF2 has been assessed using three representative data sets containing various types of targets, ranging from vehicles to small plates, embedded in backgrounds with various degrees of homogeneity. The performance of MUF2 is shown to be superior to the conventional GAD, frequently by orders of magnitude, regardless of background homogeneity and target type.
The current version of MUF2 runs under Matlab and takes ~2 minutes to process a 20K-pixel image.
This work forms part of the research programme supported by the EMRS DTC established by the UK MOD.
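The conventional GAD baseline against which MUF2 is compared is typically the global RX (Reed-Xiaoli) detector: the Mahalanobis distance of each pixel spectrum from scene-wide background statistics. The sketch below shows this baseline only; the point of the abstract is precisely that such naive global background statistics are fragile, since a few target pixels leaking into the background estimate degrade the scores. Function name and ridge term are assumptions.

```python
import numpy as np

def rx_anomaly(cube):
    """Global RX anomaly detector: Mahalanobis distance of every pixel
    spectrum from the scene-wide mean and covariance.  This is the
    conventional GAD baseline; more careful background modelling (as
    advocated in the text) replaces the global statistics used here.

    cube : (H, W, B) hyperspectral image; returns (H, W) anomaly scores
    """
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    # small ridge keeps the inverse stable for near-singular covariances
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(B))
    d = X - mu
    scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return scores.reshape(H, W)
```

A pixel whose spectrum departs strongly from the background distribution receives a large score regardless of which band carries the difference, which is why RX serves as the standard reference detector.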
In the literature of spectral unmixing (SU), particularly for remote sensing applications, there are claims that both geometric techniques and statistical techniques using independence as the cost function [1-4] are very applicable for analysing hyperspectral imagery. These claims are rigorously examined and verified in this paper using sets of simulated and real data. The objective is to study how effective these two SU approaches are with respect to the modality and independency of the source data. The data sets are carefully designed such that only one parameter is varied at a time. The 'goodness' of the unmixed result is judged using the well-known Amari index (AI), together with a 3D visualisation of the deduced simplex in eigenvector space. A total of seven different algorithms, of which one is geometric and the others are based on statistical independence, have been studied. Two of the statistical algorithms use the non-negativity of modelling errors (NMF and NNICA) as cost functions, and the other four employ the independent component analysis (ICA) principle to minimise mutual information (MI) as the objective function. The results show that the ICA-based statistical technique is very effective at finding the correct endmembers (EM) even for highly intermixed imagery, provided that the sources are completely independent. The modality of the data source is found to have only a second-order impact on the unmixing capabilities of ICA-based algorithms. All ICA-based algorithms are seen to fail when the MI of the sources is above 1, and the NMF type of algorithms is found to be even more sensitive to the dependency of the sources. The typical independency of species found in the natural environment is in the range of 15-30. This indicates that conventional statistical ICA and matrix factorisation (MF) techniques are really not very suitable for the spectral unmixing of hyperspectral (HSI) data.
Future work is proposed to investigate the idea of a dependent component clustering technique and a fused geometric and statistical approach, and to couple these with a modification of the conventional ICA-based algorithms to model the independency of the mixing rather than of the sources. This work forms part of the research programme supported by the EMRS DTC established by the UK MOD.
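The Amari index used above as the figure of merit can be computed from the "gain" matrix P = W·A (estimated unmixing matrix times true mixing matrix): it is zero when P is a scaled permutation, i.e. the sources were recovered exactly up to order and scale, and grows with residual cross-talk. The sketch below uses the normalisation that maps the all-ones (fully confused) matrix to 1; other normalisation conventions exist.

```python
import numpy as np

def amari_index(P):
    """Amari index of the gain matrix P = W @ A.  Zero iff P is a
    scaled permutation (perfect separation up to order and scale);
    normalised here so a uniform matrix scores 1."""
    P = np.abs(np.asarray(P, float))
    n = P.shape[0]
    rows = (P / P.max(axis=1, keepdims=True)).sum(axis=1) - 1.0
    cols = (P / P.max(axis=0, keepdims=True)).sum(axis=0) - 1.0
    return float((rows.sum() + cols.sum()) / (2 * n * (n - 1)))
```

Because the index is invariant to permutation and scaling of the recovered sources, it isolates genuine separation error, which is exactly what is needed when comparing geometric and statistical unmixing algorithms.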
Most target detection algorithms employed in hyperspectral remote sensing rely on a measurable difference between the spectral signatures of the target and the background. Matched filter techniques which utilise a set of library spectra as filters for target detection are often found to be unsatisfactory because of material variability and atmospheric effects in field data. The aim of this paper is to report an algorithm which extracts features directly from the scene to act as matched filters for target detection. Methods based upon spectral unmixing using geometric simplex volume maximisation (SVM) and independent component analysis (ICA) were employed to generate features of the scene. Target-like and background-like features are then differentiated, and automatically selected, from the endmember set of the unmixed result according to their statistics. Anomalies are then detected from the selected endmember set and their corresponding spectral characteristics are subsequently extracted from the scene, serving as a bank of matched filters for detection. This method, given the acronym SAFED, has a number of advantages for target detection compared to previous techniques which use the orthogonal subspace of the background features. This paper reports the detection capability of this new technique using an example simulated hyperspectral scene. Similar results on hyperspectral military data show high detection accuracy with negligible false alarms. Further potential applications of this technique for false alarm rate (FAR) reduction via multiple approach fusion (MAF), and as a means of thresholding the anomaly detection technique, are outlined.
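The matched-filter stage that the scene-extracted signatures feed into is standard and can be sketched directly: each pixel is scored against the target signature after whitening by the background covariance, so cluttered spectral directions are suppressed. This is the generic spectral matched filter, not the full SAFED pipeline; the normalisation convention (a pure target pixel scores ~1) is an assumption.

```python
import numpy as np

def matched_filter(cube, target):
    """Spectral matched filter: scores every pixel against a target
    signature, whitened by the scene covariance.

    cube   : (H, W, B) hyperspectral image
    target : (B,) target spectrum; in the SAFED spirit this would be
             extracted from the scene itself rather than a library
    """
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False) + 1e-6 * np.eye(B))
    s = target - mu
    w = cov_inv @ s / (s @ cov_inv @ s)   # normalised: pure target scores ~1
    return ((X - mu) @ w).reshape(H, W)
```

Using a scene-extracted signature for `target` sidesteps the library-mismatch problem (material variability, atmospheric effects) that the abstract identifies as the weakness of conventional matched filtering.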
This paper reports the results of a study on how atmospheric correction techniques (ACT) enhance target detection in hyperspectral remote sensing, using several sets of real data. Based on the data employed in this study, it has been shown that ACT can reduce the masking effect of the atmosphere and effectively improve spectral contrast. Using a standard K-means cluster-based unsupervised classifier, the classification accuracy obtained from the atmospherically corrected data is almost an order of magnitude better than that achieved using the radiance data. This enhancement is entirely due to the improved separability of the classes in the atmospherically corrected data. Moreover, it has been found that intrinsic information concerning the nature of the imaged surface can be retrieved from the atmospherically corrected data; this has been done to within an error of 5% using the model-based atmospheric correction package ATCOR.
InAs/In(As,Sb) heterostructure LEDs are studied in forward (FB) and reverse (RB) bias, where the phenomenon of 'negative luminescence' is seen for the first time in this materials system. Pseudomorphic 300 K SQW LEDs, lattice-matched to InAs and emitting at λ ≈ 5 μm and λ ≈ 8 μm, have internal conversion efficiencies of >1.3% and >0.83% respectively and maximum outputs in excess of 50 μW, in spite of an extremely low overall epilayer Sb content. Strain-relaxed InAs/In(As,Sb) SLS LEDs with AlSb barriers for electron confinement give 300 K outputs in excess of 0.1 mW at λ ≈ 4.2 μm, approximately 3.5 times greater than control devices without the AlSb barrier. In RB the same SLS diodes exhibited efficient negative luminescence, with output powers that increase with increasing device temperature to within 0.8 of the FB figures at 320 K.