This PDF file contains the front matter associated with SPIE
Proceedings Volume 7701, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
Ship detection from satellite imagery has great utility in various communities. Knowing the locations and types of ships provides useful intelligence information. However, detecting and recognizing ships is a difficult problem, and existing techniques suffer from too many false alarms. We describe approaches we have taken toward building ship detection algorithms with reduced false-alarm rates. Our approach uses a version of the grayscale morphological hit-or-miss transform. While this transform is well known and widely used in its standard form, our version substitutes rank-order selection for the standard maximum and minimum operators in the dilation and erosion parts of the transform. This provides some slack in the template fitting and a means of tuning the algorithm's performance for particular detection problems. We describe our algorithms, show the effect of the rank-order parameter on performance, and illustrate the use of this approach on real ship detection problems with panchromatic satellite imagery.
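As a rough sketch of the rank-order relaxation (the footprints, the response rule, and the parameter r are illustrative assumptions, not the authors' exact algorithm), SciPy's rank_filter can stand in for the relaxed erosion and dilation:

```python
import numpy as np
from scipy.ndimage import rank_filter

def rank_order_hmt(image, fg_footprint, bg_footprint, r=0.9):
    """Grayscale hit-or-miss with rank-order selection.

    r = 1.0 reproduces the standard min (erosion) / max (dilation)
    operators; r < 1.0 introduces slack so a few outlier pixels do
    not break the template fit.
    """
    n_fg = int(np.count_nonzero(fg_footprint))
    n_bg = int(np.count_nonzero(bg_footprint))
    # relaxed erosion: a low-but-not-minimum value under the ship template
    ero = rank_filter(image, rank=round((1 - r) * (n_fg - 1)),
                      footprint=fg_footprint)
    # relaxed dilation: a high-but-not-maximum value under the background ring
    dil = rank_filter(image, rank=round(r * (n_bg - 1)),
                      footprint=bg_footprint)
    # strong response where the foreground template fits bright structure
    # and the surrounding background template fits darker sea
    return np.clip(ero - dil, 0, None)
```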
An important aspect of spectral image analysis is the identification of materials present in the object or scene being imaged. Enabling technologies include image enhancement, segmentation, and spectral trace recovery. Since multispectral or hyperspectral imagery is generally of low spatial resolution, a single pixel may contain several materials; noise and blur can also present significant data analysis problems. In this paper, we first describe a variational fuzzy segmentation model coupled with a denoising/deblurring model for material identification. A statistical moving average method for segmentation is also described. These new approaches are then tested and compared on hyperspectral images associated with space object material identification.
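The variational model itself is not reproduced here, but the underlying idea of fuzzy memberships for mixed pixels can be sketched with a plain fuzzy c-means loop (a generic algorithm; the fuzzifier m and the initialization are assumptions):

```python
import numpy as np

def fuzzy_cmeans(pixels, k, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means: each pixel gets a membership in every class,
    which is what lets a low-resolution pixel "contain" several materials.

    pixels: (n, bands) array of spectra; returns (memberships, centroids).
    """
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(k), size=len(pixels))   # random initial memberships
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ pixels) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        u = 1.0 / np.maximum(d, 1e-12) ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
    return u, centers
```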
In video surveillance, automatic methods for scene understanding and activity modeling can exploit the high redundancy of object trajectories observed over a long period of time. The goal of scene understanding is to generate a semantic model of the scene describing the patterns of normal activities. We propose to boost the performance of a real-time object tracker, in terms of object classification, by accumulating statistics over time. Based on object shape, an initial three-class object classification (Vehicle, Pedestrian, Other) is performed by the tracker. This initial labeling is usually very noisy because of object occlusions/merging and the possible presence of shadows. The proposed scene activity modeling approach is derived from the Makris and Ellis algorithm, in which the scene is described in terms of clusters of similar trajectories (called routes). The original envelope-based model is replaced by a simpler statistical model around each route node. The resulting scene activity model is then used to improve object classification based on the statistics observed within the node population of each route. Finally, Dempster-Shafer theory is used to fuse multiple evidence sources and compute an improved object classification map. In addition, we investigate the automatic detection of problematic image areas that are the source of poor-quality trajectories (object reflections in buildings, trees, flags, etc.). The algorithm was extensively tested using a live camera in an urban environment.
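A minimal sketch of the Dempster-Shafer fusion step follows; the mass values are invented for illustration, with the tracker's shape-based label and the route-node statistics acting as the two evidence sources:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets over the frame {Vehicle, Pedestrian, Other}."""
    fused, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

# e.g. tracker evidence vs. route-statistics evidence (numbers are made up)
m_tracker = {frozenset({"Vehicle"}): 0.5,
             frozenset({"Vehicle", "Pedestrian", "Other"}): 0.5}
m_route = {frozenset({"Vehicle"}): 0.7,
           frozenset({"Pedestrian"}): 0.1,
           frozenset({"Vehicle", "Pedestrian", "Other"}): 0.2}
print(dempster_combine(m_tracker, m_route))
```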
Moving object detection in urban scenes is important for the guidance of autonomous vehicles, robot navigation, and monitoring. In this paper, moving objects are automatically detected using three sequential frames and tracked over a longer period. To this end, we modify the plane+parallax, fundamental matrix, and trifocal tensor algorithms to operate automatically on three sequential frames, and test their ability to detect moving objects in challenging urban scenes. Frame-to-frame correspondences are established using SIFT keys. The keys that are consistently matched over three frames are used by the algorithms to distinguish between static and moving objects. Tracking the keys of detected moving objects increases their reliability over time, which is quantified by our results. To evaluate the three algorithms, we manually segment the moving objects in real-world data and report the fraction of true positives versus false positives. Results show that the plane+parallax method performs very well on our datasets, and we show that our modification to this method outperforms the original. The proposed combination of the improved plane+parallax method with the trifocal tensor method further improves moving object detection and tracking for one of the four video sequences.
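For the fundamental-matrix variant, the two-frame core of the idea can be sketched as below (a simplification of the paper's three-frame formulation; the Sampson-error test and threshold are assumptions): matches that violate the dominant epipolar geometry are candidate movers.

```python
import cv2
import numpy as np

def flag_moving_points(pts1, pts2, thresh=2.0):
    """Flag matches inconsistent with a single (static-scene) epipolar
    geometry; pts1, pts2 are (n, 2) matched keypoint coordinates."""
    F, _ = cv2.findFundamentalMat(pts1.astype(np.float32),
                                  pts2.astype(np.float32),
                                  cv2.FM_RANSAC, 1.0, 0.999)
    h1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    h2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    Fx1 = h1 @ F.T           # epipolar lines in frame 2
    Ftx2 = h2 @ F            # epipolar lines in frame 1
    sampson = (np.sum(h2 * Fx1, axis=1) ** 2 /
               (Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2))
    return sampson > thresh ** 2     # True where the match likely moves
```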
Military Operations in Urban Terrain (MOUT) require the capability to perceive and analyse the situation around a patrol in order to recognize potential threats. Since threats in MOUT scenarios usually arise from humans, one important task is the robust detection of humans.
Detecting humans in MOUT with image processing systems can be very challenging, e.g., in complex outdoor scenes where humans have weak contrast against the background or are partially occluded. Porikli et al. introduced covariance descriptors and showed their usefulness for human detection in complex scenes. However, these descriptors do not lie in a vector space, so well-known machine learning techniques need to be adapted before covariance descriptor classifiers can be trained. We present a novel approach based on manifold learning that simplifies the classification of covariance descriptors.
In this paper, we apply this approach to detecting humans. We describe our human detection method and evaluate the detector on benchmark data sets generated from real-world image sequences captured during MOUT exercises.
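A sketch of the descriptor and of one standard way to flatten the SPD manifold follows; the five-feature set and the log-Euclidean mapping are illustrative choices, not necessarily the paper's exact manifold-learning method:

```python
import numpy as np
from scipy.linalg import logm

def covariance_descriptor(patch):
    """Region covariance descriptor over a grayscale patch, using an
    illustrative feature set: x, y, intensity, |Ix|, |Iy|."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(patch.astype(float))
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel().astype(float),
                      np.abs(gx).ravel(), np.abs(gy).ravel()])
    return np.cov(feats)                    # 5x5 symmetric positive definite

def log_euclidean_vector(C, eps=1e-6):
    """Map an SPD matrix to a flat vector via the matrix logarithm, so an
    ordinary vector-space classifier can be trained on the descriptors."""
    L = logm(C + eps * np.eye(C.shape[0])).real
    return L[np.triu_indices_from(L)]
```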
Baggage abandoned in public places can pose a serious security threat. In this paper, a two-stage approach that works on video sequences captured by a single immovable CCTV camera is presented. First, foreground objects are segregated from static background objects using brightness and chromaticity distortion parameters estimated in the RGB colour space. The algorithm then locks on to binary blobs that are static and of 'bag' size; the size constraints used in the scheme are chosen based on empirical data. Parts of the background frame and current frames covered by a locked mask are then tracked using a 1-D (unwrapped) pattern generated from a bivariate frequency distribution in the rg chromaticity space. Another approach, which uses edge maps instead of patterns generated from the fragile colour information, is also discussed. In this approach, the pixels that are part of an edge are marked using a novel scheme that utilizes four 1-D Laplacian kernels; tracking is done by calculating the total entropy of the intensity images in the sections encompassed by the binary edge maps, which makes the process broadly illumination invariant. Both algorithms have been tested on the iLIDS dataset (produced by the Home Office Scientific Development Branch in partnership with the Security Service, United Kingdom), and the results obtained are encouraging.
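A minimal sketch of the brightness/chromaticity distortion computation (Horprasert-style; the exact parameterization and thresholds in the paper may differ):

```python
import numpy as np

def distortion_params(frame, bg_mean, bg_std):
    """Per-pixel brightness distortion (alpha) and chromaticity distortion
    (cd) against a per-pixel RGB background model.

    All inputs are (H, W, 3) float arrays; foreground is where cd is large.
    """
    s = bg_std + 1e-6
    # alpha scales the background colour to best explain the current pixel
    alpha = (np.sum(frame * bg_mean / s**2, axis=2) /
             np.sum((bg_mean / s) ** 2, axis=2))
    # cd is the residual orthogonal to the brightness axis
    cd = np.sqrt(np.sum(((frame - alpha[..., None] * bg_mean) / s) ** 2, axis=2))
    return alpha, cd

# foreground_mask = cd > tau_cd   (tau_cd chosen empirically)
```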
Optical coherence tomography (OCT) is an interferometric, noninvasive, and non-contact imaging technique that generates images of biological tissues at micrometer-scale resolution. Images obtained from the OCT process are often noisy and of low visual contrast. This work focuses on improving the visual contrast of OCT images using digital enhancement and fusion techniques. Since OCT images are often corrupted with noise, our first step is to apply the most effective noise reduction algorithm. This is followed by a series of digital enhancement techniques suited to enhancing the visual contrast of OCT images. We also investigate whether combining enhancement techniques yields any further gain in visual contrast. In the image fusion methods, images taken at different depths are fused using the discrete wavelet transform (DWT) and logical fusion algorithms, and we address the question of whether it is more efficient to enhance images before or after fusion. The work concludes by suggesting future work needed to complement the current study.
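A minimal sketch of DWT-based fusion under a common coefficient-selection rule (average the approximations, keep the larger-magnitude details); the paper's logical fusion variant is not reproduced:

```python
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db2", level=2):
    """Fuse two registered OCT slices in the wavelet domain."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]          # average approximation bands
    for da, db in zip(ca[1:], cb[1:]):
        # per-subband: keep whichever detail coefficient is stronger
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```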
Multimodal biometrics is an emerging area of research that aims to increase the reliability of biometric systems by using more than one biometric trait in the decision-making process. In this work, we develop a multi-algorithm multimodal biometric system that uses face and ear features with rank-level and decision-level fusion. We use multilayer perceptron network and fisherimage approaches for individual face and ear recognition. After face and ear recognition, we integrate the results of the two face matchers using rank-level fusion, experimenting with the highest-rank, Borda count, logistic regression, and Markov chain methods. Owing to its better recognition performance, we employ the Markov chain approach to combine the face decisions; the combined ear decision is obtained similarly. These two decisions are then combined for the final identification decision. We evaluate the 'AND'/'OR' rules, majority voting, and weighted majority voting decision-fusion approaches. The experiments show that weighted majority voting works better than the other decision-fusion approaches, and we therefore adopt it for the final identification decision. The final results indicate that a multi-algorithm approach can certainly improve the recognition performance of multibiometric systems.
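As one concrete example of rank-level fusion, a minimal Borda count sketch (the gallery identities are made up):

```python
def borda_fusion(rankings):
    """Borda count rank-level fusion: each matcher's ranked list awards
    (n-1, n-2, ..., 0) points; identities are re-ranked by total points."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, ident in enumerate(ranking):
            scores[ident] = scores.get(ident, 0) + (n - 1 - pos)
    return sorted(scores, key=scores.get, reverse=True)

# e.g. two face matchers over the same gallery:
print(borda_fusion([["alice", "bob", "carol"],
                    ["bob", "alice", "carol"]]))
```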
Image fusion technology is increasingly used within military systems. However, the migration of the technology to non-defence applications has been limited, in terms of both functionality and processing performance. In this paper, the development of a low-cost automatic registration and adaptive image fusion system is described. In order to fully exploit commercially available processor hardware, an alternative registration and image fusion approach has been developed, and its results are presented. Additionally, the software design offers interface flexibility and user programmability, and these features are illustrated through a number of different applications.
Modern image enhancement techniques have been shown to be effective in improving the quality of imagery. However, the computational requirements of applying such algorithms to streams of video in real time often cannot be satisfied by standard microprocessor-based systems. While a scaled solution involving clusters of microprocessors may provide the necessary arithmetic capacity, deployment is limited to data-center scenarios. What is needed is a way to perform these techniques in real time on embedded platforms. A new paradigm of computing utilizing special-purpose commodity hardware, including Field-Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs), has recently emerged as an alternative to parallel computing using clusters of traditional CPUs. Recent research has shown that for many applications, such as image processing techniques requiring intense computation and large memory spaces, these hardware platforms significantly outperform microprocessors. Furthermore, while microprocessor technology has begun to stagnate, GPUs and FPGAs have continued to improve exponentially. FPGAs, flexible and powerful, are best targeted at embedded, low-power systems and specific applications. GPUs, inexpensive and readily available, are accessible to most users through standard desktop machines. Additionally, as fabrication scale continues to shrink, the heat and power consumption issues that have typically limited GPU deployment to high-end desktop workstations are becoming less of a factor. The ability to include these devices in embedded environments opens up entirely new application domains. In this paper, we investigate two state-of-the-art image processing techniques, super-resolution and the average-bispectrum speckle method, and compare FPGA and GPU implementations in terms of performance, development effort, cost, deployment options, and platform flexibility.
Radial basis function neural networks (RBFNNs) have been used for tracking precipitation in weather imagery. Techniques presented in the literature used an RBFNN to model precipitation as a combination of localized envelopes that evolve over time. A separate RBFNN was used to predict future values of the evolving envelope parameters, treating each parameter as a time series. Prediction of envelope parameters is equivalent to forecasting the associated weather events. Recently, the authors proposed an alternative RBFNN-based approach for modeling precipitation in weather imagery in a computationally efficient manner. However, the event prediction stage was not investigated, and thus any possible trade-off between efficiency and forecasting effectiveness was not examined. In order to facilitate such a test, an appropriate prediction technique is needed. In this work, an RBFNN series prediction scheme that exploits the dependence of the envelope parameters on one another is explored. Although different approaches can be employed for training the RBFNN predictor, a computationally efficient subset selection method is adopted from past work and adjusted to support parameter dependence. Simulations are presented to illustrate that simultaneous prediction of the precipitation event parameters may be advantageous.
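A generic sketch of an RBFNN fit by regularized least squares, not the authors' subset-selection training: if X holds lagged values of all envelope parameters and Y their next values, a single network predicts the parameters jointly, capturing their mutual dependence.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF design matrix for inputs X (n, d) and given centers."""
    d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(X, Y, centers, width, ridge=1e-6):
    """Ridge-regularized least-squares output weights; Y may have one
    column per envelope parameter, giving simultaneous prediction."""
    Phi = rbf_design(X, centers, width)
    return np.linalg.solve(Phi.T @ Phi + ridge * np.eye(Phi.shape[1]),
                           Phi.T @ Y)

def predict_rbf(X, centers, width, W):
    return rbf_design(X, centers, width) @ W
```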
We have developed a differential interpolation method for correcting sinusoidally scanned, distorted images. In our approach, the scanned image is processed by a line-by-line interpolation technique based on differentiation. As a natural consequence of the method, the image can be divided into four domains/zones perpendicular to the scan direction, with the domain boundaries set by our interpolation algorithm. Each domain is corrected with its own specific algorithm, and the corrected domains are reassembled to construct the corrected image. The implementation of this algorithm shows that, for our 100-pixel-wide test image, it is possible to retrieve at least 97.45% of the original image, as measured by the recovered energy, which is superior to the established methods we have applied to this problem.
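The generic resampling view of the problem can be sketched as below; the assumed pixel-position model and single-line treatment are illustrative only, since the paper's method works by differentiation and corrects four zones separately:

```python
import numpy as np

def desinusoid_line(line):
    """Resample one sinusoidally scanned line onto a uniform grid, assuming
    pixel k was captured at position x_k = (1 - cos(pi*k/(N-1))) / 2, i.e.
    one half-period of the scan mapped to [0, 1]."""
    n = len(line)
    x_actual = (1.0 - np.cos(np.pi * np.arange(n) / (n - 1))) / 2.0
    x_uniform = np.linspace(0.0, 1.0, n)
    # linear interpolation from the nonuniform samples to a uniform grid
    return np.interp(x_uniform, x_actual, line)
```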
Speckle imaging techniques make it possible to do high-resolution imaging through the turbulent atmosphere by collecting
and processing a large number of short-exposure frames, each of which effectively freezes the atmosphere. In severe seeing
conditions, when the characteristic scale of atmospheric fluctuations is much smaller than the diameter of the telescope,
the reconstructed image is dominated by "turbulence noise" caused by redundant baselines in the pupil. I describe a
generalization of aperture masking interferometry that dramatically improves imaging performance in this regime. The
approach is to partition the aperture into annuli, form the bispectra of the focal plane images formed from each annulus,
and recombine them into a synthesized bispectrum from which the object may be retrieved. This may be implemented
using multiple cameras and special mirrors, or with a single camera and a suitable pupil phase mask. I report results from
simulations as well as experimental results using telescopes at the Air Force Research Lab's Maui Space Surveillance Site.
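A minimal sketch of the bispectrum accumulation at the heart of such methods, shown in 1-D for brevity (real speckle imaging forms the 4-D bispectrum of 2-D frames):

```python
import numpy as np

def average_bispectrum(frames):
    """Mean bispectrum B(u, v) = <F(u) F(v) F*(u+v)> over many frames.
    Atmospheric phase errors cancel in the bispectrum phase, which is why
    it is accumulated over a large number of short exposures."""
    acc = None
    for f in frames:
        F = np.fft.fft(f)
        u = np.arange(len(F))
        B = (F[:, None] * F[None, :] *
             np.conj(F[(u[:, None] + u[None, :]) % len(F)]))
        acc = B if acc is None else acc + B
    return acc / len(frames)
```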
Super-resolution (SR) reconstruction refers to the process of combining a sequence of under-sampled and degraded low-resolution (LR) images to produce a single high-resolution (HR) image. The LR input images are assumed to provide slightly different views of the same scene. In the broad sense, super-resolution techniques attempt to improve spatial resolution by incorporating into the final HR result the additional details revealed in each LR image. This is the case for images captured from unmanned aerial vehicles (UAVs). These images must have sufficient overlap to produce an HR image. Additionally, information about the UAV altitude and attitude (the rotational parameters yaw, pitch, and roll) that allows us to relate the different images to a common coordinate system is also needed. This extra information can be used to obtain an SR image of the overlapping area common to all the images. In this paper, we define a metric to determine whether there is enough overlap between a set of frames to allow SR reconstruction. When this overlap exists, we use the set of registered data to reconstruct an SR image.
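One plausible form for such an overlap metric, sketched with shapely (a hypothetical stand-in, not necessarily the paper's definition): project each frame's corners to ground coordinates using the altitude and yaw/pitch/roll, then measure the area common to all frames.

```python
from functools import reduce
from shapely.geometry import Polygon

def overlap_fraction(footprints):
    """Fraction of the first frame's ground footprint covered by the region
    common to all frames; footprints are lists of (x, y) corner points."""
    polys = [Polygon(fp) for fp in footprints]
    common = reduce(lambda a, b: a.intersection(b), polys)
    return common.area / polys[0].area

# e.g. attempt SR only if overlap_fraction(footprints) exceeds a threshold
```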
Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made on automatic and semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging, and human vision remains far superior to computer vision, especially in interpreting the semantic meanings and objects in images. We present a hierarchical semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchically layered, multi-scale semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, and edges) through a layered/hierarchical ergodicity map, where ergodicity is computed based on a space-filling fractal concept and used as a region dissimilarity measure. The algorithm applies a highly scalable, efficient, and adaptive Peano-Cesaro triangulation/tiling technique to decompose the given image into a set of homogeneous regions based on low-level visual cues in a top-down manner, while the layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail, and the generated binary decomposition tree provides efficient neighbor retrieval mechanisms for generating contextual topological object/region relationships. Experiments have been conducted in the maritime image environment, where the segmented layered semantic objects include basic-level objects (i.e., sky, land, water) and deeper-level objects on the sky/land/water surfaces. Experimental results demonstrate that the proposed algorithm can robustly and efficiently segment images into layered semantic objects/regions with contextual topological relationships.
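A minimal sketch of the binary triangle-tree decomposition idea, with a plain intensity-variance homogeneity test standing in for the paper's ergodicity-based dissimilarity (matplotlib is used only for point-in-triangle rasterization):

```python
import numpy as np
from matplotlib.path import Path

def triangle_mask(shape, tri):
    """Boolean mask of pixels inside triangle tri (three (x, y) vertices)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.column_stack([xs.ravel(), ys.ravel()])
    return Path(np.asarray(tri)).contains_points(pts).reshape(h, w)

def split(tri):
    """Split an isosceles right triangle (a = right-angle vertex, b-c =
    hypotenuse) at the hypotenuse midpoint into two similar triangles."""
    a, b, c = (np.asarray(v, float) for v in tri)
    m = (b + c) / 2.0
    return [(m, a, b), (m, c, a)]

def decompose(img, tri, tol=10.0, depth=0, max_depth=10, out=None):
    """Top-down decomposition: keep splitting while the region is
    inhomogeneous (intensity standard deviation above tol)."""
    if out is None:
        out = []
    vals = img[triangle_mask(img.shape, tri)]
    if depth >= max_depth or vals.size <= 4 or vals.std() <= tol:
        out.append(tri)
        return out
    for child in split(tri):
        decompose(img, child, tol, depth + 1, max_depth, out)
    return out

# start from the two triangles covering a square image of side s:
# decompose(img, ((0, 0), (s, 0), (0, s)))
# decompose(img, ((s, s), (0, s), (s, 0)))
```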
New foundational ideas are used to define a novel approach to generic visual pattern recognition. These ideas proceed from the intrinsic equivalence of noise reduction and pattern recognition when noise reduction is taken to its theoretical limit of explicit matched filtering. This led us to the logical extension of sparse coding with basis-function transforms, used for both de-noising and pattern recognition, to the full pattern specificity of a lexicon of matched-filter pattern templates. A key hypothesis is that such a lexicon can be constructed and is, in fact, a generic visual alphabet of spatial vision; hence it provides a tractable solution for the design of a generic pattern recognition engine. Here we present the key scientific ideas, the basic design principles that emerge from these ideas, and a preliminary design of the Spatial Vision Tree (SVT). The latter is based upon a cryptographic approach whereby we measure a large aggregate estimate of the frequency of occurrence (FOO) of each pattern. These distributions are employed together with Hamming distance criteria to design a two-tier tree. Then, using information theory, the same FOO distributions are used to define a precise method for pattern representation. Finally, the experimental performance of the preliminary SVT on computer-generated test images and complex natural images is assessed.
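A crude sketch of FOO measurement and the Hamming criterion over binarized patches; the patch size, binarization, and aggregation here are assumptions, and the paper's lexicon construction is far more elaborate:

```python
import numpy as np
from collections import Counter

def pattern_foo(images, k=3):
    """Frequency-of-occurrence (FOO) counts for binarized k x k patches."""
    counts = Counter()
    for img in images:
        g = img.astype(float) - img.mean()
        h, w = g.shape
        for y in range(h - k + 1):
            for x in range(w - k + 1):
                counts[(g[y:y+k, x:x+k] > 0).tobytes()] += 1
    return counts

def hamming(p, q, k=3):
    """Hamming distance between two stored patterns, the criterion used
    together with the FOO distribution to lay out the two-tier tree."""
    a = np.frombuffer(p, dtype=bool).reshape(k, k)
    b = np.frombuffer(q, dtype=bool).reshape(k, k)
    return int(np.count_nonzero(a != b))
```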
The performance of imaging systems continues to increase and diversify as a result of the ability to measure, analyze, and improve the limiting aspects of those systems. Bloom is one such limiting aspect: in an image, bright regions noticeably bleed into darker regions, causing the phenomenon referred to as "bloom". The occurrence of bloom is theoretically a direct consequence of the diffraction pattern of an aperture. In practice, bloom is caused both optically, by non-ideal lenses, and electronically, by the bleeding of overly saturated pixels. In analyzing optical instruments, circular apertures are of particular interest since their theoretical diffraction patterns are well known, consisting of an Airy disk and alternating concentric dark and bright rings. In the image formed by a circular aperture, relative intensity can be observed by dividing all pixel intensity values by the peak pixel intensity, and bloom cut-off percentages may be analyzed from their relative distances to the threshold peak intensity. Instrument performance may thus be measured against theoretical Airy function values or by comparing different images produced by the same instrument under similar conditions. Additionally, polynomials of single-digit order may be accurately fit to the pixel-array data; by approximating the data with polynomials, pertinent information on derivatives, local slopes, and integrals may be obtained analytically as well as numerically.
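The theoretical reference curve is straightforward to compute; the sketch below evaluates the Airy relative intensity, with the polynomial-fit step indicated in comments (the degree is an assumption consistent with the single-digit orders mentioned above):

```python
import numpy as np
from scipy.special import j1

def airy_relative_intensity(x):
    """Theoretical Airy pattern I/I0 = (2 J1(x) / x)^2, where
    x = pi * D * sin(theta) / lambda for aperture diameter D."""
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)                 # limit at x -> 0 is 1
    nz = x != 0
    out[nz] = (2.0 * j1(x[nz]) / x[nz]) ** 2
    return out

# measured relative intensity: divide a pixel row through the peak by its
# maximum, then compare against the theoretical curve or fit a polynomial:
# coeffs = np.polyfit(radius, profile, deg=7)
# slope  = np.polyval(np.polyder(coeffs), radius)   # derivatives follow
```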
Acquiring iris or face images of moving subjects at larger distances using a flash to prevent motion blur quickly runs into eye safety concerns as the acquisition distance is increased. For that reason, the flutter shutter method recently proposed by Raskar et al. has generated considerable interest in the biometrics community. This paper concerns the design of shutter sequences that produce the best images. The number of possible sequences grows exponentially with both the subject's motion velocity and the desired exposure value, and the majority of them are useless. Because the exact solution leads to an intractable mixed-integer programming problem, we propose an approximate solution based on pre-screening the sequences according to the distribution of the roots of their Fourier transforms. A very fast algorithm utilizing Jury's criterion allows the testing to be done without explicitly computing the roots, making the approach practical for moderately long sequences.
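A brute-force illustration of sequence screening follows, using the minimum DFT magnitude as the quality score (a common coded-exposure criterion); the paper instead pre-screens by the root distribution via Jury's test precisely to avoid this kind of enumeration:

```python
import numpy as np
from itertools import product

def screen_sequences(length, n_open, top=5):
    """Score binary shutter sequences with a fixed exposure (number of
    open chops): prefer codes whose DFT magnitude stays far from zero,
    i.e. a well-conditioned deblurring kernel."""
    scored = []
    for bits in product((0, 1), repeat=length):
        if sum(bits) != n_open:
            continue
        scored.append((np.abs(np.fft.fft(bits)).min(), bits))
    return sorted(scored, reverse=True)[:top]

print(screen_sequences(length=12, n_open=6))
```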
Online learning is an effective incremental learning method. Compared with conventional off-line learning, online learning updates the original classifier continuously with new samples and improves its performance. In this paper, we propose a novel online learning framework for head detection in video sequences. First, an off-line classifier is trained with a few labeled samples and used for object detection in video sequences. Based on an online boosting algorithm, the detected objects are then used as new samples to train the classifier further. Instead of using another detection algorithm to label the new samples automatically, as other online learning frameworks do, we obtain correct labels from tracking. Furthermore, the weights of new samples can be obtained directly from tracking, which improves the training speed of the classifier. Experimental results on two video datasets demonstrate the efficiency and high detection rate of the framework.
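The framework is not released as code here; the sketch below shows an Oza-Russell style online boosting update of the kind such frameworks build on, with tracker-supplied labels y and weights fed to update() (the feature choice and stump design are assumptions):

```python
import numpy as np

class OnlineStump:
    """Streaming decision stump on one feature: tracks per-class means."""
    def __init__(self):
        self.sums = np.zeros(2)
        self.counts = np.full(2, 1e-9)
    def update(self, x, y):                 # y in {0, 1}
        self.sums[y] += x
        self.counts[y] += 1
    def predict(self, x):
        mu = self.sums / self.counts
        return int(abs(x - mu[1]) < abs(x - mu[0]))

class OnlineBooster:
    """Oza-Russell style online boosting over per-feature stumps."""
    def __init__(self, n_features):
        self.stumps = [OnlineStump() for _ in range(n_features)]
        self.sc = np.zeros(n_features)      # weighted correct counts
        self.sw = np.zeros(n_features)      # weighted wrong counts
        self.rng = np.random.default_rng(0)
    def update(self, x, y, weight=1.0):
        lam = weight                        # e.g. the tracker's confidence
        for m, stump in enumerate(self.stumps):
            for _ in range(self.rng.poisson(lam)):
                stump.update(x[m], y)
            if stump.predict(x[m]) == y:
                self.sc[m] += lam
                eps = self.sw[m] / (self.sc[m] + self.sw[m])
                lam *= 0.5 / max(1.0 - eps, 1e-9)   # down-weight easy samples
            else:
                self.sw[m] += lam
                eps = self.sw[m] / (self.sc[m] + self.sw[m])
                lam *= 0.5 / max(eps, 1e-9)         # up-weight hard samples
    def predict(self, x):
        score = 0.0
        for m, stump in enumerate(self.stumps):
            total = self.sc[m] + self.sw[m]
            if total == 0:
                continue
            eps = min(max(self.sw[m] / total, 1e-9), 1 - 1e-9)
            score += np.log((1 - eps) / eps) * (1.0 if stump.predict(x[m]) else -1.0)
        return int(score > 0)
```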
The online boosting algorithm has been used in many vision-related applications, such as object detection. However, obtaining good detection results requires combining a large number of weak classifiers into a strong classifier, and those weak classifiers must be updated and improved online, so the training and detection speed is inevitably reduced. This paper proposes a novel online-boosting-based learning method, called the self-learning cascade classifier. A cascade decision strategy is integrated with the online boosting procedure. The resulting system contains a sufficient number of weak classifiers while keeping the computational cost low. The cascade structure is learned and updated online, and its complexity can be increased adaptively when the detection task is more difficult. Moreover, most new samples are labeled automatically by tracking, which greatly reduces the labeling effort. We present experimental results that demonstrate the efficiency and high detection rate of the method.
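A minimal sketch of the cascade decision strategy (the stage scorers and thresholds are placeholders): early rejection is what keeps the cost low despite the large number of weak classifiers.

```python
def cascade_predict(stages, window):
    """Early-exit cascade evaluation: stages is a list of (scorer, threshold)
    pairs ordered cheap-to-expensive; most windows are rejected by the
    first few stages, so the expensive ones run rarely."""
    for scorer, threshold in stages:
        if scorer(window) < threshold:
            return False                    # rejected early
    return True                             # passed every stage: detection
```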
A simple method for enhancing low-contrast gray images is proposed. The method decomposes an image into illuminance and reflectance layers using an edge-preserving low-pass filter and enhances each layer separately. Finally, the enhanced layers are recombined to form the output. Experiments show good results on real images.
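A minimal sketch of the decompose-enhance-recompose pipeline, using a bilateral filter as the edge-preserving low-pass and simple gamma/gain rules for the two layers (all parameter choices are assumptions):

```python
import cv2
import numpy as np

def enhance(gray, sigma_s=15, sigma_r=40, gamma=0.6, gain=1.5):
    """Illuminance/reflectance enhancement of a low-contrast gray image."""
    f = gray.astype(np.float32) + 1.0
    # edge-preserving low-pass estimate of the illuminance layer
    illum = cv2.bilateralFilter(f, d=-1, sigmaColor=sigma_r, sigmaSpace=sigma_s)
    reflect = f / illum                              # reflectance layer
    illum_e = 255.0 * (illum / 255.0) ** gamma       # brighten shadows
    reflect_e = reflect ** gain                      # boost local contrast
    return np.clip(illum_e * reflect_e, 0, 255).astype(np.uint8)
```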
Pores are naturally occurring entities in bone. Changes in pore size and number are often associated with diseases such as osteoporosis, and even with microgravity during spaceflight. Studying bone perforations may yield great insight into bone's material properties, including bone density, and may contribute to identifying therapies to halt or potentially reverse bone loss. Current technologies used in this field include nuclear magnetic resonance, micro-computed tomography, and field emission scanning electron microscopy (FE-SEM) [2, 5]. However, limitations in each method hinder further advancement. The objective of this study was to assess the effectiveness of a new generation of analytical instruments, the TM-1000 tabletop SEM with a back-scattered electron (BSE) detector, for analyzing cortical bone porosities. Hindlimb-unloaded and age-matched control mouse femurs were extracted and tested in vitro for changes in pores on the periosteal surface. An important advantage of the tabletop instrument is its simplified sample preparation, which excludes the extra coating, dehydration, and fixation steps otherwise required for conventional SEM. For quantitative data, pores were treated as particles in order to use the Analyze Particles feature of the NIH ImageJ software. Several image processing techniques for background smoothing, thresholding, and filtering were employed to produce a binary image suitable for particle analysis. It was hypothesized that the unloaded bones would show an increase in pore area, as the lack of mechanical loading would affect the bone-remodeling processes taking place in and around pores. Preliminary results suggest only a slight difference in the frequency, but not the size, of pores between unloaded and control femurs.
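An illustrative Python equivalent of the ImageJ pipeline (smoothing, thresholding, filtering, then particle measurement); the filter choices and the dark-pore assumption are ours, not the study's exact settings:

```python
import numpy as np
from skimage import filters, measure, morphology

def pore_stats(img, min_area=5):
    """Treat pores as particles: smooth, threshold, label, and measure."""
    smooth = filters.gaussian(img, sigma=2)            # background smoothing
    binary = smooth < filters.threshold_otsu(smooth)   # pores appear dark
    binary = morphology.remove_small_objects(binary, min_size=min_area)
    labels = measure.label(binary)
    areas = np.array([p.area for p in measure.regionprops(labels)])
    return len(areas), (areas.mean() if len(areas) else 0.0)
```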
This paper reports on a new technique for unconstrained license plate detection in a surveillance context. The proposed algorithm quickly finds license plates by performing the following steps. The image is first preprocessed to extract edges; opening with linear structuring elements enhances the plate sides. Multiple scans using the Hausdorff distance are then made through the vertical edge map with binary templates representing a pair of vertical lines (with varying gaps to account for unknown plate sizes), efficiently pinpointing areas of the image where plates may be located. Inside those areas, the Hausdorff distance is used again, this time over the gradient image and with a family of templates corresponding to rectangles subjected to geometric transformations (to account for perspective effects). The end result is a set of candidate plate locations, each associated with a confidence level that is a function of the quality of the match between the image and the template. An additional criterion based on the symmetry of plate shapes supplies complementary information about each hypothesis, allowing many bad candidates to be rejected. Examples are given to show the performance of the proposed method.
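A minimal sketch of the template-to-edge matching core (the partial, rank-based variant is a common robustification; whether the paper uses it is not stated):

```python
import numpy as np
from scipy.spatial import cKDTree

def directed_hausdorff(template_pts, edge_pts, quantile=1.0):
    """Directed Hausdorff distance from template points to image edge
    points; quantile < 1 gives the partial variant, which tolerates
    missing edge pixels. A confidence level can be taken as a decreasing
    function of this distance."""
    d, _ = cKDTree(edge_pts).query(template_pts)
    return float(np.quantile(d, quantile))
```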
In the supercritical phase, pure fluids have great potential for industrial applications and are increasingly used by industry as nonpolluting solvents of organic materials and as media for high-yield chemical reactions. The experimental data were recorded in microgravity for sulfur hexafluoride (SF6) and on Earth for a density-matched binary mixture of methanol and partially deuterated cyclohexane (CC*-Me). We used small-angle light scattering experiments to investigate fluctuations in SF6 near the critical point and in the density-matched binary mixture CC*-Me in the absence of convective flows. For the binary mixture, we used three different filtering methods: bright field (BF, no filter), phase contrast (PC, quarter-wave plate at the focal point), and dark field (DF, small opaque object at the focal point). The power spectrum of the scattered light contains information about local inhomogeneities encountered by light traveling through the sample cell unit (SCU). We found that the spatial correlations revealed by Fourier transforms follow power laws both for SF6 in microgravity and for the binary mixture on Earth, an indication of the universality of the fluctuation mechanisms. Temporal correlations of the fluctuations were investigated using the correlation time.
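A minimal sketch of extracting the azimuthally averaged power spectrum from a scattering image, the curve to which power laws would be fitted (the binning choices are assumptions):

```python
import numpy as np

def radial_power_spectrum(img, n_bins=64):
    """Azimuthally averaged 2-D power spectrum of an image."""
    F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = (np.abs(F) ** 2).ravel()
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - w / 2.0, y - h / 2.0).ravel()   # radial wavenumber
    bins = np.linspace(0.0, r.max(), n_bins + 1)
    idx = np.digitize(r, bins) - 1
    counts = np.bincount(idx, minlength=n_bins + 1)
    sums = np.bincount(idx, weights=power, minlength=n_bins + 1)
    return sums[:n_bins] / np.maximum(counts[:n_bins], 1)
```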
To index, search, browse, and retrieve relevant material, indexes describing the video content are required. Here, a new and fast strategy for detecting abrupt and gradual transitions is proposed. A pixel-based analysis is applied to detect abrupt transitions and, in parallel, an edge-based analysis is used to detect gradual transitions. Both analyses are reinforced in a second step by a motion analysis, which significantly simplifies the threshold selection problem while preserving the computational requirements. The main advantage of the proposed system is its ability to work in real time, and the experimental results show high recall and precision values.
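A minimal sketch of the pixel-based branch for abrupt transitions (the metric and threshold are placeholders; the edge-based and motion analyses are not reproduced):

```python
import numpy as np

def abrupt_cuts(frames, thresh=30.0):
    """Flag abrupt transitions where the mean absolute pixel difference
    between consecutive frames exceeds a threshold."""
    cuts = []
    for i in range(1, len(frames)):
        d = np.mean(np.abs(frames[i].astype(float) - frames[i - 1].astype(float)))
        if d > thresh:
            cuts.append(i)
    return cuts
```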
In this paper, we use the aspect ratio of a vessel for its recognition and classification in overhead images. For aspect-ratio extraction, a morphology-based local adaptive threshold detection method is applied to obtain a more accurate outline. Applying Radon transforms to the minimum bounding rectangles of the extracted outlines yields the central axis of each vessel. The aspect ratio of a vessel can then be accurately calculated by scanning the boundary contour of each target with lines along and perpendicular to the central-axis direction. If remote sensing information such as the height and pitch angle of the shot is also considered, the real-world dimensions of a vessel can be calculated as well.
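A minimal sketch of the Radon-based axis estimation (using skimage's angle convention; the variance-peak criterion is an illustrative choice):

```python
import numpy as np
from skimage.transform import radon

def central_axis_angle(mask):
    """Estimate the central-axis orientation of a binary vessel mask: the
    Radon projection is most sharply peaked (highest variance) when the
    rays run along the long axis."""
    angles = np.arange(0.0, 180.0)
    sinogram = radon(mask.astype(float), theta=angles, circle=False)
    return angles[np.argmax(sinogram.var(axis=0))]
```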