This PDF file contains the front matter associated with SPIE Proceedings Volume 7074, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
This paper introduces the theoretical development of a numerical method, named NeAREst, for solving non-negative linear inverse problems, which often arise from physical or probabilistic models, especially in image estimation with limited and indirect measurements. The Richardson-Lucy (RL) iteration is omnipresent in conventional methods that rest on probabilistic assumptions, arguments, and techniques. Without resorting to probabilistic assumptions, NeAREst retains many appealing properties of the RL iteration by using it as the substrate process, while providing much-needed mechanisms for acceleration as well as for selecting a target solution when many admissible ones exist.
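For concreteness, the RL substrate iteration for a nonnegative system A x ≈ b can be sketched as follows (a minimal illustration, not NeAREst itself; the matrix and data below are made up):

```python
def rl_step(A, b, x):
    """One Richardson-Lucy update for a nonnegative system A x ≈ b.

    A is an m-by-n nonnegative matrix (list of rows); the update rescales
    each component of x by a weighted average of the ratios b_i / (A x)_i,
    so a positive iterate stays positive."""
    m, n = len(A), len(x)
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
    col = [sum(A[i][j] for i in range(m)) for j in range(n)]
    return [x[j] / col[j] * sum(A[i][j] * b[i] / Ax[i] for i in range(m))
            for j in range(n)]

A = [[0.8, 0.2], [0.3, 0.7]]          # hypothetical blur matrix
b = [1.2, 1.7]                        # data consistent with x_true = [1, 2]
x = [1.0, 1.0]                        # positive initial guess
for _ in range(5000):
    x = rl_step(A, b, x)              # x approaches [1, 2]
```

Note the fixed point: when A x = b, every ratio equals one and the update returns x unchanged.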
One of the main goals of the STAP-BOY program has been the implementation of a space-time adaptive processing (STAP) algorithm on graphics processing units (GPUs) to reduce processing time. Within the context of GPU implementation, we have further developed algorithms that exploit data redundancy inherent in particular STAP applications. Integrating these algorithms with the GPU architecture is of primary importance for fast processing times. STAP algorithms involve solving a linear system in which the transformation matrix is a covariance matrix. A standard method estimates a covariance matrix from a data matrix, computes its Cholesky factors by one of several methods, and then solves the system by substitution. Some STAP applications have redundancy in the successive data matrices from which the covariance matrices are formed. For STAP applications in which a data matrix is updated by adding a new data row at the bottom and removing the oldest row at the top, successive data matrices have multiple rows in common. Two methods have been developed for exploiting this type of data redundancy when computing Cholesky factors. These two methods are referred to as
1) Fast QR factorizations of successive data matrices
2) Fast Cholesky factorizations of successive covariance matrices.
We have developed GPU implementations of these two methods. We show that these two algorithms exhibit reduced computational complexity when compared to benchmark algorithms that do not exploit data redundancy. More importantly, we show that when these algorithmic improvements are optimized for the GPU architecture,
the processing times of a GPU implementation of these matrix factorization algorithms may be greatly improved.
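The baseline the two fast methods improve upon, Cholesky factorization followed by substitution, can be sketched in plain Python (a reference model only; actual STAP implementations operate on complex-valued covariance estimates):

```python
import math

def cholesky(R):
    """Lower-triangular L with L @ L.T == R, for symmetric positive-definite R."""
    n = len(R)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(R[i][i] - s)
            else:
                L[i][j] = (R[i][j] - s) / L[j][j]
    return L

def solve_chol(L, b):
    """Solve (L @ L.T) x = b by forward then back substitution."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):                      # forward: L y = b
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):            # backward: L.T x = y
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x
```

The fast variants above avoid recomputing the parts of L that are unchanged when successive data matrices share rows.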
We address the discrepancy between the low arithmetic complexity of nonuniform fast Fourier transform (NUFFT) algorithms and the high latency observed in practical use of NUFFTs with large data sets, especially in multi-dimensional domains. The execution time of a NUFFT can be two orders of magnitude longer than its arithmetic complexity suggests. We examine the architectural factors behind the latency, primarily the uneven distribution of memory-reference latency across different levels of the memory hierarchy. We then introduce an effective approach that reduces the latency substantially by exploiting the geometric features of the sample-translation stage and making memory references local. The restructured NUFFT algorithms compute efficiently in both sequential and parallel settings. Experimental results and improvements for radially encoded magnetic resonance image reconstruction are presented.
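For reference, a NUFFT approximates sums of the following direct form, whose naive evaluation costs O(N · nfreq) (a toy 1-D illustration with made-up jittered sample times):

```python
import cmath
import math

def nudft(samples, times, nfreq):
    """Direct nonuniform DFT: X[k] = sum_j x_j * exp(-2*pi*i*k*t_j) for
    k = 0 .. nfreq-1, with sample times t_j in [0, 1).  A NUFFT approximates
    these same sums at much lower cost via local gridding plus an FFT."""
    return [sum(x * cmath.exp(-2j * math.pi * k * t)
                for x, t in zip(samples, times))
            for k in range(nfreq)]

N, f0 = 32, 5
times = [(j + 0.3 * math.sin(j)) / N for j in range(N)]      # jittered grid
samples = [cmath.exp(2j * math.pi * f0 * t) for t in times]  # pure tone at f0
spectrum = nudft(samples, times, N)                          # peak at bin f0
```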
This paper presents an acceleration method, using both algorithmic and architectural means, for fast calculation of local correlation coefficients, a basic image-based information processing step in template or pattern matching, image registration, motion or change detection and estimation, compensation of changes, and compression of representations, among other objectives. For real-time applications, the complexity in arithmetic operations, programming, and memory-access latency has been a dividing issue between the so-called correlation-based methods and the Fourier-domain methods. In the presented method, the complexity of calculating local correlation coefficients is reduced via an equivalent reformulation that leads to efficient array operations or enables the use of multi-dimensional fast Fourier transforms, without sacrificing local and non-linear changes or characteristics. The computation time is further reduced by utilizing modern multi-core architectures, such as the Sony-Toshiba-IBM Cell processor, with high processing speed and low power consumption.
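The flavor of such an equivalent reformulation can be seen in 1-D: expanding the correlation coefficient into running sums lets the normalization terms come from prefix sums, leaving only the cross term per window (a simplified sketch, not the paper's exact method; an FFT would accelerate the cross term as well):

```python
import math

def local_corr(signal, template):
    """Correlation coefficient of `template` against every window of `signal`.

    Prefix sums of the signal and its square make each window's normalization
    O(1); only the cross term sxy remains O(len(template)) per window."""
    n = len(template)
    ty = sum(template)
    tyy = sum(v * v for v in template)
    ps, pss = [0.0], [0.0]
    for v in signal:
        ps.append(ps[-1] + v)
        pss.append(pss[-1] + v * v)
    out = []
    for i in range(len(signal) - n + 1):
        sx = ps[i + n] - ps[i]
        sxx = pss[i + n] - pss[i]
        sxy = sum(signal[i + k] * template[k] for k in range(n))
        num = n * sxy - sx * ty
        den = math.sqrt((n * sxx - sx * sx) * (n * tyy - ty * ty))
        out.append(num / den if den else 0.0)
    return out
```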
Large gains have been made in the automation of moving object detection and tracking. As these technologies continue to mature, the size of the field of regard and the range of tracked objects continue to increase. The use of a pan-tilt-zoom (PTZ) camera enables a surveillance system to observe a nearly 360° field of regard and track objects over a wide range of distances. However, use of a PTZ camera also presents a number of challenges. The first challenge is to determine how to optimally control the pan, tilt, and zoom parameters of the camera. The second is to detect moving objects in imagery whose orientation and spatial resolution may vary on a frame-by-frame basis. This paper does not address the first challenge; we assume the camera parameters are controlled either by an operator or by an automated control process. We address only the problem of detecting moving objects in imagery whose orientation and spatial resolution may vary frame by frame.
We describe a system for detection and tracking of moving objects using a PTZ camera whose parameters are not under our control. A previously published background subtraction algorithm is extended to handle arbitrary camera rotation and zoom changes. This is accomplished by dynamically learning 360°, multi-resolution background models of the scene. The background models are represented as mosaics on 3D cubes. Tracking local scale-invariant distinctive image features allows determination of the camera parameters and the mapping from the current image to the mosaic cube. We describe the real-time implementation of the system and evaluate its performance on a variety of PTZ camera data.
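The per-pixel core of a background-subtraction detector can be sketched as an exponential running average (a deliberately simplified stand-in; the system described above additionally warps each frame onto the mosaic cube before differencing):

```python
def update_background(bg, frame, alpha=0.05):
    """Exponential running-average model: bg <- (1 - alpha)*bg + alpha*frame.
    Pixels are flattened into plain lists of gray values for simplicity."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=20.0):
    """Flag pixels that differ from the background model by more than thresh."""
    return [abs(f - b) > thresh for b, f in zip(bg, frame)]
```

The learning rate alpha and threshold here are illustrative choices, not values from the paper.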
Spin-image surface matching is a technique for locating objects in a scene by processing three-dimensional surface information from sources such as light detection and ranging (LIDAR), structured light photography, and tomography. It is attractive for parallel processing on graphics processing units (GPUs) because the two main computational steps, matching pairs of spin-images by correlation and matching pairs of points between model and scene, are explicitly parallel.
By implementing these parallel computations on the GPU, as well as recasting serial portions of the algorithm into a parallel form and structuring the algorithm to limit data exchanges between host and GPU, this project achieved an overall speedup of 20 times or more compared to conventional serial processing.
A demonstration application has been developed that allows users to select among a set of models and scenes and then applies the spin-image surface matching algorithm to match the selected models to the scene. It also has several user interface controls for changing parameters. One new parameter is a geometric consistency ratio (GCR) that quantifies the matching performance and provides a measure for discarding low-quality matches. By toggling between GPU- and
host-based processing, the application demonstrates the speedup achieved with parallelization on the GPU.
Some image processing applications require that an image meet a quality metric before it is processed. If an image is so degraded that reconstruction is difficult or impossible, it may be discarded. In this paper, we present a metric that measures sharpness relative to a reference image frame. The reference frame may be a previous input image or an output frame from the system. The sharpness metric is based on analyzing edges. We assume that the input images are similar to one another in observation angle and time.
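One simple edge-based realization of such a relative metric (an assumed stand-in, not necessarily the authors' exact formulation) compares mean gradient magnitudes between the candidate and the reference:

```python
def sharpness(img):
    """Mean gradient magnitude (forward differences) over a 2-D gray image,
    given as a list of rows; stronger edges yield a larger value."""
    h, w = len(img), len(img[0])
    total, cnt = 0.0, 0
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            total += (gx * gx + gy * gy) ** 0.5
            cnt += 1
    return total / cnt

def relative_sharpness(img, ref):
    """> 1 means `img` has stronger edges than the reference frame."""
    return sharpness(img) / sharpness(ref)
```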
Imaging plays a key role in many diverse areas of application, such as astronomy, remote sensing, microscopy, and
tomography. Owing to imperfections of measuring devices (e.g., optical degradations, limited size of sensors) and
instability of the observed scene (e.g., object motion, media turbulence), acquired images can be indistinct, noisy,
and may exhibit insufficient spatial and temporal resolution. In particular, several external effects blur images.
Techniques for recovering the original image include blind deconvolution (to remove blur) and superresolution
(SR). The stability of these methods depends on having more than one image of the same frame. Differences
between images are necessary to provide new information, but they can be almost unperceivable. State-of-the-art
SR techniques achieve remarkable results in resolution enhancement by estimating the subpixel shifts between
images, but they lack any apparatus for calculating the blurs. In this paper, after a review of current SR techniques, we describe two SR methods recently developed by the authors. First, we introduce a variational method that minimizes a regularized energy function with respect to the high-resolution image and the blurs. In this way we establish a unified framework for simultaneously estimating the blurs and the high-resolution image. By estimating the blurs we automatically estimate the shifts with subpixel accuracy, which is essential for good SR performance. Second, an innovative learning-based SR algorithm using a neural architecture is described. Comparative experiments on real data illustrate the robustness and utility of both methods.
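In generic form (the symbols here are illustrative, not necessarily the authors' exact notation), such a joint functional minimized over the high-resolution image u and the blurs h_k looks like

```latex
E(u, \{h_k\}) \;=\; \sum_{k=1}^{K} \bigl\| D (h_k * u) - g_k \bigr\|_2^2
  \;+\; \lambda\, R(u) \;+\; \gamma\, Q(\{h_k\}),
```

where the g_k are the observed low-resolution frames, D is decimation, * denotes convolution, and R and Q are regularization terms with weights λ and γ.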
Digital super-resolution refers to computational techniques that exploit the generalized sampling theorem to
extend image resolution beyond the pixel spacing of the detector, but not beyond the optical limit (Nyquist
spatial frequency) of the lens. The approach to digital super-resolution taken by the PERIODIC multi-lenslet
camera project is to solve a forward model which describes the effects of sub-pixel shifts, optical blur, and
detector sampling as a product of matrix factors. The associated system matrix is often ill-conditioned, and
convergence of iterative methods to solve for the high-resolution image may be slow.
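Written out with illustrative symbols, such a forward model factors as

```latex
g_k \;=\; D \, B_k \, S_k \, f + n_k, \qquad k = 1, \dots, K,
```

where f is the high-resolution image, S_k a subpixel shift, B_k the optical blur, D the detector sampling operator, and n_k noise; stacking the K equations yields the system matrix, whose conditioning governs the convergence noted above.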
We investigate the use of pupil phase encoding in a multi-lenslet camera system as a means to physically
precondition and regularize the computational super-resolution problem. This is an integrated optical-digital
approach that has been previously demonstrated with cubic type and pseudo-random phase elements. Traditional
multi-frame phase diversity for imaging through atmospheric turbulence uses a known smooth phase perturbation
to help recover a time series of point spread functions corresponding to random phase errors. In the context of a
multi-lenslet camera system, a known pseudo-random or cubic phase error may be used to help recover an array
of unknown point spread functions corresponding to manufacturing and focus variations among the lenslets.
Analytical approximations of translational subpixel shifts in both signal and image registration are derived by setting the derivatives of a normalized cross-correlation function to zero and solving the resulting equations. Without iterative searching, this method achieves a complexity of only O(mn) for an image of size m × n. Because no upsampling is required, computation memory is also saved. Tests using simulated signals and images show good results.
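A closely related iteration-free idea, shown here in place of the paper's derivation (which solves the NCC derivative equations directly), is closed-form parabolic refinement of a correlation peak:

```python
def parabolic_subpixel(c, k):
    """Refine an integer peak index k of a correlation sequence c to subpixel
    precision by fitting a parabola through c[k-1], c[k], c[k+1] and returning
    the abscissa of its vertex."""
    denom = c[k - 1] - 2.0 * c[k] + c[k + 1]
    return k + 0.5 * (c[k - 1] - c[k + 1]) / denom

# Exact on a sampled parabola peaking at 2.3:
c = [-(k - 2.3) ** 2 for k in range(5)]
shift = parabolic_subpixel(c, 2)      # 2.3 up to rounding
```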
Joint estimation and detection algorithms for multi-sensor, multi-target problems are often hybrids of analytical and ad hoc approaches at various levels. The intricacies of the resulting solution formulation often obscure design intuition, leaving many design choices to a largely trial-and-error approach. Random Finite Set Theory (RFST) [1, 2] is a formal generalization of classical probability theory to the random-set domain. By treating multiple targets and multiple sensors jointly, RFST provides a systematic theoretical framework for rigorous mathematical analysis. Because of its set-theoretic domain, RFST can model the randomness of missed detections, sensor failure, target appearance and disappearance, clutter, jamming, ambiguous measurements, and other practical artifacts within its probability framework. Furthermore, a rigorous statistical framework, Finite Set Statistics, has been developed for RFST that includes statistical operations such as maximum likelihood estimation, the Bayesian prediction-correction filter, sensor fusion, and even the Cramér-Rao lower bound (CRB).
In this paper we will apply RFST to jointly detect and locate a target in a power-aware wireless sensor network
setting. We will further derive the CRB using both classical and RFST approaches as verification. Then we will
use analytical results in conjunction with simulations to develop insights for choosing the design parameters.
Sensor networks have been shown to be useful in diverse applications. One important application is collaborative detection, in which multiple sensors are combined to increase detection performance. To exploit spectrum vacancies in cognitive radios, we consider collaborative spectrum sensing by sensor networks in the likelihood ratio test (LRT) framework. In the LRT, the sensors make individual decisions, which are then transmitted to a fusion center to form the final decision; this provides better detection accuracy than the individual sensor decisions. We propose the lower-bounded probability of detection (LBPD) criterion as an alternative to the conventional Neyman-Pearson (NP) criterion. Under the LBPD criterion, the detector minimizes the probability of false alarm while keeping the probability of detection above a pre-defined value. In cognitive radios, the LBPD criterion limits the probability of channel conflicts with the primary users. Under the NP and LBPD criteria, we provide explicit algorithms to compute the LRT fusion rules, the probability of false alarm, and the probability of detection for the fusion center. The fusion rules generated by the algorithms are optimal under the specified criteria. In spectrum sensing, fading channels influence detection accuracy. We investigate single-sensor detection and collaborative detection with multiple sensors under various fading channels, and derive test statistics of the LRT with known fading statistics.
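At the fusion center, the LRT over binary local decisions takes the classical Chair-Varshney form (a generic sketch with assumed per-sensor probabilities, not the paper's LBPD-specific algorithms):

```python
import math

def fusion_llr(decisions, pd, pf):
    """Log-likelihood ratio of local binary decisions at the fusion center.

    Each sensor i reports u_i in {0, 1}; pd[i] and pf[i] are its probabilities
    of detection and false alarm.  The fusion rule compares this statistic
    against a threshold chosen for the operating criterion in use."""
    llr = 0.0
    for u, d, f in zip(decisions, pd, pf):
        llr += math.log(d / f) if u else math.log((1 - d) / (1 - f))
    return llr
```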
Energy saving is one of the major concerns for low-rate personal area networks. This paper models the energy consumption of beacon-enabled, time-slotted medium access control combined with sleep scheduling in a star-network configuration under the IEEE 802.15.4 standard. We investigate two upstream (device-to-coordinator data transfer) strategies: a) the tracking strategy, in which devices wake up and check status (track the beacon) in each time slot; and b) the non-tracking strategy, in which nodes wake up only upon data arrival and stay awake until the data are transmitted to the coordinator. We consider the tradeoff between energy cost and average data-transmission delay for both strategies. Both scenarios are formulated as optimization problems, and the optimal solutions are discussed. Our results show that different data arrival rates and system parameters (such as the contention access period interval and upstream speed) favor different strategies for energy optimization under maximum-delay constraints. Hence, depending on the application and system settings, each node may choose a different strategy to optimize energy from both the self-interested and the system-wide view. We give the relations among the tunable parameters through formulas and plots that illustrate which strategy is better under the corresponding parameters. Two main points are emphasized in our results with delay constraints: on one hand, when the system settings are fixed by the coordinator, nodes in the network can intelligently change their strategies according to the data arrival rate of their applications; on the other hand, when the nodes' applications are known to the coordinator, the coordinator can tune the system parameters to achieve optimal system-wide energy consumption.
In this paper we present an implementation of a series of complex-valued operators defined in Ref. 2: Complex Multiply-Add (CMA), Complex Sum of Squares (CSS), and Complex Sum of Products (CSP). The preceding paper [2] defined these operators at an algorithmic level; we now provide actual hardware performance metrics through a detailed discussion of their implementation on an Altera Stratix II [17] FPGA device. In addition to discussing these designs in particular, we present our methodology and choice of tools for creating a pragmatic generator.
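As a behavioral reference for a complex multiply-add datapath (not the FPGA design itself), the three-multiplication form of the complex product trades a hardware multiplier for extra adders:

```python
def cmul3(a, b, c, d):
    """Complex product (a + bi)(c + di) with three real multiplications
    instead of four (Gauss's trick), a classic multiplier/adder trade-off
    in hardware datapaths."""
    k1 = c * (a + b)
    k2 = a * (d - c)
    k3 = b * (c + d)
    return k1 - k3, k1 + k2          # (real, imag)

def cma(acc, x, y):
    """Complex multiply-add acc + x*y, with (re, im) tuples."""
    re, im = cmul3(x[0], x[1], y[0], y[1])
    return acc[0] + re, acc[1] + im
```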
Fast multipliers consist of an array of AND gates, a bit reduction stage, and a final two-operand addition. There are
three widely recognized types of fast multipliers: Wallace, Dadda, and reduced area. These multipliers are distinguished
by their techniques for the bit reduction stage; however, little research has been invested in the optimization of the final
addition stage. This paper presents an approach for investigating the optimal final adder structure for the three types of
fast multipliers. Multiple designs are characterized using the Virginia Tech 0.25 μm TSMC standard cell library and
Synopsys Design Vision to determine area, power consumption, and critical delay compared with a traditional carry-lookahead adder. A figure of merit that includes these measurements is used to compare the performance of the
proposed adders. Although this analysis neglects loading, interconnect, and several other parameters that are needed to
accurately model the multipliers in a modern VLSI process technology, the results obtained using the figure of merit
suggest that the final adder in each of the three fast multipliers can indeed be optimized.
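For reference, the carry-lookahead recurrence evaluated behaviorally (in hardware, c[i+1] = g[i] OR (p[i] AND c[i]) is flattened into two-level lookahead logic rather than evaluated sequentially):

```python
def cla_add(a_bits, b_bits):
    """n-bit addition via carry-lookahead generate/propagate terms.

    Bit lists are LSB-first; carry-in is zero.  The loop models the carry
    recurrence that lookahead logic computes in parallel."""
    n = len(a_bits)
    g = [a & b for a, b in zip(a_bits, b_bits)]    # generate:  both bits set
    p = [a ^ b for a, b in zip(a_bits, b_bits)]    # propagate: exactly one set
    c = [0]
    for i in range(n):
        c.append(g[i] | (p[i] & c[i]))
    s = [p[i] ^ c[i] for i in range(n)]            # sum bits
    return s, c[n]                                 # (sum, carry-out)
```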
Geometric computations can fail because of inconsistencies due to floating-point inaccuracy. For instance, the computed intersection point between two curves does not lie on the curves: this is unavoidable when the intersection point's coordinates are not rational, and thus not representable in floating-point arithmetic. A popular heuristic approach tests equalities and nullities up to a tolerance ε. But transitivity of equality is lost: we can have A ≈ B and B ≈ C, but A ≉ C (where A ≈ B means ||A − B|| < ε for two floating-point values A and B).
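The loss of transitivity is easy to reproduce:

```python
EPS = 1e-6

def approx_eq(a, b, eps=EPS):
    """Tolerance-based equality: treat a and b as equal when |a - b| < eps."""
    return abs(a - b) < eps

# A ~ B and B ~ C, yet A and C differ by more than the tolerance.
A, B, C = 0.0, 0.7e-6, 1.4e-6
results = (approx_eq(A, B), approx_eq(B, C), approx_eq(A, C))
```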
Interval arithmetic is another, self-validated alternative; the difficulty is to limit the growth of interval widths as computations proceed. Unfortunately, interval arithmetic can decide neither equality nor nullity, even in cases where they are decidable by other means. A new approach, developed in this paper, consists in modifying the geometric problems and algorithms to account for the undecidability of the equality test and the unavoidable inaccuracy. In particular, all curves come with a non-zero thickness, so two curves (generically) cut in a region with non-zero area, of which inner and outer representations are computable. This last approach no longer assumes that an equality or nullity test is available. The question which arises is: which geometric problems can still be solved with this last approach, and which cannot? This paper begins with the description of some cases where every known arithmetic fails in practice. Then, for each arithmetic, some properties of the problems it can solve are given. We end by proposing the bases of a new approach that aims to fulfill the requirements of geometric computation.
We propose a hardware-computed estimate of the roundoff error in floating-point computations. The
estimate is computed concurrently with the execution of the program and gives an estimation of the
accuracy of the result. The intention is to have a qualitative indication when the accuracy of the result
is low. We aim for a simple implementation and a negligible effect on the execution of the program.
Large errors due to roundoff occur in some computations, producing inaccurate results. However,
usually these large errors occur only for some values of the data, so that the result is accurate in most
executions. As a consequence, the computation of an estimate of the error during execution would allow
the use of algorithms that produce accurate results most of the time. In contrast, if an error estimate is
not available, the solution is to perform an error analysis. However, this analysis is complex or impossible
in some cases, and it produces a worst-case error bound.
The proposed approach is to keep with each value an estimate of its error, which is computed when
the value is produced. This error is the sum of a propagated error, due to the errors of the operands, plus
the generated error due to roundoff during the operation. Since roundoff errors are signed values (when
rounding to nearest is used), the computation of the error allows for compensation when errors are of
different sign. However, since the error estimate is of finite precision, it suffers from similar accuracy
problems as any floating-point computation. Moreover, it is not an error bound. Ideally, the estimate
should be large when the error is large and small when the error is small. Since this cannot always be achieved with an inexact estimate, we aim to assure the first property always and the second most of the time. As a minimum, we aim to produce a qualitative indication of the error.
To indicate the accuracy of the value, the most appropriate type of error is the relative error.
However, this type has some anomalies that make it difficult to use. We propose a scaled absolute error,
whose value is close to the relative error but does not have these anomalies.
The main cost issue might be the additional storage and the narrow datapath required for the
estimate computation. We evaluate our proposal and compare it with other alternatives. We conclude
that the proposed approach might be beneficial.
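The generated error of a floating-point addition can be captured exactly with Knuth's TwoSum, which makes a software sketch of such a running estimate straightforward (for addition only; a hardware unit would use a cheaper, approximate estimate):

```python
def two_sum(a, b):
    """Knuth's TwoSum: s = fl(a + b) and the exact roundoff e, so a + b = s + e."""
    s = a + b
    bp = s - a
    e = (a - bp) + (b - (s - bp))
    return s, e

def sum_with_error_estimate(values):
    """Accumulate a sum while carrying an estimate of its roundoff error,
    in the spirit of per-value error tracking: err adds each step's generated
    error, and is itself a finite-precision (hence inexact) quantity."""
    s, err = 0.0, 0.0
    for v in values:
        s, e = two_sum(s, v)
        err += e
    return s, err

s, err = sum_with_error_estimate([0.1] * 10)   # naive sum misses 1.0 slightly
```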
We discuss the application of phase space distributions, such as the Wigner distribution,
to study wave propagation in a dispersive medium. We show how the classical and quantum
methods for particle motion can be applied to the solution of the wave equation. We derive an
explicit evolution equation for the Wigner distribution and discuss a number of methods to solve
it. We also discuss the application of phase space distributions to the evolution of noise fields in a
dispersive medium.
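For reference, the Wigner distribution of a signal x(t) in the standard convention is

```latex
W_x(t, \omega) \;=\; \frac{1}{2\pi} \int_{-\infty}^{\infty}
  x^{*}\!\left(t - \tfrac{\tau}{2}\right)
  x\!\left(t + \tfrac{\tau}{2}\right) e^{-i\omega\tau}\, d\tau ,
```

a bilinear phase-space density in time t and frequency ω.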
Structure discovery in non-linear dynamical systems is an important and challenging problem that arises in applications as varied as computational neuroscience, econometrics, and biological network discovery. Each of these systems has multiple interacting variables, and the key problem is inferring the underlying structure of the system (which variables are connected to which others) from output observations (such as multiple time trajectories of the variables).
Since such applications demand the inference of directed relationships among variables in these non-linear systems, current methods that assume a linear structure or yield undirected variable dependencies are insufficient. Hence, in this work, we present a methodology for structure discovery using an information-theoretic metric called directed time information (DTI). Using both synthetic dynamical systems and real biological datasets (kidney development and T-cell data), we demonstrate the utility of DTI in such problems.
The quantification of synchrony is important for the study of
large-scale interactions in the brain. Current synchrony measures
depend on the energy of the signals rather than their phase, and so
cannot reliably measure neural synchrony. Moreover, current methods
are limited to pairs of signals and cannot quantify synchrony
across a group of electrodes or over time-varying frequency
regions.
quantifying the synchrony between both pairs and groups of
electrodes using time-frequency analysis. The proposed measures
are applied to electroencephalogram (EEG) data to quantify neural
synchrony.
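For background, the standard phase-based pairwise measure that work in this area builds on is the phase-locking value (PLV), which depends only on the instantaneous phase difference between two signals. The sketch below is this classical measure, not the paper's two proposed time-frequency measures; the test signals are assumptions.

```python
# Phase-locking value: a classical phase-only synchrony measure for one pair.
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform (assumes even-length real input)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0
    return np.fft.ifft(X * h)

def plv(x, y):
    """PLV: 1 = perfectly locked phases, near 0 = unrelated phases."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

t = np.linspace(0, 1, 1024, endpoint=False)
rng = np.random.default_rng(1)
a = np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 10 * t + 0.5)   # same frequency, fixed phase lag
c = rng.standard_normal(1024)          # unrelated broadband noise

print(round(plv(a, b), 2))  # 1.0: constant phase difference
print(plv(a, c) < 0.5)      # much lower for unrelated noise
```

Note that PLV is blind to amplitude, which is exactly the property the abstract argues energy-based measures lack; extending such a measure to electrode groups and time-varying frequency regions is the paper's contribution.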
Modulation filtering is a technique for filtering the slowly varying envelopes of frequency subbands of a non-stationary signal, ideally without affecting the signal's phase and fine structure. Coherent modulation filtering is a potentially more effective subtype in which subband envelopes are obtained by demodulating each subband signal with a coherently detected subband carrier. In this paper we propose a coherent modulation filtering technique based on detecting the instantaneous frequency of a subband from its time-frequency representation. We show that coherent modulation filtering imposes a new bandlimiting constraint on the modulation product, together with a condition for recovering arbitrarily chosen envelopes and carriers from their product. We show that a carrier estimate based on the time-varying spectral center of gravity satisfies this bandlimiting condition as well as Loughlin's previously derived bandlimiting constraint on the instantaneous frequency of the carrier. These bandwidth constraints lead to effective, distortion-free modulation filters, offering new approaches to signal modification. The spectral center of gravity does not, however, satisfy the condition on arbitrary recovery, which somewhat limits the flexibility of coherent modulation filtering. Demonstrations are provided with speech signals.
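A minimal single-subband sketch of the coherent idea: estimate a carrier, remove it by complex demodulation, and read the envelope off the residual. The paper's time-varying center-of-gravity carrier tracker is simplified here to a constant-frequency center-of-gravity estimate over the whole analytic spectrum; the signal model is an assumption.

```python
# Coherent demodulation of one AM subband (simplified: constant carrier).
import numpy as np

fs = 8000
t = np.arange(8000) / fs                    # 1 s, periodic components below
env = 1.0 + 0.5 * np.cos(2 * np.pi * 4 * t)  # slow modulator (4 Hz)
x = env * np.cos(2 * np.pi * 1000 * t)       # 1 kHz carrier

# Analytic signal via FFT (even length assumed).
X = np.fft.fft(x)
h = np.zeros(len(x)); h[0] = h[len(x) // 2] = 1.0; h[1:len(x) // 2] = 2.0
z = np.fft.ifft(X * h)

# Carrier frequency as the spectral center of gravity of the analytic spectrum.
freqs = np.fft.fftfreq(len(x), 1 / fs)
w = np.abs(np.fft.fft(z)) ** 2
f_c = float(np.sum(freqs * w) / np.sum(w))

# Coherent demodulation: removing the carrier leaves the complex envelope,
# whose magnitude tracks the true modulator.
m = z * np.exp(-2j * np.pi * f_c * t)
print(round(f_c))                                # 1000
print(np.allclose(np.abs(m), env, atol=1e-6))    # True
```

In a full modulation filter, `m` would be lowpass-filtered and remodulated by the carrier; the bandlimiting conditions discussed in the abstract govern when that round trip is distortion-free.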
Measurement of EEG event-related potential (ERP) data has most commonly been undertaken in the time domain,
which can be difficult to interpret when separable activity overlaps in time. When the overlapping activity has
distinct frequency characteristics, however, time-frequency (TF) signal processing techniques can be useful. The current
report utilized ERP data from a cognitive task producing typical feedback-related negativity (FRN) and P300 ERP
components which overlap in time. TF transforms were computed using the binomial reduced interference distribution
(RID), and the resulting TF activity was then characterized using principal components analysis (PCA). Consistent with
previous work, results indicate that the FRN was more related to theta activity (3-7 Hz) and P300 more to delta activity
(below 3 Hz). At the same time, both time-domain measures were shown to be mixtures of TF theta and delta activity,
highlighting the difficulties with overlapping activity. The TF theta and delta measures, on the other hand, were largely
independent from each other, but also independently indexed the feedback stimulus parameters investigated. Results
support the view that TF decomposition can greatly improve separation of overlapping EEG/ERP activity relevant to
cognitive models of performance monitoring.
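The decomposition step (one time-frequency surface per trial, then PCA across trials) can be sketched with generic data. The binomial reduced interference distribution is not implemented here; random low-rank surfaces with two latent patterns stand in for RID output, and the trial counts and sizes are assumptions.

```python
# PCA over vectorized time-frequency surfaces (RID replaced by synthetic data).
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_time, n_freq = 60, 32, 16

# Two latent TF patterns (loosely "theta-like" and "delta-like" regions).
p1 = np.outer(np.hanning(n_time), np.hanning(n_freq))
p2 = np.roll(p1, n_freq // 2, axis=1)
weights = rng.standard_normal((n_trials, 2))
surfaces = (weights[:, :1, None] * p1 + weights[:, 1:, None] * p2
            + 0.01 * rng.standard_normal((n_trials, n_time, n_freq)))

# PCA: center the trial-by-feature matrix, take leading singular vectors.
M = surfaces.reshape(n_trials, -1)
M = M - M.mean(axis=0)
U, s, Vt = np.linalg.svd(M, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
print(np.sum(explained[:2]) > 0.95)  # two components capture the structure
```

Reshaping `Vt[0]` and `Vt[1]` back to `(n_time, n_freq)` recovers the component surfaces, which is how separable theta and delta activity would be visualized.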
Two studies of brain networks, performed on interictal intracranial EEGs recorded during the presurgical evaluation of
patients with epilepsy, are presented in this report. In the first we examine pairwise relationships between pre-defined
brain regions in 12 patients, 6 with medial temporal onset of seizures and 6 with frontal and parietal onset of seizures.
We demonstrate that differences in pairwise relationships between brain regions distinguish these two groups
of patients. In the second study we evaluate short-, mid-, and long-distance brain connectivity as a function of distance to
the seizure onset area in another 2 patients. We demonstrate that the measures of brain connectivity distinguish between
brain areas which are close to and far from the seizure onset area. The results of the two studies may help both define
large scale brain networks involved in the generation of seizures, and localize the area of seizure onset.
We present a method for improving small-scale, text-independent automatic speaker identification
systems. A small-scale identification system is one with a relatively small number of enrolled speakers
(20 or fewer). The proposed improvement is obtained from adaptive frequency warping. Most modern speaker
identification systems employ a short-time speech feature extraction method that relies on frequency warped
cepstral representations. One of the most popular frequency warping types is based on the mel-scale. While
the mel scale provides a substantial boost in recognition performance for large-scale systems, it is suboptimal
for small-scale systems. Our experiments show that the proposed methodology can reduce the
error rate of small-scale systems by 24% relative to the mel-scale approach.
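The mel warping that serves as the baseline here is a fixed, speaker-independent frequency map; the sketch below shows the common formula and the characteristic mel-spaced filterbank it induces. The adaptive warping proposed in the paper is not reproduced; the 26-filter, 8 kHz setup is an assumption.

```python
# The standard mel-scale frequency warp used by most cepstral front ends.
import numpy as np

def hz_to_mel(f):
    """O'Shaughnessy's common mel-scale formula."""
    return 2595.0 * np.log10(1.0 + np.asarray(f, dtype=float) / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (np.asarray(m, dtype=float) / 2595.0) - 1.0)

# Mel-spaced filterbank centers over 0-8 kHz: dense at low frequencies,
# progressively sparser at high frequencies.
centers = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(8000.0), 26))
print(round(float(hz_to_mel(1000.0))))        # 1000: 1 kHz ~ 1000 mel
print(bool(np.all(np.diff(np.diff(centers)) > 0)))  # spacing grows with freq
```

An adaptive scheme replaces this fixed map with a warp tuned to the enrolled population, which is where the reported error-rate reduction for small speaker sets comes from.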
Presented is a super-resolution method for estimating the relative time delay between transmitted and received signals.
The method is applied to the problem of accurately estimating both delay and Doppler effects from transmitted
and received signals when the transmitter and receiver are moving with respect to each other. Unlike conventional
methods based on the cross-ambiguity function (CAF), we use only the cross-correlation function and estimate
only delay, with enough accuracy that accurate scale estimates may be obtained from the delay function. While
CAF processes are two-dimensional and are based on a linear approximation of the Doppler process, the method
presented here is a one-dimensional solution based on the exact model of the Doppler process. While
we address the problem in the context of resolving both delay and Doppler, the method may also be used to obtain
super-resolution estimates of correlation delay when the delay is constant.
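A common way to push correlation-based delay estimates below the sampling grid is to fit a parabola through the cross-correlation peak and its two neighbors. This is a generic illustration of super-resolution delay estimation, not the paper's exact-Doppler-model method; the signal and the 5.3-sample delay are assumptions.

```python
# Sub-sample delay from three-point parabolic interpolation of the
# cross-correlation peak.
import numpy as np

def subsample_delay(x, y):
    """Delay of y relative to x, in (fractional) samples."""
    n = len(x)
    # Circular cross-correlation via FFT (fine for delays << n).
    R = np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(x))).real
    k = int(np.argmax(R))
    # Parabolic (3-point) interpolation around the integer peak.
    a, b, c = R[(k - 1) % n], R[k], R[(k + 1) % n]
    frac = 0.5 * (a - c) / (a - 2 * b + c)
    d = k + frac
    return d if d <= n / 2 else d - n   # map to a signed delay

t = np.arange(2048)
x = np.sin(0.03 * t) * np.exp(-((t - 1024.0) / 400.0) ** 2)
true_delay = 5.3
# Apply a 5.3-sample delay exactly, in the frequency domain.
f = np.fft.fftfreq(2048)
y = np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * f * true_delay)).real
print(abs(subsample_delay(x, y) - true_delay) < 0.1)  # True
```

The interpolation is where the "super resolution" comes from: the estimate is no longer quantized to the sampling lattice, so differentiating a sequence of such delays can yield usable scale (Doppler) estimates.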
Classical time-frequency distributions represent time- and frequency-localized energy. However, it is not an
easy task to analyze multiple signals that have been collected simultaneously. In this paper, a new concept for
non-parametric detection and classification of signals is proposed using mutual information measures in the
time-frequency domain. Time-frequency-based self- and mutual information are defined in terms of the cross
time-frequency distribution. Based on this time-frequency mutual information framework, the paper presents
applications of the proposed technique to real-world vibration data. Baseline and misaligned experimental
settings are quantitatively distinguished by the proposed technique.
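The paper's cross time-frequency mutual information is not reproduced here; as a plainly named stand-in, the sketch below compares two vibration-like signals by treating their normalized spectrogram energies as probability distributions and measuring the Jensen-Shannon divergence between them (0 means identical distributions). The signals, harmonic fault model, and window length are assumptions.

```python
# Distinguishing a "misaligned" condition from a baseline by comparing
# normalized time-frequency energy distributions (JS divergence stand-in).
import numpy as np

def spectrogram_pmf(x, nwin=64):
    """Magnitude-squared STFT normalized to sum to 1 (rectangular window)."""
    frames = x[: len(x) // nwin * nwin].reshape(-1, nwin)
    S = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return S / S.sum()

def js_divergence(p, q, eps=1e-12):
    p, q = p.ravel() + eps, q.ravel() + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return float(0.5 * kl(p, m) + 0.5 * kl(q, m))

rng = np.random.default_rng(4)
t = np.arange(8192)
baseline = np.sin(0.2 * t) + 0.1 * rng.standard_normal(8192)
misaligned = np.sin(0.2 * t) + 0.8 * np.sin(0.4 * t) \
             + 0.1 * rng.standard_normal(8192)   # a 2x harmonic appears
baseline2 = np.sin(0.2 * t + 1.0) + 0.1 * rng.standard_normal(8192)

d_same = js_divergence(spectrogram_pmf(baseline), spectrogram_pmf(baseline2))
d_diff = js_divergence(spectrogram_pmf(baseline), spectrogram_pmf(misaligned))
print(d_diff > d_same)   # the fault condition separates from a baseline repeat
```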
Entropy is used as a number indicating the amount of uncertainty or information in a source, which means that noise
cannot be distinguished from information simply by measuring entropy. Nevertheless, the Renyi entropy can be used to
calculate entropy on a pixel-wise basis. When the source of information is a digital image, a value of entropy can be
assigned to each pixel of the image, and consequently entropy histograms of images can be obtained. Entropy histograms
describe the information content of an image in much the same way that image histograms describe the
distribution of gray levels. Hence, histograms of entropy can be used to quantify differences in the information content
of images. The pixel-wise entropy of digital images has been calculated through the use of a spatial/spatial-frequency
distribution. The generalized Renyi entropy and a normalized windowed pseudo-Wigner distribution (PWD) have been
selected to obtain particular pixel-wise entropy values. In this way, a histogram of entropy values has been derived.
In this paper, we first review the use of the Renyi entropy as a measure of the information content extracted
from a time-frequency representation. Second, a particular measure based on a high-order Renyi entropy distribution is
analyzed. Examples are presented in the areas of image fusion and blind image quality assessment. Experiments on
real data in different application domains illustrate the robustness and utility of the method.
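The pipeline described above, per-pixel distribution, Renyi entropy, then a histogram of the entropy map, can be sketched with a plain local window standing in for the paper's normalized pseudo-Wigner distribution. The window size, entropy order, and test patches below are assumptions.

```python
# Pixel-wise Renyi entropy map and its histogram (local window stands in
# for the normalized windowed pseudo-Wigner distribution).
import numpy as np

def renyi_entropy(p, alpha=3.0):
    """Renyi entropy of a normalized distribution (alpha != 1)."""
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def entropy_map(img, w=3, alpha=3.0):
    H = np.zeros(img.shape, dtype=float)
    pad = np.pad(img.astype(float), w, mode="edge")
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 2 * w + 1, j:j + 2 * w + 1]
            p = np.ravel(win / win.sum())
            H[i, j] = renyi_entropy(p, alpha)
    return H

rng = np.random.default_rng(5)
flat = np.full((16, 16), 100.0)                  # uniform windows
textured = 100.0 + 80.0 * rng.random((16, 16))   # structured windows
H_flat, H_tex = entropy_map(flat), entropy_map(textured)
print(H_flat.mean() > H_tex.mean())   # flatter local distributions -> higher entropy
hist, _ = np.histogram(H_tex, bins=16)  # the "entropy histogram" of the abstract
```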
For wide-band transmission, geolocation modeling using the wide-band cross-ambiguity function (WBCAF) is preferable
to conventional CAF modeling, which assumes that the transmitted signal is essentially a sinusoid. We compare
the accuracy of two super-resolution techniques for joint estimation of the time-scale (TS) and TDOA
parameters in the WBCAF geolocation model. Assuming a complex-valued signal representation, both techniques
exploit the fact that the maximum value of the magnitude of the WBCAF is attained when the WBCAF is real-valued.
The first technique enhances a known joint estimation method based on sinc interpolation and 2-D Newton root-finding
by (1) extending the original algorithm to handle complex-valued signals, and (2) reformulating the original algorithm
to estimate the difference in radial velocities of the receivers (DV) rather than time scale, which avoids machine
precision problems encountered with the original method. The second technique makes a rough estimate of TDOA on
the sampling lattice by peak-picking the real part of the cross-correlation function of the received signals. Then, by
interpolating the phase of the WBCAF, it obtains a root of the phase in the vicinity of this correlation peak, which
provides a highly accurate TDOA estimate. TDOA estimates found in this way are differentiated in time to obtain DV
estimates. We evaluate both super-resolution techniques applied to simulated received electromagnetic signals which
are linear combinations of complex sinusoids having randomly generated amplitudes, phases, TS, and TDOA. Over a
wide SNR range, TDOA estimates found with the enhanced sinc/Newton technique are at least an order of magnitude
more accurate than those found with conventional CAF, and the phase interpolated TDOA estimates are 3-4 times
more accurate than those found with the enhanced sinc/Newton technique. In the 0-10 dB SNR range, TS estimates
found with the enhanced sinc/Newton technique are a little more accurate than those found with phase interpolation.
Moreover, the TS estimate errors observed with both super-resolution techniques are too small for a CAF-type grid
search to realize in comparable time.
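A classical cousin of the phase-interpolation step is to estimate TDOA from the slope of the cross-spectrum phase, which also delivers sub-sample accuracy from the same data as the coarse correlation peak. This is a generic sketch, not the WBCAF root-finding procedure; the white-noise signal model and the bin range used for the fit are assumptions.

```python
# Sub-sample TDOA from a least-squares line through the cross-spectrum phase.
import numpy as np

rng = np.random.default_rng(6)
n = 4096
x = rng.standard_normal(n)
true_tdoa = 3.7                                 # fractional samples
f = np.fft.rfftfreq(n)
y = np.fft.irfft(np.fft.rfft(x) * np.exp(-2j * np.pi * f * true_tdoa), n)

# For y = x delayed by tau, the cross-spectrum X * conj(Y) has phase
# 2*pi*f*tau; a line fit through the unwrapped phase recovers tau.
cross = np.fft.rfft(x) * np.conj(np.fft.rfft(y))
phase = np.unwrap(np.angle(cross))
k = slice(1, n // 4)                            # skip DC and high bins
slope = np.polyfit(f[k], phase[k], 1)[0]
tdoa_hat = slope / (2 * np.pi)
print(abs(tdoa_hat - true_tdoa) < 0.05)         # True
```

Differentiating a sequence of such TDOA estimates in time yields difference-in-radial-velocity (DV) estimates, mirroring the workflow described for the second technique above.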
We develop an automatic smoothing procedure for estimating the spectral density of a random process. The procedure is based on smoothing the periodogram with a variable bandwidth and a spline interpolation. An effective varying bandwidth is obtained by approximating the log periodogram with a step function whose level-change positions are determined using a dynamic programming technique. We show that the step function can be obtained by minimizing the cost function D(C||μk) for a given K, and that the number of partitions K can itself be chosen by minimizing another cost function L(K). Numerical examples show that the resulting estimates are good representations of the true spectra.
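The core step above, an optimal K-level step-function approximation found by dynamic programming, can be sketched as follows. The paper's cost functions D(C||μk) and L(K) are replaced by plain squared error, and the stylized log-periodogram data and K=3 are assumptions.

```python
# Optimal K-segment piecewise-constant fit via dynamic programming.
import numpy as np

def best_step_fit(y, K):
    """Exact minimum-SSE K-segment step-function fit to y (O(K n^2))."""
    n = len(y)
    c1 = np.concatenate(([0.0], np.cumsum(y)))
    c2 = np.concatenate(([0.0], np.cumsum(y ** 2)))
    def sse(i, j):                     # squared error of one segment y[i:j]
        s, s2, m = c1[j] - c1[i], c2[j] - c2[i], j - i
        return s2 - s * s / m
    INF = float("inf")
    cost = np.full((K + 1, n + 1), INF)
    cost[0, 0] = 0.0
    back = np.zeros((K + 1, n + 1), dtype=int)
    for k in range(1, K + 1):          # k segments covering y[:j]
        for j in range(k, n + 1):
            for i in range(k - 1, j):  # i = start of the last segment
                c = cost[k - 1, i] + sse(i, j)
                if c < cost[k, j]:
                    cost[k, j], back[k, j] = c, i
    # Backtrack the change points and fill in segment means.
    cuts, j = [], n
    for k in range(K, 0, -1):
        i = back[k, j]
        cuts.append((i, j))
        j = i
    fit = np.empty(n)
    for i, j in cuts:
        fit[i:j] = np.mean(y[i:j])
    return fit

rng = np.random.default_rng(7)
y = np.concatenate([np.zeros(40), 3.0 * np.ones(30), np.zeros(50)]) \
    + 0.3 * rng.standard_normal(120)   # stylized log-periodogram, one band
fit = best_step_fit(y, K=3)
print(np.mean((fit - y) ** 2) < 0.3)   # steps track the level changes
```

In the full procedure the fitted levels define a varying effective bandwidth, and a spline through them yields the final smooth spectral estimate.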