We consider the problem of signature waveform design for code division medium-access-control (MAC) of wireless
sensor networks (WSN). In contrast to conventional randomly chosen orthogonal codes, an adaptive signature
design strategy is developed under the maximum pre-detection SINR (signal to interference plus noise ratio)
criterion. The proposed algorithm utilizes slowest descent cords of the optimization surface to move toward the
optimum solution and exhibits, upon eigenvector decomposition, linear computational complexity with respect
to signature length. Numerical and simulation studies demonstrate the performance of the proposed method
and offer comparisons with conventional signature code sets.
We use time-frequency distributions to define local stationarity of a random process. We argue that local stationarity is achieved when the Wigner spectrum is approximately factorable. We show that when that is the case the autocorrelation function is the one considered by Silverman in 1957. Other time-frequency representations are also considered.
Clouds have a nonstationary nature in that their local spectrum changes with position. We model this nonstationarity by extending the classical 1/f^γ-type spectrum. We make γ a function of position, and we show that with this choice we can generate nonstationary clouds. Our model can be used to improve denoising algorithms.
It is of interest to compare optimum beamforming communications between a random antenna array of sensors and a uniform antenna array base station with MIMO communications between the two arrays. For this purpose we examine a specific example. Channel capacity is compared for various versions of MIMO communications. Channel state information is assumed to be known (a) at the receiving array only, and (b) at both the transmitting and receiving arrays. When the signal-to-noise ratio is high, both the blind-transmitter and the knowledgeable-transmitter MIMO provide higher channel capacity than the beamformer, but at very low signal-to-noise ratio only the knowledgeable-transmitter MIMO equals the beamformer channel capacity.
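The high-SNR comparison can be sketched numerically with the standard log-det capacity formula. The 2x2 channel matrix and SNR below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical 2x2 channel (illustrative values only).
H = np.array([[1.0, 0.4],
              [0.2, 0.9]])
snr = 100.0  # high signal-to-noise ratio
s = np.linalg.svd(H, compute_uv=False)  # singular values of the channel

# Blind-transmitter MIMO: total power split equally over the two antennas.
c_mimo = sum(np.log2(1.0 + (snr / 2.0) * sv**2) for sv in s)

# Beamformer: all power placed on the strongest singular mode.
c_bf = np.log2(1.0 + snr * s[0]**2)

print(c_mimo > c_bf)  # at high SNR, spatial multiplexing wins
```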
We consider communications and network systems whose properties are characterized by the gaps of the leading eigenvalues of A^H A for some matrix A. We show that a necessary and sufficient condition for a large eigen-gap is that A is a "hub" matrix in the sense that it has dominant columns. We describe an application of this hub theory in multiple-input multiple-output (MIMO) wireless systems.
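The eigen-gap behavior can be checked numerically. The matrix below is a hypothetical example: one column is scaled to dominate the others, and the two leading eigenvalues of A^H A then separate sharply.

```python
import numpy as np

# Hypothetical example: scale one column so A has a dominant ("hub") column.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
A[:, 0] *= 10.0

# Eigenvalues of A^H A in decreasing order.
eigvals = np.sort(np.linalg.eigvalsh(A.conj().T @ A))[::-1]
gap_ratio = eigvals[0] / eigvals[1]
print(gap_ratio)  # a dominant column produces a large leading eigen-gap
```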
The aim of this research is to recompress JPEG standard images in order to minimize the storage and/or communications bandwidth requirements. In our approach, we convert existing JPEG images into JPEG 2000 images. The proposed image restoration method is applied to improve the visual quality when the bit rate becomes low and visually annoying artifacts appear in the existing JPEG image. The JPEG restoration algorithm here makes use of the DCT quantization noise model along with a Markov random field (MRF) prior model for the original image in order to formulate the restoration algorithm in a Bayesian framework. A convex model based on the maximum a posteriori (MAP) principle is applied to restore images. The restored image is then compressed with JPEG 2000. A visual quality metric based on the cumulative distribution function (CDF) has been developed to measure coding artifacts in large JPEG images. Perceptual distortion analysis is also included in this paper.
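As a rough illustration of the MAP framework, the sketch below minimizes a convex objective combining a data-fidelity term with a quadratic smoothness prior. This is a simple stand-in for the MRF prior; the DCT quantization-noise model is omitted, and all parameters and the 1-D signal are hypothetical.

```python
import numpy as np

def map_restore(y, lam=0.5, iters=200, step=0.2):
    """Gradient descent on ||x - y||^2 + lam * sum((x[i+1] - x[i])^2),
    i.e., a data-fidelity term plus a quadratic smoothness prior."""
    x = y.copy()
    for _ in range(iters):
        grad = 2.0 * (x - y)                       # data term gradient
        grad[1:] += 2.0 * lam * (x[1:] - x[:-1])   # prior term, left neighbor
        grad[:-1] += 2.0 * lam * (x[:-1] - x[1:])  # prior term, right neighbor
        x -= step * grad
    return x

y = np.array([0.0, 5.0, 0.0, 5.0, 0.0])  # noisy, oscillating observation
x = map_restore(y)
print(np.sum(np.diff(x)**2) < np.sum(np.diff(y)**2))  # restored signal is smoother
```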
Transporting MPEG-4 video over the Internet is expected to be an important component of many multimedia applications. The ISMA (Internet Streaming Media Alliance) has defined two hierarchical profiles to stream video content on wireless and narrowband networks, and over broadband-quality networks. The evaluation mechanisms to assess the quality of video will play a major role in the overall design of video communication. However, ISMA MPEG-4 video quality analysis has not been reported. This paper presents statistical video quality analysis for the ISMA MPEG-4 video clips. Shannon entropies and principal components are used to interpret the degraded video quality in video sequences. Moreover, the current trends in error-resilient video coding techniques and the error control/resilience solutions will be discussed.
It is generally accepted that cloud-like images have a 1/f^γ power spectrum. We investigate whether other spectra also produce cloud-like images, and we show by numerical simulation that the hypothesis is true. Also, we show how systems defined by fractional differential equations can generate such spectra.
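A 1/f^γ random field of the kind discussed can be synthesized by shaping white noise in the frequency domain. The sketch below uses illustrative parameters (a constant γ; the abstracts above also consider position-dependent γ and other spectra).

```python
import numpy as np

def cloud_field(n=64, gamma=3.0, seed=0):
    """Shape white noise with a 1/f**(gamma/2) amplitude spectrum so the
    power spectrum behaves like 1/f**gamma (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0                      # avoid division by zero at DC
    amplitude = f ** (-gamma / 2.0)
    noise = np.fft.fft2(rng.standard_normal((n, n)))
    return np.real(np.fft.ifft2(noise * amplitude))

img = cloud_field()
print(img.shape)  # (64, 64)
```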
We present an effective method for texture segmentation and analysis using a local spectral method. The method combines the advantages of a high spectral resolution of a joint representation given by the Pseudo-Wigner distribution with an effective adaptive principal component analysis. Performance of the method is evaluated using fabric samples with defects, medical images, and crack detection in metallic surfaces. The examples demonstrate the discrimination power of the present method for detecting even very subtle changes in the homogeneity of textures.
The JPEG 2000 standard is a wavelet-based compression methodology that achieves nearly an order of magnitude better compression performance than the existing DCT-based JPEG standard. Thus, it is desirable to convert existing JPEG images into JPEG 2000 images in order to minimize the storage and/or communications bandwidth requirements. JPEG 2000's performance enhancement can be maximized if its Region of Interest (ROI) coding option is employed. Unfortunately, this option results in a low bit rate coder, which tends to be plagued with ringing artifacts due to the abrupt truncation of high frequency wavelet coefficients at the ROI edges. In this paper, we briefly describe a novel minimum distortion transcoding technology to convert compressed JPEG images into compressed JPEG 2000 images. Experimental results indicate that the visual quality of the resulting images is improved, while the time required to download them has been dramatically decreased. We expect that these new techniques can also be applied to computer vision, medical imaging, and e-commerce applications.
A great deal of digital video quality measurement research has been performed to quantify human visual perception with compressed video clips. Since transmitted video quality is heavily dependent on the bit rate, error rate, and dropped packet rate, a new measurement paradigm is required to analyze the corrupted video. A fast eigen-based video quality metric (VQM) and visualization techniques have been developed to measure and analyze the quality of corrupted video objectively. 3-D SPIHT and MPEG-2 with forward error correction (FEC) have been tested over a video CODEC RF test-bed. The experimental results indicate that the proposed scheme is useful for a low complexity VQM.
Integrated Communications and Exploitation (ICE) is defined as "systems that provide end-to-end optimization from the output of the sensor to the exploitation analyst." ICE applies to the key phases of a military operation (e.g., intelligence, surveillance, reconnaissance, targeting, battle damage assessment, etc.). Recently, DoD reports, such as Network Centric Warfare, are beginning to emerge that characterize the importance of information superiority and of the Integrated Communications and Exploitation (ICE) problem space and offer recommendations and plans to address them.
KEYWORDS: Internet, Image compression, Network architectures, Image transmission, Analytical research, Data compression, Visualization, Data modeling, Data centers, Data transmission
We present compact image data structures and associated packet delivery techniques for effective Web caching architectures.
Presently, images on a web page are inefficiently stored, using a single image per file. Our approach is to use clustering to merge similar images into a single file in order to exploit the redundancy between images. Our studies indicate that a 30-50% image data size reduction can be achieved by eliminating the redundancies of color indexes. Attached to this file is new metadata to permit an easy extraction of images. This approach will permit a more efficient use of the cache, since a shorter list of cache references will be required. Packet and transmission delays can be reduced by 50% by eliminating redundant TCP/IP headers and connection time. Thus, this innovative paradigm for the elimination of redundancy may provide valuable benefits for optimizing packet delivery in IP networks by reducing latency and minimizing the bandwidth requirements.
The frequency operator, Ω ≡ i ∂/∂t, is not necessarily Hermitian when acting on nonstationary signals. Central moment densities of Ω and its conjugate, the temporal operator T, are proportional to the real parts of local central moments derived from the Wigner distribution. A nonrelativistic space-time-frequency Wigner distribution forms the backdrop and motivation for the present investigation.
Using artificially generated clouds, we study the spectral phase and amplitude contribution to the cloud image. This is done by reconstructing the cloud image from spectral amplitude and/or phase only. Also, images are reconstructed from partial phase and amplitude in such a way that one may control the relative contribution of the phase and amplitude. We conclude that both phase and amplitude contribute to the cloud-like appearance.
KEYWORDS: Video, Video compression, Forward error correction, 3D image processing, 3D video compression, Error analysis, Satellite communications, Satellites, 3D video streaming, Computer programming
Error Resilient and Error Concealment 3-D SPIHT (ERC-SPIHT) is a joint source channel coder developed to improve the overall performance against channel bit errors without requiring automatic repeat request (ARQ). The objective of this research is to test and validate the properties of two competing video compression algorithms in a wireless environment. The property focused on is error resiliency to the noise inherent in wireless data communication. ERC-SPIHT and MPEG-2 with forward error correction (FEC) are currently undergoing tests over a satellite communication link. The initial tests indicate that ERC-SPIHT gives excellent results in noisy channel conditions and has superior performance over MPEG-2 with FEC when communicated over a military satellite channel.
A plane monochromatic electric field incident upon a metallic interface is analyzed by way of the space-time Wigner distribution. An exact calculation is made and we discuss the behavior of the Wigner distribution at the boundary and inside the metal.
We give an explicit expression for the transform of a signal in an arbitrary representation which has first been filtered in another representation. Using this formula we connect the work of Cohen for obtaining convolution and correlation theorems in arbitrary representations with the work of Lindsey and Suter for partitioning the space of integral transforms.
We investigate the motion of a single particle in transition from one equilibrium state to another via time-frequency analysis. Between quasi-stationary regimes a sudden change of state occurs, and we show that the Cohen-Lee local variance tracks this highly nonstationary, sudden transient motion well. In the quasi-stationary regime, instantaneous equilibria yield simple harmonic motion when the amplitude of oscillation is sufficiently small. Nonlinear effects induce harmonic generation for larger amplitude oscillations.
KEYWORDS: Video, 3D video compression, Forward error correction, Video processing, Video compression, Video coding, Automatic repeat request, Wavelets, Computer programming, Error control coding
Compressed video bitstreams require protection from channel errors in a wireless channel. The three-dimensional (3-D) SPIHT coder has proved its efficiency and its real-time capability in compression of video. A forward-error-correcting (FEC) rate-compatible punctured convolutional (RCPC) channel code combined with a single ARQ (automatic repeat request) proved to be an effective means for protecting the bitstream. In this paper, the need for ARQ is eliminated by making the 3-D SPIHT bitstream more robust and resistant to channel errors. Packetization of the bitstream, and the reorganization of these packets to achieve scalability in bit rate and/or resolution in addition to robustness, is demonstrated and combined with channel coding to not only protect the integrity of the packets, but also allow detection of packet decoding failures, so that only the cleanly recovered packets are reconstructed. In extensive comparative tests, the reconstructed video is shown to be superior to that of MPEG-2, with the margin of superiority growing substantially as the channel becomes noisier.
In previous work, the spread has been presented as a means to quantify stationarity. This is done by estimating the support of the joint time-frequency correlation function known as the expected ambiguity function. Two fundamental issues concerning the spread are addressed here. The first is that the spread is not invariant under basis transformation. We address this problem by introducing the diagonally optimized spread, based on the proposition that the spread should be calculated using the covariance that is most nearly diagonal under basis transformation. The second issue is that in previous references to spread, the availability of covariance estimates has been assumed, which is an open problem for non-stationary processes. A method to provide estimates for locally stationary processes was proposed by Mallat, Papanicolaou and Zhang. In their work they derive a method which calculates the basis that most nearly diagonalizes the covariance matrix in the mean-square sense. This method is ideally suited to our situation, and we extend it to include calculation of the diagonally optimized spread. The optimally diagonalized spread provides an improved indicator of non-stationarity and illustrates the connections between spread and the diagonalizability of the covariance of a random process.
Using exact results previously obtained we investigate the spreading of a propagating pulse due to higher order dispersion. An explicit example is considered where the spread and associated quantities are calculated exactly. We also discuss the effects of higher order dispersion on the contraction of pulses. We show that the average frequency of the initial pulse plays a much more important role when higher order terms are included.
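Pulse spreading under dispersion can be reproduced with a small frequency-domain simulation. The quadratic phase and all numerical parameters below are illustrative (second-order dispersion only; the paper treats higher-order terms analytically).

```python
import numpy as np

n = 1024
t = np.linspace(-20.0, 20.0, n)
dt = t[1] - t[0]
w = 2.0 * np.pi * np.fft.fftfreq(n, dt)   # angular frequency grid
pulse = np.exp(-t**2)                     # initial Gaussian envelope

def rms_width(u):
    """Root-mean-square temporal width of |u|^2."""
    p = np.abs(u)**2
    p = p / p.sum()
    mean = (t * p).sum()
    return np.sqrt(((t - mean)**2 * p).sum())

# Apply a quadratic dispersion phase exp(-i*(beta2*z/2)*w^2) with beta2*z = 2.
spec = np.fft.fft(pulse)
out = np.fft.ifft(spec * np.exp(-0.5j * 2.0 * w**2))

print(rms_width(out) > rms_width(pulse))  # dispersion broadens the pulse
```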
Lindsey and Suter have shown that for many transforms the transform of the convolution of two functions has the same functional form. We explain the origin of this result and derive the condition on the transformation kernel for when this should be the case. In addition, we consider the general transform of the inverse and direct scale transform and obtain conditions on the kernel so that the transform gives similar functional forms.
Establishing measures for local stationarity is an open problem in the field of time-frequency analysis. One promising theoretical measure, known as the spread, provides a means for quantifying potential correlation between signal elements. In this paper we investigate the issue of generalizing techniques developed by the authors to better estimate the spread of a signal. Existing techniques estimate the spread as the rectangular region of support of the associated expected ambiguity function oriented parallel to the axes. By applying Radon Transform techniques we can produce a parameterized model which describes the orientation of the region of support providing tighter estimates of the signal spread. Examples are provided that illustrate the enhancement of the new method.
Investigating a number of different integral transforms uncovers distinct patterns in the type of scale-based convolution theorems afforded by each. It is shown that scaling convolutions behave in quite a similar fashion to translational convolution in the transform domain, such that the many diverse transforms have only a few different forms for convolution theorems. The hypothesis is put forth that the space of integral transforms is partitionable based on these forms.
Investigating a number of different integral transforms uncovers distinct patterns in the type of translation convolution theorems afforded by each. It is shown that transforms based on separable kernels (e.g., Fourier, Laplace, and their relatives) have a form of the convolution theorem providing for a transform domain product of the convolved functions. However, transforms based on kernels not separable in the function and transform variables mandate a convolution theorem of a different type; namely, in the transform domain the convolution becomes another convolution: one function convolved with the transform of the other.
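The separable-kernel case can be verified numerically for the Fourier transform: the DFT of a circular convolution equals the elementwise product of the DFTs.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(16)
y = rng.standard_normal(16)

# Circular convolution computed directly in the signal domain.
n = len(x)
conv = np.array([sum(x[k] * y[(m - k) % n] for k in range(n)) for m in range(n)])

# Convolution theorem: transform of the convolution = product of transforms.
lhs = np.fft.fft(conv)
rhs = np.fft.fft(x) * np.fft.fft(y)
print(np.allclose(lhs, rhs))  # True
```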
This paper presents an application of formal mathematics to create a high performance, low power architecture for time-frequency and time-scale computations implemented in asynchronous circuit technology that achieves significant power reductions and performance enhancements over more traditional approaches. Utilizing a combination of concepts from multivariate signal processing and asynchronous circuit design, a case study is presented dealing with a new architecture for the fast Fourier transform, an algorithm that requires globally shared results. Then, the generalized distributive law is presented as an important paradigm for advanced asynchronous hardware design.
We study a certain nonlinear operator T from L2(R, C^N) to itself under which every refinable function vector is a fixed point. The iterations T^n f of T on any f ∈ L2(R, C^N) with the Riesz basis property are investigated; they turn out to be the 'cascade algorithm' iterates of f with weights depending on f only. The paper also gives conditions for convergence of T^n f to a limit in different topologies.
We introduce fundamentals of wavelet signal processing. We review the wavelet transform and multiresolution analysis. We illustrate their properties by simple examples.
Scalar-valued Malvar wavelets have been used to eliminate the blocking effects in scalar transform coding. In this paper, we introduce vector-valued Malvar wavelets for vector-valued signals. While constructing window vectors, we present a connection between vector-valued Malvar wavelets and vector Lemarie-Meyer band-limited wavelets. Similar to scalar-valued Malvar wavelets, vector-valued Malvar wavelets have applications in eliminating the blocking effects in vector transform coding.
KEYWORDS: Image filtering, Wavelets, Mammography, Tissues, Nonlinear filtering, Breast cancer, Electronic filtering, Signal to noise ratio, Analytical research, Data processing
An automated method for detecting microcalcification clusters is presented. The algorithm begins with a digitized mammogram and outputs the center coordinates of regions of interest (ROIs). The method presented uses a non-linear function and a 12-tap least asymmetric Daubechies (LAD12) wavelet in a tree structured filter bank to increase the signal to noise level by 10.26 dB. The signal to noise level gain achieved by the filtering allows subsequent thresholding to eliminate on average 90% of the image from further consideration without eliminating actual microcalcification clusters 95% of the time. Morphological filtering and texture analysis are then used to identify individual microcalcifications. Altogether, the method successfully detected 44 of 53 microcalcification clusters (83%) with an average of 2.3 false positive clusters per image. A cluster is considered detected if it contains 3 or more microcalcifications within a 6.4 mm by 6.4 mm area. The method successfully detected 13 of the 14 malignant cases (93%).
The pyramid algorithm for computing single wavelet transform coefficients is well known. The pyramid algorithm can be implemented by using tree-structured multirate filter banks. In this paper, we propose a general algorithm to compute multiwavelet transform coefficients, by adding proper pre multirate filter banks before the vector filter banks that generate multiwavelets. The proposed algorithm can be thought of as a discrete vector-valued wavelet transform for certain discrete-time vector-valued signals. The proposed algorithm can also be thought of as a discrete multiwavelet transform for discrete-time signals. We then present some numerical experiments to illustrate the performance of the algorithm, which indicate that the energy compaction for discrete multiwavelet transforms may be better than the one for conventional discrete wavelet transforms.
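A single stage of the scalar pyramid algorithm can be sketched with the Haar filter pair (a minimal illustration; the multiwavelet algorithm in the paper adds vector filter banks and a prefilter stage, omitted here).

```python
import numpy as np

def haar_analysis(x):
    """One pyramid stage: filter with the Haar lowpass/highpass pair,
    then downsample by two."""
    x = np.asarray(x, dtype=float)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return lo, hi

def haar_synthesis(lo, hi):
    """Inverse stage: upsample and combine to reconstruct the signal."""
    x = np.empty(2 * len(lo))
    x[0::2] = (lo + hi) / np.sqrt(2.0)
    x[1::2] = (lo - hi) / np.sqrt(2.0)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
lo, hi = haar_analysis(x)
print(np.allclose(haar_synthesis(lo, hi), x))  # perfect reconstruction
```

Because the Haar pair is orthonormal, the stage also preserves signal energy, which is the property underlying the energy-compaction comparison in the abstract.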
In this research, we introduce vector-valued multiresolution analysis and vector-valued wavelets for vector-valued signal spaces. We construct vector-valued wavelets by using paraunitary vector filter bank theory. In particular, we construct vector-valued Meyer wavelets that are band-limited. We classify and construct vector-valued wavelets with the sampling property. As an application of vector-valued wavelets, multiwavelets can be constructed from vector-valued wavelets. We show that certain linear combinations of known scalar-valued wavelets may yield multiwavelets. We then present discrete vector wavelet transforms for discrete-time vector-valued (or blocked) signals, which can be thought of as a family of unitary vector transforms. In applications of vector wavelet transforms to two-dimensional transform theory, nonseparability can be easily handled.
Malvar wavelets, or the lapped orthogonal transform, have been recognized as a useful tool for eliminating block effects in transform coding. Suter and Oxley extended the Malvar wavelets to more general forms, which enable one to construct an arbitrary orthonormal basis on different intervals. In this paper, we generalize the idea of Suter and Oxley from the 1D to the 2D case and construct nonseparable Malvar wavelets, which are potentially important in multidimensional signal analysis. With nonseparable Malvar wavelets, we then construct nonseparable Lemarie-Meyer wavelets that are band-limited.
The Backus-Gilbert (BG) method provides an algorithm for solving the moment problem. Its performance can be greatly improved by incorporating an appropriate signal model, e.g., the bandlimited signals. In this research, we introduce a practical signal model called the scale-time limited signal spaces and generalize the Backus-Gilbert (BG) method for this class of signals. Since the model proposed in this work includes general wavelet bases such as the modulated Gaussians and the Mexican hat which have simple analytic forms, the required computation can be reduced.