Optical coherence tomography (OCT) has demonstrated detection and diagnostic capabilities for otitis media (OM), enabling visualization through scattering tissues, including the tympanic membrane and biofilms, and into the middle ear cavity. Preliminary results from an ongoing five-year, 235-subject study at Children’s Wisconsin and the Medical College of Wisconsin are presented. A vision-language machine learning model was trained on OCT image features and clinical metadata to differentiate OM disease states and predict required interventions. This study demonstrates the prognostic value of OCT in assessing OM and its potential for improving the management of patients with OM.
We report an integrated system for rapid sample-to-answer detection of a viral pathogen in a droplet of whole blood, comprising a two-stage microfluidic cartridge for sample processing and nucleic acid amplification, and a clip-on detection instrument that interfaces with the image sensor of a smartphone. The cartridge is designed to release RNA from the Zika virus in whole blood using chemical lysis, followed by mixing with the assay buffer for performing reverse-transcriptase loop-mediated isothermal amplification (RT-LAMP) reactions in six parallel microfluidic compartments. The battery-powered instrument heats the compartments from below, while LEDs illuminate them from above. Fluorescence generation in the compartments is dynamically monitored by the smartphone camera. We characterize the assay time and detection limits for Zika RNA and gamma-irradiated Zika virus spiked into buffer and whole blood, and compare the performance of the same assay conducted in conventional PCR tubes. Our approach of kinetically monitoring the fluorescence-generating process in the microfluidic compartments enables spatial analysis of early fluorescent “bloom” events in positive samples. We show that dynamic image analysis reduces the time required to designate an assay as positive to 22 minutes, compared with ~30-45 minutes for conventional analysis of the average fluorescence intensity of the entire compartment. We achieve a total sample-to-answer time of 17-32 minutes, while demonstrating detection of viral RNA down to 2.70×10² copies/µL and of gamma-irradiated virus down to 10³ virus particles in a single 12.5 µL droplet of whole blood.
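A rough illustration of the bloom-based early calling described above: the sketch below flags a compartment as positive once any small patch inside it rises well above the frame-0 baseline, instead of waiting for the compartment-wide mean to cross a threshold. The frame stack, compartment mask, block size, and 5-sigma rule are illustrative assumptions, not the authors' exact pipeline.

    import numpy as np

    def time_to_positive(frames, mask, block=8, k=5.0):
        """Index of the first frame where any block-sized patch inside the
        compartment rises k standard deviations above the frame-0 baseline."""
        base = frames[0][mask]
        mu, sigma = base.mean(), base.std() + 1e-9
        for t, frame in enumerate(frames):
            patch = np.where(mask, frame, 0.0)
            H, W = patch.shape
            # coarse block means: a localized bloom lifts one block early,
            # long before the whole-compartment mean crosses the threshold
            blocks = patch[: H - H % block, : W - W % block]
            blocks = blocks.reshape(H // block, block, W // block, block)
            if blocks.mean(axis=(1, 3)).max() > mu + k * sigma:
                return t
        return None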
The phase contrast (PC) method is one of the most impactful developments in the four-century history of microscopy. It provides intrinsic, nondestructive contrast of transparent specimens, such as live cells. However, PC is plagued by the halo artifact, a result of insufficient spatial coherence in the illumination field, which limits its applicability. We present a new approach for retrieving halo-free phase contrast microscopy (hfPC) images by upgrading the conventional PC microscope with an external interferometric module, which generates sufficient data for reversing the halo artifact. From four independent intensity images, our approach first retrieves haloed phase maps of the sample. We then solve for the halo-free sample transmission function using a physical model of image formation under partial spatial coherence. Using this halo-free sample transmission, we can numerically generate artifact-free PC images. The transmission can further be used to obtain quantitative information about the sample, e.g., thickness when the refractive index is known, or the dry mass of live cells over their cell cycle. We tested our hfPC method on control samples (e.g., beads, pillars) and validated its potential for biological investigation by imaging live HeLa cells, red blood cells, and neurons.
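For readers unfamiliar with the phase-retrieval step, the sketch below shows the standard four-step phase-shifting combination of four intensity frames; the 0, π/2, π, 3π/2 shifts are an illustrative assumption, the paper's module has its own acquisition scheme, and the halo correction itself requires the partial-coherence model described above.

    import numpy as np

    def four_step_phase(I0, I1, I2, I3):
        """Wrapped phase from four frames with pi/2 phase increments:
        I_k = A + B*cos(phi + k*pi/2), so I3 - I1 = 2B*sin(phi) and
        I0 - I2 = 2B*cos(phi)."""
        return np.arctan2(I3 - I1, I0 - I2)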
We present an approach for automatic diagnosis of tissue biopsies. Our methodology combines a quantitative phase imaging tissue scanner with machine learning algorithms to process the resulting data. We illustrate the performance by automatic Gleason grading of prostate specimens. The imaging system operates on the principle of interferometry and, as a result, reports on the nanoscale architecture of the unlabeled specimen. We use these data to train a random forest classifier to learn the textural behaviors of prostate samples and classify each pixel in the image into different classes. Automatic diagnosis results were computed from the segmented regions. By combining morphological features with quantitative information from the glands and stroma, logistic regression was used to discriminate regions with Gleason grade 3 versus grade 4 cancer in prostatectomy tissue. The overall accuracy of this classification, derived from a receiver operating characteristic (ROC) curve, was 82%, which is in the range of human error when interobserver variability is considered. We anticipate that our approach will provide a clinically objective and quantitative metric for Gleason grading, allowing results to be corroborated across instruments and laboratories and feeding computer algorithms for improved accuracy.
The current tissue evaluation method for breast cancer would greatly benefit from higher throughput and less inter-observer variation. Since quantitative phase imaging (QPI) measures physical parameters of tissue, it can be used to find quantitative markers, eliminating observer subjectivity. Furthermore, since the pixel values in QPI remain the same regardless of the instrument used, classifiers can be built to segment various tissue components without the need for color calibration. In this work we use a texton-based approach to segment QPI images of breast tissue into tissue components (epithelium, stroma, or lumen). A tissue microarray comprising 900 unstained cores from 400 different patients was imaged using Spatial Light Interference Microscopy. The training data were generated by manually segmenting the images for 36 cores and labeling each pixel (epithelium, stroma, or lumen). For each pixel in the data, a response vector was generated by the Leung-Malik (LM) filter bank, and these responses were clustered using the k-means algorithm to find the cluster centers (called textons). A random forest classifier was then trained to find the relationship between a pixel's label and the histogram of textons in that pixel's neighborhood. Segmentation was carried out on the validation set by calculating the texton histogram in each pixel's neighborhood and generating a label based on the model learned during training. Segmentation of the tissue into its components is an important step toward efficiently computing parameters that are markers of disease. Automated segmentation, followed by diagnosis, can improve the accuracy and speed of analysis, leading to better health outcomes.
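A condensed sketch of this pipeline, with a small Gaussian/Laplacian-of-Gaussian set standing in for the full Leung-Malik bank, and illustrative cluster counts and window sizes:

    import numpy as np
    from scipy.ndimage import gaussian_filter, gaussian_laplace
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier

    def filter_responses(img):
        # stand-in for the LM bank: Gaussians and LoGs at three scales
        feats = [gaussian_filter(img, s) for s in (1, 2, 4)]
        feats += [gaussian_laplace(img, s) for s in (1, 2, 4)]
        return np.stack(feats, axis=-1)                    # H x W x F

    def texton_features(img, kmeans, radius=8):
        resp = filter_responses(img)
        labels = kmeans.predict(resp.reshape(-1, resp.shape[-1]))
        onehot = np.eye(kmeans.n_clusters)[labels.reshape(img.shape)]
        # smoothed one-hot maps ~ per-pixel texton histograms over a window
        hists = np.stack([gaussian_filter(onehot[..., k], radius)
                          for k in range(kmeans.n_clusters)], axis=-1)
        return hists.reshape(-1, kmeans.n_clusters)

    # training: cluster responses into textons, then fit the pixel classifier
    # kmeans = KMeans(n_clusters=50).fit(training_responses)
    # rf = RandomForestClassifier().fit(texton_features(img, kmeans), pixel_labels)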
In this paper, we present an updated automatic diagnostic procedure for prostate cancer using quantitative phase imaging (QPI). In a recent report [1], we demonstrated the use of random forests for image segmentation on prostate cores imaged with QPI. Based on these label maps, we developed an algorithm to discriminate between regions with Gleason grade 3 and grade 4 prostate cancer in prostatectomy tissue. An area under the curve (AUC) of 0.79 for the receiver operating characteristic (ROC) curve is obtained for Gleason grade 4 detection in a binary classification between grade 3 and grade 4. Our dataset includes 280 benign cases and 141 malignant cases. We show that textural features in phase maps have strong diagnostic value, since they can be used in combination with the label map to detect the presence or absence of basal cells, a strong indicator for prostate carcinoma. A support vector machine (SVM) classifier trained on this new feature vector can classify cancer versus non-cancer with an error rate of 0.23 and an AUC of 0.83.
We provide a quantitative model for image formation in common-path QPI systems under partially coherent illumination. Our model explains both the phase-reduction phenomenon and the halo effect in phase measurements. We further show how to correct these artifacts with a novel iterative post-processing algorithm. Halo-free, quantitatively correct phase images of nanopillars and live cells demonstrate the validity of our method.
We report, for the first time, the use of quantitative phase imaging (QPI) images to perform automatic prostate cancer diagnosis. A machine learning algorithm is implemented to learn the textural behaviors of prostate samples imaged under QPI and produce label maps of different regions (e.g., gland, stroma, lumen) for test biopsies. From these maps, morphological and textural features are calculated to predict the outcomes of the test samples. Current performance is reported on a dataset of more than 300 cores with various diagnoses.
We propose a vector space approach for relighting a Lambertian convex object with a distant light source, whose crucial task is the decomposition of the reflectance function into albedos (or reflection coefficients) and lightings based on a set of images of the same object and its 3-D model. Making use of the fact that reflectance functions are well approximated by a low-dimensional linear subspace spanned by the first few spherical harmonics, this inverse problem can be formulated as a matrix factorization in which the basis of the subspace is encoded in the spherical harmonic matrix S. A necessary and sufficient condition on S for unique factorization is derived by introducing a new notion of matrix rank called nonseparable full rank. An SVD-based algorithm for exact factorization in the noiseless case is introduced. In the presence of noise, the algorithm is slightly modified by incorporating the positivity of albedos into a convex optimization problem. The proposed algorithms are demonstrated on a set of synthetic data.
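As a toy illustration of the factorization view (not the paper's SVD algorithm or its uniqueness analysis), the sketch below alternately solves for lightings and albedos under the simplified model B = diag(a) S L, with B the pixels-by-images intensity matrix and S the known spherical-harmonic matrix evaluated at the surface normals; all shapes and the initialization are illustrative.

    import numpy as np

    def alternate_factor(B, S, iters=50):
        """Alternating least squares for B ~ diag(a) @ S @ L."""
        a = np.ones(B.shape[0])                            # initial albedos
        for _ in range(iters):
            # fix albedos, solve for the lighting coefficients L
            L = np.linalg.lstsq(a[:, None] * S, B, rcond=None)[0]
            # fix L, solve for each pixel's albedo in closed form
            M = S @ L
            a = (M * B).sum(axis=1) / ((M * M).sum(axis=1) + 1e-12)
            a = np.maximum(a, 0)                           # positivity of albedos
        return a, L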
Fourier transform infrared (FT-IR) spectroscopic imaging is a powerful tool for obtaining chemical information from images of heterogeneous, chemically diverse samples. Significant advances in instrumentation and data processing in the recent past have led to improved instrument design and relatively widespread use of FT-IR imaging in a variety of systems, ranging from biomedical tissue to polymer composites. Various techniques for improving signal-to-noise ratio (SNR), data collection time, and spatial resolution have been proposed previously. In this paper we present an integrated framework that addresses all these factors comprehensively. We utilize the low-rank nature of the data and model the instrument point spread function to denoise the data, and then simultaneously deblur the images and estimate unknown information using a Bayesian variational approach. We show that more spatial detail and improved image quality can be obtained using the proposed framework. The proposed technique is validated through experiments on a standard USAF target and on prostate tissue specimens.
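A minimal sketch of the low-rank denoising step alone: unfold the hyperspectral cube into a pixels-by-bands matrix and truncate its SVD. The PSF modeling and Bayesian variational deblurring in the paper are omitted here, and the rank is an illustrative parameter.

    import numpy as np

    def lowrank_denoise(cube, rank=10):
        """cube: H x W x bands FT-IR image; returns its rank-r approximation."""
        H, W, B = cube.shape
        X = cube.reshape(H * W, B)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return ((U[:, :rank] * s[:rank]) @ Vt[:rank]).reshape(H, W, B)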
The depth camera is a new technology with the potential to radically change the way humans record the world and interact with 3D virtual environments. With a depth camera, one can access depth information at up to 30 frames per second, much faster than with previous 3D scanners. This speed enables new applications, in that objects are no longer required to be static for 3D sensing. There is, however, a trade-off between speed and the quality of the results. Depth images acquired with current depth cameras are noisy and have low resolution, which poses a real obstacle to incorporating the new 3D information into computer vision techniques. To overcome these limitations, the speed of the depth camera can be leveraged to combine data from multiple depth frames. Thus, we need a registration and integration method specifically designed for such low-quality data. To achieve that goal, in this paper we propose a new method to register and integrate multiple depth frames over time onto a global model represented by an implicit moving least squares surface.
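As a simplified stand-in for the integration step (the paper builds an implicit moving least squares surface; this sketch only conveys how many noisy frames combine into a cleaner estimate), the code below averages already-registered depth frames per pixel, skipping the zero-depth pixels that mark missing data:

    import numpy as np

    def integrate_depth(frames, valid_min=1e-3):
        """Weighted per-pixel running average of registered depth frames."""
        acc = np.zeros_like(frames[0], dtype=np.float64)
        wgt = np.zeros_like(acc)
        for d in frames:
            valid = d > valid_min          # near-zero depth marks missing pixels
            acc[valid] += d[valid]
            wgt[valid] += 1.0
        return np.where(wgt > 0, acc / np.maximum(wgt, 1.0), 0.0)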
We study a sampling problem where sampled signals come from a known union of shift-invariant subspaces and the sampling operator is a linear projection of the sampled signals into a fixed shift-invariant subspace. In practice, the sampling operator can be easily implemented by a multichannel uniform sampling procedure. We present necessary and sufficient conditions for invertible and stable sampling operators in this framework, and provide the corresponding minimum sampling rate. As an application of the proposed general sampling framework, we study the specific problem of spectrum-blind sampling of multiband signals. We extend the previous results of Bresler et al. by showing that a large class of sampling kernels can be used in this sampling problem, all of which lead to stable sampling at the minimum sampling rate.
With the ever-increasing computational power of modern-day processors, it has become feasible to use more robust and computationally complex algorithms that increase the resolution of images without distorting edges and contours. We present a novel image interpolation algorithm that uses the new contourlet transform to improve the regularity of object boundaries in the generated images. Using a simple wavelet-based linear interpolation scheme as our initial estimate, we apply an iterative projection process based on two constraints to drive our solution toward an improved high-resolution image. Our experimental results show that the new algorithm significantly outperforms linear interpolation in subjective quality, and in most cases in terms of PSNR as well.
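A sketch of the two-constraint iterative projection, with a wavelet-domain soft-thresholding step (PyWavelets) standing in for the paper's contourlet regularity constraint, and nearest-neighbor upsampling as a crude initial estimate; scale, iteration count, and threshold are illustrative:

    import numpy as np
    import pywt

    def pocs_upsample(lowres, scale=2, iters=20, thresh=5.0):
        H, W = lowres.shape
        x = np.kron(lowres, np.ones((scale, scale)))       # initial estimate
        for _ in range(iters):
            # constraint 1: regularity via transform-domain soft thresholding
            coeffs = pywt.wavedec2(x, "db4", level=3)
            coeffs = [coeffs[0]] + [
                tuple(pywt.threshold(c, thresh, "soft") for c in lvl)
                for lvl in coeffs[1:]]
            x = pywt.waverec2(coeffs, "db4")[: H * scale, : W * scale]
            # constraint 2: consistency with the observed low-res image
            err = lowres - x.reshape(H, scale, W, scale).mean(axis=(1, 3))
            x = x + np.kron(err, np.ones((scale, scale)))
        return x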
We study the problem of signal reconstruction from a periodic nonuniform set of samples. The system considered takes samples of delayed versions of a continuous signal at a low sampling rate, with different fractional delays for different channels. We design IIR synthesis filters so that the overall system approximates a sampling system of high sampling rate, using techniques from the model-matching problem in control theory with available software (such as Matlab). Unlike traditional signal processing methods, our approach uses techniques from control theory that convert systems with fractional delays into H∞-norm-equivalent discrete-time systems. The synthesis filters are designed to minimize the H∞ norm of the error system. As a consequence, the induced error is uniformly small over all (band-limited and band-unlimited) input signals. Experiments are also run on synthesized images.
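One concrete building block in this setting is the approximation of a fractional delay by an IIR all-pass filter; the sketch below uses the classic Thiran design from the fractional-delay literature as an illustration (the paper's H∞ model-matching optimization of the synthesis filters is a separate, richer computation):

    import numpy as np
    from math import comb
    from scipy.signal import lfilter

    def thiran(D, N=3):
        """Order-N Thiran all-pass approximating a delay of D samples (D ~ N)."""
        a = np.ones(N + 1)
        for k in range(1, N + 1):
            prod = 1.0
            for n in range(N + 1):
                prod *= (D - N + n) / (D - N + k + n)
            a[k] = (-1) ** k * comb(N, k) * prod
        return a[::-1], a       # all-pass: numerator is the reversed denominator

    # usage: y = lfilter(*thiran(3.4), x) delays x by roughly 3.4 samples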
We present the characterization and design of multidimensional (MD) oversampled FIR filter banks. In the polyphase domain, the perfect reconstruction condition for an oversampled filter bank amounts to the invertibility of the analysis polyphase matrix, which is a rectangular FIR matrix. For a nonsubsampled FIR filter bank, the analysis polyphase matrix is the FIR vector of analysis filters. A major challenge is how to extend algebraic geometry techniques, which only deal with polynomials (that is, causal filters), to handle general FIR filters. We propose a novel method to map the FIR representation of the nonsubsampled filter bank into a polynomial one by simply introducing a new variable. Using algebraic geometry and Groebner bases, we establish the existence of FIR synthesis filters given FIR analysis filters and propose methods for their computation and characterization. We explore the design problem of MD nonsubsampled FIR filter banks by a mapping approach. Finally, we extend these results to general oversampled FIR filter banks.
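In one dimension, the same perfect-reconstruction question reduces to a Bezout identity solvable by the extended Euclidean algorithm; Groebner bases are needed only in the multidimensional case. The SymPy sketch below (with an illustrative coprime filter pair) shows the 1-D analogue:

    import sympy as sp

    z = sp.symbols("z")
    H1 = sp.expand((1 + z) ** 2)      # illustrative analysis filters,
    H2 = 1 - z                        # coprime so synthesis filters exist
    s, t, g = sp.gcdex(H1, H2, z)     # s*H1 + t*H2 = g (gcd; a constant here)
    G1, G2 = sp.expand(s / g), sp.expand(t / g)
    assert sp.expand(H1 * G1 + H2 * G2) == 1   # perfect reconstruction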
This paper reviews recent best basis search algorithms. The problem under consideration is to select a representation from a dictionary which minimizes an additive cost function for a given signal. We describe a new framework of multitree dictionaries, and an efficient algorithm for finding the best representation in a multitree dictionary. We illustrate the algorithm through image compression examples.
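For concreteness, the sketch below implements the classic single-tree best-basis dynamic program with an l1 cost on a Haar wavelet-packet tree (pure NumPy, dyadic-length input); the multitree framework in the paper generalizes exactly this recursion to a much larger dictionary of trees.

    import numpy as np

    def best_basis(x, level=0, maxlevel=4, path="root"):
        """Return (cost, kept-node paths) minimizing an additive l1 cost."""
        c_here = np.abs(x).sum()
        if level == maxlevel or len(x) % 2:     # leaf: max depth or odd length
            return c_here, [path]
        a = (x[0::2] + x[1::2]) / np.sqrt(2)    # Haar approximation half
        d = (x[0::2] - x[1::2]) / np.sqrt(2)    # Haar detail half
        ca, na = best_basis(a, level + 1, maxlevel, path + "/a")
        cd, nd = best_basis(d, level + 1, maxlevel, path + "/d")
        if c_here <= ca + cd:                   # keeping the node is cheaper
            return c_here, [path]
        return ca + cd, na + nd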
We present two-dimensional filter banks with directional vanishing moments. The directional-vanishing-moment condition is crucial for the regularity of directional filter banks. However, it is a challenging task to design orthogonal filter banks with directional vanishing moments. Due to the lack of multidimensional factorization theorems, traditional one-dimensional methods cannot be extended to higher-dimensional cases. Kovacevic and Vetterli investigated the design of two-dimensional orthogonal filter banks and proposed a set of closed-form solutions called the lattice structure, where the polyphase matrix of the filter bank is characterized by a set of rotation parameters. Orthogonal filter banks with lattice structures have a simple implementation. We propose a method for designing orthogonal filter banks with directional vanishing moments based on this lattice structure, imposing the directional-vanishing-moment constraint on the rotation parameters. We find that the solutions for the rotation parameters have a special structure, from which we derive a closed-form solution for orthogonal filter banks with directional vanishing moments.
In 1992, Bamberger and Smith proposed the directional filter bank (DFB) for an efficient directional decomposition of two-dimensional (2-D) signals. Due to the nonseparable nature of the system, extending the DFB to higher dimensions while retaining its attractive features is a challenging and previously unsolved problem. This paper proposes a new family of filter banks, named 3DDFB, that achieves the directional decomposition of 3-D signals with a simple and efficient tree-structured construction. The ideal passbands of the proposed 3DDFB are rectangular-based pyramids radiating out from the origin at different orientations and tiling the whole frequency space. The proposed 3DDFB achieves perfect reconstruction. Moreover, its angular resolution can be iteratively refined by invoking more levels of decomposition through a simple expansion rule. We also introduce a 3-D directional multiresolution decomposition, named the surfacelet transform, by combining the proposed 3DDFB with the Laplacian pyramid. The 3DDFB has a redundancy factor of 3, and the surfacelet transform has a redundancy factor of up to 24/7.
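A quick check on where 24/7 comes from (our arithmetic, assuming the 3DDFB is applied to every bandpass level of the 3-D Laplacian pyramid): each pyramid level is downsampled by 2 in all three dimensions, so the pyramid's redundancy is
\[ \sum_{k \ge 0} 8^{-k} = \frac{8}{7}, \]
and combining it with the redundancy-3 3DDFB gives at most
\[ 3 \times \frac{8}{7} = \frac{24}{7}. \]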
Recent studies of linear inverse problems have recognized the sparse representation of an unknown signal in a certain basis as a useful and effective prior for solving these problems. In many multiscale bases (e.g., wavelets), signals of interest (e.g., piecewise-smooth signals) not only have few significant coefficients, but those significant coefficients are also well-organized in trees. We propose to exploit this tree-structured sparse representation as additional prior information for linear inverse problems with limited numbers of measurements. We present numerical results showing that exploiting sparse tree representations leads to better reconstruction while requiring less time than methods that assume only sparse representations.
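A minimal sketch of using the tree prior, with parent-child coupling on 1-D Haar wavelet coefficients (PyWavelets, dyadic-length signal, illustrative threshold): a coefficient survives only if it is large and its parent at the coarser scale also survived, a simple proxy for the tree-structured supports described above.

    import numpy as np
    import pywt

    def tree_threshold(x, thresh):
        coeffs = pywt.wavedec(x, "haar")
        out = [coeffs[0]]                          # keep approximation as-is
        alive = np.ones(len(coeffs[1]), dtype=bool)
        for d in coeffs[1:]:                       # coarse -> fine details
            keep = (np.abs(d) > thresh) & alive[: len(d)]
            out.append(np.where(keep, d, 0.0))
            alive = np.repeat(keep, 2)             # each parent has two children
        return pywt.waverec(out, "haar")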
In this paper we discuss recent developments in design tools and methods for multidimensional filter banks in the context of directional multiresolution representations. Due to the inherent non-separability of the filters and the lack of multidimensional factorization tools, one generally has to resort to indirect methods; one such method is the mapping technique. In the context of contourlets, we review methods for designing filters with directional vanishing moments (DVM). The DVM property is crucial in guaranteeing the nonlinear approximation efficacy of contourlets. Our approach allows for easy design of two-channel linear-phase filter banks with DVM of any order. Next we study the design, via mapping, of nonsubsampled filter banks. Our methodology allows for a fast implementation through ladder steps. The proposed design is then used to construct the nonsubsampled contourlet transform, which is particularly efficient in image denoising, as experiments in this paper show.
KEYWORDS: Absorption, Scattering, Optical coherence tomography, Spectroscopy, Signal attenuation, Tissues, Signal to noise ratio, Time-frequency analysis, Monte Carlo methods, Optical spectroscopy
We report a new algorithm for spectroscopic optical coherence tomography (SOCT) that is theoretically optimal for extracting spectral absorption profiles from turbid media when absorbing contrast agents are used. The algorithm is based on least-squares fitting of the extracted total attenuation spectra to the known absorption spectra of the contrast agents, while suppressing the contributions from spectrally dependent scattering attenuation. With this algorithm, the depth-resolved contrast agent concentration can be measured even in the presence of strong scattering. The accuracy and noise tolerance of the algorithm are analyzed by Monte Carlo simulation, and the algorithm is tested on single- and multi-layer tissue phantoms.
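A sketch of the fitting step (spectra, names, and the polynomial scattering model are illustrative): stack the known agent absorption spectra with a smooth polynomial that absorbs spectrally dependent scattering, then solve a bounded least-squares problem so the concentrations stay nonnegative.

    import numpy as np
    from scipy.optimize import lsq_linear

    def fit_concentrations(wavelengths, atten, agent_spectra, poly_order=2):
        """agent_spectra: n_agents x n_wavelengths known absorption spectra."""
        w = (wavelengths - wavelengths.mean()) / wavelengths.std()
        scatter = np.vander(w, poly_order + 1)     # smooth scattering term
        A = np.column_stack([agent_spectra.T, scatter])
        n_a = agent_spectra.shape[0]
        lb = np.r_[np.zeros(n_a), np.full(poly_order + 1, -np.inf)]
        res = lsq_linear(A, atten, bounds=(lb, np.full(A.shape[1], np.inf)))
        return res.x[:n_a]                         # agent concentrations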
We propose a new subspace decomposition scheme called anisotropic wavelet packets, which broadens the existing definition of 2-D wavelet packets. By allowing an arbitrary order of row and column decompositions, this scheme fully exploits adaptivity, which helps find the best bases to represent an image. We also show that the number of candidate tree structures in the anisotropic case is much larger than in the isotropic case. The greedy algorithm and the double-tree algorithm are then presented, and experimental results are shown.
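To make the added adaptivity concrete, the sketch below searches over row-versus-column Haar splits at every node and keeps whichever of stop/split-rows/split-columns has the lowest l1 cost, in the spirit of the double-tree search (pure NumPy, dyadic image sizes, illustrative depth):

    import numpy as np

    def haar_split(x, axis):
        ev = x.take(range(0, x.shape[axis], 2), axis)
        od = x.take(range(1, x.shape[axis], 2), axis)
        return (ev + od) / np.sqrt(2), (ev - od) / np.sqrt(2)

    def best_aniso(x, depth=4, cost=lambda b: np.abs(b).sum()):
        best = (cost(x), "leaf")                   # option 1: stop here
        if depth == 0:
            return best
        for axis, tag in ((0, "rows"), (1, "cols")):
            if x.shape[axis] % 2:                  # need an even length to split
                continue
            a, d = haar_split(x, axis)             # option 2/3: split one axis
            ca, ta = best_aniso(a, depth - 1, cost)
            cd, td = best_aniso(d, depth - 1, cost)
            if ca + cd < best[0]:
                best = (ca + cd, (tag, ta, td))
        return best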
Directional multiresolution image representations have lately attracted much attention. A number of new systems, such as the curvelet transform and the more recent contourlet transform, have been proposed. A common issue of these transforms is redundancy in the representation, an undesirable feature for certain applications (e.g., compression). Though some critically sampled transforms have been proposed in the past, they provide only limited directionality or limited flexibility in the frequency decomposition. In this paper, we propose a filter bank structure achieving a nonredundant multiresolution and multidirectional expansion of images. It can be seen as a critically sampled version of the original contourlet transform (hence the name CRISP-contourlets) in the sense that the corresponding frequency decomposition is similar to that of contourlets, dividing the whole spectrum both angularly and radially. However, instead of performing the multiscale and directional decomposition steps separately as is done in contourlets, the key idea here is to use a combined iterated nonseparable filter bank for both steps. Aside from critical sampling, the proposed transform possesses other useful properties, including perfect reconstruction, flexible configuration of the number of directions at each scale, and an efficient tree-structured implementation.
In this paper, we illustrate how a recently proposed wavelet-based estimation scheme for 2-D multichannel signals can utilize an overcomplete wavelet expansion or the BayesShrink adaptive wavelet-domain threshold to improve estimation results. The existing technique approximates the optimal estimator using a DFT and an orthonormal 2-D DWT to efficiently decorrelate the signal in both channel and space, and a wavelet-domain threshold to suppress the noise. Although this technique typically yields signal-to-noise ratio (SNR) gains of over 12 dB, results can be improved by 1 to 1.5 dB by replacing the critically sampled wavelet expansion with an overcomplete wavelet expansion. In addition, provided that the detail subbands of the original signal channels each obey a generalized Gaussian distribution, average channel SNR gains can be improved by 3 dB or more using the BayesShrink adaptive wavelet-domain threshold.
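For reference, the BayesShrink rule mentioned above in a compact PyWavelets sketch: the noise variance comes from the median absolute deviation of the finest diagonal subband, and each detail subband is soft-thresholded at T = σ_n²/σ_x (wavelet and level choices are illustrative).

    import numpy as np
    import pywt

    def bayes_shrink_denoise(img):
        coeffs = pywt.wavedec2(img, "db8", level=3)
        sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745  # MAD, finest HH
        out = [coeffs[0]]
        for level in coeffs[1:]:
            shrunk = []
            for d in level:
                sigma_x = np.sqrt(max(d.var() - sigma_n**2, 1e-12))
                shrunk.append(pywt.threshold(d, sigma_n**2 / sigma_x, "soft"))
            out.append(tuple(shrunk))
        return pywt.waverec2(out, "db8")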
The contourlet transform is a new extension to the wavelet transform in two dimensions using non-separable and directional filter banks. The contourlet expansion is composed of basis images oriented at varying directions in multiple scales, with flexible aspect ratios. With this rich set of basis images, the contourlet transform can effectively capture the smooth contours that are the dominant features in natural images with only a small number of coefficients.
We begin with a detailed study of the statistics of the contourlet coefficients of natural images, using histogram estimates of the marginal and joint distributions and mutual information measurements to characterize the dependencies between coefficients. The study reveals the non-Gaussian marginal statistics and the strong intra-subband, cross-scale, and cross-orientation dependencies of contourlet coefficients. We also find that, conditioned on the magnitudes of their generalized neighborhood coefficients, contourlet coefficients can approximately be modeled as Gaussian variables with variances directly related to the generalized neighborhood magnitudes. Based on these statistics, we model contourlet coefficients with a hidden Markov tree (HMT) model that can capture all of their inter-scale, inter-orientation, and intra-subband dependencies. We test this model in image denoising and texture retrieval, where the results are very promising. In denoising, the contourlet HMT outperforms the wavelet HMT and other classical methods in terms of both peak signal-to-noise ratio (PSNR) and visual quality; in particular, it preserves edges and oriented features better than existing methods. In texture retrieval, it shows improved performance over wavelet methods for various oriented textures.
KEYWORDS: Wavelets, Image processing, Signal to noise ratio, Denoising, Image filtering, Anisotropy, Linear filtering, Information visualization, Image compression, Wavelet transforms
Recently, the contourlet transform has been developed as a true two-dimensional representation that can capture the geometrical structure in pictorial information. Unlike other transforms that were initially constructed in the continuous domain and then discretized for sampled data, the contourlet construction starts from the discrete domain using filter banks, and then converges to a continuous-domain expansion via a multiresolution analysis framework. In this paper we study the approximation behavior of the contourlet expansion for two-dimensional piecewise smooth functions resembling natural images. Inspired by the vanishing-moment property, which is key to the good approximation behavior of wavelets, we introduce the directional vanishing moment condition for contourlets. We show that with anisotropic scaling and sufficient directional vanishing moments, contourlets essentially achieve the optimal approximation rate, O((log M)³ M⁻²) square error for the best M-term approximation, for 2-D piecewise smooth functions with C² contours. Finally, we show some numerical experiments demonstrating the potential of contourlets in several image processing applications.
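For context, the standard benchmark rates from nonlinear approximation theory (background, not restated from this abstract): for 2-D piecewise smooth functions with C² contours, the best M-term approximation errors behave as
\[ \|f - f_M^{\mathrm{wavelet}}\|^2 = O(M^{-1}), \qquad \|f - f_M^{\mathrm{contourlet}}\|^2 = O\!\big((\log M)^3\, M^{-2}\big), \]
so contourlets attain the optimal M⁻² decay up to a logarithmic factor, while wavelets fall short because they lack directionality along the contours.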
It is a challenging task to design orthogonal filter banks, especially multidimensional (MD) ones. In the one-dimensional (1-D) two-channel finite impulse response (FIR) filter bank case, several design methods exist. Among them, designs based on spectral factorization (by Smith and Barnwell) and designs based on lattice factorization (by Vaidyanathan and Hoang) are the most effective and widely used. The 1-D two-channel infinite impulse response (IIR) filter banks and associated wavelets were considered by Herley and Vetterli. All of these design methods are based on spectral factorization. Since in multiple dimensions there is no factorization theorem, traditional 1-D design methods fail to generalize. Tensor products can be used to construct MD orthogonal filter banks from 1-D orthogonal filter banks, yielding separable filter banks. In contrast, nonseparable filter banks are designed directly, and offer more freedom and better frequency selectivity. In the FIR case, Kovacevic and Vetterli designed specific two-dimensional and three-dimensional nonseparable FIR orthogonal filter banks. In the IIR case, there are few design results (if any) for MD orthogonal IIR filter banks. To design orthogonal filter banks, we must design paraunitary matrices, which leads to solving sets of nonlinear equations. The Cayley transform establishes a one-to-one mapping between paraunitary matrices and para-skew-Hermitian matrices. In contrast to the nonlinear paraunitary condition, the para-skew-Hermitian condition amounts to linear constraints on the matrix entries, which are much easier to solve. We present the complete characterization of both paraunitary FIR matrices and paraunitary IIR matrices in the Cayley domain. We also propose efficient design methods for MD orthogonal filter banks and corresponding methods to impose the vanishing-moment condition.
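A pointwise numerical check of the mapping (our sketch: at a fixed frequency a paraunitary matrix is unitary and a para-skew-Hermitian matrix is skew-Hermitian, so the correspondence reduces to the classical Cayley transform):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    H = (A - A.conj().T) / 2                 # skew-Hermitian: H^H = -H
    I = np.eye(3)
    U = (I - H) @ np.linalg.inv(I + H)       # Cayley transform of H
    assert np.allclose(U.conj().T @ U, I)    # U is unitary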