Medical imaging modalities such as x-ray CT, MRI, PET, SPECT, and some ultrasound (US) imaging belong to non-diffraction computed tomography (CT), in which the interaction model and the external measurements are characterized by straight-line integrals of some index of the object, and image reconstruction is based on the Fourier slice theorem. This feature transfers the statistics of the physical measurements (e.g., projections in CT and k-space samples in MRI) to the image domain.
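The Fourier slice theorem mentioned above can be checked numerically: the 1-D Fourier transform of a projection (straight-line integrals along one direction) equals the central slice of the object's 2-D Fourier transform perpendicular to that direction. The sketch below, with a random toy "object" standing in for an index distribution, is illustrative only.

```python
import numpy as np

# Toy 2-D "object" (e.g., an attenuation-coefficient map); values are arbitrary.
rng = np.random.default_rng(0)
obj = rng.random((64, 64))

# Projection along axis 0: straight-line integrals for the 0-degree view.
projection = obj.sum(axis=0)

# Fourier slice theorem: 1-D FFT of the projection equals the corresponding
# central row of the object's 2-D FFT.
proj_fft = np.fft.fft(projection)
central_slice = np.fft.fft2(obj)[0, :]

print(np.allclose(proj_fft, central_slice))  # True
```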
Statistical properties are derived at four levels of the image for each modality: a single pixel (Gaussianity); any two pixels (spatially asymptotic independence, exponential correlation coefficient); a group of pixels (Markovianity, stationarity, ergodicity); and the entire image (finite normal mixture). These properties are shown to be the same for all non-diffraction CT modalities, and they lead to a unified stochastic image model and a model-based image analysis technique.
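The finite-normal-mixture property at the whole-image level can be illustrated with a minimal sketch (not the model-based method itself): pixel intensities are drawn from two Gaussian tissue classes, and a small hand-rolled EM loop recovers the class parameters from the pooled histogram. The class means, widths, and labels below are invented for illustration.

```python
import numpy as np

# Synthetic image histogram: two Gaussian tissue classes (labels are made up).
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(50, 5, 4000),    # e.g. "soft tissue"
                         rng.normal(120, 8, 2000)])  # e.g. "bone"

# EM for a 1-D two-component normal mixture, from deliberately rough guesses.
mu, sigma, w = np.array([40.0, 100.0]), np.array([10.0, 10.0]), np.array([0.5, 0.5])
for _ in range(50):
    # E-step: posterior responsibility of each class for each pixel.
    dens = w * np.exp(-0.5 * ((pixels[:, None] - mu) / sigma) ** 2) / sigma
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and standard deviations.
    w = resp.mean(axis=0)
    mu = (resp * pixels[:, None]).sum(axis=0) / resp.sum(axis=0)
    sigma = np.sqrt((resp * (pixels[:, None] - mu) ** 2).sum(axis=0) / resp.sum(axis=0))

print(np.sort(mu))  # close to the true class means (50, 120)
```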
Compared with other image analysis methods, such as graph approaches, classical snakes and active contours, level sets, ASM and AAM, FC object delineation, and MRF-based methods, this image analysis method possesses many analytic and computational advantages. Quantitative evaluations of its performance not only provide the theoretically approachable accuracy limits of the method, but also give the practically achievable performance for given images.
Theoretical developments, computational algorithms, and results from simulations, phantom images, and real medical images of different non-diffraction CT modalities obtained with this image analysis method are given in detail. Examples of its application to diagnosis and therapy are included. Potential applications of the statistical properties of non-diffraction CT images to functional image analysis are also demonstrated.
A dynamic view of neuroimaging data analysis not only emphasizes the functional specializations of brain regions, but also focuses on the massively parallel nature of the distributed, interacting regions hypothesized to process the functional tasks under investigation. Studying brain interactions leads to an emerging field: functional connectivity (FC).
FC between two brain units (neuron columns, recording sites, regions) is generally defined as the temporal correlation between their time courses. Its objective is to capture the dynamic, context-dependent processes that may lead to preferential recruitment of some units over others.
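This definition of FC as temporal correlation can be sketched with entirely synthetic time courses: two "units" driven by a shared slow rhythm show high Pearson correlation, while an unrelated unit does not. The signals and noise levels below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(500)
common = np.sin(2 * np.pi * 0.02 * t)            # shared slow rhythm

unit_a = common + 0.3 * rng.standard_normal(500)  # coupled unit
unit_b = common + 0.3 * rng.standard_normal(500)  # coupled unit
unit_c = rng.standard_normal(500)                 # unrelated unit

# FC as the Pearson correlation between time courses.
fc_ab = np.corrcoef(unit_a, unit_b)[0, 1]
fc_ac = np.corrcoef(unit_a, unit_c)[0, 1]
print(f"FC(a,b) = {fc_ab:.2f}, FC(a,c) = {fc_ac:.2f}")  # high vs. near zero
```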
Based on traditional time series theory, five classical measures (coherence, synchronization, mutual information, nonlinear correlation coefficient, and phase-locking value) have been developed to assess FC. They are applied across neuroimaging disciplines: functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), and electroencephalography (EEG).
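One of these classical measures, the phase-locking value (PLV), can be sketched as the mean resultant length of the instantaneous phase difference between two signals, with phases extracted via the Hilbert transform. The synthetic oscillations and noise level below are illustrative choices, not data from any of the cited modalities.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 2000)
# Two noisy 5 Hz oscillations with a fixed phase lag (phase-locked pair).
x = np.sin(2 * np.pi * 5 * t) + 0.2 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 5 * t + 0.8) + 0.2 * rng.standard_normal(t.size)

# Instantaneous phases from the analytic signal (Hilbert transform).
phase_x = np.angle(hilbert(x))
phase_y = np.angle(hilbert(y))

# PLV: 1 means a perfectly constant phase difference, 0 means no locking.
plv = np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
print(f"PLV = {plv:.2f}")  # near 1 for these locked signals
```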
To tackle problems inherent in the classical measures, namely (a) the assumptions of stationarity of the time series and time-invariance of FC over the entire time course, and (b) the lack of directional information flow between brain units, methods for dynamic and real-time measures of FC have been developed.
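The simplest relaxation of the stationarity assumption is a sliding-window estimate: correlation is recomputed in short windows, so a coupling change mid-recording becomes visible. This is a generic sketch of the idea, not any specific method from the text; the signals, window length, and coupling change point are all invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
common = np.sin(2 * np.pi * 0.03 * np.arange(n))
a = common + 0.3 * rng.standard_normal(n)
# Unit b is uncoupled in the first half and coupled in the second half.
b = np.where(np.arange(n) < 500,
             rng.standard_normal(n),
             common + 0.3 * rng.standard_normal(n))

# Dynamic FC: Pearson correlation in consecutive non-overlapping windows.
win = 100
dyn_fc = [np.corrcoef(a[s:s + win], b[s:s + win])[0, 1]
          for s in range(0, n - win + 1, win)]
print(np.round(dyn_fc, 2))  # low in early windows, high in later ones
```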
Rigorous examinations of each measure, from the perspectives of theoretical methodology, statistical reasoning, and quantitative evaluation, are presented. The relations between these measures, which provide the basis for consistent assessment and interpretation of FC, are given. Examples from real neuroimaging data demonstrate that FC can serve as a biomarker for brain function.
A sensor array outperforms a single sensor in source detection and identification in terms of accuracy, precision, resolution, and efficiency. Examples of sensor arrays include the Very Long Baseline Array (VLBA), Phased Array Radar (PAR), and Synthetic Aperture Radar (SAR).
An image of a scene generated by a sensor array is a visual representation of the mixture of signals from sources in the scene. By sampling the image into sub-images, source detection and identification in image analysis is translated into a sensor array signal processing framework, in which distinctive object regions in the image serve as sources and the sampled sub-images serve as sensors. Further sampling of the sub-images and averaging of pixel data yield observations for the sensors, and an analysis of the resulting covariance matrix provides information on the number of sources and source identification.
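The covariance-based source-counting step can be sketched generically: observations at M "sensors" are noisy mixtures of K independent sources, and the number of dominant eigenvalues of the sensor covariance matrix reveals K. The mixing matrix, noise level, and eigenvalue threshold below are illustrative choices, not those of the approach described above.

```python
import numpy as np

rng = np.random.default_rng(5)
K, M, T = 3, 8, 5000                       # sources, sensors, observations
sources = rng.standard_normal((K, T))
mixing = rng.standard_normal((M, K))       # how sources combine at each sensor
obs = mixing @ sources + 0.05 * rng.standard_normal((M, T))

# Eigen-analysis of the M x M sensor covariance matrix: signal eigenvalues
# stand far above the noise floor, so counting them estimates K.
cov = np.cov(obs)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
n_sources = int(np.sum(eigvals > 10 * eigvals[-1]))  # illustrative threshold
print(n_sources)  # recovers K = 3
```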
The theoretical and experimental results obtained by applying this approach to images generated by various sensor array systems are shown to be in good agreement. The similarities and differences between this approach and (1) independent component analysis (ICA), (2) time series autoregressive (AR) models, (3) multispectral image analysis, and (4) multivariate image analysis are described.
The first half of the course will provide an introduction to the basic principles of magnetic resonance imaging (MRI). The fundamentals of nuclear magnetism will first be described, followed by mechanisms of relaxation and image contrast. Then, principles of spatial encoding and the concept of k-space will be discussed, in conjunction with an investigation of the basic MRI pulse sequences. The sources of noise and common image artifacts observed in MRI will also be examined, with a discussion on means for their reduction.
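The k-space concept mentioned above can be sketched in a few lines: an MR image is recovered from its k-space samples by an inverse 2-D Fourier transform. Here a toy "image" is forward-transformed to simulate fully sampled k-space and then reconstructed; the image content is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(6)
image = rng.random((32, 32))                    # toy proton-density map

# Simulated fully sampled acquisition: k-space is the 2-D FFT of the image,
# conventionally shifted so low spatial frequencies sit at the center.
kspace = np.fft.fftshift(np.fft.fft2(image))

# Reconstruction: undo the shift and invert the transform; take the magnitude.
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

print(np.allclose(recon, image))  # True: lossless when fully sampled
```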
The second half of the course will describe statistics of MRI based on its physical imaging principles and mathematical image reconstruction procedures. Intrinsic statistics of MR data (bulk magnetizations, MR signals, and k-space samples) will be introduced first. Then, a set of statistical properties of MR images (pixels, regions, and images) will be discussed. The proven stochastic image models for MR images, the model-based image analysis methodologies, and their performance evaluation will be presented.
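One well-known consequence of these intrinsic statistics can be checked numerically: complex k-space noise is Gaussian, so a magnitude pixel after Fourier reconstruction follows a Rician distribution, which reduces to a Rayleigh distribution where the true signal is zero. The sketch below verifies the Rayleigh background mean, sigma * sqrt(pi / 2), on synthetic noise; it is a generic illustration, not the course's derivation.

```python
import numpy as np

rng = np.random.default_rng(7)
sigma = 2.0
# Independent Gaussian noise in the real and imaginary channels (no signal).
real = rng.normal(0, sigma, 100_000)
imag = rng.normal(0, sigma, 100_000)

# Magnitude image background: Rayleigh-distributed, mean = sigma * sqrt(pi/2).
magnitude = np.hypot(real, imag)
print(magnitude.mean())  # approx sigma * sqrt(pi / 2) = 2.507
```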
This course presents a theoretical framework for statistics of medical imaging. Statistical investigation of medical imaging technology not only provides a better understanding of its intrinsic features (analysis), but also leads to improved design of the technology (synthesis). With the rapid development of medical imaging has come a realization of the need for a complete statistical study.
The course content begins with a statistical description of x-ray computed tomography (CT) [Part I], Magnetic Resonance Imaging (MRI) [Part II], and non-diffraction computed tomography [Part III], based on their physical imaging principles and mathematical image reconstruction procedures.
Part I and Part II, respectively: 1) introduce the statistics of CT and MR data (photons, attenuation coefficients, and projections for CT; magnetizations, MR signals, and k-space samples for MR), covering both signal and noise components; 2) describe statistical properties of CT and MR images (at the pixel, region, and image levels), including Gaussianity, spatially asymptotic independence, exponential correlation coefficient, stationarity, ergodicity, and the autocorrelation and spectral density functions; and 3) present stochastic image models, model-based image analysis methods, and the performance evaluation of these methods.
Part III, based on the physical nature of source-medium interactions, shows that CT, MRI, and certain other medical imaging modalities belong to non-diffraction computed tomography, and explains why their images share common statistical properties.