This research introduces a mode-specific model of visual saliency that can be used to highlight likely lesion locations
and potential errors (false positives and false negatives) in single-mode PET and MRI images and multi-modal fused
PET/MRI images. Fused-modality digital images are a relatively recent technological improvement in medical imaging;
therefore, a novel component of this research is to characterize the perceptual response to these fused images. Three
different fusion techniques were compared to single-mode displays in terms of observer error rates using synthetic
human brain images generated from an anthropomorphic phantom. An eye-tracking experiment was performed with
naïve (non-radiologist) observers who viewed the single- and multi-modal images. The eye-tracking data allowed the
errors to be classified into four categories: false positives, search errors (false negatives that were never fixated), recognition
errors (false negatives fixated for less than 350 ms), and decision errors (false negatives fixated for more than
350 ms). A saliency model consisting of a set of differentially weighted low-level feature maps is derived from the
known error and ground truth locations extracted from a subset of the test images for each modality. The saliency model
shows that lesion and error locations attract visual attention according to low-level image features such as color,
luminance, and texture.
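The final sentence describes the model's form: a weighted sum of low-level feature maps. A minimal sketch of that construction, assuming illustrative luminance, color-opponency, and local-texture features with placeholder weights (the study fits its weights per modality from the error and ground-truth locations, and its exact feature set is not reproduced here):

```python
import numpy as np
from scipy import ndimage

def saliency_map(image, weights=(0.4, 0.3, 0.3)):
    """Weighted combination of low-level feature maps.

    `image` is an RGB float array in [0, 1]. The three features
    (luminance, crude color opponency, local texture) and the default
    weights are illustrative stand-ins for the study's fitted model.
    """
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    luminance = 0.299 * r + 0.587 * g + 0.114 * b
    # Crude color-opponency map (red-green and blue-yellow contrasts).
    color = np.abs(r - g) + np.abs(b - 0.5 * (r + g))
    # Texture as local standard deviation of luminance in a 5x5 window.
    mean = ndimage.uniform_filter(luminance, size=5)
    sq_mean = ndimage.uniform_filter(luminance ** 2, size=5)
    texture = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))

    maps = [luminance, color, texture]
    # Normalize each feature map to [0, 1] before weighting.
    maps = [(m - m.min()) / (np.ptp(m) + 1e-9) for m in maps]
    s = sum(w * m for w, m in zip(weights, maps))
    return s / (s.max() + 1e-9)
```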
KEYWORDS: Tissues, Breast, Blood, Skin, Signal attenuation, Monte Carlo methods, 3D modeling, Raster graphics, Positron emission tomography, Natural surfaces
The quality and realism of simulated images is currently limited by the quality of the digital phantoms used for the simulations. The transition from simple raster-based phantoms to more detailed geometric (mesh-based) phantoms has the potential to increase the usefulness of the simulated data. A preliminary breast phantom was created that contains 12 distinct tissue classes along with the tissue properties (activity and attenuation) necessary for the simulation of dynamic positron emission tomography scans. The phantom contains multiple components that can be manipulated separately, using geometric transformations, to represent populations or a single individual imaged in multiple positions. A new relational descriptive language is presented that conveys the relationships between individual mesh components. This language, which defines how the individual mesh components are composed into the phantom, aids in phantom development by enabling the addition and removal of components without modification of the other components and by simplifying the definition of complex interfaces. Results obtained when testing the phantom with the SimSET PET/SPECT simulator are very encouraging.
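The paper's relational language itself is not reproduced here; the following hypothetical Python data structure sketches the kind of information such a description might carry, and why it lets components be added or removed without touching their neighbors (all names, fields, and file paths are invented for illustration):

```python
# Hypothetical sketch: each component names its mesh, its tissue class,
# and its spatial relation to one other component. Because relations are
# expressed per component, editing one entry does not require changing
# the definitions of the others.
phantom = {
    "skin":      {"mesh": "skin.ply",      "tissue": "skin",      "inside": None},
    "adipose":   {"mesh": "adipose.ply",   "tissue": "adipose",   "inside": "skin"},
    "glandular": {"mesh": "glandular.ply", "tissue": "glandular", "inside": "adipose"},
    "lesion_1":  {"mesh": "lesion.ply",    "tissue": "tumor",     "inside": "glandular"},
}

def remove_component(phantom, name):
    """Remove one component, re-parenting its children to its parent.

    Sibling components are untouched, illustrating how a relational
    description allows removal without modifying other components.
    """
    parent = phantom[name]["inside"]
    for comp in phantom.values():
        if comp["inside"] == name:
            comp["inside"] = parent
    del phantom[name]
```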
A recently developed, freely available application specifically designed for the visualization of multimodal data sets is
presented. The application allows multiple 3D data sets, such as CT (x-ray computed tomography), MRI (magnetic
resonance imaging), PET (positron emission tomography), and SPECT (single-photon emission computed tomography), of the same
subject to be viewed simultaneously. This is done by maintaining synchronization of the spatial location viewed within
all modalities, and by providing fused views of the data where multiple data sets are displayed as a single volume.
Different options for the fused views are provided by plug-ins. Typical plug-ins include color overlays and
interlacing, but more complex plug-ins, such as those based on different color spaces and on component-analysis
techniques, are also supported.
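As one concrete example of a fusion plug-in, a color overlay can be sketched as follows; the alpha blend and the "hot"-style color map are illustrative choices, not necessarily the application's defaults:

```python
import numpy as np

def color_overlay(anatomy, functional, alpha=0.4):
    """Fuse a grayscale anatomical slice (e.g., MRI) with a functional
    slice (e.g., PET) by blending in a color-mapped version of the
    functional data. A minimal color-overlay sketch.
    """
    a = (anatomy - anatomy.min()) / (np.ptp(anatomy) + 1e-9)
    f = (functional - functional.min()) / (np.ptp(functional) + 1e-9)
    gray = np.stack([a, a, a], axis=-1)
    # Simple "hot"-style color map: black -> red -> yellow -> white.
    hot = np.stack([np.clip(3 * f, 0, 1),
                    np.clip(3 * f - 1, 0, 1),
                    np.clip(3 * f - 2, 0, 1)], axis=-1)
    return (1 - alpha) * gray + alpha * hot
```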
Corrections are made for resolution differences and for user preferences in contrast and brightness. Pre-defined and custom
color tables can be used to enhance the viewing experience. In addition to these essential capabilities, multiple options
are provided for mapping 16-bit data sets onto an 8-bit display, including windowing, automatically and dynamically
defined tone-transfer functions, and histogram-based techniques.
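Windowing, the simplest of the listed 16-bit-to-8-bit mappings, can be sketched in a few lines (a generic window/level transform, not the application's exact code):

```python
import numpy as np

def window_to_8bit(data16, center, width):
    """Map a 16-bit slice onto 8 bits with a window/level transform.

    Values below (center - width/2) clamp to 0 and values above
    (center + width/2) clamp to 255; the window is mapped linearly.
    """
    lo = center - width / 2.0
    scaled = (data16.astype(np.float64) - lo) / width
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)
```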
The 3D data sets can be viewed not only as a stack of images but also as the often-preferred three orthogonal cross sections
through the volume. More advanced volumetric displays of both individual data sets and fused views are also provided,
including the common MIP (maximum intensity projection), both with and without depth correction, for individual
data sets and for multimodal data sets created using a fusion plug-in.
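A minimal sketch of MIP with an optional depth correction follows; the linear depth falloff is an assumed scheme for illustration, not necessarily the one the application implements:

```python
import numpy as np

def mip(volume, axis=0, depth_weight=0.0):
    """Maximum intensity projection along `axis`.

    With depth_weight > 0, voxels farther from the viewer are attenuated
    before the maximum is taken, so nearer structures dominate the
    projection (a simple linear depth correction).
    """
    vol = np.moveaxis(volume.astype(np.float64), axis, 0)
    depth = np.arange(vol.shape[0], dtype=np.float64) / max(vol.shape[0] - 1, 1)
    weights = 1.0 - depth_weight * depth   # 1.0 at the front, smaller at the back
    return (vol * weights[:, None, None]).max(axis=0)
```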
KEYWORDS: Breast, Finite element methods, Image registration, Chemical elements, Magnetic resonance imaging, Skin, 3D modeling, Motion models, 3D image processing, Breast cancer
We implemented a new approach to intramodal non-rigid 3D breast image registration. Our method uses fiducial skin markers (FSM) placed on the breast surface. After the displacements of the FSM are determined, the finite element method (FEM) is used to distribute the markers' displacements linearly over the entire breast volume using the analogy between the orthogonal components of the displacement field and steady-state heat transfer (SSHT). The analogy is valid because the displacement fields in the x, y, and z directions and an SSHT problem can all be modeled using Laplace's equation, with the displacements analogous to temperature differences in SSHT. The problem can therefore be solved with standard heat-conduction FEM software, with the conductivity of surface elements set significantly higher than that of volume elements. After the displacements of the mesh nodes over the entire breast volume are determined, the moving breast volume is registered to the target breast volume using an image-warping algorithm. Registration of very good quality was obtained. The following similarity measures were estimated: Normalized Mutual Information (NMI), Normalized Correlation Coefficient (NCC), and Sum of Absolute Valued Differences (SAVD). We also compared our method with a rigid registration technique.
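The SSHT analogy can be stated compactly: in a steady state without heat sources, temperature satisfies Laplace's equation, and each orthogonal displacement component is assumed to do the same, with the measured marker displacements serving as fixed (Dirichlet) boundary values. A sketch of the correspondence (the boundary-condition notation is ours):

```latex
\nabla^2 T = 0
\quad\Longleftrightarrow\quad
\nabla^2 u_i = 0, \quad i \in \{x, y, z\},
\qquad
u_i = d_i^{(k)} \ \text{at fiducial marker } k,
```

where \(T\) is the temperature, \(u_i\) is the \(i\)-th component of the displacement field, and \(d_i^{(k)}\) is the measured displacement of marker \(k\).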
We are developing a method using nonrigid co-registration of PET and MR breast images as a way to improve diagnostic specificity in difficult-to-interpret mammograms and, ultimately, to avoid biopsy. A deformable breast model based on the finite element method (FEM) has been employed. The FEM "loads" are taken as the observed intermodality displacements of several fiducial skin markers placed on the breast and visible in both PET and MRI. The analogy between the orthogonal components of the displacement field and the temperature differences in steady-state heat transfer (SSHT) in solids has been adopted. The model allows estimation, throughout the breast, of the intermodality displacement field. To test the model, an elastic breast phantom with simulated internal "lesions" and external markers was imaged with PET and MRI. We estimated fiducial- and target-registration errors versus the number and location of fiducials, and showed that the SSHT approach using external fiducial markers is accurate to within ~5 mm.
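A generic sketch of how fiducial- and target-registration errors might be computed from corresponding point sets after registration (the phantom study's exact error definitions may differ):

```python
import numpy as np

def registration_error(moving_pts, fixed_pts):
    """Root-mean-square distance between corresponding 3D points.

    Applied to the skin markers this gives the fiducial registration
    error (FRE); applied to the simulated lesions ("targets") it gives
    the target registration error (TRE). Inputs are (N, 3) arrays.
    """
    d = np.linalg.norm(moving_pts - fixed_pts, axis=1)
    return np.sqrt(np.mean(d ** 2))
```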