Digital breast tomosynthesis (DBT) is an emerging x-ray breast imaging modality that scans the breast from multiple angles, allowing reconstruction of the breast's interior into a pseudo-3D image. While optimization variables in mammography are limited to x-ray tube voltage and exposure, DBT offers additional optimization possibilities such as the scan angular range. Previous studies have established that wide-angle DBT excels at detecting larger objects, such as tumors, while narrow-angle DBT is superior for detecting smaller structures such as microcalcifications. It would therefore be advantageous to choose between narrow- and wide-angle scans in a patient-specific manner. In this study, we propose a method that uses pre-exposure scan data, obtained during the automatic exposure control (AEC) process immediately before the actual DBT scan, to predict patient lesion information in advance. We generated paired standard-dose mammography and DBT pre-exposure scans using Monte Carlo-based numerical simulation and trained a U-Net with an added WGAN loss on these pairs. Using this model, we synthesized pseudo-pre-exposure images from a real mammography dataset. A YOLO-based classification network was then employed to determine whether masses were present or absent in the corresponding pre-exposure images. The trained network achieved an accuracy of 0.87 and an AUROC of 0.95, comparable to those of a classifier network using conventional mammography. A paired t-test also suggests that there is no statistically significant difference between the classifiers (t = 0.22). This study may contribute to enhancing breast cancer detection performance by enabling a patient-specific choice of DBT scan angular range.
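A minimal sketch of the kind of training objective the abstract describes: a U-Net generator optimized with a pixel-wise loss plus a WGAN critic term. The `UNet` and `Critic` modules, the L1 fidelity term, and the weighting `lam_adv` are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: pixel-wise (L1) loss combined with a WGAN adversarial loss
# for a mammography -> pseudo-pre-exposure generator. Placeholder modules.
import torch
import torch.nn as nn

class UNet(nn.Module):           # placeholder; a real U-Net would go here
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(1, 1, 3, padding=1)
    def forward(self, x):
        return self.net(x)

class Critic(nn.Module):         # placeholder WGAN critic
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 8, 4, 2, 1), nn.LeakyReLU(0.2),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(8, 1))
    def forward(self, x):
        return self.net(x)

G, D = UNet(), Critic()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
l1, lam_adv = nn.L1Loss(), 0.01           # assumed loss weighting

def train_step(mammo, pre_exposure):
    """One step on a simulated (mammography, pre-exposure) pair."""
    # Critic update (WGAN: maximize D(real) - D(fake)).
    fake = G(mammo).detach()
    loss_d = D(fake).mean() - D(pre_exposure).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    for p in D.parameters():               # weight clipping (vanilla WGAN)
        p.data.clamp_(-0.01, 0.01)
    # Generator update: pixel fidelity plus adversarial term.
    fake = G(mammo)
    loss_g = l1(fake, pre_exposure) - lam_adv * D(fake).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_g.item(), loss_d.item()
```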
Digital breast tomosynthesis (DBT) provides pseudo-3D images by acquiring limited-angle projections, thus alleviating the inherent limitation of tissue superposition in digital mammography (DM). DBT performance, however, may be limited in terms of recovery of low-contrast structures and accuracy of material decomposition due to scatter radiation. Employing an anti-scatter grid in DBT can mitigate scatter radiation; however, this also causes a loss of primary radiation, so an increased radiation dose is necessary to compensate. Additionally, a grid adds manufacturing cost and system complexity. In this work, we propose a deep-learning approach inspired by asymmetric scatter kernel superposition to estimate scatter in DBT. Unlike conventional kernel-based methods, which estimate the scatter field from the value of an individual pixel, the proposed method generates scatter amplitude and width maps through a network. An asymmetric factor map is also estimated by the network to accommodate local variations in object thickness and shape. Experiments demonstrate the superiority of the proposed approach. We believe the clinical impact of the proposed method is high since it can negate the additional radiation dose and the system complexity associated with integrating an anti-scatter grid into the DBT system.
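To make the kernel-superposition idea concrete, the sketch below superposes per-pixel Gaussian kernels whose amplitude, width, and asymmetry are taken from pixel-wise maps. In the described method these maps come from a network; here they are synthetic arrays, and the specific asymmetric kernel form (a width stretched on one side) is an assumption for illustration.

```python
# Illustrative scatter-kernel superposition with per-pixel amplitude, width,
# and asymmetry maps (synthetic maps stand in for network outputs).
import numpy as np

def asymmetric_kernel_superposition(primary, amplitude, sigma, asym):
    """Each pixel spreads amplitude*primary with width sigma, stretched by
    asym on the +x side, and the contributions are summed (superposed)."""
    h, w = primary.shape
    ys, xs = np.mgrid[0:h, 0:w]
    scatter = np.zeros_like(primary, dtype=float)
    for i in range(h):
        for j in range(w):
            dx, dy = xs - j, ys - i
            s = np.where(dx >= 0, sigma[i, j] * asym[i, j], sigma[i, j])
            kernel = np.exp(-(dx**2 + dy**2) / (2.0 * s**2))
            kernel /= kernel.sum()
            scatter += amplitude[i, j] * primary[i, j] * kernel
    return scatter

# Toy example on a small grid.
primary = np.ones((16, 16))
amplitude = np.full((16, 16), 0.3)   # scatter amplitude map
sigma = np.full((16, 16), 2.0)       # kernel width map (pixels)
asym = np.full((16, 16), 1.5)        # >1 stretches kernels toward +x
scatter = asymmetric_kernel_superposition(primary, amplitude, sigma, asym)
corrected = primary - scatter        # scatter-corrected projection estimate
```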
Stereotactic body radiotherapy (SBRT) has been widely used to treat spinal bone metastases. However, it has been reported that individuals may suffer vertebral compression fracture (VCF) after the treatment, so it is necessary to identify possible risk groups before performing SBRT. Several studies have attempted to identify risk factors, including the spinal instability neoplastic score (SINS), dose fractionation, and radiomics. However, no studies have attempted to predict VCF occurrence by directly using patients' pretreatment CT images. In this study, we propose a multi-modal deep network for risk prediction of VCF after SBRT that uses clinical records, CT images, and radiotherapy factors together without explicit feature extraction. A retrospective study was conducted on a cohort of 131 patients who received SBRT for spinal bone metastasis. We classified the risk factors into three categories: clinical factors, anatomical imaging factors, and radiotherapy factors. 1-D vectors were generated from the clinical factors after proper standardization. We cropped 3-D patches of the lesion area from the pretreatment CT images and the treatment planning dose images. We used data augmentation with translation and rotation in the sagittal plane, based on the characteristics of the S-shaped spine, to supplement the limited size of our available dataset. Numerical variables from the radiotherapy factors were standardized along with the clinical feature vector. We designed a three-branch deep learning network with the aforementioned three factors as inputs. In k-fold validation and an ablation study, the proposed network achieved an area under the curve (AUC) of 0.7605 and an average precision (AP) of 0.7273, an improvement over unimodal comparison models. The prediction model could play a valuable role not only in the treated patients' welfare but also in treatment planning for these patients.
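A conceptual sketch of a three-branch fusion network of the kind described: two small 3-D convolutional branches for the CT and dose patches and a dense branch for the standardized clinical/radiotherapy vector, fused for binary VCF risk prediction. Patch size, channel counts, and the number of tabular features are assumptions, not the authors' exact architecture.

```python
# Sketch of a three-branch multi-modal risk-prediction network.
import torch
import torch.nn as nn

def conv3d_branch(in_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, 8, 3, padding=1), nn.ReLU(),
        nn.MaxPool3d(2),
        nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool3d(1), nn.Flatten())        # -> 16-dim embedding

class ThreeBranchVCFNet(nn.Module):
    def __init__(self, n_tabular):
        super().__init__()
        self.ct_branch = conv3d_branch(1)             # pretreatment CT patch
        self.dose_branch = conv3d_branch(1)           # planning dose patch
        self.tab_branch = nn.Sequential(nn.Linear(n_tabular, 16), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(16 * 3, 32), nn.ReLU(),
                                  nn.Linear(32, 1))   # logit for VCF risk
    def forward(self, ct, dose, tabular):
        z = torch.cat([self.ct_branch(ct), self.dose_branch(dose),
                       self.tab_branch(tabular)], dim=1)
        return self.head(z)

# Toy forward pass with an assumed 32^3 patch and 10 tabular features.
model = ThreeBranchVCFNet(n_tabular=10)
ct = torch.randn(2, 1, 32, 32, 32)
dose = torch.randn(2, 1, 32, 32, 32)
tab = torch.randn(2, 10)
risk_logit = model(ct, dose, tab)                     # shape (2, 1)
```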
Purpose: Although there are several options for improving the generalizability of learned models, a data instance-based approach is desirable when stable data acquisition conditions cannot be guaranteed. Despite the wide use of data transformation methods to reduce data discrepancies between different data domains, detailed analysis explaining the performance of data transformation methods is lacking.
Approach: This study compares several data transformation methods in the tuberculosis detection task with multi-institutional chest x-ray (CXR) data. Five data transformations were implemented: normalization, standardization with and without lung masking, and multi-frequency-based (MFB) standardization with and without lung masking. A tuberculosis detection network was trained using a reference dataset, and data from six other sites were used for the network performance comparison. To analyze data harmonization performance, we extracted radiomic features and calculated the Mahalanobis distance. We visualized the features with a dimensionality reduction technique. Through similar methods, deep features of the trained networks were also analyzed to examine the models' responses to the data from the various sites.
Results: Across the numerical assessments, MFB standardization with lung masking provided the highest network performance on the non-reference datasets. From the radiomic and deep feature analyses, the features of the multi-site CXRs after MFB standardization with lung masking were well homogenized to the reference data, whereas the other transformations showed limited performance.
Conclusions: Conventional normalization and standardization showed suboptimal performance in minimizing feature differences among sites. Our study emphasizes the strengths of MFB standardization with lung masking in terms of network performance and feature homogenization.
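The harmonization analysis in the Approach section can be illustrated with a short sketch: computing the average Mahalanobis distance of each site's radiomic or deep features to the reference-site feature distribution, so that a well-harmonized site yields a smaller distance. The synthetic feature arrays below are placeholders, not study data.

```python
# Sketch: quantify feature harmonization via Mahalanobis distance
# to the reference-site feature distribution.
import numpy as np

def mean_mahalanobis(reference_feats, site_feats):
    """Average Mahalanobis distance of site_feats (n x d) to the
    distribution (mean, covariance) of reference_feats."""
    mu = reference_feats.mean(axis=0)
    cov = np.cov(reference_feats, rowvar=False)
    cov_inv = np.linalg.pinv(cov)                  # pseudo-inverse for stability
    diff = site_feats - mu
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return np.sqrt(np.maximum(d2, 0)).mean()

# Toy comparison: a harmonized site should sit closer to the reference.
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(200, 5))
site_raw = rng.normal(0.8, 1.5, size=(100, 5))         # before transformation
site_harmonized = rng.normal(0.1, 1.1, size=(100, 5))  # after transformation
print(mean_mahalanobis(reference, site_raw))
print(mean_mahalanobis(reference, site_harmonized))    # smaller distance
```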
Breast density is known as one of the critical risk factors for breast cancer, and the diagnostic performance of digital breast tomosynthesis (DBT) is known to depend strongly on breast density. As a potential way to increase the diagnostic performance of DBT, we are investigating dual-energy DBT imaging techniques. We estimated the partial path lengths of an x-ray through water, lipid, and protein from the measured dual-energy projection data and the object thickness information. We reconstructed material-selective DBT images from the material-decomposed projections. The feasibility of the proposed dual-energy DBT scheme was demonstrated using physical phantoms.
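A simplified, per-pixel view of the decomposition step under a monoenergetic assumption: two log-attenuation measurements plus the known compressed thickness give three equations for the water, lipid, and protein path lengths. The attenuation coefficients below are placeholder values, not calibrated data from the study.

```python
# Sketch: three-material decomposition (water/lipid/protein path lengths)
# from dual-energy log attenuation plus the known object thickness.
import numpy as np

# Assumed linear attenuation coefficients [1/cm] at the low/high effective
# energies for water, lipid, protein (placeholder values).
mu_low  = np.array([0.80, 0.45, 1.00])
mu_high = np.array([0.30, 0.20, 0.35])

def decompose(log_att_low, log_att_high, thickness_cm):
    """Solve, at one detector pixel,
         mu_low  . t = log_att_low
         mu_high . t = log_att_high
         1 . t       = thickness_cm
       for t = (t_water, t_lipid, t_protein)."""
    A = np.vstack([mu_low, mu_high, np.ones(3)])
    b = np.array([log_att_low, log_att_high, thickness_cm])
    return np.linalg.solve(A, b)

# Toy check: a 5 cm breast that is 2 cm water, 2.5 cm lipid, 0.5 cm protein.
t_true = np.array([2.0, 2.5, 0.5])
low, high = mu_low @ t_true, mu_high @ t_true
print(decompose(low, high, t_true.sum()))   # recovers approx. [2.0, 2.5, 0.5]
```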
This work addresses equalization and thickness estimation of the breast periphery in digital breast tomosynthesis (DBT). Breast compression in DBT leads to a relatively uniform thickness in the inner breast but not at the periphery. Proper peripheral enhancement or thickness correction is needed for diagnostic convenience and for accurate volumetric breast density estimation. Such correction methods have been developed, albeit with several shortcomings. We present a thickness correction method based on a supervised learning scheme with a convolutional neural network (CNN), one of the most widely used deep learning architectures, to improve the pixel values of the peripheral region. The network was successfully trained and showed robust and satisfactory performance in our numerical phantom study.
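A minimal sketch of the supervised scheme described above: a small residual CNN trained with a pixel-wise loss to map a slice with peripheral thickness fall-off to its thickness-equalized target. The architecture, loss, and random tensors standing in for phantom data are assumptions for illustration.

```python
# Sketch: supervised CNN for peripheral pixel-value correction.
import torch
import torch.nn as nn

class PeripheryCorrectionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, x):
        # Predict a correction added to the input (residual learning).
        return x + self.net(x)

model = PeripheryCorrectionCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

# Toy training step: `uncorrected` mimics depressed pixel values near the
# breast edge; `equalized` is the thickness-corrected target.
uncorrected = torch.rand(4, 1, 64, 64)
equalized = torch.rand(4, 1, 64, 64)
loss = criterion(model(uncorrected), equalized)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```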