PET-CT scans using 18F-FDG are increasingly used to detect cancer, but interpretation can be challenging due to non-specific uptake and the complexity of nearby anatomical structures. To aid this process, we investigate the potential of automated lesion detection in 18F-FDG scans using deep learning tools. A 5-layer convolutional neural network (CNN) with 2×2 kernels, rectified linear unit (ReLU) activations and two dense layers was trained to detect cancerous lesions in 2D axial image segments from PET scans. Pre-contoured scans from a retrospective cohort study of 480 oesophageal cancer patients were split 80:10:10 into training, validation and test sets. These were then used to generate a total of ~14,000 45×45 pixel image segments, where tumor-present segments were centered on the marked lesion and tumor-absent segments were randomly located outside the marked lesion. ROC curves generated from the training and validation datasets produced an average AUC of just under 95%.
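The architecture described above can be sketched as follows. This is a minimal illustration only: the abstract specifies five convolutional layers with 2×2 kernels, ReLU activations, two dense layers, and 45×45 input patches, but the channel widths, pooling placement, and output head shown here are assumptions not stated in the source.

```python
import torch
import torch.nn as nn

class LesionCNN(nn.Module):
    """Sketch of the described 5-layer CNN for 45x45 PET patches.

    Fixed by the abstract: five conv layers, 2x2 kernels, ReLU, two dense layers.
    Assumed for illustration: channel widths, max-pooling positions, binary output.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=2), nn.ReLU(),   # 45 -> 44
            nn.Conv2d(16, 32, kernel_size=2), nn.ReLU(),  # 44 -> 43
            nn.MaxPool2d(2),                              # 43 -> 21
            nn.Conv2d(32, 32, kernel_size=2), nn.ReLU(),  # 21 -> 20
            nn.Conv2d(32, 64, kernel_size=2), nn.ReLU(),  # 20 -> 19
            nn.MaxPool2d(2),                              # 19 -> 9
            nn.Conv2d(64, 64, kernel_size=2), nn.ReLU(),  # 9 -> 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),  # first dense layer
            nn.Linear(128, 1),                      # second dense layer: lesion logit
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LesionCNN()
logits = model(torch.randn(4, 1, 45, 45))  # a batch of four 45x45 patches
print(logits.shape)
```

Each tumor-present or tumor-absent 45×45 segment would be fed through this network, with a sigmoid over the single output logit giving the lesion probability used to build the ROC curves.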