Significance: Accurate cell segmentation and classification in three-dimensional (3D) images are vital for studying live cell behavior and drug responses in 3D tissue culture. Evaluating diverse cell populations in 3D cell culture over time necessitates non-toxic staining methods, as specific fluorescent tags may not be suitable and immunofluorescence staining can be cytotoxic to prolonged live cell cultures.
Aim: We aim to perform machine learning-based cell classification within a live heterogeneous cell population grown in 3D tissue culture, relying only on reflectance, transmittance, and nuclei-counterstain images obtained by confocal microscopy.
Approach: We employed a supervised convolutional neural network (CNN) to classify tumor cells and fibroblasts within 3D-grown spheroids. The cells were first segmented using the marker-controlled watershed image-processing method. Training data comprised nuclei-counterstain, reflectance, and transmitted-light images, with stained fibroblasts and tumor cells as ground-truth labels.
Results: Marker-controlled watershed segmentation successfully separated 84% of spheroid cells into single cells. We achieved a median accuracy of 67% (95% confidence interval of the median: 65% to 71%) in identifying cell types. We also recapitulated the original 3D images from the CNN-classified cells to visualize the cell distribution of the original 3D-stained image.
Conclusion: This study introduces a non-invasive, toxicity-free approach to 3D cell culture evaluation, combining machine learning with confocal microscopy and opening avenues for advanced cell studies.
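For readers unfamiliar with the segmentation step, the sketch below illustrates marker-controlled watershed on a 3D nuclei-counterstain volume using scikit-image. The function name, the Otsu threshold, and parameter values such as min_distance=5 are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch: marker-controlled watershed on a 3D nuclei-stain volume.
# Assumes numpy, scipy, and scikit-image; all parameter values are illustrative.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_nuclei_3d(volume):
    """Label single cells in a 3D nuclei-counterstain volume."""
    smoothed = gaussian(volume, sigma=1)             # suppress noise
    mask = smoothed > threshold_otsu(smoothed)       # foreground nuclei
    distance = ndi.distance_transform_edt(mask)      # distance to background
    # One marker per nucleus: local maxima of the distance map.
    coords = peak_local_max(distance, min_distance=5, exclude_border=False)
    markers = np.zeros(volume.shape, dtype=np.int32)
    markers[tuple(coords.T)] = np.arange(1, coords.shape[0] + 1)
    # Flood the inverted distance map from the markers, confined to the mask.
    return watershed(-distance, markers, mask=mask)
```

The markers constrain the flooding so that touching nuclei are split along the ridge between their distance-map maxima, which is what allows densely packed spheroid cells to be separated into single-cell labels.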
I will discuss the emerging trend in computational imaging to train deep neural networks (DNNs) for image formation. The DNNs are trained from examples consisting of pairs of known objects and their corresponding raw images drawn from databases such as ImageNet, Faces-LFW, and MNIST. The raw images are converted to complex amplitude maps and displayed on a Spatial Light Modulator (SLM). After training, the DNNs are capable of recovering unknown objects, i.e., objects not previously included in the training sets, from the raw images in several scenarios: (1) phase objects retrieved from intensity after lensless propagation; (2) phase objects retrieved from intensity after lensless propagation at extremely low photon counts; and (3) amplitude objects retrieved from in-focus intensity after propagation through a strong scatterer. Recovery is robust to disturbances in the optical system, such as additional defocus or various misalignments. This suggests that DNNs may form robust internal models of the physics of light propagation and detection and generalize priors from the training set. In the talk I will discuss in more detail various methods to incorporate the physics into DNN training, and how DNN architecture and "hyper-parameters" (i.e., depth, number of units at each depth, presence or absence of skip connections, etc.) influence the quality of image recovery.
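As context for scenario (1), the sketch below shows the kind of lensless forward model involved: a pure phase object propagated by the angular-spectrum method, with intensity-only detection. The wavelength, pixel pitch, and propagation distance are placeholder values, not parameters from the talk.

```python
# Sketch of a lensless forward model: phase object -> propagated intensity.
# Assumes a square phase array; all optical parameters are placeholders.
import numpy as np

def lensless_intensity(phase, wavelength=632.8e-9, pixel=8e-6, distance=0.05):
    """Intensity after free-space propagation of a unit-amplitude phase object."""
    field = np.exp(1j * phase)                       # pure phase object on the SLM
    n = phase.shape[0]
    fx = np.fft.fftfreq(n, d=pixel)                  # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    k = 2.0 * np.pi / wavelength
    # Angular-spectrum transfer function; evanescent components are clamped.
    kz = np.sqrt(np.maximum(k**2 - (2*np.pi*FX)**2 - (2*np.pi*FY)**2, 0.0))
    H = np.exp(1j * kz * distance)
    out = np.fft.ifft2(np.fft.fft2(field) * H)
    return np.abs(out)**2                            # detector records intensity only
```

Training pairs would then be of the form (phase, lensless_intensity(phase)), with the phase maps derived from database images, so the DNN learns to invert exactly this kind of physical forward operator.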
In a recent paper [Goy et al., Phys. Rev. Lett. 121, 243902, 2018], we showed that deep neural networks (DNNs) are very efficient solvers for phase retrieval problems, especially when the photon budget is limited. However, the performance of the DNN is strongly conditioned by a preprocessing step that consists of producing a proper initial guess. In this paper, we study the influence of the preprocessing in more detail, in particular the choice of the preprocessing operator. We also empirically demonstrate that, for a DenseNet architecture, the performance of the DNN increases with the number of layers up to a point, after which it saturates.
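One plausible preprocessing operator of the kind discussed is a single numerical back-propagation of the measured amplitude to the object plane, whose phase then serves as the DNN's initial guess. The sketch below illustrates this; it is an assumed example, not necessarily the operator studied in the paper, and all optical parameters are placeholders.

```python
# Sketch of one candidate preprocessing operator: back-propagate
# sqrt(intensity) and hand its phase to the DNN as the initial guess.
# This is an illustrative assumption, not the paper's exact operator.
import numpy as np

def initial_guess(intensity, wavelength=632.8e-9, pixel=8e-6, distance=0.05):
    """Crude phase estimate from a single lensless intensity measurement."""
    amplitude = np.sqrt(np.maximum(intensity, 0.0))  # measured field modulus
    n = intensity.shape[0]
    fx = np.fft.fftfreq(n, d=pixel)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    k = 2.0 * np.pi / wavelength
    kz = np.sqrt(np.maximum(k**2 - (2*np.pi*FX)**2 - (2*np.pi*FY)**2, 0.0))
    H_back = np.exp(-1j * kz * distance)             # conjugate (backward) transfer fn
    field = np.fft.ifft2(np.fft.fft2(amplitude) * H_back)
    return np.angle(field)                           # initial guess fed to the DNN
```

The point of such an operator is to move part of the inversion into known physics, so the network only has to correct the residual error of the initial guess rather than learn the full inverse map, which matters most at low photon counts.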