Imaging under low-light conditions is a challenging but important problem due to low dynamic range, image noise, and blur. In this work, we propose a blur-free low-light imaging technique that combines a conventional color camera with an event camera. The event camera complements the color camera by measuring brightness changes asynchronously, at high speed and with high dynamic range. We synchronize the two sensors with an external trigger cable, align their viewpoints using a beamsplitter, and co-calibrate the two cameras geometrically. We then derive an image formation model and invert it to reduce blur in the color images. Experimental results demonstrate the effectiveness of our method.
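The abstract does not give the exact model, but a common formulation relates the blurred frame to the temporal mean of latent frames, with events encoding log-intensity changes. The sketch below illustrates inverting such a model; the contrast threshold `c` and the function names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def deblur_with_events(blurred, events_per_step, c=0.2):
    """Recover a sharp latent frame from a blurred exposure plus events.

    Assumes a double-integral-style model: the blurred image is the temporal
    mean of latent frames, and each latent frame relates to the reference
    frame through the exponentiated event sum. `c` is a hypothetical event
    contrast threshold; all values here are illustrative.
    """
    # Cumulative per-pixel event count (log-intensity change) at each timestep.
    cum_events = np.cumsum(np.stack(events_per_step, axis=0), axis=0)
    # exp(c * E(t)) relates the latent frame at time t to the reference frame.
    ratios = np.exp(c * cum_events)
    # Invert B = mean_t(L_ref * exp(c * E(t)))  =>  L_ref = B / mean_t(ratios).
    return blurred / np.clip(ratios.mean(axis=0), 1e-6, None)
```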
Knee osteoarthritis (OA) is a prevalent and disabling degenerative joint disease. Objectively assessing knee OA severity is challenging given the significant inter-reader variability introduced by human interpretation. The Kellgren-Lawrence (KL) grading system is a commonly used scale for quantitatively characterizing the severity of knee OA on knee radiographs. Reliably identifying severe knee OA is important because total knee arthroplasty (TKA) can substantially improve quality of life for patients with severe disease. In this study, we demonstrate a deep learning approach to automatically assessing KL grades. Our approach uses a Faster R-CNN object detection network to localize the knee region and a deep convolutional neural network for classification. We developed and evaluated our approach on a dataset of 7962 knee radiographs for each of the posteroanterior (PA) and lateral (LAT) views. Images and their corresponding KL grades were obtained from the Multicenter Osteoarthritis Study (MOST) dataset. Our network achieved a multi-class classification accuracy of 69.15% on PA views and 56.68% on LAT views. The developed network may play a significant role in surgical decision-making regarding knee replacement surgery.
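A minimal sketch of such a two-stage pipeline, using off-the-shelf torchvision backbones as stand-ins for the paper's trained networks (the weights, input size, and crop selection here are assumptions, not the study's configuration):

```python
import torch
import torchvision

# Stage 1: knee-region detector; Stage 2: KL-grade classifier (grades 0-4).
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
classifier = torchvision.models.resnet50(weights="DEFAULT")
classifier.fc = torch.nn.Linear(classifier.fc.in_features, 5)

detector.eval()
classifier.eval()

def grade_knee(radiograph):
    """radiograph: 3xHxW float tensor in [0, 1]; returns a predicted KL grade."""
    with torch.no_grad():
        # torchvision returns detections sorted by score; take the top box.
        boxes = detector([radiograph])[0]["boxes"]
        x0, y0, x1, y1 = boxes[0].int().tolist()
        crop = radiograph[:, y0:y1, x0:x1].unsqueeze(0)
        crop = torch.nn.functional.interpolate(crop, size=(224, 224))
        return classifier(crop).argmax(dim=1).item()
```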
Deep learning has achieved great success in image analysis and decision making in radiology. However, a large amount of annotated imaging data is needed to construct well-performing deep learning models. A particular challenge in the context of breast cancer is the small number of available cases that contain cancer, given the very low prevalence of the disease in the screening population. The question arises whether normal cases, which in breast cancer screening are available in abundance, can be used to train a deep learning model that identifies abnormal locations. In this study, we propose to achieve this goal through generative adversarial network (GAN)-based image completion. Our hypothesis is that if a generative network has difficulty correctly completing an image at a certain location, then that location is likely to represent an abnormality. We test this hypothesis using a dataset of 4348 patients with digital breast tomosynthesis (DBT) imaging from our institution. We trained our model on normal images only, so that it learned to fill in parts of images that were artificially removed. Then, using an independent test set, we measured how difficult it was for the network to reconstruct an artificially removed patch at different locations in the images. The difficulty was measured by the mean squared error (MSE) between the original removed patch and the reconstructed patch. On average, the MSE was 2.11 times higher (standard deviation 1.01) at locations containing expert-annotated cancerous lesions than at locations outside those abnormal regions. Our generative approach demonstrates great potential for aiding breast cancer detection.
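The scoring step reduces to a per-location MSE between the removed patch and its completion. A minimal sketch, where `inpaint_fn` stands in for the trained generator and the patch size is an illustrative assumption:

```python
import numpy as np

def completion_anomaly_score(image, inpaint_fn, y, x, patch=64):
    """Score one location by how poorly a completion network fills it in.

    `inpaint_fn` is a hypothetical stand-in for the trained generator: it
    takes an image with a masked region and returns a completed image.
    """
    masked = image.copy()
    masked[y:y + patch, x:x + patch] = 0.0  # artificially remove the patch
    completed = inpaint_fn(masked)
    original = image[y:y + patch, x:x + patch]
    reconstructed = completed[y:y + patch, x:x + patch]
    # Higher MSE => the network struggled here => more likely abnormal.
    return float(np.mean((original - reconstructed) ** 2))
```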
Evaluating the severity of knee osteoarthritis (OA) accounts for a significant plain-film workload and is a crucial component of knee radiograph interpretation, informing surgical decision-making for costly and invasive procedures such as knee replacement. The Kellgren-Lawrence (KL) grading scale systematically and quantitatively assesses the severity of knee OA but is associated with notable inter-reader variability. In this study, we propose a deep learning method for assessing joint space narrowing (JSN) in the knee, an essential component of determining the KL grade. To determine the extent of JSN, we analyzed 99 knee radiographs and calculated the distance between the femur and tibia. Our algorithm's JSN measurements correlated well with radiologists' KL grade assessments: the average femur-tibia distance (in pixels) measured by our algorithm was 9.60 for KL=0, 7.60 for KL=1, 6.89 for KL=2, 3.75 for KL=3, and 1.25 for KL=4. Additionally, we used 100 manually annotated knee radiographs to train the algorithm to segment the femur and tibia. When evaluated on an independent set of 20 knee radiographs, the segmentation achieved a Dice coefficient of 96.59%. An algorithm for measuring JSN and KL grades may play a significant role in automatically, reliably, and passively evaluating knee OA severity and may influence surgical decision-making and treatment pathways.
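One plausible way to turn the femur and tibia segmentations into a pixel distance is a per-column gap measurement, as sketched below; the column-wise averaging is an assumption for illustration, since the abstract does not specify how the distance is computed.

```python
import numpy as np

def joint_space_width(femur_mask, tibia_mask):
    """Mean femur-tibia gap (in pixels) across image columns.

    Masks are boolean HxW arrays from a (hypothetical) segmentation model;
    the per-column gap runs from the lowest femur pixel to the highest
    tibia pixel, averaged over columns where both bones appear.
    """
    gaps = []
    for col in range(femur_mask.shape[1]):
        femur_rows = np.flatnonzero(femur_mask[:, col])
        tibia_rows = np.flatnonzero(tibia_mask[:, col])
        if femur_rows.size and tibia_rows.size:
            gap = tibia_rows.min() - femur_rows.max()  # rows grow downward
            if gap > 0:
                gaps.append(gap)
    return float(np.mean(gaps)) if gaps else 0.0
```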