Purpose: The purpose of our review is to examine the existing literature on methods for diabetic retinopathy (DR) recognition employing deep learning (DL) and machine learning (ML) techniques, and to address the difficulties encountered with the various datasets used for DR. Approach: DR is a progressive illness and can lead to vision loss. Early identification of DR lesions is therefore helpful and prevents damage to the retina. However, it is a complex job because the disease is symptomless in its early stages, and traditional approaches have required ophthalmologists for assessment. Recently, studies on automated DR identification based on image processing, ML, and DL have been reported. We analyze the recent literature and provide a comparative study that also includes the limitations of the literature and directions for future work. Results: A comparative analysis of the databases used, performance metrics employed, and ML and DL techniques recently adopted in DR detection based on various DR features is presented. Conclusions: Our review discusses the methods employed in DR detection along with the technical and clinical challenges that are encountered, which is missing in existing reviews, as well as future scopes to assist researchers in the field of retinal imaging.
1. Introduction

The eyes, the organs of sight, are among the most important organs of the body and have several components. The retina senses light and creates electrical impulses that the brain processes into visual information. Numerous eye diseases result in vision loss, but the most common one is diabetic retinopathy (DR), which is related to diabetes. Every person with diabetes is at risk of developing DR; roughly one in three individuals with diabetes has DR to some extent.1 Biomedical imaging is a powerful way to obtain a visual representation of the internal organs of the body for clinical purposes or for the study of anatomy and physiology. Over the past few decades, there has been an exponential increase in diabetes-caused diseases. In 2014, there were ~422 million people with diabetes, in contrast to 108 million in 1980 (World Health Organization, Global Report on Diabetes). According to the International Diabetes Federation, the widespread prevalence of DR from 2015 to 2019 was .2 The destructive outcome of diabetes mellitus is expected to grow, with prevalence anticipated to rise from 463 million in 2019 to 700 million in 2045.3 DR is a disorder that damages the blood vessels inside the retina. In general, there are two stages of DR: (1) non-proliferative DR (NPDR) and (2) proliferative DR (PDR), as shown in Fig. 1. NPDR, the primary stage, advances in three grades: mild NPDR, moderate NPDR, and severe NPDR, as shown in Fig. 1. Initially, balloon-like swelling of blood vessels occurs in the retina; these small swellings, known as “microaneurysms” (MAs), leak fluid into the retina. When they burst and produce minute blood spots, these are known as “hemorrhages” (HEMs). As the disease advances further, the fluid and protein that caused the swelling leak from the injured blood vessels; these deposits are known as “exudates” (EXs). There are two types of EXs: soft EXs and hard EXs.
Hard EXs appear as bright yellow waxy patches in the retina, whereas soft EXs have a white fuzzy appearance and paler yellow areas with indistinct edges. Mild NPDR is the earliest level of DR. It is characterized by one or more MAs and may or may not have any EXs or HEMs. Normally, of diabetic patients have a mild NPDR indication.4 Moderate NPDR is a progressive level, characterized by a number of MAs and HEMs. A study by Faust et al.5 revealed that ~16% of patients with moderate NPDR are likely to develop PDR within a year. Severe NPDR is identified by several characteristics.6 There is almost a 50% chance that severe NPDR will turn into PDR over a year.5 PDR is the advanced level, where deficiency of oxygen in the retina causes the development of new, delicate blood vessels in the retina and vitreous, the gelatinous fluid occupying the back of the eye.7 The different stages of DR, i.e., mild NPDR, moderate NPDR, severe NPDR, and PDR, are shown in Fig. 2. Many people affected by DR do not visit an eye-care professional until the condition reaches the severe NPDR or PDR stage. Also, traditional measures to identify DR involve ophthalmologists for assessment and diagnosis, which is time-consuming and costly. Hence, it has become crucial to develop efficient DL-based methods. Several review papers discussed below have given attention to various techniques of DR detection, such as image processing, ML, and DL techniques. Image preprocessing is an essential step to reduce noise in images and to improve image characteristics. Image preprocessing methods, for instance, green-channel extraction, image normalization, histogram equalization, and morphological operations, and classification methods are briefly presented in Ref. 8.
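Two of the preprocessing steps named above, green-channel extraction and histogram equalization, can be sketched in a few lines. The snippet below is an illustrative NumPy sketch, not the implementation used in any of the reviewed papers:

```python
import numpy as np

def green_channel(rgb):
    """Extract the green channel, which typically shows the highest vessel contrast."""
    return rgb[..., 1]

def hist_equalize(img, levels=256):
    """Global histogram equalization for a uint8 grayscale image."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each occupied gray level through the normalized cumulative histogram.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
    lut = np.clip(lut, 0, levels - 1).astype(np.uint8)
    return lut[img]
```

Equalization stretches the occupied gray levels so that the darkest present level maps to 0 and the brightest to 255, improving lesion-to-background contrast before feature extraction.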
A three-stage computer-aided screening system that detects and grades retinal images for DR using ML algorithms, including the Gaussian mixture model, k-nearest neighbor (KNN), and support vector machine (SVM), was presented in Ref. 9. Seven papers on DR screening were compared in Ref. 10 based on preprocessing, classifier, image processing technique used, etc. Previously presented review papers give a rough idea of DR classification methods using the DL approach.11 This field of research has been examined thoroughly in the last few years, and various techniques have been proposed. However, none of these papers covers deep learning (DL) methodologies together with DR detection and segmentation methods. This review thoroughly covers the commonly used databases, DL methods, and performance estimation measures. The significant goals of this review are as follows:
The remainder of the paper is arranged as follows. The most commonly used available datasets are discussed first in Sec. 2. Then, the outline of DL methods, along with convolutional neural networks (CNNs), is reviewed in Sec. 3. Previous studies on DR detection and segmentation methods are discussed in Sec. 4. After that, Sec. 5 covers some of the latest commercially available AI DR systems. Then, the performance metrics are analyzed in Sec. 6, along with formulas and descriptions. Finally, in Sec. 7, the technical and clinical challenges are discussed, as well as a few feasible directions for future research.

2. Datasets Available

Datasets are collections of information that can be gathered by observation, measurement, research, or analysis. Several retinal datasets are available to distinguish between DR and non-DR. There are two types of retinal imaging: optical coherence tomography photographs and retinal color photographs. A detailed comparison of various publicly available fundus image databases in the field of retinal imaging is given in Table 1.

Table 1. Details of the most commonly used datasets.
It is found that most blood vessel segmentation techniques use STARE, DRIVE, and CHASE DB1. The DRIVE database offers a mask for each photograph to assist recognition of the field-of-view (FOV) region. Contrary to DRIVE, STARE does not provide masks to identify the FOV. The STARE dataset is the most complicated among them since some of its pathological photographs suffer reduced sharpness and degradation because of eye disease. The HRF dataset is mostly neglected in vessel segmentation reports since it is comparatively new, and the resolution of its photographs is roughly four times greater than that of the STARE and DRIVE datasets. Available retinal datasets lack adequate training photographs; thus, the network overfits the training database in some situations. Such datasets should be expanded progressively. The Kaggle dataset is utilized in studies to classify DR stages. Datasets such as DIARETDB0, DIARETDB1, and E-OPHTHA are generally employed for the detection of MAs and EXs. Recently, 13 duplicated photograph pairs were observed, and certain discrepancies in the rating of photographs of the MESSIDOR database were reported, which the database providers have acknowledged on their website. In the DDR database, 1151 images remain ungradable. Generally, the public datasets are employed for experimentation purposes as they are easily available and the images are of high resolution. However, the choice of dataset relies purely on the type of problem and the technique employed by researchers.

3. Deep Learning Methods

DL has been broadly employed in DR recognition and classification, and there are various other applications of DL, including image identification, bioinformatics, and medical image analysis.
DL can effectively discover the characteristics of input data even when several heterogeneous sources are merged.27 A family of DL methods has been established since the 1990s, for instance, deep neural networks (DNNs),28 autoencoders (AEs) and stacked autoencoders,29 CNNs,30 restricted Boltzmann machines (RBMs),31 and deep belief networks (DBNs).32 Among these, the deep CNN shows better performance on a diversity of tasks in image processing/signal processing and computer vision. DL methods such as the AE, RBM, and DBN can be utilized as generative models or for unsupervised learning. Singh and Kaushik33 presented an innovative dual-stage deep CNN model, which uses a de-noising AE as one of the preprocessing stages in DR classification before passing the images into the CNN model. Reference 34 presented a DL-based technique for denoising retinal images and recovering their features using a stacked denoising convolutional AE. The main benefit of the proposed AE model is that it is considerably faster than the conventional AE and recovers badly degraded retinal photographs with sharp edges, tiny vessels, and texture details. A stacked AE-based DL scheme for identification of type 2 diabetes was evaluated on the Pima Indians Diabetes data,35 obtaining 86.26% accuracy. A DL-based variational AE has been suggested36 for retinal image restoration, which increases the clarity of retinal pictures by minimizing noise using a deep training model without losing picture information, along with having a higher convergence rate. An AE DL system based on the residual path and U-Net has been proposed by Adarsh et al.,37 which captures finer details, resulting in efficient segmentation of the retinal blood vessels with 95.63% accuracy.
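To make the denoising-AE idea concrete, below is a minimal one-hidden-layer denoising autoencoder in NumPy, trained by plain gradient descent on synthetic data. The architecture, noise level, and hyperparameters are illustrative assumptions, not those of the reviewed models:

```python
import numpy as np

def train_denoising_ae(X, hidden=16, noise_std=0.1, lr=0.1, epochs=200, seed=0):
    """One-hidden-layer denoising autoencoder.
    X: (n_samples, n_features) in [0, 1]. Returns weights and the loss history."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, d)); b2 = np.zeros(d)
    losses = []
    for _ in range(epochs):
        Xn = X + rng.normal(0, noise_std, X.shape)   # corrupt the input
        H = np.tanh(Xn @ W1 + b1)                    # encoder
        Y = H @ W2 + b2                              # linear decoder
        err = Y - X                                  # reconstruct the *clean* input
        losses.append(float((err ** 2).mean()))
        # Backpropagation of the mean-squared reconstruction error.
        gY = 2 * err / (n * d)
        gW2 = H.T @ gY; gb2 = gY.sum(0)
        gH = gY @ W2.T * (1 - H ** 2)
        gW1 = Xn.T @ gH; gb1 = gH.sum(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return (W1, b1, W2, b2), losses
```

Because the target is the clean input while the network sees the corrupted one, the model is pushed to learn noise-robust features, which is the property exploited when an AE is used as a preprocessing stage.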
Ortiz et al.38 introduced a technique based on a deep multi-channel convolutional AE for blood vessel segmentation to avoid dependence on the preprocessing stage, obtaining 95% accuracy on the DRIVE database. Basha and Ramanaiah39 proposed a new DR detection system with four stages: (a) preprocessing, (b) blood vessel segmentation, (c) feature extraction, and (d) classification, where a DBN classifier was utilized to categorize the retrieved characteristics into healthy and unhealthy images. The accuracy of the introduced method was highest when the weighting factor s = 0.8, which is 10.01%, 23.41%, and 55.13% better than the accuracy when s = 1.2, 1.0, and 1.5, respectively. A 2D DBN based on a mixed RBM, which accepts several 2D inputs and automatically extracts the relevant characteristics to identify the progression degree of DR, was presented by Tehrani et al.,40 achieving an area under the curve (AUC) value of 92.31%. Syahputra et al.41 presented a DBN that detects DR from retinal pictures that undergo preprocessing by means of grayscale conversion, contrast stretching, median filtering, and a morphological close operation, with feature extraction utilizing the gray-level co-occurrence matrix (GLCM), obtaining 84% accuracy, 93% sensitivity, and 70% specificity. Supervised learning involves artificial neural networks (ANNs) and CNNs. Hard EXs42 were detected by implementing various image processing methods and were classified using a discriminative learning technique involving SVMs and some NN techniques. Chakraborty et al.43 introduced a supervised-learning system employing an ANN, i.e., a feed-forward back-propagation neural network, to accomplish more accurate detection results for DR by varying the error goal, the number of neurons, and the number of hidden layers, achieving an overall accuracy of 97.13%.
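GLCM-based texture feature extraction, as used in Ref. 41, can be illustrated with a small sketch. The offset, number of gray levels, and the three Haralick-style features below are illustrative choices:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset, normalized to sum to 1."""
    h, w = img.shape
    M = np.zeros((levels, levels))
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M / M.sum()

def glcm_features(P):
    """Classic texture features computed from a normalized GLCM."""
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()
    energy = (P ** 2).sum()
    homogeneity = (P / (1 + np.abs(i - j))).sum()
    return {"contrast": contrast, "energy": energy, "homogeneity": homogeneity}
```

The resulting feature vector (one set per offset/direction) is what a classifier such as a DBN or SVM would consume.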
CNNs have been broadly used in DL methods and are highly effective due to their excellent performance in computer vision and their capability to run in parallel on GPUs,44 as shown in Fig. 3. Various CNN models are already available. Patel and Patel46 gave a complete review of several CNN architectures, such as LeNet-5, AlexNet, ZFNet, GoogLeNet, VGGNet-16, and ResNet50, and their applications to computer vision tasks. Gayathri et al.47 introduced a CNN architecture to extract characteristics from retinal fundus images of the MESSIDOR, Indian Diabetic Retinopathy Image Dataset (IDRID), and KAGGLE datasets for binary and multiclass classification of DR, achieving accuracies of 99.89% and 99.59%, respectively. In Ref. 48, a deep CNN with 18 convolutional layers and 3 fully connected layers was introduced by Shaban et al. to examine retinal photographs and systematically discriminate between no DR, moderate DR, and severe DR with a validation accuracy, sensitivity, specificity, and quadratic weighted kappa score of 88% to 89%, 87% to 89%, 94% to 95%, and 0.91 to 0.92, respectively, when 5-fold and 10-fold cross-validation approaches were utilized. A CNN model49 for the classification of healthy and unhealthy retina images was presented based on the infection of blood vessels. The method was tested on DiaretDB0, DiaretDB1, and DrimDB; the best accuracies achieved were 100% for DiaretDB0, 99.495% for DiaretDB1, and 97.55% for DrimDB. Samanta et al.50 introduced a transfer-learning-based CNN architecture to categorize pictures from a small and skewed database of 3050 training pictures belonging to four classes and achieved a Cohen's kappa score of 0.8836 on the validation set along with 0.9809 on the training set.
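The convolution, activation, and pooling operations that make up a CNN layer can be sketched directly. This toy forward pass is for illustration only and does not reproduce any reviewed architecture:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2D cross-correlation, the core operation of a convolutional layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (img[y:y + kh, x:x + kw] * kernel).sum()
    return out

def relu(x):
    """Rectified linear activation."""
    return np.maximum(x, 0)

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling (spatial downsampling)."""
    h, w = x.shape[0] // s * s, x.shape[1] // s * s
    return x[:h, :w].reshape(h // s, s, w // s, s).max(axis=(1, 3))
```

Stacking many such layers, with learned kernels, is what lets a deep CNN build up from edges to lesion-level features.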
A DL architecture based on a recent CNN called EfficientNet51 was presented by Chetoui and Akhloufi to detect referable DR (RDR) and vision-threatening DR, achieving an AUC of 0.984 for RDR and 0.990 for vision-threatening DR on the EyePACS dataset. A deeply supervised ResNet technique has been presented by Zhang et al.52 to classify the severity of DR automatically, obtaining around 80% accuracy, but the network encounters issues when categorizing the mild and moderate DR classes. Hybrid architectures combine deep networks with conventional classifiers. Lahmiri53 proposed a CNN-SVM model that can extract and learn discriminative characteristics of standard HEM templates. In Ref. 54, a hybrid DL-based method for the identification of DR in retinal images of the EYEPACS dataset was introduced by combining a CNN with an SVM (CNN + SVM). It is observed that fewer images were misclassified (FP + FN) when CNN + linear SVM was used in comparison to CNN + Softmax. A DNN-random forest (RF) hybrid architecture55 for handling color retinal photographs of the DRIVE database was proposed by Maji et al. for the recognition of coarse and fine vessels, where the accuracy was 93.27%. This approach does not achieve the highest accuracy, but it is unique in its ability to learn representations of the vessel appearance model. It is observed that supervised learning models are used for detection and segmentation of DR since they are very simple models. However, while training them, it is hard to force them to achieve the desired output, which is the limitation of supervised learning. On the other hand, unsupervised learning can build models with higher discriminative power than supervised models if provided with enough training samples. In retinal imaging applications, hybrid learning emerges either because ample test images with no labels are available or, more significantly, because there is no assurance about the labels.
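The CNN + linear SVM idea, i.e., deep features fed into a margin-based classifier instead of a Softmax head, can be sketched by training a linear SVM on fixed feature vectors. The subgradient-descent trainer below uses toy Gaussian clusters standing in for CNN features; all hyperparameters are illustrative:

```python
import numpy as np

def train_linear_svm(F, y, lr=0.01, reg=0.01, epochs=200, seed=0):
    """Linear SVM on fixed feature vectors F (n, d); labels y in {-1, +1}.
    Subgradient descent on the L2-regularized hinge loss."""
    w = np.zeros(F.shape[1]); b = 0.0
    for _ in range(epochs):
        margins = y * (F @ w + b)
        active = margins < 1                      # samples violating the margin
        gw = reg * w - (y[active, None] * F[active]).sum(0) / len(y)
        gb = -y[active].sum() / len(y)
        w -= lr * gw; b -= lr * gb
    return w, b

def predict(F, w, b):
    """Sign of the decision function."""
    return np.where(F @ w + b >= 0, 1, -1)
```

Replacing the Softmax head with a hinge-loss classifier changes only the final decision boundary; the deep feature extractor is unchanged, which is why such hybrids are cheap to try.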
Semisupervised features are more stable with respect to performance even when they are fed into a supervised classifier.

4. DR Feature-Based Detection and Segmentation Techniques

Generally, the detection and classification of DR images using DL are initiated by data collection and by employing the necessary preprocessing to enhance the images. Afterward, the data are provided to the DL method to extract the characteristics and to sort the images. As discussed before, the DR features include MAs, HEMs, EXs, and blood vessels, which play a vital role in the detection of DR. Based on these DR features, a thorough analysis of DR detection and segmentation approaches is presented in the subsequent subsections. Figure 4 depicts the DR detection and segmentation techniques according to the various retinal characteristics.

4.1. Blood Vessel Segmentation Techniques

Fu et al.56 formulated the retinal blood vessel segmentation problem as an edge detection task and solved it by employing a unique DL framework. They combined CNN and conditional random field (CRF) layers into a unified deep network known as DeepVessel. A deep NN-based system was introduced for retinal blood vessel segmentation57 in which preprocessing was done with zero-phase component analysis and global contrast normalization, and augmentation was done using gamma correction and geometric transformations. The presented technique attained an overall accuracy of 94%, and results can be improved by adopting different training parameters such as the learning rate. Zhang et al.58 introduced a CNN model that utilizes both bottom- and top-level characteristics and employs atrous convolution to achieve multilevel features. The method was tried on three typical benchmarks, and the experiments showed that it considerably surpasses the method in Ref. 57. Hu et al.59 presented a CNN- and CRF-based technique for segmentation of retinal vessels.
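The gamma-correction and geometric-transformation augmentation mentioned for Ref. 57 can be sketched as follows; the specific transform set (flips and 90-deg rotations applied identically to image and mask) is an illustrative choice:

```python
import numpy as np

def gamma_correct(img, gamma):
    """Pixel-wise gamma correction for an image scaled to [0, 1]."""
    return np.clip(img, 0, 1) ** gamma

def augment(img, mask):
    """Yield geometrically transformed (image, ground-truth mask) pairs:
    four 90-degree rotations, each with and without a horizontal flip."""
    for k in range(4):
        r_img, r_mask = np.rot90(img, k), np.rot90(mask, k)
        yield r_img, r_mask
        yield np.fliplr(r_img), np.fliplr(r_mask)
```

For segmentation tasks, the key point is that every geometric transform must be applied to the label mask as well, otherwise the pixel-level supervision is corrupted.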
To produce a probability map and to acquire more information about retinal vessels, a multiscale CNN is used in the first step of segmentation, whereas in the second step, CRFs are used for the final binary segmentation, which helps in detecting vessel edges. A simple three-step preprocessing approach introduced by Samuel and Veeramalai60 was utilized to emphasize the blood vessels before the DNN training was done. The presented multiscale DNN segmented the retinal blood vessels without the aid of hand-engineered blood vessel characteristics. The system attains good sensitivity with satisfactory specificity and accuracy. Oliveira et al.61 carried out segmentation of retinal vessels using the stationary wavelet transform (SWT) along with a multiscale fully convolutional network, where varying vessel orientation and width are handled by the proposed system. The system also supports robust training, fast GPU implementation, and tolerance to interrater variability. A deep supervision and smoothness regularization network (DSSRN) was introduced for fundus vessel segmentation62 where the technique was built on a holistically nested edge detector employing a VGG network. Progressive accuracy is obtained, but the average sensitivity is inadequate in contrast to other traditional methods; by fine-tuning the system, it can be boosted. Wang et al.63 introduced a dense U-Net that relies on a patch-based training technique for fundus vessel segmentation. While training the system, the training patches are acquired randomly, and the trained model was utilized to predict test patches. Segmentation was done by sequentially recombining the overlapping patches. In Ref. 64, an FCN-based system named structured dropout U-Net (SD U-Net) was presented to decrease the overfitting issue of U-Net and improve the capability of blood vessel segmentation. The result achieved surpasses the method in Ref. 63, as shown in Table 2.
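Patch-based training and the recombination of overlapping patch predictions, as used in the dense U-Net63 and SD U-Net,64 can be sketched as follows; the patch size and stride are illustrative:

```python
import numpy as np

def extract_patches(img, size, stride):
    """Slide a window over the image; return patches and their top-left corners."""
    patches, coords = [], []
    for y in range(0, img.shape[0] - size + 1, stride):
        for x in range(0, img.shape[1] - size + 1, stride):
            patches.append(img[y:y + size, x:x + size])
            coords.append((y, x))
    return patches, coords

def stitch(patches, coords, shape, size):
    """Recombine overlapping patch predictions by averaging the overlaps."""
    acc = np.zeros(shape); cnt = np.zeros(shape)
    for p, (y, x) in zip(patches, coords):
        acc[y:y + size, x:x + size] += p
        cnt[y:y + size, x:x + size] += 1
    return acc / np.maximum(cnt, 1)
```

Averaging the overlapping predictions smooths patch-boundary artifacts, which is one reason overlapping strides are preferred over tiling the image exactly.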
Table 2. Blood vessel segmentation methods.
Jin et al.72 proposed a deformable U-Net (DUNet) that relies on a CNN for fundus vessel segmentation and exploits the local characteristics of retinal vessels. Upsampling operators are employed to enhance the output resolution, to capture context-specific details, and to assist precise localization by merging both high- and low-level characteristics. The method captures the retinal vessels at several scales by balancing the receptive field suitably. In Ref. 65, a lightweight model called spatial attention U-Net was introduced that infers attention maps along the spatial dimension. The result achieved surpasses the latest methods.57,72 A generative adversarial network was developed with a U-Net-style generator and various discriminators to improve the proficiency of the CNN model.75 It is possible to improve the segmentation result by incorporating anatomical understanding of the vessel configuration and the optic nerve head, and preprocessing can also be applied for noise removal. A multilevel deeply supervised NN with bottom-top short connections (BTS-DSN) was introduced in Ref. 73, where the system utilized short connections along with the ResNet-101 and VGGNet frameworks and attained AUC values of 0.9859/0.9806 on STARE and DRIVE, respectively. Samuel and Veeramalai68 segmented the blood vessels from retinal images as well as from coronary angiograms using a single VSSC Net. VSSC Net produces better accuracy, and the processing time needed to segment the blood vessels is 0.2 s with the help of a GPU. Additionally, the vessel extraction stage uses a minor parameter count of 0.4 million parameters to correctly segment the retinal vessels. In Ref. 74, a method using round-wise feature aggregation on bracket-shaped CNNs (RFA-BNet) was proposed to eliminate the requirement of patch augmentation while successfully addressing the irregular and diverse appearance of retinal vessels.
The bracket-style decoding manner, blended with thorough aggregation between decoded feature maps of highest resolution, enables the proposed RFA-BNet to locate vessels flexibly and precisely at the pixel level. The densely connected and concatenated multi-encoder–decoder (DCCMED)70 comprises multiple linked encoder–decoder CNNs and densely connected transition layers. To strengthen the generalization capability of the network, a patch-based data augmentation technique was utilized. In Ref. 69, the network followed network (NFN+) method was presented to successfully extract multiscale details and fully utilize deep feature maps. The NFN+ method employs a cascaded design and internetwork skip connections to segment retinal vessels more precisely with the purpose of improving segmentation accuracy, but it remains incapable of ensuring the connectivity of the segmented retinal vessels. A method based on a multipath CNN was proposed in Ref. 71, where a low-frequency image and a high-frequency image were obtained using a Gaussian filter and provided to the constructed multipath CNN to produce the final segmentation map. A model based on supervised learning was introduced by Tamim et al.67 that utilizes a hybrid feature vector and a multilayer perceptron NN, where a 24D feature vector was constructed for every pixel. Wang et al.63 proposed a system using Zernike moment-based shape descriptors. The system performs classification by estimating an 11D feature vector consisting of statistical and shape-based features. The method employed the DRIVE and STARE databases, achieving accuracies of 0.945 and 0.9486, respectively. Atli and Gedik66 introduced a DL framework for fully automated blood vessel segmentation. They proposed a method, known as Sine-Net, which first makes use of upsampling and later proceeds with down-sampling layers.
The detailed analysis of databases, techniques employed for blood vessel segmentation, and performance is described in Table 2. Sine-Net66 is and 3.14 times faster than the works in Refs. 57 and 59. Contrary to the work in Ref. 57, in which preprocessed picture patches are employed for training the CNN, Sine-Net66 predicts all pixels at once rather than employing a patch to predict the central pixel alone. The vessel sections at the crossing points in DCA1 are segmented with high precision by the VSSC Net68 in comparison to DUNet,72 which accomplishes relatively higher performance due to the introduction of deformable convolutional blocks in the U-Net. The multiscale FCN with deep supervision and an enhanced loss function59 missed a few of the thin vessels, but the deeply supervised FCN with short connections improves the performance of multiscale vessel segmentation.73 The multilevel and multiscale FCN60 attains the maximum sensitivity values for the STARE dataset. The minimum parameter count is achieved by the persistent multiscale FCN presented by Hu et al.59 and the patch-input-fed FCNs suggested by Oliveira et al.61 and Jin et al.72 The inadequate handling of thin vessels by the FCN61 and CNN57 should be considered. The CNNs in Refs. 57 and 72 are compute-intensive as a result of the patch-based technique utilized to segment the vessels. If patch input is provided to the FCN framework,72,73 it requires a long time since the recombination occurs at the end.

4.2. Microaneurysm Detection Techniques

Habib et al.76 proposed a unique combination of algorithms operated on a public database for computerized identification of MAs in color fundus photographs. The presented method first identifies an initial set of candidates and then classifies them. Detection was performed utilizing a Gaussian matched filter and classification using an RF ensemble classifier. Sreng et al.77 presented a method to find MAs on the basis of their characteristics in retinal images.
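A Gaussian matched filter of the kind used for MA candidate detection76 can be sketched as follows; the kernel radius and scale are illustrative assumptions, and real pipelines typically apply it at several scales:

```python
import numpy as np

def gaussian_matched_filter(img, sigma=1.0, radius=3):
    """Correlate the image with a zero-mean 2D Gaussian template; roughly
    circular blobs matching the template (e.g., MA candidates on an inverted
    green channel) respond strongly, while flat background gives zero."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    k -= k.mean()                       # zero mean: constant regions respond with 0
    h, w = img.shape
    out = np.zeros((h - 2 * radius, w - 2 * radius))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (img[y:y + 2 * radius + 1, x:x + 2 * radius + 1] * k).sum()
    return out
```

Thresholding the response map yields the initial candidate set, which a classifier (an RF ensemble in Ref. 76) then prunes.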
Initially, preprocessing such as grayscale conversion, inverse discrete 2D wavelet transforms, and median filtering was done to decrease noise in the image and to increase the contrast. Segmentation with the aid of Canny edge detection and maximum entropy thresholding was performed. Finally, a morphological process was applied to outline these lesions. The outcome was examined by an ophthalmologist; the accuracy was 90%, and the average running time was 9.53 s per image. Kumar and Kumar78 introduced a DR identification strategy that extracts the precise area and number of MAs from color retinal pictures. MA identification utilizes CLAHE, principal component analysis (PCA), morphological methods, averaging filters, and SVM classifiers. The sensitivity and specificity of the DR detection system are observed to be 96% and 92%, respectively. Xu et al.79 introduced a parallel technique to detect MA turnover relying on sequential retinal photographs and longitudinal medical conditions. The presented computerized analysis of MA turnover, merging two dissimilar techniques, can considerably strengthen screening of a huge diabetic patient population for DR. The results on the Grampian diabetes database show that the presented photograph analysis technique obtained 94% sensitivity and 93% specificity, whereas the classification method obtained a sensitivity and specificity of 89% and 88%, respectively. Dai et al.80 presented a unified clinical-report-guided method where specialist knowledge from clinical text reports was extracted by means of text mining and mapped to visual features. Integrating keywords from text reports and retinal images helps to enhance the detection accuracy, with promising achievement in terms of precision and recall. Cao et al.81 studied MA recognizability using small regions extracted from retinal images of the DIARETDB1 database.
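The PCA step used for dimensionality reduction in pipelines such as Ref. 78 can be sketched via the singular value decomposition; the toy rank-3 data below is illustrative:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project the rows of X onto the top principal components (via SVD).
    Returns the projected data and the component directions."""
    Xc = X - X.mean(axis=0)                         # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, Vt[:n_components]
```

Projecting raw pixel or descriptor vectors onto a handful of principal components discards correlated, low-variance directions before a classifier such as an SVM is trained.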
Raw pixel magnitudes of the retrieved regions were fed directly as input to an NN, RF, and SVM. PCA and RF were utilized for decreasing the dimensionality of the input; with the help of leave-10-patients-out cross-validation and conventional ML approaches, the system's AUC increased from 0.962 to 0.985. Hatanaka et al.82 proposed an MA detector that merges three current detectors: the double-ring filter, a shape index based on the Hessian matrix, and a Gabor filter. The introduced model is designed with a two-layer DCNN and a three-layer perceptron. In the two-level DCNN, the first DCNN is for primary MA identification and the second DCNN is for FP reduction. The technique was performed on the DIARETDB1 database and obtained a sensitivity of 84% at 8.0 FPs per image. Gupta et al.83 proposed a DL technique to distinguish the lesions with higher accuracy. To fulfill this work, the DL model VGG19 was trained on the IDRID database to retrieve the characteristics from color retinal eye images. These characteristics are then delivered to distinct classifiers, such as logistic regression (LR), SVM, and KNN, to distinguish the lesions accurately. The accuracy for classifying MA photographs was 96.7% using LR. Eftekhari et al.84 introduced a model that embeds a unified method employing a two-level approach with two datasets, resulting in accurate detection. The outcomes revealed a suitable sensitivity of about 0.8 for an average of FPs per image. Qiao et al.85 introduced a prognosis of MA and an early diagnostics system for NPDR capable of efficiently training DCNNs for the segmentation of retinal images, which can enhance NPDR recognition effectively. Here, a PCA-based method for detecting MAs was introduced. After that, any variation from the conventional MA is identified by statistical surveillance; a sparse PCA is utilized to discover the hidden pattern of the MA data.
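A minimal KNN classifier of the kind that the VGG19 features were fed into83 can be sketched as follows (Euclidean distance, majority vote; the data are toy clusters, not lesion features):

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Classify each query by majority vote among its k nearest training points."""
    preds = []
    for q in query:
        d = np.linalg.norm(train_X - q, axis=1)      # distances to all training points
        votes = train_y[np.argsort(d)[:k]]           # labels of the k nearest
        vals, counts = np.unique(votes, return_counts=True)
        preds.append(vals[counts.argmax()])
    return np.array(preds)
```

KNN needs no training phase, which makes it a convenient baseline head on top of frozen deep features, at the cost of storing the whole training set.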
An MA detection framework using ML based on directional local contrast (DLC) was introduced by Long et al.86 Initially, blood vessels were enhanced and segmented using a function based on examining the eigenvalues of the Hessian matrix. After the blood vessels were excluded, MA candidate areas were obtained employing shape features and connected-component analysis. After image segmentation, the characteristics of every MA candidate patch were retrieved and categorized into MA or non-MA, achieving FROC scores of 0.374 and 0.210 on the two databases, respectively. In Ref. 87, a method was introduced for the recognition of MAs in fundus images by studying directional cross-sectional profiles. The number of pixels to be analyzed is greatly decreased by considering only the local maximum intensity pixels. The peak is identified, and its features are taken into account for the construction of the feature space. The performance parameters sensitivity, specificity, and accuracy achieved were 94.59%, 96.56%, and 95.80%, respectively, for the E-Ophtha MA database. Mazlan et al.88 proposed a computerized identification of MAs in retinal images. The method involves image preprocessing, segmentation using an H-maxima and thresholding approach, postprocessing, feature extraction, and classification phases. The MLP classifier attained the highest result of 92.28%, compared with 89.08% for the SVM. The comparative analyses of the databases, techniques employed for MA detection, and their performance are stated in Table 3.

Table 3. MA detection methods.
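Hessian-eigenvalue vessel enhancement, the first step of the DLC pipeline,86 rests on the observation that a tubular structure has one large principal curvature (across the vessel) and one near zero (along it). A finite-difference sketch, illustrative rather than the authors' exact filter:

```python
import numpy as np

def hessian_eigenvalues(img):
    """Per-pixel eigenvalues of the 2D Hessian, via finite differences.
    For a dark tubular structure on a bright background, lambda1 is large
    and positive across the vessel while lambda2 stays near zero along it."""
    gy, gx = np.gradient(img)
    Iyy, Iyx = np.gradient(gy)
    Ixy, Ixx = np.gradient(gx)
    tmp = np.sqrt(((Ixx - Iyy) / 2) ** 2 + Ixy ** 2)
    mean = (Ixx + Iyy) / 2
    return mean + tmp, mean - tmp     # lambda1 >= lambda2 at every pixel
```

Vesselness measures built from these eigenvalues let the pipeline suppress vessels so that the remaining small dark blobs can be screened as MA candidates.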
The ML-based DLC86 is a simple, time-saving method in contrast to image segmentation.87 The multisieving DL80 and CNN84 address the issue of dataset imbalance. The pixel intensity rank transform method greatly reduced the FP rate when compared with Eftekhari et al.84 The multilevel thresholding and multilayer perceptron method performs fast even when the FP rate is high, contrary to the method by Hatanaka et al.82 For detection of NPDR only, the method presented in Ref. 78 works best.

4.3. Hemorrhage Detection Techniques

A unified technique for the computerized identification of HEMs in retinal pictures was presented by Kaur et al.89 The paper presented an adaptable and effective technique for HEM identification. The research includes the study of 4546 blobs from 50 retinal pictures picked from the database. A combined technique of morphological processing and RF-based classification was utilized. In Ref. 90, a technique to enhance and boost CNN training for clinical image review by dynamically choosing incorrectly classified negative samples during training was proposed. Weights are allocated to the training samples, and informative samples are more likely to be involved in further CNN training iterations. The presented technique was evaluated and compared by training a CNN with a selective sampling technique. Xiao et al.91 introduced a unique ML-based HEM identification model, where the authors emphasized the improvement of identification of HEMs that are near to or linked with retinal vessels. A preliminary test was carried out on photographs from two datasets, where one achieved 93.3% sensitivity and 88% specificity and the other 91.9% sensitivity and 85.6% specificity. Gargeya and Leng92 presented DL-based computerized feature learning for DR identification, where the method handled color retinal pictures and categorized them as without DR or with DR.
The method attained an AUC of 0.97, with a sensitivity of 94% and specificity of 98%, on fivefold cross-validation employing the MESSIDOR 2 and E-Ophtha databases. Godlin Atlas and Parasuraman93 reviewed HEM detection in fundus photographs with the help of classifier and segmentation methods. The photographs were fed to preprocessing steps and significant features were extracted from them. An ANFIS classifier and a modified region-growing model were then utilized to achieve a higher accuracy of 92.56%. Murugan94 proposed an effective motion pattern generation technique to identify HEMs. The novelty of the technique is to reduce the dimensionality in accordance with picture resolution, thereby accelerating HEM identification. MATLAB was utilized for implementation, and validation was done on the MESSIDOR database. The presented technique achieved superior performance compared with other techniques. In Ref. 95, the technique especially emphasizes retrieval of blood vessel patterns and HEMs using an object detection method, where green-channel images were extracted from RGB images for preprocessing. After segmenting the objects, local binary pattern features were used to categorize them as HEMs and non-HEMs. The database utilized was IDRID, and 92.31% agreement with the ground truth was achieved. A supervised ML method for retinal HEM detection and classification was introduced in Ref. 96. Splats were utilized to identify HEMs in the preprocessed retinal picture; in this approach, color pictures of the retina are partitioned into various segments covering the entire picture. By means of splat-level and GLCM characteristics obtained from the splats, three classifiers were trained and tested with the help of appropriate characteristics.
The accuracy was determined using a retinal expert; the verification was performed with the aid of database and clinical pictures, and the results achieved over 96% sensitivity and accuracy. A three-level hybrid model was introduced for the categorization of digitized retinal photographs with and without HEMs.53 The observational outcome from the 10-fold cross-validation approach demonstrated that CNN-SVM surpasses CNN-LDA, CNN-NB, and CNN-KNN. The introduced model is fast and accurate. The comparative studies of the databases, techniques employed for HEM detection, and their performance are described in Table 4. Table 4. HEM detection methods.
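GLCM texture descriptors, such as those combined with splat-level features in Ref. 96, can be sketched as follows. This is a minimal, generic implementation for a single pixel offset; the quantization level and the three features shown are illustrative assumptions, not the reference's exact feature set:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one (dx, dy) offset.
    `img` is assumed to hold intensities in [0, 1]."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[q[i, j], q[i + dy, j + dx]] += 1
    return m / m.sum()

def glcm_features(m):
    """Common Haralick-style texture features from a normalized GLCM."""
    i, j = np.indices(m.shape)
    return {
        "contrast": float((m * (i - j) ** 2).sum()),
        "energy": float((m ** 2).sum()),
        "homogeneity": float((m / (1.0 + np.abs(i - j))).sum()),
    }
```

A flat patch yields zero contrast and maximal energy, while a high-frequency pattern (e.g., a checkerboard) yields high contrast, which is the kind of discrimination such features provide between smooth background and textured lesion regions.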
The CNN-SVM surpasses CNN-LDA, CNN-NB, and CNN-KNN.53 The DL method in Ref. 92 is cost-effective, as high computational hardware is not required, in contrast to other methods. More work needs to be done to increase the efficiency of the morphological operation and RF method89 and the object detection and ML method.95 Rule-based and ML methods91 perform individual HEM segmentation, which is not performed in the CNN using selective data sampling.90

4.4. Exudate Detection Techniques

EXs are a kind of lipid fundus lesion evident through retinal imaging, varying in color from white to yellow with irregular patterns, sizes, contrasts, and shapes. They are the lesions possessing the highest intensity values, with fairly distinct margins. Hard EXs are lipoproteins and other proteins leaking out of the retinal vessels, which prevent light from reaching the retina and thus lead to visual impairment. They are frequently irregularly shaped and shiny, found near the MAs or at the borders of retinal edema. Soft EXs appear in the severe stages of DR. They arise as a consequence of arteriole blockage. The reduced blood flow to the retina results in ischemia of the retinal nerve fiber layer (RNFL), which ultimately impairs axoplasmic flow and thereby aggregates axoplasmic debris across the retinal ganglion cell axons. Such accumulations can be viewed as fluffy white lesions in the RNFL, generally referred to as cotton wool spots.6 Several approaches to identify soft EXs and hard EXs are reviewed in this section.

4.4.1. Soft exudates

There are very few papers in which soft EXs are discussed; some of them are covered in this section. In Ref. 97, hard and soft EXs were identified and classified using the k-means clustering method.
At first, the CIELAB color-space retinal picture was preprocessed to exclude noise, followed by blood vessel network elimination, which simplifies spotting and removal of the optic disc (OD). OD elimination was performed by employing the Hough transform method, and then by applying k-means clustering, EXs are recognized. Finally, the EXs are categorized as hard and soft EXs subject to their edge energy and a threshold. Thomas et al. proposed a system98 to identify and classify EXs as normal, soft EXs, and hard EXs. The system comprised two basic stages: initially, morphological image processing techniques were employed for identification of EXs, incorporating eradication of the OD, and later a fuzzy logic algorithm was utilized for classification. The fuzzy logic principle exploits the RGB color-space values of retinal pictures for the fuzzy set. In Ref. 99, a successful approach for recognizing soft/hard EXs from abnormal retinal photographs was introduced. First, preprocessing was conducted by means of Gaussian filtering, which enhances the input retinal photograph. Then, by exploiting region segmentation, feature extraction, and a Levenberg–Marquardt-based neural network classifier, normal/abnormal identification is carried out. After that, soft and hard segmentations are accomplished from abnormal retinal photographs by utilizing fuzzy c-means clustering. To perform hierarchical classification into soft or hard EXs from the abnormal retinal photographs, Levenberg–Marquardt-based neural networks were used. Borsos et al.100 introduced a procedure for segmentation of lesions, i.e., hard and soft EXs, which includes three major steps. Initially, the several luminance patterns discovered in retinal images are counterbalanced in the preprocessing phase by employing background and foreground pixel extraction and a data normalization operator.
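Stepping back to the clustering-based pipeline of Ref. 97, the k-means step can be illustrated with a minimal sketch (plain Lloyd iterations over per-pixel feature vectors). The one-dimensional intensity feature in the usage example below is an assumption for illustration, not the paper's exact feature set:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-means: returns per-point labels and cluster centers.
    `points` is an (n, d) array of feature vectors."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest center
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers
```

In an exudate-detection setting, pixels would be clustered on intensity (or color) features and the brightest cluster taken as the EX candidate set; subsequent edge-energy thresholding, as in Ref. 97, would then separate hard from soft EXs.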
Then, to produce homogeneous superpixels in the image, a modified simple linear iterative clustering (SLIC) algorithm is utilized. Finally, pixel classification based on an ANN is performed, employing 15 features taken from the neighborhood of each pixel in the equalized images and from the properties of the superpixel to which the pixel belongs. Erwin101 presented various techniques for exudate detection, including the adaptive threshold method, multithreshold Otsu, top-hat and bottom-hat, and fuzzy c-means. Among the available exudate identification approaches in DR images, the fuzzy c-means procedure is the foremost approach for discovering EXs in the STARE database, whereas in the DIARETDB1 database, the top-hat and bottom-hat approaches are the best.

4.4.2. Hard exudates

An EX identification system was introduced by Prentašić and Lončarić,102 employing a DCNN. Additionally, DL facilitates architectural landmark identification. The planned technique primarily recognizes the EXs from retinal fundus images. With the purpose of incorporating a high degree of anatomical information about potential exudate locations, the output of the CNN was combined with the outputs of the OD identification and vessel identification processes, obtaining a sensitivity of 0.78. In Ref. 103, a CNN was utilized to identify EXs in retinal photographs, and an augmented training technique to enhance and speed up CNN training was used. The trained structure was evaluated on a privately annotated database and three freely available public databases. Abbasi-Sureshjani et al.104 proposed a technique for EX segmentation in fundus photographs where the system comprises 20 convolutional layers (9 ResNet blocks), and the results showed that the system makes very good decisions regarding the existence of EXs, which is usually sufficient for clinicians to take measures.
The technique could readily be utilized for the identification of other kinds of lesions, provided that their manual segmentations are available. In Ref. 105, a deep CNN was introduced to accomplish pixel-wise EX detection, where the CNN was initially trained with expert-annotated EX images. For the sake of achieving pixel-level ground truth while decreasing processing time, the best EX candidates were first extracted with the help of morphological opening operations. After that, the local areas around the candidate points were passed to the trained CNN for detection. In the study of DR, Kaur and Mittal106 proposed a ruling method for faithful and correct segmentation of EXs, where the threshold values were selected dynamically. The input data of the proposed technique comprise 1307 fundus images with disparities in color, size, location, and shape. Observational findings at the lesion level are shown in Table 5. The segmentation findings for image-based assessment, with an average sensitivity of 94.62%, specificity of 98.64%, and accuracy of 96.74%, demonstrate the clinical potency of the technique. The proposed system117 used various combinations of characteristics, and an SVM was used for segmentation of EXs near the macular area of the eye. The system was tested on four databases and achieved an accuracy of 100% employing the DRIVE dataset, accuracies of 95% for DIARETDB1 and MESSIDOR, respectively, and 94% accuracy for the AFIO dataset. A DL framework has been proposed116 that identifies hard EXs in retinal pictures by employing the TensorFlow DL library, and the database used was IDRiD. The system accuracy obtained was 96.6% over test photograph patches. Table 5. EX detection methods.
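The morphological opening step used to extract bright EX candidates in Ref. 105 can be sketched as a white top-hat (image minus its opening), which keeps bright spots smaller than the structuring element. The square kernel, its size, and the threshold below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def erode(img, k=3):
    """Gray-scale erosion with a k x k square structuring element."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].min()
    return out

def dilate(img, k=3):
    # Gray-scale dilation is erosion of the negated image
    return -erode(-img, k)

def tophat_candidates(img, k=5, thresh=0.2):
    """White top-hat: image minus its opening; thresholding the residue
    flags bright structures smaller than the k x k element."""
    opening = dilate(erode(img, k), k)
    return (img - opening) > thresh
```

Small bright blobs survive the top-hat and are flagged as candidates, while large bright regions (such as the OD) are reconstructed by the opening and suppressed, which is exactly why this step is a cheap candidate filter before the CNN stage.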
In Ref. 115, the databases used were DIARETDB1 and DIARETDB0, and the CLAHE technique was used to enhance the RGB image. Later, the green channel was extracted. A median filter with a large kernel was applied to estimate the background, and the resulting image was subtracted from the retinal image to enhance contrast. A global threshold was set manually to detect the hard EXs. To find the OD, the local variance method was used. Decorrelation was done to remove the false candidates, and a recursive region-growing algorithm was used for identification of EXs. In Ref. 114, the STARE database was used, where at first RGB pictures were transformed to gray-scale pictures, and later Gaussian filtering, edge detection, and thresholding were done. A texture feature extraction method was used, which merges both histogram of gradient and GLCM features. Using KNN and CNN, classification of segmented areas into abnormal and normal areas was done, and the two classifiers were compared. The primary aim of the study in Ref. 113 was to produce a unique technique to identify EX lesions in color fundus photographs by utilizing a morphology mean shift algorithm (MMSA). The MMSA improved the accuracy outcome over the plain mean shift algorithm by 13.10%. Khojasteh et al.112 employed residual networks (ResNet-50) along with an SVM to acquire superior outcomes for the identification of fundus EXs. They studied several CNNs, pretrained ResNets, and a discriminative RBM, and obtained a stronger method with increased efficiency of EX detection. In Ref. 111, the databases used were DIARETDB1, DIARETDB0, and HRF; to find EXs, the peak intensity value from the histogram was taken as a threshold and used for segmentation. The CLAHE method was used for detecting the optic disc. For the two datasets chosen, the sensitivity and specificity were found for gamma values of 0.49, 0.5, and 0.51, with the best result obtained for a gamma value of 0.49.
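The green-channel extraction and median-filter background subtraction described for Ref. 115 can be sketched as follows. The kernel size is an assumption, and a naive sliding-window median stands in for an optimized large-kernel filter:

```python
import numpy as np

def median_background(img, k=7):
    """Estimate a smooth background with a k x k sliding-window median."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def enhance_lesion_contrast(rgb):
    """Green channel minus its median background: bright, small lesions
    stand out while the slowly varying background is flattened."""
    green = rgb[..., 1].astype(float)
    return green - median_background(green)
```

Thresholding the enhanced result then yields hard-EX candidates; in the reviewed pipeline this is followed by OD localization and region growing to reject false candidates.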
A pretrained CNN-based framework was proposed by Mateen et al.110 for the identification of EXs, where at first data preprocessing was carried out for normalization of EX patches, and later transfer learning was conducted for feature extraction employing pretrained CNN systems. Additionally, the combined features from the fully connected layers were provided to a softmax classifier for the classification of EXs. The observational outcome showed that the presented pretrained CNN-based model surpasses the existing methods for EX detection. Alzami et al.109 proposed a method for multiclass DR detection using EX candidates. EX candidates are obtained by utilizing CLAHE and a Wiener filter to enhance the fundus images. Then, to refine the candidates, region growing, segmentation, and clustering methods, which consider circularity, area, and eccentricity, are utilized. Finally, statistical features were extracted from those candidates and fed into an ensemble learning process. The results demonstrated that the method with XGBoost as an ensemble classifier is able to grade the multiclass DR severity level and is comparable with other research that uses the MESSIDOR dataset. The method presented in Ref. 108 permits the building of a predictive system for primary and secondary prevention of DME, with high classification accuracy and visualization of severity grading. Datasets used include the Indian Diabetic Eye Diseases Dataset (IDEDD), IDRID, and DIARETDB1. The ML methods RF and SVM were utilized for classification. The proposed method107 successfully detects the EXs by nullifying the OD, because the OD and EXs are of similar intensity. Image processing and linear regression, an ML technique, are involved; the latter is used to train a machine to differentiate between the OD and EXs. Here, the supervised learning technique called linear regression is used, which could overcome some of the disadvantages of SVM, CNN, KNN, and so on.
The detailed reviews of the databases, techniques employed for EX detection, and their performance are stated in Table 5. Both the image processing and linear regression algorithm107 and the tree-based forward search algorithm108 require less memory, but the method in Ref. 118 achieved higher accuracy. The morphological operations111 and DIP115 methods can improve their performance using ML techniques. The DCNN method102 and the mixture of features with SVM117 require more time, contrary to the DCNN105 and the image processing and linear regression algorithm.107 Retinal blood vessel identification and segmentation is an important aspect, since it helps in identifying diseases affecting the eye, including glaucoma, hypertension, and DR. The MA, caused by leakage from retinal vessels, is the early sign of DR, and its identification is a tedious job. Preprocessing of data plays a vital role in removing noise in the retinal images and also helps in enhancing image contrast and the quality of the fundus image. It is found that most of the scholars have utilized green channel extraction and CLAHE preprocessing techniques, as the green color plane maintains the highest contrast and lowest noise for capturing the vessels correctly, in contrast to the red and blue color planes, which are dominated by the background texture. Hence, it is tough to identify whether the thinner pixels are vessels or not in the red and blue color planes. Contrast enhancement plays a vital role in retinal imaging, where it is employed to enhance the quality of the image or bring out fine details in degraded images. The CLAHE preprocessing method aids in stretching the image contrast and normalizing the gray levels. Even though CLAHE enhances fine details, texture, and local contrast of the images well, it raises the clarity of the major lesion at the price of concurrently introducing small yet false intensity inhomogeneities in the background, leading to greater FPs.
The existence of inhomogeneities can also mislead the segmentation computation as to the location of the actual lesion under examination. Because CLAHE removes noise at the cost of raising the inhomogeneities, there is a trade-off between the accuracy of the improved image and the inhomogeneities in the background.

5. Commercially Available AI DR Screening Technologies

Artificial intelligence (AI) using ML and DL has been endorsed by diverse groups to establish automated DR detection systems. Several state-of-the-art AI DR screening technologies are present commercially. Recently, two automated AI-based DR screening algorithms acquired United States Food and Drug Administration (FDA) approval. Although additional algorithms are receiving attention in clinical service in other countries, their real-world functioning has not been evaluated consistently. In April 2018, the first autonomous AI system, IDx-DR, suitable for taking diagnostic decisions, received approval from the U.S. FDA. IDx-DR is an ML algorithm fitted with an independent fundus camera capable of screening for referable DR and is utilized by primary care medics to determine clients demanding a recommendation to an ophthalmologist for additional supervision.118 A survey in which a screening of 900 patients was performed by the IDx-DR device achieved a sensitivity of 87.2% and specificity of 90.7%. The second AI-based DR system, named EyeArt, received FDA approval in June 2020; it was developed by Eyenuk Inc., based in Los Angeles, USA. In a study with over 100,000 consecutive patient visits, EyeArt reported 91.3% sensitivity and 91.1% specificity for referable DR and 98.5% sensitivity for vision-threatening DR.119 In Ref. 120, the commercially available latest DR screening technologies are summarized, which include IDx-DR, RetmarkerDR, EyeArt, Google, Singapore SERI-NUS, the Bosch DR algorithm, and Retinalyze.
All these systems were modeled employing a variety of training datasets and proficient approaches. Liu et al.121 presented a brief description of historical and ongoing aspects of AI for DR screening. Furthermore, the detailed performance in developing and validating AI DR algorithms was considered, along with regulatory approval, clinical validation, and future outlook. Lee et al.122 introduced a head-to-head, multicenter comparison study of recent AI DR screening in which 5 companies presented 7 algorithms, among which one was FDA approved, and a total of 23,724 patients were included. The effectiveness of these seven methods was also compared against a single teleretinal grader (human). The accuracy outcomes differed remarkably between the algorithms, where only three out of seven attained comparable sensitivity and one attained comparable specificity relative to the original teleretinal graders. A detailed analysis and comparison of two leading, commercially available screening systems, IDx-DR and Retinalyze, has been published by Grzybowski and Brona.123 Two screening strategies for Retinalyze were assumed for comparison, where out of four images either one (strategy 1) or two (strategy 2) were required to be labeled positive by the model for an overall positive outcome at the patient level. IDx-DR needs all four images to carry out the screening; the outcomes are per-image, and each image is screened individually. The results for DR-positive and DR-negative cases for IDx-DR were 93.3% and 95.5%; for Retinalyze strategy 1, 89.7% and 71.8%; and for Retinalyze strategy 2, 74.1% and 93.6%, respectively.

6. Performance Evaluation Metrics

In the area of health informatics, the data utilized in clinical therapy are categorized as without disease and with disease, and the same holds for DR identification. The correctness124 of a technique is assessed by examining the following parameters:
The performance estimation measures employed in the reviewed research are described in Table 6 to aid readers. Table 6. Most used performance metrics in studies.
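As an illustration of the metrics summarized in Table 6, the basic confusion-matrix quantities and the sensitivity, specificity, and accuracy derived from them can be computed as follows; this is a generic sketch, not tied to any reviewed system:

```python
def classification_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from binary labels (1 = DR).
    Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / len(y_true),
    }
```

For screening, sensitivity (not missing DR patients) and specificity (not over-referring healthy patients) are the two quantities the reviewed studies report most often, which is why they anchor the comparisons in the tables above.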
7. Conclusion

In the last few years, several diabetes-related health issues have been rising worldwide. DR, caused by diabetes, may lead to blindness, and to prevent this, early diagnosis is necessary. MAs, HEMs, EXs, etc. are the lesions present in DR. The traditional way to identify DR involves ophthalmologists' assessment and diagnostic capability, which is time-consuming and costly. Hence, it has become crucial to present efficient DL-based methods. DL has now become an interesting research area and has achieved superb performance in the domain of image processing, especially in DR identification. Innovative and intricate DNN frameworks are being designed to solve several automated tasks. In this review paper, first the collection of retinal datasets is briefly outlined and then DL methods are discussed. After that, the adoption of various approaches has been explored to identify retinal irregularities that include retinal blood vessels, HEMs, MAs, and EXs. Then the performance evaluation metrics for automated detection models have been briefly reviewed. It was observed that a considerable amount of scholarly work has been carried out utilizing CNN models to produce deep multilevel models for the detection of DR employing digital retinal photographs. Advantages of carrying out DL-based methods in DR screening include reduced reliance on human power, lower expenses of screening, and fewer concerns related to intra- and intergrader variability. Despite the fact that the significance of DL is rising and various positive outcomes in its research are reaching heights, there remain challenges that need to be addressed. Automated diagnosis of a DR image encounters two major challenges: technical fluctuation in the imaging procedure and patient-to-patient variability in pathological indications of illness.
On the other hand, some understanding is required prior to applying DL about choosing the model and setting the number of convolutional layers, pooling layers, nodes per layer, etc. Additionally, the computer hardware employed in previous studies was not sufficiently capable of handling DR, but currently it has become proficient and has achieved significant state-of-the-art results. Furthermore, the effectiveness of existing DL systems can be enhanced by merging large DL systems in a cascaded way. Thus, the computational expenditure and the training required for every DL system to perform its work separately are decreased. A feasible choice to conquer the challenge with DL-based diagnosis would be to include a portable and affordable fundus imaging device. A portable imaging device would make it simple to offer a point-of-care diagnosis. Although online diagnosing frameworks connect to a central server to categorize retinal photographs, offline diagnosis offers an immediate service conducted on a mobile device, from capturing images to presenting the diagnosis outcome. Offline systems are preferable over internet-based DR diagnosis systems for regions with little or no internet connectivity. Although portable retinal cameras can be less expensive than traditional retinal cameras, one drawback is the lower quality of the fundus photographs they capture. If the image quality of portable fundus cameras could attain that of traditional cameras, it could pave the way toward offering cheaper machine-driven DR diagnosis for diabetic persons inhabiting rural regions. In this review paper, we made every effort to include all of the ongoing and existing approaches designed for DR detection using ML and DL methods. From this survey, we discovered that there is an immense variety of methods for DR. Each of the methods has its own benefits and shortcomings.
It is certainly tough to determine the overall most effective method, as evaluation measures and the computational facilities employed differ from method to method and depend heavily on the data. For this reason, adopting a particular method is extremely complex. At the same time, when picking an intelligent DR screening model, high sensitivity and specificity metrics are essential elements. Taking into account both pros and cons, in addition to the high throughput of DL-based methods, automated DR classification employing DL could be viable in an actual screening framework. In future work, researchers should pay attention to upgrading camera systems for premature diagnosis of retinopathy. The potency of the current techniques is in ambiguity. To further achieve greater accuracy, hybridization of algorithms may be effective. Moreover, research may emphasize advancing innovative schemes toward conquering the shortcomings of current state-of-the-art technology. This review paper discusses and makes a comparative analysis of the databases used, performance metrics employed, and ML and DL techniques adopted recently in DR detection, along with challenges and future scopes, which can be taken up by researchers in the near future. The current work reviewed 122 research articles in total, of which 63 are employed for DR feature-based detection and segmentation techniques. From the studies involved in the current work, 40% employed a single public dataset, and 60% employed two or more public datasets to overcome the issue of data size and to examine the DL methods on several datasets, as shown in Fig. 5. 67% of the current studies identified the DR lesions, while 33% identified the segmented vessel structures, as shown in Fig. 6.

References

T. Y. Wong and C. Sabanayagam,
“Strategies to tackle the global burden of diabetic retinopathy: from epidemiology to artificial intelligence,”
Ophthalmologica, 243
(1), 9
–20
(2020). https://doi.org/10.1159/000502387 OPHTAD 0030-3755 Google Scholar
R. L. Thomas et al.,
“IDF Diabetes Atlas: a review of studies utilising retinal photography on the global prevalence of diabetes related retinopathy between 2015 and 2018,”
Diabetes Res. Clin. Pract., 157 107840
(2019). https://doi.org/10.1016/j.diabres.2019.107840 DRCPE9 0168-8227 Google Scholar
P. Saeedi et al.,
“Global and regional diabetes prevalence estimates for 2019 and projections for 2030 and 2045: results from the International Diabetes Federation Diabetes Atlas,”
Diabetes Res. Clin. Pract., 157 107843
(2019). https://doi.org/10.1016/j.diabres.2019.107843 DRCPE9 0168-8227 Google Scholar
J. P. Nayak et al.,
“Automated identification of diabetic retinopathy stages using digital fundus images,”
J. Med. Syst., 32
(2), 107
–115
(2008). https://doi.org/10.1007/s10916-007-9113-9 JMSYDA 0148-5598 Google Scholar
O. Faust et al.,
“Algorithms for the automated detection of diabetic retinopathy using digital fundus images: a review,”
J. Med. Syst., 36
(1), 145
–157
(2012). https://doi.org/10.1007/s10916-010-9454-7 Google Scholar
R. F. Mansour,
“Evolutionary computing enriched computer-aided diagnosis system for diabetic retinopathy: a survey,”
IEEE Rev. Biomed. Eng, 10 334
–349
(2017). https://doi.org/10.1109/RBME.2017.2705064 Google Scholar
R. Shalini and S. Sasikala,
“A survey on detection of diabetic retinopathy,”
in 2nd Int. Conf. I-SMAC (IoT in Social, Mob., Anal. and Cloud)(I-SMAC) I-SMAC (IoT in Soc. Mob., Anal. and Cloud)(I-SMAC),
626
–630
(2018). https://doi.org/10.1109/I-SMAC.2018.8653694 Google Scholar
A. Ahmad et al.,
“Image processing and classification in diabetic retinopathy: a review,”
in 5th Eur. Workshop Vis. Inf. Process. (EUVIP),
1
–6
(2014). https://doi.org/10.1109/EUVIP.2014.7018362 Google Scholar
S. Roychowdhury, D. D. Koozekanani and K. K. Parhi,
“DREAM: diabetic retinopathy analysis using machine learning,”
IEEE J. Biomed. Health Inf., 18
(5), 1717
–1728
(2013). https://doi.org/10.1109/JBHI.2013.2294635 Google Scholar
M. Manjramkar,
“Survey of diabetic retinopathy screening methods,”
in 2nd Int. Conf. Trends in Electron. and Inf. (ICOEI),
1
–6
(2018). https://doi.org/10.1109/ICOEI.2018.8553843 Google Scholar
H. Thanati et al.,
“On deep learning based algorithms for detection of diabetic retinopathy,”
in Int. Conf. Electron. Inf. and Commun. (ICEIC),
1
–7
(2019). https://doi.org/10.23919/ELINFOCOM.2019.8706431 Google Scholar
N. Lunscher et al.,
“Automated screening for diabetic retinopathy using compact deep networks,”
J. Comput. Vision Imaging Syst., 3
(1),
(2017). Google Scholar
T. Kauppi et al.,
“DIARETDB0: evaluation database and methodology for diabetic retinopathy algorithms,”
Mach. Vision Pattern Recognit. Res. Group, Lappeenranta Univ. Technol. Finland, 73 1
–17
(2006). Google Scholar
T. Kauppi et al.,
“The diaretdb1 diabetic retinopathy database and evaluation protocol,”
in BMVC,
1
–10
(2007). Google Scholar
E. Decencière et al.,
“Feedback on a publicly distributed image database: the Messidor database,”
Image Anal. Stereol., 33
(3), 231
–234
(2014). https://doi.org/10.5566/ias.1155 Google Scholar
E. Decencière et al.,
“TeleOphta: machine learning and image processing methods for teleophthalmology,”
IRBM, 34
(2), 196
–203
(2013). https://doi.org/10.1016/j.irbm.2013.01.010 Google Scholar
J. J. Staal et al.,
“DRIVE: digital retinal images for vessel extraction,”
IEEE Trans. Med. Imaging, 23
(4), 501
–509
(2004). Google Scholar
D. Kaba et al.,
“Retinal blood vessels extraction using probabilistic modelling,”
Health Inf. Sci. Syst., 2
(1), 2
(2014). https://doi.org/10.1186/2047-2501-2-2 Google Scholar
B. McCormick and M. Goldbaum,
“STARE: structured analysis of the retina: image processing of TV fundus image,”
in USA–Jpn. Workshop Image Process.,
(1975). Google Scholar
A. Budai et al.,
“Robust vessel segmentation in fundus images,”
Int. J. Biomed. Imaging, 2013 154860
(2013). https://doi.org/10.1155/2013/154860 Google Scholar
X. Lu et al.,
“A coarse-to-fine fully convolutional neural network for fundus vessel segmentation,”
Symmetry, 10
(11), 607
(2018). https://doi.org/10.3390/sym10110607 SYMMAM 2073-8994 Google Scholar
M. Niemeijer et al.,
“Retinopathy online challenge: automatic detection of microaneurysms in digital color fundus photographs,”
IEEE Trans. Med. Imaging, 29
(1), 185
–195
(2009). https://doi.org/10.1109/TMI.2009.2033909 ITMID4 0278-0062 Google Scholar
T. Li et al., "Diagnostic assessment of deep learning algorithms for diabetic retinopathy screening," Inf. Sci., 501, 511–522 (2019). https://doi.org/10.1016/j.ins.2019.06.011
R. Pires et al., "Advancing bag-of-visual-words representations for lesion classification in retinal images," PLoS ONE, 9 (2016). https://doi.org/10.6084/m9.figshare.953671.v3
P. Porwal et al., "Indian Diabetic Retinopathy Image Dataset (IDRID)," IEEE Dataport (2018). https://doi.org/10.21227/H25W98
X.-W. Chen and X. Lin, "Big data deep learning: challenges and perspectives," IEEE Access, 2, 514–525 (2014). https://doi.org/10.1109/ACCESS.2014.2325029
M. Ayhan et al., "Expert-validated estimation of diagnostic uncertainty for deep neural networks in diabetic retinopathy detection," Med. Image Anal., 64, 101724 (2020). https://doi.org/10.1016/j.media.2020.101724
Y. Guo et al., "Deep learning for visual understanding: a review," Neurocomputing, 187, 27–48 (2016). https://doi.org/10.1016/j.neucom.2015.09.116
Z. Wu et al., "Coarse-to-fine classification for diabetic retinopathy grading using convolutional neural network," Artif. Intell. Med., 108, 101936 (2020). https://doi.org/10.1016/j.artmed.2020.101936
B. Zou et al., "Deep learning and its application in diabetic retinopathy screening," Chin. J. Electron., 29(6), 992–1000 (2020). https://doi.org/10.1049/cje.2020.09.001
N. Shivsharan and S. Ganorkar, "Diabetic retinopathy detection using optimization assisted deep learning model: outlook on improved Grey Wolf algorithm," Int. J. Image Graphics, 21, 2150035 (2021). https://doi.org/10.1142/S0219467821500352
A. Singh and P. Kaushik, "Five-stage classification of diabetic retinopathy with denoising autoencoder & deep convolution neural network."
S. K. Ghosh, B. Biswas and A. Ghosh, "SDCA: a novel stack deep convolutional autoencoder – an application on retinal image denoising," IET Image Process., 13(14), 2778–2789 (2019). https://doi.org/10.1049/iet-ipr.2018.6582
K. Kannadasan, D. R. Edla and V. Kuppili, "Type 2 diabetes data classification using stacked autoencoders in deep neural networks," Clin. Epidemiol. Global Health, 7(4), 530–535 (2019). https://doi.org/10.1016/j.cegh.2018.12.004
B. Biswas, S. K. Ghosh and A. Ghosh, "DVAE: deep variational auto-encoders for denoising retinal fundus image," in Hybrid Machine Intelligence for Medical Image Analysis, 257–273, Springer, Singapore (2020).
R. Adarsh et al., "Dense residual convolutional auto encoder for retinal blood vessels segmentation," in 6th Int. Conf. Adv. Comput. and Commun. Syst. (ICACCS), 280–284 (2020). https://doi.org/10.1109/ICACCS48705.2020.9074172
A. Ortiz et al., "Retinal blood vessel segmentation by multi-channel deep convolutional autoencoder," in 13th Int. Conf. Soft Comput. Models in Ind. and Environ. Appl., 37–46 (2018). https://doi.org/10.1007/978-3-319-94120-2_4
S. S. Basha and K. V. Ramanaiah, "Algorithmic analysis of distance-based monarch butterfly oriented deep belief network for diabetic retinopathy," in 5th Int. Conf. Signal Process., Comput. and Control (ISPCC), 226–234 (2019). https://doi.org/10.1109/ISPCC48220.2019.8988486
A. A. Tehrani et al., "Multi-input 2-dimensional deep belief network: diabetic retinopathy grading as case study," Multimedia Tools Appl., 80, 6171–6186 (2020). https://doi.org/10.1007/s11042-020-10025-1
M. F. Syahputra et al., "Diabetic retinopathy identification using deep believe network," J. Phys.: Conf. Ser., 1235(1), 012103 (2019). https://doi.org/10.1088/1742-6596/1235/1/012103
N. Theera-Umpon et al., "Hard exudate detection in retinal fundus images using supervised learning," Neural Comput. Appl., 32, 13079–13096 (2019). https://doi.org/10.1007/s00521-019-04402-7
S. Chakraborty et al., "An improved method using supervised learning technique for diabetic retinopathy detection," Int. J. Inf. Technol., 12, 473–477 (2019). https://doi.org/10.1007/s41870-019-00318-6
B. R. Kiran, D. M. Thomas and R. Parakkal, "An overview of deep learning based methods for unsupervised and semi-supervised anomaly detection in videos," J. Imaging, 4(2), 36 (2018). https://doi.org/10.3390/jimaging4020036
T. Chandrakumar and R. Kathirvel, "Classifying diabetic retinopathy using deep learning architecture," Int. J. Eng. Res. Technol., 5(6), 19–24 (2016).
R. Patel and S. Patel, "A comprehensive study of applying convolutional neural network for computer vision," Int. J. Adv. Sci. Technol., 29, 2161–2174 (2020).
S. Gayathri et al., "A lightweight CNN for diabetic retinopathy classification from fundus images," Biomed. Signal Process. Control, 62, 102115 (2020). https://doi.org/10.1016/j.bspc.2020.102115
M. Shaban et al., "A convolutional neural network for the screening and staging of diabetic retinopathy," PLoS ONE, 15(6), e0233514 (2020). https://doi.org/10.1371/journal.pone.0233514
L. A. N. Muhammed and S. H. Toman, "Diabetic retinopathy diagnosis based on convolutional neural network" (2020).
A. Samanta et al., "Automated detection of diabetic retinopathy using convolutional neural networks on a small dataset," Pattern Recognit. Lett., 135, 293–298 (2020). https://doi.org/10.1016/j.patrec.2020.04.026
M. Chetoui and M. A. Akhloufi, "Explainable diabetic retinopathy using EfficientNET," in 42nd Annu. Int. Conf. IEEE Eng. Med. & Biol. Soc. (EMBC), 1966–1969 (2020). https://doi.org/10.1109/EMBC44109.2020.9175664
D. Zhang, W. Bu and X. Wu, "Diabetic retinopathy classification using deeply supervised ResNet," in IEEE SmartWorld, Ubiquitous Intell. & Comput., Adv. & Trust. Comput., Scalable Comput. & Commun., Cloud & Big Data Comput., Internet of People and Smart City Innov. (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), 1–6 (2017). https://doi.org/10.1109/UIC-ATC.2017.8397469
S. Lahmiri, "Hybrid deep learning convolutional neural networks and optimal nonlinear support vector machine to detect presence of hemorrhage in retina," Biomed. Signal Process. Control, 60, 101978 (2020). https://doi.org/10.1016/j.bspc.2020.101978
S. Seth and B. Agarwal, "A hybrid deep learning model for detecting diabetic retinopathy," J. Stat. Manage. Syst., 21(4), 569–574 (2018). https://doi.org/10.1080/09720510.2018.1466965
D. Maji et al., "Deep neural network and random forest hybrid architecture for learning to detect retinal vessels in fundus images," in 37th Annu. Int. Conf. IEEE Eng. Med. and Biol. Soc. (EMBC), 3029–3032 (2015). https://doi.org/10.1109/EMBC.2015.7319030
H. Fu et al., "DeepVessel: retinal vessel segmentation via deep learning and conditional random field," Lect. Notes Comput. Sci., 9901, 132–139 (2016). https://doi.org/10.1007/978-3-319-46723-8_16
P. Liskowski and K. Krawiec, "Segmenting retinal blood vessels with deep neural networks," IEEE Trans. Med. Imaging, 35(11), 2369–2380 (2016). https://doi.org/10.1109/TMI.2016.2546227
B. Zhang, S. Huang and S. Hu, "Multi-scale neural networks for retinal blood vessels segmentation" (2018).
K. Hu et al., "Retinal vessel segmentation of color fundus images using multiscale convolutional neural network with an improved cross-entropy loss function," Neurocomputing, 309, 179–191 (2018). https://doi.org/10.1016/j.neucom.2018.05.011
P. M. Samuel and T. Veeramalai, "Multilevel and multiscale deep neural network for retinal blood vessel segmentation," Symmetry, 11(7), 946 (2019). https://doi.org/10.3390/sym11070946
A. Oliveira, S. Pereira and C. A. Silva, "Retinal vessel segmentation based on fully convolutional neural networks," Expert Syst. Appl., 112, 229–242 (2018). https://doi.org/10.1016/j.eswa.2018.06.034
Y. Lin, H. Zhang and G. Hu, "Automatic retinal vessel segmentation via deeply supervised and smoothly regularized network," IEEE Access, 7, 57717–57724 (2018). https://doi.org/10.1109/ACCESS.2018.2844861
C. Wang et al., "Dense U-net based on patch-based learning for retinal vessel segmentation," Entropy, 21(2), 168 (2019). https://doi.org/10.3390/e21020168
C. Guo et al., "SD-Unet: a structured dropout U-Net for retinal vessel segmentation," in IEEE 19th Int. Conf. Bioinf. and Bioeng. (BIBE), 439–444 (2019). https://doi.org/10.1109/BIBE.2019.00085
C. Guo et al., "SA-UNet: spatial attention U-Net for retinal vessel segmentation," in 25th Int. Conf. Pattern Recognit. (ICPR), 1236–1242 (2021).
I. Atli and O. S. Gedik, "Sine-Net: a fully convolutional deep learning architecture for retinal blood vessel segmentation," Eng. Sci. Technol. Int. J., 24, 271–283 (2020). https://doi.org/10.1016/j.jestch.2020.07.008
N. Tamim et al., "Retinal blood vessel segmentation using hybrid features and multi-layer perceptron neural networks," Symmetry, 12(6), 894 (2020). https://doi.org/10.3390/sym12060894
P. M. Samuel and T. Veeramalai, "VSSC Net: vessel specific skip chain convolutional network for blood vessel segmentation," Comput. Methods Prog. Biomed., 198, 105769 (2020). https://doi.org/10.1016/j.cmpb.2020.105769
Y. Wu et al., "NFN+: a novel network followed network for retinal vessel segmentation," Neural Networks, 126, 153–162 (2020). https://doi.org/10.1016/j.neunet.2020.02.018
Ü. Budak et al., "DCCMED-Net: densely connected and concatenated multi encoder-decoder CNNs for retinal vessel extraction from fundus images," Med. Hypoth., 134, 109426 (2020). https://doi.org/10.1016/j.mehy.2019.109426
C. Tian et al., "Multi-path convolutional neural network in fundus segmentation of blood vessels," Biocybern. Biomed. Eng., 40(2), 583–595 (2020). https://doi.org/10.1016/j.bbe.2020.01.011
Q. Jin et al., "DUNet: a deformable network for retinal vessel segmentation," Knowl.-Based Syst., 178, 149–162 (2019). https://doi.org/10.1016/j.knosys.2019.04.025
S. Guo et al., "BTS-DSN: deeply supervised neural network with short connections for retinal vessel segmentation," Int. J. Med. Inf., 126, 105–113 (2019). https://doi.org/10.1016/j.ijmedinf.2019.03.015
C.-H. Hua, T. Huynh-The and S. Lee, "Retinal vessel segmentation using round-wise features aggregation on bracket-shaped convolutional neural networks," in 41st Annu. Int. Conf. IEEE Eng. in Med. and Biol. Soc. (EMBC), 36–39 (2019). https://doi.org/10.1109/EMBC.2019.8856552
J. Son, S. J. Park and K.-H. Jung, "Towards accurate segmentation of retinal vessels and the optic disc in fundoscopic images with generative adversarial networks," J. Digital Imaging, 32(3), 499–512 (2019). https://doi.org/10.1007/s10278-018-0126-3
M. M. Habib et al., "Microaneurysm detection in retinal images using an ensemble classifier," in Sixth Int. Conf. Image Process. Theory, Tools and Appl. (IPTA), 1–6 (2016). https://doi.org/10.1109/IPTA.2016.7820998
S. Sreng, N. Maneerat and K. Hamamoto, "Automated microaneurysms detection in fundus images using image segmentation," in Int. Conf. Digital Arts, Media and Technol. (ICDAMT), 19–23 (2017). https://doi.org/10.1109/ICDAMT.2017.7904926
S. Kumar and B. Kumar, "Diabetic retinopathy detection by extracting area and number of microaneurysm from colour fundus image," in 5th Int. Conf. Signal Process. and Integr. Networks (SPIN), 359–364 (2018). https://doi.org/10.1109/SPIN.2018.8474264
J. Xu et al., "Automatic analysis of microaneurysms turnover to diagnose the progression of diabetic retinopathy," IEEE Access, 6, 9632–9642 (2018). https://doi.org/10.1109/ACCESS.2018.2808160
L. Dai et al., "Clinical report guided retinal microaneurysm detection with multi-sieving deep learning," IEEE Trans. Med. Imaging, 37(5), 1149–1161 (2018). https://doi.org/10.1109/TMI.2018.2794988
W. Cao et al., "Microaneurysm detection using principal component analysis and machine learning methods," IEEE Trans. Nanobiosci., 17(3), 191–198 (2018). https://doi.org/10.1109/TNB.2018.2840084
Y. Hatanaka et al., "Automatic microaneurysms detection on retinal images using deep convolution neural network," in Int. Workshop Adv. Image Technol. (IWAIT), 1–2 (2018). https://doi.org/10.1109/IWAIT.2018.8369794
S. Gupta et al., "Classification of lesions in retinal fundus images for diabetic retinopathy using transfer learning," in Int. Conf. Inf. Technol. (ICIT), 342–347 (2019). https://doi.org/10.1109/ICIT48102.2019.00067
N. Eftekhari et al., "Microaneurysm detection in fundus images using a two-step convolutional neural network," Biomed. Eng. Online, 18(1), 67 (2019). https://doi.org/10.1186/s12938-019-0675-9
L. Qiao, Y. Zhu and H. Zhou, "Diabetic retinopathy detection using prognosis of microaneurysm and early diagnosis system for non-proliferative diabetic retinopathy based on deep learning algorithms," IEEE Access, 8, 104292–104302 (2020). https://doi.org/10.1109/ACCESS.2020.2993937
S. Long et al., "Microaneurysms detection in color fundus images using machine learning based on directional local contrast," Biomed. Eng. Online, 19, 1–23 (2020). https://doi.org/10.1186/s12938-020-00766-3
D. S. David and A. A. Jose, "Retinal microaneurysms detection for diabetic retinopathy screening in fundus imagery," Artech. J. Eff. Res. Eng. Technol., 1, 69–75 (2020).
N. Mazlan et al., "Automated microaneurysms detection and classification using multilevel thresholding and multilayer perceptron," J. Med. Biol. Eng., 40, 292–306 (2020). https://doi.org/10.1007/s40846-020-00509-8
N. Kaur et al., "A supervised approach for automated detection of hemorrhages in retinal fundus images," in 5th Int. Conf. Wireless Networks and Embed. Syst. (WECON), 1–5 (2016). https://doi.org/10.1109/WECON.2016.7993461
M. J. J. P. van Grinsven et al., "Fast convolutional neural network training using selective data sampling: application to hemorrhage detection in color fundus images," IEEE Trans. Med. Imaging, 35(5), 1273–1284 (2016). https://doi.org/10.1109/TMI.2016.2526689
D. Xiao et al., "Retinal hemorrhage detection by rule-based and machine learning approach," in 39th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC), 660–663 (2017). https://doi.org/10.1109/EMBC.2017.8036911
R. Gargeya and T. Leng, "Automated identification of diabetic retinopathy using deep learning," Ophthalmology, 124(7), 962–969 (2017). https://doi.org/10.1016/j.ophtha.2017.02.008
L. Godlin Atlas and K. Parasuraman, "Detection of retinal hemorrhage from fundus images using ANFIS classifier and MRG segmentation," Biomed. Res., 29(7), 1489–1497 (2018). https://doi.org/10.4066/BIOMEDICALRESEARCH.29-18-281
R. Murugan, "An automatic detection of hemorrhages in retinal fundus images by motion pattern generation," Biomed. Pharmacol. J., 12(3), 1433–1440 (2019). https://doi.org/10.13005/bpj/1772
N. Rani et al., "Hemorrhage segmentation and detection in retinal images using object detection techniques and machine learning perspectives," in Global Conf. Adv. Technol. (GCAT), 1–5 (2019). https://doi.org/10.1109/GCAT47503.2019.8978422
K. A. Sreeja and S. S. Kumar, "Automated detection of retinal hemorrhage based on supervised classifiers," Ind. J. Electr. Eng. Inf., 8(1), 140–148 (2020). https://doi.org/10.1007/978-981-15-5224-3_6
G. G. Rajput and P. N. Patil, "Detection and classification of exudates using k-means clustering in color retinal images," in Fifth Int. Conf. Signal Image Process., 126–130 (2014). https://doi.org/10.1109/ICSIP.2014.25
N. Thomas, T. Mahesh and K. Shunmuganathan, "Detection and classification of exudates in diabetic retinopathy," Int. J. Adv. Res. Comput. Sci. Manage. Stud., 2(9), 296–305 (2014).
M. Kavitha and S. Palani, "Hierarchical classifier for soft and hard exudates detection of retinal fundus images," J. Intell. Fuzzy Syst., 27(5), 2511–2528 (2014). https://doi.org/10.3233/IFS-141224
B. Borsos et al., "Automatic detection of hard and soft exudates from retinal fundus images," Acta Univ. Sapient. Inf., 11(1), 65–79 (2019). https://doi.org/10.2478/ausi-2019-0005
E. Erwin, "Techniques for exudate detection for diabetic retinopathy," in Int. Conf. Inf. Multimedia, Cyber and Inf. Syst. (ICIMCIS) (2020). https://doi.org/10.1109/ICIMCIS48181.2019.8985226
P. Prentašić and S. Lončarić, "Detection of exudates in fundus photographs using deep neural networks and anatomical landmark detection fusion," Comput. Methods Prog. Biomed., 137, 281–292 (2016). https://doi.org/10.1016/j.cmpb.2016.09.018
G. Li, S. Zheng and X. Li, "Exudate detection in fundus images via convolutional neural network," in International Forum on Digital TV and Wireless Multimedia Communications, 193–202, Springer, Singapore (2017).
S. Abbasi-Sureshjani et al., "Boosted exudate segmentation in retinal images using residual nets," in Fetal, Infant and Ophthalmic Medical Image Analysis, 210–218, Springer, Cham (2017).
S. Yu, D. Xiao and Y. Kanagasingam, "Exudate detection for diabetic retinopathy with convolutional neural networks," in 39th Annu. Int. Conf. IEEE Eng. Med. and Biol. Soc. (EMBC), 1744–1747 (2017). https://doi.org/10.1109/EMBC.2017.8037180
J. Kaur and D. Mittal, "A generalized method for the segmentation of exudates from pathological retinal fundus images," Biocybern. Biomed. Eng., 38(1), 27–53 (2018). https://doi.org/10.1016/j.bbe.2017.10.003
Center, V. T. U. P. G., "Exudates detection in fundus image using image processing and linear regression algorithm."
D. Thulkar, R. Daruwala and N. Sardar, "An integrated system for detection exudates and severity quantification for diabetic macular edema," J. Med. Biol. Eng., 40, 798–820 (2020). https://doi.org/10.1007/s40846-020-00561-4
F. Alzami et al., "Exudates detection for multiclass diabetic retinopathy grade detection using ensemble," Technology Reports of Kansai University, 62(3) (2020).
M. Mateen et al., "Exudate detection for diabetic retinopathy using pretrained convolutional neural networks," Complexity, 2020, 5801870 (2020). https://doi.org/10.1155/2020/5801870
D. Lokuarachchi et al., "Automated detection of exudates in retinal images," in IEEE 15th Int. Colloq. Signal Process. Its Appl. (CSPA), 43–47 (2019). https://doi.org/10.1109/CSPA.2019.8696052
P. Khojasteh et al., "Exudate detection in fundus images using deeply-learnable features," Comput. Biol. Med., 104, 62–69 (2019). https://doi.org/10.1016/j.compbiomed.2018.10.031
K. Wisaeng and W. Sa-Ngiamvibool, "Exudates detection using morphology mean shift algorithm in retinal images," IEEE Access, 7, 11946–11958 (2019). https://doi.org/10.1109/ACCESS.2018.2890426
G. J. Anitha and K. G. Maria, "Detecting hard exudates in retinal fundus images using convolutional neural networks," in Int. Conf. Curr. Trends Towards Converg. Technol. (ICCTCT), 1–5 (2018). https://doi.org/10.1109/ICCTCT.2018.8551079
K. K. Palavalasa and B. Sambaturu, "Automatic diabetic retinopathy detection using digital image processing," in Int. Conf. Commun. and Signal Process. (ICCSP), 0072–0076 (2018). https://doi.org/10.1109/ICCSP.2018.8524234
A. Benzamin and C. Chakraborty, "Detection of hard exudates in retinal fundus images using deep learning," in Joint 7th Int. Conf. Inf., Electron. & Vision (ICIEV) and 2nd Int. Conf. Imaging, Vision & Pattern Recognit. (icIVPR), 465–469 (2018). https://doi.org/10.1109/ICIEV.2018.8641016
A. M. Syed et al., "Robust detection of exudates using fundus images," in IEEE 21st Int. Multi-Top. Conf. (INMIC), 1–5 (2018). https://doi.org/10.1109/INMIC.2018.8595642
G. W. Armstrong and A. C. Lorch, "A(eye): a review of current applications of artificial intelligence and machine learning in ophthalmology," Int. Ophthalmol. Clin., 60(1), 57–71 (2020). https://doi.org/10.1097/IIO.0000000000000298
M. Bhaskaranand et al., "The value of automated diabetic retinopathy screening with the EyeArt system: a study of more than 100,000 consecutive encounters from people with diabetes," Diabetes Technol. Therapeut., 21(11), 635–643 (2019). https://doi.org/10.1089/dia.2019.0164
A. Grzybowski et al., "Artificial intelligence for diabetic retinopathy screening: a review," Eye, 34(3), 451–460 (2020). https://doi.org/10.1038/s41433-019-0566-0
Y. Liu et al., "Lessons learnt from harnessing deep learning for real-world clinical applications in ophthalmology: detecting diabetic retinopathy from retinal fundus photographs," in Artificial Intelligence in Medicine, 247–264, Academic Press (2021).
A. Y. Lee et al., "Multicenter, head-to-head, real-world validation study of seven automated artificial intelligence diabetic retinopathy screening systems," Diabetes Care, 44(5), 1168–1175 (2021). https://doi.org/10.2337/dc20-1877
A. Grzybowski and P. Brona, "Analysis and comparison of two artificial intelligence diabetic retinopathy screening algorithms in a pilot study: IDx-DR and RetinaLyze," J. Clin. Med., 10(11), 2352 (2021). https://doi.org/10.3390/jcm10112352
M. Mateen et al., "Automatic detection of diabetic retinopathy: a review on datasets, methods and evaluation metrics," IEEE Access, 8, 48784–48811 (2020). https://doi.org/10.1109/ACCESS.2020.2980055
Biography

Shreya Shekar completed her Bachelor of Engineering degree from Prof. Ram Meghe Institute of Technology and Research, Badnera, Maharashtra, India, in 2018. She is currently pursuing her Master of Technology degree in signal processing at the College of Engineering Pune, Maharashtra, India. Her research interests include machine learning and deep learning.

Nitin Satpute received his Master of Engineering degree in embedded systems from BITS, Pilani, Rajasthan, India, in June 2013. He has worked at VNIT, Nagpur, India; University of Siena, Siena, Italy; and IISc, Bangalore, India. He is a researcher at the University of Cordoba, Cordoba, Spain. He has attended (a) MIT GSW at Novotel Hyderabad Convention Center (March 2016); (b) a deep learning training program presented by the Deep Learning Institute, NVIDIA, and hosted by the GPU Center of Excellence, IIT Bombay, on December 5, 2016; (c) the 2015 LOFAR Surveys Meeting, held at Leiden, Netherlands (September 2015); and (d) the AXIOM Face to Face Meet, held at BSC, Spain (June 2015).

Aditya Gupta received his MTech degree from the College of Engineering Pune in 2014 and completed his MTech dissertation on "Face recognition in real-time video" under the guidance of Prof. Madhuri A. Joshi. He completed his PhD in 2019 under the guidance of Prof. Kishore D. Kulat at Visvesvaraya National Institute of Technology, Nagpur, Maharashtra, India. His PhD thesis title was "Smart water management techniques for leakage control in water distribution system." He is currently working as an adjunct faculty member at the College of Engineering Pune, Pune, India.