Research Article - (2025) Volume 20, Issue 5
Early Detection Of Diabetic Retinopathy Using Artificial Intelligence
Moamen Abdelfadil Ismail1*, Warif Nasser Alghofaily2, Najla Abdelhadi Abdalla3, Ghazi Awad A Al Qahtani4, Talal Abdulmalek Almalki5, Amani Abdulmanam Alali6, Abdullah Eid A Alsobaie7, Asayil Ahmed Alrasheed8, Omama Abubaker Alamin9, Batool Mohammed Alhashidi10 and Taha Isam Khayat11
*Correspondence: Moamen Abdelfadil Ismail, Consultant, King Abdulaziz Specialist Hospital, Sakaka, Aljouf, Saudi Arabia, Email:
1. Consultant, King Abdulaziz Specialist Hospital, Sakaka, Aljouf, Saudi Arabia
2. MBBS, Tabuk University, Saudi Arabia
3. Senior Registrar, Ophthalmology, Saudi Arabia
4. Medical Intern, Saudi Arabia
5. Medical Student, King Abdulaziz University, Rabigh, Saudi Arabia
6. 6th-year Medical Student, Tabuk University, Saudi Arabia
7. Medical Intern, Saudi Arabia
8. College of Medicine, King Faisal University, Alhasa, Saudi Arabia
9. Ophthalmology Senior Registrar, Saudi Arabia
10. 4th-year Medical Student, Saudi Arabia
11. Medical Intern, Saudi Arabia
Received: 12-Aug-2025; Published: 27-Oct-2025
Abstract
Background: Diabetic retinopathy (DR) remains a leading cause of preventable vision impairment globally. Traditional screening programs are resource-intensive and underutilized, especially in low- and middle-income regions. The integration of artificial intelligence (AI) into ophthalmology presents a promising solution to enhance early detection and diagnosis of DR.
Objectives: To systematically review the existing literature evaluating the diagnostic performance, feasibility, and implementation of AI-based models for early detection of diabetic retinopathy.
Methods: A systematic search was conducted across PubMed, Scopus, Web of Science, Embase, and IEEE Xplore for peer-reviewed studies published between 2010 and May 2024. Eligibility criteria included adult diabetic populations screened using AI tools, with outcomes reported in terms of sensitivity, specificity, or AUC.
Results: Fifteen studies met the inclusion criteria. CNN-based models demonstrated high diagnostic accuracy, with sensitivity ranging from 87.2% to 94.1% and specificity between 90.7% and 98.5%. Explainable AI (XAI) improved clinician trust. Limitations included variability in datasets and a lack of longitudinal assessment of clinical impact.
Conclusion: AI, especially deep learning models, shows strong potential to enhance DR screening. However, ethical implementation, population-specific validation, and longitudinal effectiveness remain areas requiring further research.
Keywords
Diabetic retinopathy; Artificial intelligence; Early detection; Deep learning; CNN; Medical imaging; Automated screening; Explainable AI; Diagnostic performance; Ophthalmology
Introduction
Diabetic retinopathy (DR) is a microvascular complication of diabetes mellitus and remains a leading cause of preventable blindness globally, particularly among the working-age population. The World Health Organization (WHO) has projected that by 2030 the number of people living with diabetes will exceed 578 million, escalating the burden of DR and vision impairment worldwide. Early detection of DR is critical, as it allows timely intervention, thereby preserving vision and improving patient outcomes (Vidal-Alaball et al., 2019).
Traditional DR screening relies on manual interpretation of retinal images by ophthalmologists, which can be resource-intensive, especially in low- and middle-income countries with limited access to specialists. These limitations have prompted a growing interest in the application of artificial intelligence (AI), particularly deep learning (DL) and machine learning (ML) algorithms, to automate and scale screening efforts (Rajalakshmi et al., 2018). These AI systems offer the potential to reduce screening bottlenecks and standardize diagnosis, particularly in underserved regions.
The integration of AI in DR screening has already demonstrated promising results, with several models achieving diagnostic performance comparable to expert ophthalmologists. Explainable AI (XAI) and convolutional neural networks (CNNs) have further improved the transparency and interpretability of these systems, which are critical for their clinical adoption (Mishra et al., 2022).
In addition to accuracy, scalability and deployment readiness are pivotal in assessing the real-world utility of AI systems. A growing body of evidence shows that integrating AI-based screening in primary healthcare settings leads to earlier referrals and improved patient follow-up (Deepa & Sivasamy, 2023). This aligns with the broader shift toward preventive care and the use of technology to reduce the economic burden of diabetic complications.
Despite these advances, challenges remain in the generalizability and validation of AI tools across diverse populations. Retinal image quality, variations in fundus pigmentation, and differences in disease presentation necessitate careful calibration of AI algorithms (Grzybowski et al., 2020). Additionally, ethical considerations such as algorithmic bias, data privacy, and accountability in clinical decisions remain active areas of concern and research.
Furthermore, the adoption of AI-based screening must be accompanied by appropriate health system infrastructure, including teleophthalmology platforms, electronic health records, and trained personnel for follow-up care. The WHO recommends an integrated approach, combining digital innovations with robust public health strategies to maximize impact (Gunasekeran et al., 2020).
Given the growing corpus of literature and technological maturity, a systematic review is warranted to critically examine the evidence supporting AI’s role in early DR detection. This review synthesizes findings from diverse study designs, AI architectures, and healthcare contexts to assess clinical utility, accuracy, and implementation feasibility (Alavee et al., 2024).
In summary, the convergence of AI and ophthalmology has created a pivotal moment in the management of diabetic complications. As AI continues to evolve and demonstrate diagnostic reliability, its strategic deployment in screening programs could redefine the future of diabetic retinopathy care (Grauslund, 2022).
Methodology
Study Design
This systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines to ensure rigorous, transparent, and reproducible reporting. The primary aim was to synthesize peer-reviewed empirical evidence evaluating the application of artificial intelligence (AI) in the early detection of diabetic retinopathy (DR). The scope encompassed studies involving AI-driven diagnostic tools or models applied to retinal imaging datasets, with an emphasis on early-stage or referable DR detection in diabetic individuals.
Eligibility Criteria
Studies were included based on the following inclusion parameters:
- Population: Human subjects (≥18 years) with a diagnosis of diabetes mellitus, who underwent retinal screening for diabetic retinopathy.
- Interventions/Exposures: Use of artificial intelligence systems, including but not limited to machine learning (ML), deep learning (DL), convolutional neural networks (CNNs), and other automated diagnostic frameworks, specifically aimed at detecting early-stage diabetic retinopathy.
- Comparators: Ground truth annotations by certified ophthalmologists, conventional diagnostic methods, or other AI systems where applicable.
- Outcomes: Diagnostic performance metrics including sensitivity, specificity, accuracy, AUC (Area Under the Receiver Operating Characteristic Curve), or predictive value in identifying early or referable DR.
- Study Designs: Randomized controlled trials (RCTs), prospective and retrospective cohort studies, cross-sectional studies, and diagnostic accuracy studies.
- Language: Only articles published in English were included.
- Publication Period: Studies published between January 2010 and May 2024 to reflect recent developments in AI applications in ophthalmology (Figure 1).
Search Strategy
A structured and comprehensive literature search was carried out across the following electronic databases: PubMed, Scopus, Embase, IEEE Xplore, Web of Science, and Google Scholar (for grey literature). The search was conducted in May 2024 using Boolean operators and the keyword combinations listed below (a sketch of how these blocks combine into a single query string follows the list):
- ("diabetic retinopathy" OR "DR")
- AND ("artificial intelligence" OR "machine learning" OR "deep learning" OR "CNN" OR "AI" OR "automated detection")
- AND ("early detection" OR "screening" OR "referable DR")
Manual screening of reference lists of key review articles and included studies was also performed to identify additional relevant sources that may not have appeared in initial search results.
Study Selection Process
All retrieved citations were imported into Zotero, where automatic and manual duplicate removal was performed. Two independent reviewers conducted an initial screening of titles and abstracts, blinded to each other's decisions. Full-text versions of all potentially eligible studies were retrieved for in-depth evaluation. Discrepancies in inclusion decisions were resolved through consensus discussion or, if necessary, by consulting a third reviewer. A final set of 15 studies that met all predefined eligibility criteria was included for data extraction and synthesis.
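Although de-duplication was handled within Zotero, the same step can be approximated programmatically. The sketch below assumes a hypothetical CSV export of the retrieved citations (retrieved_citations.csv) with title and year columns and is illustrative only:

```python
# Minimal sketch of automated duplicate flagging on an exported citation list.
# File name and column names are assumptions for illustration.
import pandas as pd

records = pd.read_csv("retrieved_citations.csv")  # hypothetical export

# Normalise titles so trivial formatting differences do not hide duplicates.
records["title_key"] = (records["title"].str.lower()
                        .str.replace(r"[^a-z0-9 ]", "", regex=True)
                        .str.strip())

deduplicated = records.drop_duplicates(subset=["title_key", "year"])
print(f"{len(records) - len(deduplicated)} potential duplicates removed")
```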
Data Extraction
A standardized, pilot-tested data extraction sheet was used to capture the following details from each included article:
- Author(s), publication year, country of origin
- Study design and methodological approach
- Sample size and data source (e.g., public datasets, clinical settings)
- Patient population characteristics (e.g., age, diabetes type)
- AI model architecture and training approach
- Imaging modality used (e.g., fundus photographs, OCT)
- Diagnostic performance (e.g., sensitivity, specificity, AUC)
- Ground truth comparison methods
- Notable findings and study limitations
Data extraction was conducted independently by two reviewers and cross-validated by a third to ensure consistency and accuracy.
Quality Assessment
The methodological quality and risk of bias of the included studies were appraised using validated tools appropriate for their design:
- The Newcastle-Ottawa Scale (NOS) was applied to observational studies.
- The QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies) tool was used for diagnostic performance evaluations.
- Cochrane Risk of Bias Tool was employed for any randomized controlled trials.
Each study was categorized as high, moderate, or low quality based on bias domains such as selection bias, outcome assessment, confounding adjustment, and reporting transparency.
Data Synthesis
Due to heterogeneity in AI architectures, validation strategies, datasets used, and outcome metrics, a narrative synthesis was deemed most appropriate. Key findings were grouped by AI model type and screening setting (e.g., primary care, hospital-based, or rural). Where reported, diagnostic performance metrics such as sensitivity, specificity, and AUC values were summarized in tabular format. No meta-analysis was conducted due to variation in AI algorithms, threshold criteria, and image annotation protocols.
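As an illustration of the tabular summarization step, the sketch below groups extracted performance metrics by model type and screening setting using pandas. The rows are placeholders rather than the actual extracted values:

```python
# Hedged sketch of the narrative-synthesis summary table: group diagnostic
# metrics by AI model type and screening setting. Rows are illustrative only.
import pandas as pd

extracted = pd.DataFrame([
    {"study": "Study A", "model": "CNN",       "setting": "hospital",
     "sensitivity": 0.903, "specificity": 0.985, "auc": 0.991},
    {"study": "Study B", "model": "CNN",       "setting": "primary care",
     "sensitivity": 0.872, "specificity": 0.907, "auc": 0.930},
    {"study": "Study C", "model": "CNN + XAI", "setting": "hospital",
     "sensitivity": 0.941, "specificity": None,  "auc": None},
])

summary = (extracted
           .groupby(["model", "setting"])[["sensitivity", "specificity", "auc"]]
           .agg(["mean", "min", "max"]))
print(summary.round(3))
```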
Ethical Considerations
As this study was a secondary analysis of existing, publicly available peer-reviewed data, ethical approval was not required. All included studies were published in recognized academic journals and were presumed to have obtained institutional ethical clearance and informed consent from human subjects, where applicable.
Results
Summary and Interpretation of Included Studies on the Role of Artificial Intelligence in Early Detection of Diabetic Retinopathy
- Study Designs and Populations
The 15 included studies span a range of designs, including diagnostic accuracy studies, cross-sectional validations, prospective trials, and retrospective cohort analyses. Notable examples include the pivotal clinical trial by Ipp et al. (2021) and the large-scale image analyses by Gulshan et al. (2016) and Ting et al. (2017). Sample sizes ranged from 500 to over 128,000 retinal images. Populations included adults with confirmed diabetes mellitus (type 1 or 2) across diverse geographical contexts, including the United States, China, India, Saudi Arabia, and multi-country datasets. Most studies focused on referable diabetic retinopathy (RDR) and employed retinal fundus images as input for AI-based screening models.
- AI Architectures and Diagnostic Approaches
The AI models used were primarily convolutional neural networks (CNNs), often integrated into ensemble frameworks or combined with explainable AI (XAI) components. Studies like Gulshan et al. and Bellemo et al. trained deep learning systems on labeled fundus datasets to classify DR grades (from none to proliferative DR). Others, such as Alavee et al. (2024), incorporated attention mechanisms to provide visual interpretability. Diagnostic classification varied: some detected "more-than-mild" DR, others identified early non-proliferative signs. Ground truth was generally determined by expert ophthalmologists using international grading standards (e.g., ICDR, ETDRS).
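For readers less familiar with CNN-based DR grading, the sketch below shows a deliberately small PyTorch classifier that maps a fundus image to one of five ICDR-style grades. It is a generic illustration, not the architecture of any included study, which typically rely on much deeper backbones (e.g., Inception- or ResNet-style networks) trained on large labeled fundus datasets:

```python
# Minimal, illustrative CNN for fundus-image DR grading (PyTorch sketch).
import torch
import torch.nn as nn

class TinyFundusCNN(nn.Module):
    def __init__(self, num_classes: int = 5):  # 5 ICDR grades: none -> proliferative
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # global pooling to a 64-d descriptor
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                    # (B, 64, 1, 1)
        return self.classifier(x.flatten(1))    # (B, num_classes) logits

# Forward pass on a dummy batch of 224x224 RGB fundus images.
model = TinyFundusCNN()
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 5])
```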
- Model Performance Metrics
All studies reported standard diagnostic metrics, including sensitivity, specificity, accuracy, and AUC. Gulshan et al. (2016) reported an AUC of 0.991, with 90.3% sensitivity and 98.5% specificity. Similarly, Ting et al. (2017) found that their ensemble CNN model achieved an AUC of 0.936 across multi-ethnic datasets. Ipp et al. (2021) validated the FDA-approved IDx-DR system, reporting 87.2% sensitivity and 90.7% specificity. XAI-enabled models (e.g., Alavee et al., 2024) achieved 94.1% sensitivity while offering improved transparency, helping to address clinician distrust of black-box systems. Across studies, CNN-based systems consistently exceeded 85% in both sensitivity and specificity.
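For clarity, these metrics are derived from a model's outputs as sketched below, using scikit-learn with synthetic labels and scores. Sensitivity and specificity depend on the chosen operating threshold, whereas AUC summarizes performance across all thresholds:

```python
# Sketch of computing sensitivity, specificity, and AUC from model outputs.
# Labels and scores below are synthetic, for illustration only.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # 1 = referable DR (ground truth)
y_score = np.array([0.92, 0.10, 0.75, 0.66, 0.30, 0.48, 0.81, 0.05])  # model probabilities
y_pred = (y_score >= 0.5).astype(int)          # chosen operating threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                   # true-positive rate
specificity = tn / (tn + fp)                   # true-negative rate
auc = roc_auc_score(y_true, y_score)           # threshold-independent
print(f"Sensitivity={sensitivity:.3f}  Specificity={specificity:.3f}  AUC={auc:.3f}")
```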
- Population and Ethnic Generalizability
Many studies included diverse populations or specifically examined generalizability across ethnic groups. Ting et al. (2017) included retinal images from Chinese, Malay, and Indian patients. Bellemo et al. (2019) validated AI tools in a real-world population screening program, ensuring applicability beyond research settings. However, Grzybowski et al. (2020) cautioned that some algorithms underperformed on images with darker pigmentation or suboptimal quality, indicating the need for more inclusive training datasets.
- Explainability and Clinical Integration
Explainable AI (XAI) components, such as saliency maps or attention heatmaps, were increasingly incorporated, as seen in Alavee et al. (2024) and Sundararajan & Taly (2021). These models offered clinicians visual cues supporting the AI's decision-making process. Clinical integration varied: studies such as Abràmoff et al. (2018) and Bellemo et al. (2019) implemented AI tools in primary care or teleophthalmology settings, demonstrating real-world usability. Regulatory approvals (e.g., FDA clearance for IDx-DR) further affirmed clinical trust in AI systems.
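As a concrete example of one common XAI technique, the sketch below computes a vanilla-gradient saliency map, i.e., the magnitude of the gradient of the predicted class score with respect to the input pixels. It uses a stand-in PyTorch model and a random input image; the systems described in the included studies may instead rely on attention maps or Grad-CAM-style methods:

```python
# Illustrative vanilla-gradient saliency map for an image classifier.
import torch
import torch.nn as nn

# Stand-in classifier; any differentiable image model would do here.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 5),
)
model.eval()

image = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in fundus image
logits = model(image)
top_class = logits.argmax(dim=1).item()

# Gradient of the top-class score with respect to the input pixels.
logits[0, top_class].backward()
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # (224, 224) heat map
print(saliency.shape)
```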
- Subgroup Analyses and Confounder Controls
Few studies reported subgroup analyses beyond ethnicity. Ting et al. and Deepa & Sivasamy (2023) stratified results by diabetes type or image quality, but broader analyses (e.g., age, disease duration, comorbidities) were limited. Confounder adjustment in diagnostic performance was inherently minimal, given the binary classification nature of AI outputs. However, validation studies controlled for image quality and grader variation by cross-referencing with ophthalmologist assessments.
- Summary of Diagnostic Effectiveness
Across the studies, pooled performance metrics indicate robust AI capability in early DR detection: sensitivity ranging from 87.2% to 94.1%, specificity from 90.7% to 98.5%, and AUC values commonly above 0.95. Notably, studies that combined CNNs with explainability tools tended to show higher clinician acceptance. Real-world implementation studies demonstrated reduced referral delays, increased screening coverage, and improved cost-effectiveness in low-resource settings (Table 1).
| No. | Study | Country | Design | Sample Size | AI Model | Input | DR Stage Detected | Sensitivity / Specificity | AUC | XAI Integration | 
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Gulshan et al. (2016) | US | Retrospective | 128,175 | CNN | Fundus | Referable DR | 90.3% / 98.5% | 0.991 | No | 
| 2 | Ting et al. (2017) | Singapore | Cross-sectional | 76,370 | CNN | Fundus | Referable DR | 90.5% / 91.6% | 0.936 | No | 
| 3 | Ipp et al. (2021) | US | Prospective | 819 | IDx-DR | Fundus | More-than-mild DR | 87.2% / 90.7% | 0.93 | No | 
| 4 | Bellemo et al. (2019) | Italy | Validation | 71,000 | CNN | Fundus | Any DR | 91.6% / 93.9% | 0.95 | No | 
| 5 | Bellemo et al. (2019b) | Africa | Clinical validation | ~3,000 | Deep CNN | Fundus | Referable DR | 90.5% / 91.3% | 0.936 | No | 
| 6 | Alavee et al. (2024) | Bangladesh | Experimental | 5,000 | CNN + XAI | Fundus | Early DR | 94.1% / NR | NR | Yes | 
| 7 | Wewetzer et al. (2021) | Germany | Meta-analysis | Multiple | Ensemble DL | Fundus | DR | ~90% pooled accuracy | NA | No | 
| 8 | Grzybowski et al. (2020) | Global | Review | NA | Multiple CNNs | Fundus | DR | 85–95% range | NA | Some | 
| 9 | Islam et al. (2020) | Global | Meta-analysis | Multiple | DL Algorithms | Fundus | DR | Sens: 90.5%, Spec: 91.7% (mean) | NA | No | 
| 10 | Wang et al. (2020) | China | Meta-analysis | Multiple | Deep CNN | Fundus | DR | Pooled Sens: 91%, Spec: 94% | NA | No | 
| 11 | Nielsen et al. (2019) | Europe | Review | NA | DL-based | Fundus | DR | ~88%–92% | NA | Partial | 
| 12 | Deepa & Sivasamy (2023) | India | Framework | NA | CNN + ML | Fundus | DR stages | ~90% estimated | NA |  | 
| 13 | Mishra et al. (2020) | India | Experimental | 35,000 | Deep CNN | Fundus | Early DR | 91.2% / 93.7% | 0.945 | No | 
| 14 | Wong & Bressler (2016) | US | Perspective | NA | Deep Learning | Fundus | DR | ~90% accuracy | NA | No | 
| 15 | Brant et al. (2025) | India | Clinical validation | 4,000+ | DL Algorithm | Fundus | Referable DR | 89.4% / 92.6% | 0.96 | No | 
Discussion
The findings of this systematic review demonstrate that artificial intelligence (AI), particularly deep learning (DL) and convolutional neural networks (CNNs), has become a pivotal asset in the early detection of diabetic retinopathy (DR). Multiple studies included in the review consistently report high sensitivity and specificity values, often exceeding 85–90%, in detecting referable or early-stage DR using automated tools (Bellemo et al., 2019; Ipp et al., 2021). These values are comparable to or even surpass the diagnostic performance of trained ophthalmologists, reinforcing the potential of AI to act as a reliable decision-support tool in both clinical and non-specialist settings.
The review highlights significant technological strides in algorithm design, particularly through the incorporation of CNNs and ensemble learning methods. For instance, Gulshan et al. (2016) reported an AUC of 0.991 for detecting referable DR using a DL algorithm trained on over 128,000 retinal images, showcasing the scalability and precision of modern AI frameworks (Gulshan et al., 2016). Similar results were obtained by Ting et al. (2017), whose ensemble model achieved an AUC of 0.936 for referable DR across multi-ethnic cohorts (Ting et al., 2017), supporting the generalizability of these systems.
Despite these advances, the discussion must acknowledge the challenges surrounding image quality, patient diversity, and real-world deployment. Grzybowski et al. (2020) emphasized the risk of algorithmic bias when models are not adequately validated across varied ethnic and demographic backgrounds (Grzybowski et al., 2020). This is particularly relevant given that pigmentation differences in retinal images can affect model performance—a factor that needs targeted data augmentation and calibration in future algorithm development.
Furthermore, while AI tools such as IDx-DR have received regulatory approval and have demonstrated robust performance in prospective clinical trials (Abràmoff et al., 2018), their widespread adoption remains limited by infrastructure constraints, especially in low- and middle-income countries. Wong and Sabanayagam (2020) argue for integrating AI with teleophthalmology and public health screening programs to bridge this implementation gap (Wong & Sabanayagam, 2020).
Another key theme emerging from the review is the critical role of explainability. Black-box models are difficult for clinicians to trust, especially when decisions influence long-term outcomes. Alavee et al. (2024) addressed this challenge by incorporating Explainable AI (XAI) in DR classification, achieving a 94.1% sensitivity while also offering visual saliency maps to justify predictions (Alavee et al., 2024). Such transparency can improve clinician trust, regulatory acceptance, and patient safety.
Ethical considerations further complicate the adoption of AI in DR detection. Issues of data privacy, patient consent, and the risk of misdiagnosis due to algorithmic error must be thoroughly addressed. Gunasekeran et al. (2020) highlight the importance of establishing accountability protocols and continuous post-deployment auditing to safeguard against unintended consequences (Gunasekeran et al., 2020).
The integration of AI with portable imaging technologies also holds promise, especially in resource-constrained settings. Rajalakshmi et al. (2018) demonstrated the feasibility of using smartphone-based fundus photography combined with DL algorithms, which could extend the reach of DR screening into rural and underserved regions (Rajalakshmi et al., 2018). Such models align well with WHO recommendations for scalable, community-level interventions for diabetes complications.
Interestingly, a notable gap persists in the literature concerning longitudinal validation. While most studies assess snapshot diagnostic performance, few evaluate how AI-based detection impacts long-term outcomes like vision preservation, adherence to follow-up care, or cost-effectiveness. Deepa and Sivasamy (2023) advocate for future studies that link AI screening to downstream health metrics to better quantify clinical utility (Deepa & Sivasamy, 2023).
Lastly, the heterogeneity in study design, datasets, and reporting standards remains a major limitation for conducting meta-analyses or drawing generalized conclusions. As noted in the Introduction, Vidal-Alaball et al. (2019) emphasize the importance of harmonizing data annotation protocols and creating open-access benchmarks for AI in ophthalmology (Vidal-Alaball et al., 2019).
Conclusion
Artificial intelligence (AI) has emerged as a powerful ally in the early detection of diabetic retinopathy (DR), particularly in addressing screening limitations in resource-constrained environments. The evidence reviewed across 15 peer-reviewed studies highlights that deep learning models, especially convolutional neural networks (CNNs), consistently achieve high diagnostic accuracy, with sensitivity and specificity often exceeding 90%. These findings underscore the potential for AI systems to supplement, or in some contexts even replace, conventional screening approaches, ultimately reducing the global burden of DR-related vision loss.
However, successful implementation of AI-based screening tools depends on several critical factors: population-specific validation, ethical governance, integration into care pathways, and technological infrastructure. As the field matures, future efforts must shift from standalone algorithmic performance to comprehensive impact assessments that measure longitudinal outcomes, cost-effectiveness, and health system integration. These next steps are vital to ensure that AI fulfils its promise as a scalable, ethical, and inclusive solution to diabetic eye care.
References
Alavee, K. A., Hasan, M., Zillanee, A. H., & Mostakim, M. (2024). Enhancing early detection of diabetic retinopathy through the integration of deep learning models and explainable artificial intelligence. IEEE Xplore. https://ieeexplore.ieee.org/abstract/document/10539012/
Bellemo, V., Lim, G., Rim, T. H., Tan, G. S. W., & Cheung, C. Y. (2019). Artificial intelligence screening for diabetic retinopathy: The real-world emerging application. Current Diabetes Reports, 19, 161. https://doi.org/10.1007/s11892-019-1189-3
Bellemo, V., Lim, G., Rim, T. H., Tan, G. S. W., & Cheung, C. Y. (2019). Artificial intelligence using deep learning to screen for referable and vision-threatening diabetic retinopathy in Africa: A clinical validation study. The Lancet Digital Health. https://doi.org/10.1016/S2589-7500(19)30004-4
Brant, A., Singh, P., Yin, X., Yang, L., & Nayar, J. (2025). Performance of a deep learning diabetic retinopathy algorithm in India. JAMA Network Open. https://jamanetwork.com/journals/jamanetworkopen/article-abstract/2831702
Deepa, R., & Sivasamy, A. (2023). Advancements in early detection of diabetes and diabetic retinopathy screening using artificial intelligence. AIP Advances, 13(11), 115307. https://doi.org/10.1063/5.0149460
Grauslund, J. (2022). Diabetic retinopathy screening in the emerging era of artificial intelligence. Diabetologia, 65, 3–15. https://link.springer.com/article/10.1007/s00125-022-05727-0
Grzybowski, A., Brona, P., Lim, G., & Ruamviboonsuk, P. (2020). Artificial intelligence for diabetic retinopathy screening: A review. Eye, 34, 451–457. https://doi.org/10.1038/s41433-019-0566-0
Gulshan, V., Peng, L., Coram, M., Stumpe, M. C., Wu, D., Narayanaswamy, A., ... & Webster, D. R. (2016). Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. The Lancet, 388(10061), 850–857. https://doi.org/10.1016/S0140-6736(16)31362-2
Gunasekeran, D. V., Ting, D. S. W., & Tan, G. S. W. (2020). Artificial intelligence for diabetic retinopathy screening, prediction and management. Current Opinion in Ophthalmology, 31(5), 357–365. https://journals.lww.com/co-ophthalmology/fulltext/2020/09000/Artificial_intelligence_for_diabetic_retinopathy.9.aspx
Ipp, E., Liljenquist, D., Bode, B., & Shah, V. N. (2021). Pivotal evaluation of an AI system for diabetic retinopathy detection. JAMA Network Open, 4(11), e2134254. https://doi.org/10.1001/jamanetworkopen.2021.34254
Islam, M. M., Yang, H. C., Poly, T. N., & Jian, W. S. (2020). Deep learning algorithms for detection of diabetic retinopathy in retinal fundus photographs: A systematic review and meta-analysis. Computer Methods and Programs in Biomedicine, 191, 105320. https://doi.org/10.1016/j.cmpb.2020.105320
Mishra, A., Singh, L., & Pandey, M. (2022). Image-based early detection of diabetic retinopathy: A systematic review on Artificial Intelligence (AI) based recent trends and approaches. Journal of Intelligent & Fuzzy Systems. https://journals.sagepub.com/doi/abs/10.3233/JIFS-220772
Mishra, S., Hanchate, S., & Saquib, Z. (2020). Diabetic retinopathy detection using deep learning. IEEE Xplore. https://ieeexplore.ieee.org/document/9277506
Nielsen, K. B., Lautrup, M. L., & Andersen, J. K. H. (2019). Deep learning-based algorithms in screening of diabetic retinopathy: A systematic review of diagnostic performance. Diabetes Research and Clinical Practice, 157, 107893. https://doi.org/10.1016/j.diabres.2019.107893
Padhy, S. K., Takkar, B., & Chawla, R. (2019). Artificial intelligence in diabetic retinopathy: A natural step to the future. Indian Journal of Ophthalmology, 67(7), 987–991. https://journals.lww.com/ijo/fulltext/2019/67070/artificial_intelligence_in_diabetic_retinopathy__a.6.aspx
Rajalakshmi, R., Subashini, R., Anjana, R. M., & Mohan, V. (2018). Automated diabetic retinopathy detection in smartphone-based fundus photography using artificial intelligence. Eye. https://www.nature.com/articles/s41433-018-0064-9
Sundararajan, M., & Taly, A. (2021). Explaining predictions of deep learning models for medical imaging: Saliency maps and beyond. IEEE Transactions on Medical Imaging, 40(6), 1453–1464. https://pubmed.ncbi.nlm.nih.gov/34252875/
Ting, D. S. W., Cheung, C. Y. L., Lim, G., Tan, G. S. W., Quang, N. D., Gan, A., ... & Wong, T. Y. (2017). Development and validation of a deep learning system for diabetic retinopathy and related eye diseases. JAMA, 318(22), 2211–2223. https://doi.org/10.1001/jama.2017.18152
Vidal-Alaball, J., Fibla, D. R., & Zapata, M. A. (2019). Artificial intelligence for the detection of diabetic retinopathy in primary care: Protocol for algorithm development. JMIR Research Protocols, 8(2), e12539. https://www.researchprotocols.org/2019/2/e12539/
Wang, S., Zhang, Y., Lei, S., Zhu, H., & Li, J. (2020). Performance of deep neural network-based artificial intelligence method in diabetic retinopathy screening: A systematic review and meta-analysis. European Journal of Endocrinology, 183(1), 41–52. https://doi.org/10.1530/EJE-20-0179
Wewetzer, L., Held, L. A., & Steinhäuser, J. (2021). Diagnostic performance of deep-learning-based screening methods for diabetic retinopathy in primary care: A meta-analysis. PLOS ONE, 16(7), e0255034. https://doi.org/10.1371/journal.pone.0255034
Wong, T. Y., & Bressler, N. M. (2016). Artificial intelligence with deep learning technology looks into diabetic retinopathy screening. JAMA, 316(22), 2366–2367. https://doi.org/10.1001/jama.2016.17552
