Gomez C, Wang R, Breininger K, Casey C, Bradley C, Pavlak M, Pham A, Yohannan J, Unberath M (2025). The explainable AI dilemma under knowledge imbalance in specialist AI for glaucoma referrals in primary care. npj Digital Medicine.
Publication Type: Journal article
Publication Year: 2025
Journal Volume: 8
Article Number: 706
Journal Issue: 1
DOI: 10.1038/s41746-025-02069-0
Abstract: Primary eye care providers refer glaucoma patients using their clinical experience and context. Specialized Artificial Intelligence (AI) trained on clinical data excels at identifying referrals but relies on assumptions that may not hold in practice. To address this knowledge imbalance, we proposed using AI explanations to help providers assess AI predictions against their experience and improve referrals. We developed AI models to identify patients needing urgent referrals from routine eye care data and created intrinsic and post-hoc explanations. In a user study with 87 optometrists, human-AI teams achieved higher accuracy (60%) than humans alone (51%), but explanations did not enhance performance. Instead, they introduced uncertainty about when to trust the AI. Post-hoc explanations increased over-reliance on incorrect AI recommendations, and both explanation types contributed to anchoring bias, making participants align more closely with AI referrals than they did without explanations. Overall, human-AI teams still underperformed the AI alone (80% accuracy). Challenges persist in designing support mechanisms that allow human-AI teams to surpass standalone AI performance while preserving human agency.
APA:
Gomez, C., Wang, R., Breininger, K., Casey, C., Bradley, C., Pavlak, M., Pham, A., Yohannan, J., & Unberath, M. (2025). The explainable AI dilemma under knowledge imbalance in specialist AI for glaucoma referrals in primary care. npj Digital Medicine, 8(1), Article 706. https://doi.org/10.1038/s41746-025-02069-0
MLA:
Gomez, Catalina, et al. "The explainable AI dilemma under knowledge imbalance in specialist AI for glaucoma referrals in primary care." npj Digital Medicine 8.1 (2025).