Researchers at Radboud University Medical Center in the Netherlands have developed a new AI algorithm for detecting diabetic retinopathy and age-related macular degeneration. The algorithm's iterative approach identifies more lesions than non-iterative approaches and boosts specificity by approximately 10%, with a diagnostic accuracy of about 80%. The algorithm annotates images to indicate which lesions it finds concerning, helping guide physician decision-making.
Machine learning systems have been developed to help spot a variety of cancers, detect eye lesions, and design biomolecules. Yet a major critique of these systems by physicians is that they operate as black boxes, arriving at conclusions without explaining how. An explanation is especially important when nuanced features and observations are needed to reach a strong medical conclusion. To address this issue, the Radboud researchers developed a new method for identifying lesions in the eye and producing a more comprehensive annotation of retinal diseases.
The algorithm works as follows: a fundus photograph of the back of the eye is taken, and the algorithm processes the image to suggest whether the case should be referred to a specialist. It then identifies concerning retinal lesions, which are filled in with a technique called selective inpainting so that they resemble healthy retinal tissue. The modified image is fed into the algorithm again, which re-scores it, identifies any further lesions it finds concerning, and repeats the identification and inpainting process until the retinal image is scored as healthy.
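The loop described above can be sketched in simplified form. This is an illustrative toy, not the authors' implementation: the "image" is reduced to a dictionary of region names and lesion intensities, and `risk_score`, `find_most_suspicious`, and `inpaint` are hypothetical stand-ins for the real classifier, localization step, and selective inpainting.

```python
# Toy sketch of the iterative classify -> localize -> inpaint loop.
# All function names and the dictionary "image" are illustrative
# assumptions, not the published method.

def risk_score(image):
    """Stand-in referral score: total lesion intensity (0 = healthy)."""
    return sum(image.values())

def find_most_suspicious(image):
    """Stand-in localization: region contributing most to the score."""
    return max(image, key=image.get)

def inpaint(image, region, healthy_value=0.0):
    """Stand-in selective inpainting: replace the flagged region with
    a healthy-looking value, leaving the rest of the image unchanged."""
    repaired = dict(image)
    repaired[region] = healthy_value
    return repaired

def iterative_lesion_localization(image, healthy_threshold=0.1, max_iters=10):
    """Repeatedly score the image, flag the strongest remaining lesion,
    inpaint it, and re-score until the image is judged healthy."""
    flagged = []
    for _ in range(max_iters):
        if risk_score(image) <= healthy_threshold:
            break  # image now scores as healthy; stop iterating
        region = find_most_suspicious(image)
        flagged.append(region)
        image = inpaint(image, region)
    return flagged

# Example: three lesions of decreasing prominence.
fundus = {"lesion_A": 0.9, "lesion_B": 0.5, "lesion_C": 0.2}
print(iterative_lesion_localization(fundus))
# -> ['lesion_A', 'lesion_B', 'lesion_C']
```

The point of the loop is visible in the example: a single pass would be dominated by the most prominent lesion, while each inpainting round removes that lesion's influence so the weaker ones can surface in later iterations.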
In this way, smaller or more moderate lesions that may have been missed in the first iteration can be identified in later ones, providing physicians with a more comprehensive picture of suspected eye lesions. Because the technology is software, it should be easy to make widely available once it is perfected and receives regulatory approval.
The work is published in IEEE Transactions on Medical Imaging as "Iterative augmentation of visual evidence for weakly-supervised lesion localization in deep interpretability frameworks: application to color fundus images."