The disease’s associated foot ulcers can lead to limb amputation, while diabetic retinopathy (DR) can rob people of their sight. Some 415 million diabetics worldwide are at risk of this visual affliction, and many of those living with it in the developing world lack access to the health care needed to treat it.
That’s why Google is training its deep learning AI to spot DR before it becomes a problem, and without the help of an on-site doctor.
Since the disease is most readily diagnosed by examining a picture of the back of the eye (a fundus photograph), the Google team has spent the past few years building a dataset of 128,000 individual images, each examined by three to seven ophthalmologists drawn from a panel of 54.
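As a rough illustration of that labeling step (not Google's actual procedure), the several grades each image receives could be collapsed into one consensus label. The sketch below assumes integer severity levels and a simple majority vote that breaks ties toward the more severe grade:

    # Toy consensus-labeling sketch; the grading scale and tie-break rule
    # are assumptions for illustration, not Google's published method.
    from collections import Counter

    def consensus_grade(grades):
        """Return the most common grade; ties go to the more severe level."""
        counts = Counter(grades)
        top = max(counts.values())
        return max(g for g, c in counts.items() if c == top)

    print(consensus_grade([2, 2, 3]))        # -> 2 (clear majority)
    print(consensus_grade([1, 2, 2, 3, 3]))  # -> 3 (tie broken toward severity)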
By marking damaged areas of the eye (microaneurysms, hemorrhages and the like) and then feeding that labeled data into a machine learning system, Google managed to build a highly reliable diagnostic tool. When tested on 12,000 images, the system’s diagnosis was "on-par with that of ophthalmologists," according to the Google Research Blog post.
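A minimal sketch of that kind of pipeline, assuming a TensorFlow/Keras setup, a five-level severity scale, and a directory of labeled fundus photographs (none of which are confirmed details of Google's system), might look like this:

    # Hedged sketch: fine-tune a pretrained convolutional network to grade
    # fundus photographs. Paths, image size, class count, and training
    # settings are assumptions for illustration only.
    import tensorflow as tf

    NUM_CLASSES = 5          # assumed: a five-level DR severity scale
    IMAGE_SIZE = (299, 299)  # Inception-v3's native input size

    # Assumed layout: fundus/train/<severity>/<image>.jpg, fundus/val/...
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "fundus/train", image_size=IMAGE_SIZE, batch_size=32)
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "fundus/val", image_size=IMAGE_SIZE, batch_size=32)

    # Start from ImageNet weights and swap in a DR-grading head.
    base = tf.keras.applications.InceptionV3(
        include_top=False, weights="imagenet", pooling="avg")
    base.trainable = False  # freeze the backbone for a first training pass

    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # pixels to [-1, 1]
        base,
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, validation_data=val_ds, epochs=5)

The code itself is unremarkable; the value lies in the data, since the network only becomes a reliable grader because each training image carries the ophthalmologists' annotations described above.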
The team hopes to expand the scope of the system so it can diagnose the disease from more complex 3D images, such as those generated by optical coherence tomography (OCT), in addition to the conventional 2D fundus photographs it currently uses.
The team is also looking into automating the diagnostic process to better serve patients in remote locations who might otherwise not have access to trained specialists. But first, Google will need to conduct studies using larger clinical groups and, eventually, obtain FDA approval.