For the first time, the US Food and Drug Administration has approved an artificial intelligence diagnostic device that doesn’t need a specialized doctor to interpret the results. The software program, called IDx-DR, can detect a form of eye disease by looking at photos of the retina.
It works like this: A nurse or doctor uploads photos of the patient’s retina taken with a special retinal camera. The IDx-DR software first indicates whether the uploaded image is of high enough quality to produce a result. Then it analyzes the images to determine whether the patient does or does not have diabetic retinopathy, a form of eye disease in which too much blood sugar damages the blood vessels in the back of the eye. Diabetic retinopathy is the most common vision complication for people with diabetes, but it is still fairly rare — there are about 200,000 cases per year.
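The two-step flow described above — a quality gate followed by an autonomous diagnosis — can be sketched in a few lines of Python. This is purely illustrative: the function names, scores, and thresholds are assumptions, not IDx-DR's actual implementation or API.

```python
# Hypothetical sketch of a two-step screening flow like IDx-DR's.
# All names and thresholds here are illustrative assumptions.

def image_quality_ok(quality_score: float) -> bool:
    """Step 1: reject images too poor to analyze (threshold assumed)."""
    return quality_score >= 0.5

def classify_retinopathy(disease_score: float) -> str:
    """Step 2: return a standalone clinical result (threshold assumed)."""
    if disease_score >= 0.5:
        return "diabetic retinopathy detected: refer to an eye care professional"
    return "negative: rescreen in 12 months"

def screen(quality_score: float, disease_score: float) -> str:
    """Run the quality gate first, then the diagnostic decision."""
    if not image_quality_ok(quality_score):
        return "image quality insufficient: retake photo"
    return classify_retinopathy(disease_score)
```

The key design point is that the output of `screen` is the final clinical result — no specialist reviews it before it reaches the patient.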
In one clinical trial that used more than 900 images, IDx-DR correctly detected retinopathy about 87 percent of the time (its sensitivity), and correctly identified patients who didn’t have the disease about 90 percent of the time (its specificity).
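Those two figures are the standard sensitivity and specificity metrics for a diagnostic test. A quick sketch of how they are computed, using made-up confusion-matrix counts chosen to reproduce the reported percentages (the trial's actual counts are not given in the article):

```python
# Sensitivity and specificity from confusion-matrix counts.
# The counts below are hypothetical, picked only to match ~87% and ~90%.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Share of diseased patients the test correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Share of healthy patients the test correctly clears."""
    return true_neg / (true_neg + false_pos)

# E.g., flagging 174 of 200 diseased patients gives 87% sensitivity;
# clearing 630 of 700 healthy patients gives 90% specificity.
print(sensitivity(174, 26))   # 0.87
print(specificity(630, 70))   # 0.9
```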
The software is unique because it’s autonomous and there’s “not a specialist looking over the shoulder of [this] algorithm,” IDx founder Michael Abràmoff told Science News. “It makes the clinical decision on its own.” This means the technology can be used by a nurse or doctor who isn’t an eye specialist, making diagnosis more accessible. Patients, for example, wouldn’t need to wait for an eye specialist to be available to get a diagnosis.
IDx-DR is part of a growing trend of algorithms learning to spot and diagnose disease. Earlier this year, scientists trained a different algorithm to recognize conditions including age-related vision loss and diabetic retinopathy. Google, too, is training its DeepMind AI to spot eye disease. Now that the FDA has cleared IDx-DR, it may lead the way to a slew of autonomous diagnostic tests and the trade-offs they bring. These diagnoses could be more convenient for patients (and possibly even more accurate than doctors). But of course, not having a specialist “looking over the shoulder,” as Abràmoff puts it, raises the question of who will be responsible when the diagnosis is wrong.