Image databases of skin conditions are notoriously biased towards lighter skin. Rather than wait for the slow process of collecting more images of conditions like cancer or inflammation on darker skin, one group wants to fill in the gaps using artificial intelligence. It’s working on an AI program to generate synthetic images of diseases on darker skin — and using those images for a tool that could help diagnose skin cancer.
“Having real images of darker skin is the ultimate solution,” says Eman Rezk, a machine learning expert at McMaster University in Canada working on the project. “Until we have that data, we need to find a way to close the gap.”
But other experts working in the field worry that synthetic images could introduce problems of their own. The focus should be on adding more diverse real images to existing databases, says Roxana Daneshjou, a clinical scholar in dermatology at Stanford University. “Creating synthetic data sounds like an easier route than doing the hard work to create a diverse data set,” she says.
There are dozens of efforts to use AI in dermatology. Researchers build tools that can scan images of rashes and moles to identify the most likely condition. Dermatologists can then use the results to help them make diagnoses. But most tools are built on databases of images that either don’t include many examples of conditions on darker skin or don’t have good information about the range of skin tones they include. That makes it hard to be confident that a tool will be as accurate on darker skin.
That’s why Rezk and the team turned to synthetic images. The project has four main phases. The team already analyzed available image sets to understand how underrepresented darker skin tones were to begin with. It also developed an AI program that used images of skin conditions on lighter skin to produce images of those conditions on darker skin, and it validated the images the model generated. “Thanks to the advances in AI and deep learning, we were able to use the available white scan images to generate high-quality synthetic images with different skin tones,” Rezk says.
Next, the team will combine the synthetic images of darker skin with real images of lighter skin to create a program that can detect skin cancer. It will continuously check image databases for any new, real pictures of skin conditions on darker skin that can be added to future versions of the model, Rezk says.
The team isn’t the first to create synthetic skin images — a group that included Google Health researchers published a paper in 2019 describing a method to generate them, and it could create images of varying skin tones. (Google is interested in dermatology AI and announced a tool that can identify skin conditions last spring.)
Rezk says synthetic images are a stopgap until more real pictures of conditions on darker skin are available. Daneshjou, though, worries about using synthetic images at all, even as a temporary solution. Research teams would have to carefully check whether AI-generated images have any unusual quirks that people wouldn’t be able to see with the naked eye. That type of quirk could theoretically skew results from an AI program. The only way to confirm that the synthetic images work as well as real images in a model would be to compare them with real images — which are in short supply. “Then goes back to the fact of, well, why not just work on trying to get more real images?” she says.
If a diagnostic model is based on synthetic images from one group and real images from another — even temporarily — that’s a concern, Daneshjou says. It could lead to the model performing differently on different skin tones.
Leaning on synthetic data could also make people less likely to push for real, diverse images, she says. “If you’re going to do that, are you actually going to keep doing the work?” she says. “I would actually like to see more people do work on getting real data that is diverse, rather than trying to do this workaround.”