YouTube has sent a cease and desist letter to Clearview AI demanding that the controversial facial recognition startup stop scraping YouTube videos to gather faces for its database and delete any images it’s already collected. The demand, first reported by CBS News, says that YouTube forbids anyone from collecting data from its platform that can be used to identify people and that Clearview has admitted to violating this policy.
“YouTube’s Terms of Service explicitly forbid collecting data that can be used to identify a person,” a YouTube spokesperson said in a statement sent to The Verge. “Clearview has publicly admitted to doing exactly that, and in response we sent them a cease and desist letter.”
Clearview AI came into the public spotlight last month after The New York Times revealed what the company was building: an app that can find images of a person’s face across the web. Clearview’s product is marketed to law enforcement, and police have apparently been using it by uploading an image of a suspect’s face, finding where else that face has appeared online, and then using that information to identify the person.
The service was built by collecting images from across the web, taking them from Facebook, Instagram, Twitter, news sites, and more, according to the report. YouTube was supposedly among the services being scraped for images.
Clearview may be in a legally precarious position, since it likely does not have explicit permission to use many, if not the vast majority, of the photos its service relies on. Still, it’s not clear how enforceable YouTube’s terms of service are in this case. Twitter sent Clearview a cease and desist letter last month, also demanding that the company delete any photos it had stored. The Verge has reached out to Clearview for comment.
In addition to collecting photos, Clearview allegedly uses them to train AI to identify people. There remain open questions about the extent to which photos and videos can be used to train AI without permission, but it’s something that continues to be done. YouTube has even supported AI research this way — albeit for much less startling purposes. Last year, Google revealed how the company had used YouTube videos of the Mannequin Challenge (where people pretended to be frozen in place) to train a tool to predict depth in videos.