Clearview AI’s source code and app data exposed in cybersecurity lapse
Company claims only law enforcement agencies have access to its software

Illustration by James Bareham / The Verge

A security lapse at controversial facial recognition startup Clearview AI meant that its source code, some of its secret keys and cloud storage credentials, and even copies of its apps were publicly accessible. TechCrunch reports that an exposed server was discovered by Mossab Hussein, Chief Security Officer at cybersecurity firm SpiderSilk, who found that it was configured to allow anyone to register as a new user and log in.

Clearview AI first made headlines back in January, when a New York Times exposé detailed its massive facial recognition database, which consists of billions of images scraped from websites and social media platforms. Users upload a picture of a person of interest, and Clearview AI’s software will attempt to match it with any similar images in its database, potentially revealing a person’s identity from a single image.

Its Mac, Windows, iOS, and Android apps were exposed

Since its work became public, Clearview AI has defended itself by saying that its software is only available to law enforcement agencies (although reports claim that Clearview has been marketing its system to private businesses, including Macy’s and Best Buy). Poor cybersecurity practices like these, however, could put this powerful tool in the hands of people well beyond the company’s client list.

According to TechCrunch, the server contained the source code to the company’s facial recognition database, as well as secret keys and credentials that allowed access to some of its cloud storage containing copies of its Windows, Mac, Android, and iOS apps. Hussein was able to take screenshots of the company’s iOS app, which Apple recently blocked for violating its rules. The company’s Slack tokens were also accessible, which could have allowed access to the company’s private internal communications.

Hussein was able to access the service’s iOS app and take screenshots.
Source: TechCrunch

Hussein also said he found around 70,000 videos in the company’s cloud storage taken from a camera installed in a residential building. Clearview AI’s founder Hoan Ton-That told TechCrunch that the footage had been captured with the permission of the building’s management as part of attempts to prototype a security camera. The building itself is reportedly located in Manhattan, but TechCrunch notes that the real estate firm in charge of the building did not return requests for comment.

Responding to the cybersecurity lapse, Ton-That said that it “did not expose any personally identifiable information, search history, or biometric identifiers” and added that the company has “done a full forensic audit of the host to confirm no other unauthorized access occurred,” which suggests that Hussein was the only one to access the misconfigured server. The secret keys exposed by the server have also been changed so they no longer work.

Clearview AI’s system has faced fierce criticism from tech firms as well as US authorities after it became public. Platforms used to build its database, including Facebook, Twitter, and YouTube, have told Clearview to stop scraping their images, police departments have been told not to use the software, and Vermont’s attorney general’s office recently launched an investigation into the company over allegations that it may have broken data protection rules.