Apple says its iCloud scanning will rely on multiple child safety groups to address privacy fears

It’s supposed to stop governments from abusing the system

Illustration by Alex Castro / The Verge

Apple has filled in more details around its upcoming plans to scan iCloud Photos for child sexual abuse material (CSAM) via users’ iPhones and iPads. The company released a new paper delving into the safeguards it hopes will increase user trust in the initiative. That includes a rule to only flag images found in multiple child safety databases with different government affiliations — theoretically stopping one country from adding non-CSAM content to the system.

Apple’s upcoming iOS and iPadOS releases will automatically match US-based iCloud Photos accounts against known CSAM from a list of image hashes compiled by child safety groups. While many companies scan cloud storage services remotely, Apple’s device-based strategy has drawn sharp criticism from some cryptography and privacy experts.

The paper, called “Security Threat Model Review of Apple’s Child Safety Features,” hopes to allay privacy and security concerns around that rollout. It builds on a Wall Street Journal interview published this morning with Craig Federighi, Apple’s senior vice president of software engineering, who outlined some of the same information.

In the document, Apple says it won’t rely on a single government-affiliated database, like that of the US-based National Center for Missing and Exploited Children (NCMEC), to identify CSAM. Instead, it will only match pictures whose hashes appear in the lists of at least two groups with different national affiliations. The goal is to ensure that no single government has the power to secretly insert unrelated content for censorship purposes, since that content wouldn’t match hashes in any other group’s database.
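In rough code terms, the overlap requirement amounts to shipping only the hashes that appear in more than one independently maintained list. The Swift sketch below is purely illustrative, not Apple’s implementation: the second database name and the loader function are placeholders.

    import Foundation

    // A perceptual image hash, modeled here simply as raw bytes.
    typealias ImageHash = Data

    // Placeholder loader: in practice the hash lists are compiled by child safety
    // groups and distributed by Apple, not read locally like this.
    func loadHashList(named name: String) -> Set<ImageHash> {
        return []
    }

    // NCMEC is the only group Apple has named so far; the second list stands in
    // for a database maintained under a different national jurisdiction.
    let ncmecHashes = loadHashList(named: "NCMEC")
    let otherJurisdictionHashes = loadHashList(named: "OtherChildSafetyGroup")

    // Only hashes that appear in both independently maintained lists make it onto
    // the device, so content inserted by a single government would never match.
    let onDeviceBlocklist = ncmecHashes.intersection(otherJurisdictionHashes)

    func isKnownCSAMHash(_ hash: ImageHash) -> Bool {
        return onDeviceBlocklist.contains(hash)
    }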

Apple has previously referenced the potential use of multiple child safety databases, but until today, it hadn’t explained how the overlap requirement would work. In a call with reporters, Apple said it’s only naming NCMEC because it hasn’t yet finalized agreements with other groups.

The paper confirms a detail Federighi mentioned: initially, Apple will only flag an iCloud account if it identifies 30 images as CSAM. This threshold was picked to provide a “drastic safety margin” to avoid false positives, the paper says — and as it evaluates the system’s performance in the real world, “we may change the threshold.”
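As a minimal sketch of that threshold logic: the 30-image figure comes from Apple’s paper, but the type and property names below are invented for illustration.

    // Tracks how many of an account's uploaded photos have matched the blocklist.
    struct AccountMatchState {
        // Apple says roughly 30 matches are required before an account is flagged,
        // a margin chosen to keep false positives negligible; the company notes the
        // threshold may change as it evaluates real-world performance.
        static let reportingThreshold = 30

        private(set) var matchedImageCount = 0

        mutating func recordMatch() {
            matchedImageCount += 1
        }

        var shouldFlagForHumanReview: Bool {
            return matchedImageCount >= Self.reportingThreshold
        }
    }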

It also provides more information on an auditing system that Federighi mentioned. Apple’s list of known CSAM hashes will be baked into iOS and iPadOS worldwide, although the scanning system will only run in the US for now. Apple will publish a full list of those hashes so that auditors can check it against the child safety databases, another way to verify that it isn’t secretly matching additional images. Apple also says it will “refuse all requests” for moderators to report “anything other than CSAM materials” for flagged accounts, addressing concerns that the system could be repurposed for other kinds of surveillance.
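A hypothetical version of that audit check, with invented names rather than anything Apple has published, could simply confirm that every hash shipped on devices is backed by at least two of the source databases:

    import Foundation

    typealias ImageHash = Data

    // Returns true only if every hash Apple ships on devices appears in at least
    // two of the published child safety databases, mirroring the overlap rule.
    func auditShippedHashList(shipped: Set<ImageHash>,
                              sourceDatabases: [String: Set<ImageHash>]) -> Bool {
        return shipped.allSatisfy { hash in
            sourceDatabases.values.filter { $0.contains(hash) }.count >= 2
        }
    }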

Federighi acknowledged that Apple had introduced “confusion” with its announcement last week. But the company has stood by the update itself, telling reporters that while it’s still finalizing and iterating on details, it hasn’t changed its launch plans in response to the past week’s criticism.