An upcoming iPhone-scanning system to stop child sexual abuse imagery online will never become a tool for governments, according to Apple.
“Let us be clear, this technology is limited to detecting CSAM (child sexual abuse materials) stored in iCloud and we will not accede to any government’s request to expand it,” the company said in a new FAQ on the technology.
Last week, Apple introduced its upcoming approach to stopping child sexual abuse imagery from circulating on its storage platform, iCloud. Companies including Facebook, Google, and Twitter perform similar wide-scale scanning to take down the illegal content. However, Apple’s approach is causing controversy because the company decided to perform the detection with the help of the iPhone itself.
The scanning will only occur when an iPhone uploads or backs up images to iCloud Photos. It will also only flag images whose hashes, or digital fingerprints, match those of known child sexual abuse images indexed in a national database. If an iCloud account crosses a “threshold” of CSAM content, the company’s team of human reviewers will investigate, disable the user’s account if illegal activity is confirmed, and then send a report to the National Center for Missing and Exploited Children (NCMEC).
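To make that reporting pipeline more concrete, here is a rough Swift sketch of the matching and threshold logic. It is an illustration under assumptions, not Apple’s implementation: the real system uses a perceptual hash Apple calls NeuralHash along with cryptographic techniques such as private set intersection, while the plain SHA-256 comparison, threshold value, and function names below are hypothetical stand-ins.

```swift
import Foundation
import CryptoKit

// Rough illustration only. Apple's real system uses a perceptual hash
// ("NeuralHash") plus private set intersection and threshold secret
// sharing; the SHA-256 matching, threshold value, and function names
// below are hypothetical stand-ins that only show the control flow.

func loadKnownHashDatabase() -> Set<String> {
    // Placeholder: on a real device the hash list ships inside iOS in a
    // blinded form that the phone cannot read directly.
    return []
}

let knownCSAMHashes = loadKnownHashDatabase()
let reviewThreshold = 30  // hypothetical value; the article does not give Apple's figure

// Stand-in for a perceptual hash; a cryptographic hash keeps the example
// self-contained, but it would only match byte-identical files.
func fingerprint(of imageData: Data) -> String {
    SHA256.hash(data: imageData)
        .map { String(format: "%02x", $0) }
        .joined()
}

// Count how many images queued for upload to iCloud Photos match the database.
func matchCount(for queuedUploads: [Data]) -> Int {
    queuedUploads.filter { knownCSAMHashes.contains(fingerprint(of: $0)) }.count
}

// Human review is triggered only once an account crosses the threshold;
// accounts below it are never surfaced to reviewers.
func shouldEscalateForHumanReview(queuedUploads: [Data]) -> Bool {
    matchCount(for: queuedUploads) >= reviewThreshold
}
```

The design point Apple’s FAQ leans on is that threshold: no single match, and no on-device decision by itself, is supposed to trigger a report.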
The system arrives with iOS 15 this fall, and has been built with privacy in mind, according to Apple. “Existing techniques as implemented by other companies scan all user photos stored in the cloud,” the company writes in the FAQ. “This creates privacy risk for all users. CSAM detection in iCloud Photos provides significant privacy benefits over those techniques by preventing Apple from learning about photos unless they both match to known CSAM images and are included in an iCloud Photos account that includes a collection of known CSAM.”
Nevertheless, the company’s approach is alarming security researchers and privacy groups over concerns the scanning could one day expand to other content. Even the head of WhatsApp has weighed in, saying last week: “We’ve had personal computers for decades and there has never been a mandate to scan the private content of all desktops, laptops, or phones globally for unlawful content. It’s not how technology built in free countries works.”
However, Apple’s FAQ attempts to dispel the privacy worries. It points out that users can opt out of the system simply by disabling iCloud Photos, and that photos kept only in the iPhone’s on-device library are never scanned.
In addition, the company’s technology has been designed to prevent scanning for content outside of child sexual abuse materials. “There is no automated reporting to law enforcement, and Apple conducts human review before making a report to NCMEC,” Apple writes.
On the issue of governments one day demanding access to the iPhone scanning technology, the company says: “We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future.
“The system is designed to be very accurate, and the likelihood that the system would incorrectly flag any given account is less than one in one trillion per year,” Apple adds.
The FAQ essentially asks consumers to trust the company, which has in recent years made its commitment to privacy a marketing point to sell new iPhones. Still, the security community is pointing to the lack of transparency and accountability in the new system. “When you boil it down, Apple has proposed your phone become [a] black box that may occasionally file reports on you that may aggregate such that they contact the relevant authorities,” tweeted Sarah Jamie Lewis, the executive director of the Open Privacy Research Society.