Apple delays plans to scan cloud uploads for child sexual abuse images


Apple will delay its plans to scan user images for child sexual abuse material (CSAM) before they are uploaded to the cloud, the company says, after a backlash from privacy groups.

The company’s proposal, first revealed in August, involved a new technique it had developed called “perceptual hashing” to compare photos with known images of child abuse when users opted to upload them to the cloud. If the company detected enough matches, it would manually review the images, before flagging the user account to law enforcement.
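As a rough illustration of that mechanism, and emphatically not Apple’s own algorithm (which the company calls NeuralHash), the Python sketch below implements a much simpler “average hash”: each image is reduced to a 64-bit fingerprint, and a match against a database of known hashes is declared when the Hamming distance falls below a threshold. The database entry and the threshold here are invented purely for illustration.

```python
# Illustrative sketch only: a toy "average hash" (aHash), NOT Apple's NeuralHash.
# It shows the general mechanism described above: an image is reduced to a short
# bit string, and a match against a database of known hashes is declared when
# the Hamming distance is below a threshold.

import numpy as np


def average_hash(image: np.ndarray, hash_size: int = 8) -> int:
    """Downscale the greyscale image to hash_size x hash_size blocks and set one
    bit per block depending on whether it is brighter than the block average."""
    h, w = image.shape
    bh, bw = h // hash_size, w // hash_size            # block dimensions
    trimmed = image[: bh * hash_size, : bw * hash_size]
    blocks = trimmed.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    bits = (blocks > blocks.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


# Hypothetical database of known hashes (in the real system these would be
# supplied by child-safety organisations, not generated locally).
KNOWN_HASHES = {0x8F3A1C2B44D09E17}


def matches_known(image: np.ndarray, threshold: int = 5) -> bool:
    """Declare a match if the image's hash is close enough to any known hash."""
    h = average_hash(image)
    return any(hamming_distance(h, known) <= threshold for known in KNOWN_HASHES)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    photo = rng.integers(0, 256, size=(256, 256)).astype(float)  # stand-in "photo"
    print(matches_known(photo))  # almost certainly False for random noise
```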

Now, Apple says it is pausing the implementation of the project. “Last month we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of child sexual abuse material,” the company said in a statement.

“Based on feedback from customers, advocacy groups, researchers and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”

As well as the CSAM scanning, Apple announced and has now paused a second set of updates, which would have seen it use an AI system to identify explicit images sent or received through the company’s Messages app by users under 18 and, where those users were under 13 and had their phones managed by family members, warn a parent or guardian.

The two policies were announced in an unusual fashion for the company, leaking through academic channels before being confirmed in a dry press release posted directly to Apple’s website. Internally, some at the company blame the launch for some of the hostility to the plans, saying that the two proposals were wrongly conflated and that Apple missed its best shot to properly sell the benefits of the changes.

Others, however, were more critical. “The backlash should be no surprise,” said Jason Kelley of American digital rights group EFF. “What Apple intends to do will create an enormous danger to our privacy and security.

“It will give ammunition to authoritarian governments wishing to expand the surveillance, and because the company has compromised security and privacy at the behest of governments in the past, it’s not a stretch to think they may do so again.”

While privacy activists celebrated the decision to pause the scanning plans, child protection groups reacted with dismay. “This is an incredibly disappointing delay,” said Andy Burrows, the NSPCC’s Head of Child Safety Online Policy. “Apple were on track to roll out really significant technological solutions that would undeniably make a big difference in keeping children safe from abuse online and could have set an industry standard.

“They sought to adopt a proportionate approach that scanned for child abuse images in a privacy-preserving way and balanced user safety and privacy,” Burrows added. “We hope Apple will consider standing their ground instead of delaying important child protection measures in the face of criticism.”

Apple’s plans were dealt a significant blow two weeks after they were announced, when security researchers managed to reverse engineer the “perceptual hashing” algorithm the company intended to use to identify known CSAM that was being uploaded. Within days, they had managed to create vastly different images that had the same mathematical output, implying that a malicious attacker would be able to craft a nondescript image that would nonetheless trigger Apple’s alarms.
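The sketch below shows why such a forgery is straightforward against a weak perceptual hash, again using the toy average hash rather than Apple’s NeuralHash: given any target fingerprint, an image can be built block by block so that it hashes to exactly that value. The researchers’ attack on NeuralHash was far more sophisticated, but the principle is the same.

```python
# Illustrative sketch of the first weakness, using the toy average hash from the
# earlier example (NOT Apple's NeuralHash): given a target hash, it is easy to
# build an innocuous-looking image that hashes to exactly that value.

import numpy as np


def average_hash(image: np.ndarray, hash_size: int = 8) -> int:
    h, w = image.shape
    bh, bw = h // hash_size, w // hash_size
    blocks = image[: bh * hash_size, : bw * hash_size].reshape(
        hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    bits = (blocks > blocks.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)


def craft_image_with_hash(target: int, hash_size: int = 8, size: int = 256) -> np.ndarray:
    """Start from a flat grey image, then brighten or darken each block slightly
    so its bit pattern matches the target hash."""
    n_bits = hash_size * hash_size
    bits = [(target >> (n_bits - 1 - k)) & 1 for k in range(n_bits)]
    image = np.full((size, size), 128.0)
    block = size // hash_size
    for k, bit in enumerate(bits):
        i, j = divmod(k, hash_size)
        image[i * block:(i + 1) * block, j * block:(j + 1) * block] += 5 if bit else -5
    return image


TARGET = 0x8F3A1C2B44D09E17           # the hypothetical "known" hash from before
decoy = craft_image_with_hash(TARGET)
print(hex(average_hash(decoy)))        # prints 0x8f3a1c2b44d09e17
```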

Worse, others managed to do the reverse: change the mathematical output of an image without changing how it looks at all. Such a flaw could undo the entire benefit of the system, since it implies it would be trivial to alter whole libraries of images to make them invisible to Apple’s scanning.
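A toy version of that second weakness is sketched below, once more with the illustrative average hash rather than NeuralHash: nudging the brightness of a single region by a fraction of a grey level, far too little for the eye to notice, pushes it across the hash’s decision boundary and changes the fingerprint.

```python
# Illustrative sketch of the second weakness, using the same toy average hash
# (NOT Apple's NeuralHash): a sub-visible brightness nudge to one block pushes
# it across the hash's decision boundary, so the hash changes even though the
# picture does not perceptibly change.

import numpy as np


def average_hash(image: np.ndarray, hash_size: int = 8) -> int:
    h, w = image.shape
    bh, bw = h // hash_size, w // hash_size
    blocks = image[: bh * hash_size, : bw * hash_size].reshape(
        hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    bits = (blocks > blocks.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)


def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")


rng = np.random.default_rng(1)
photo = rng.integers(0, 256, size=(256, 256)).astype(float)   # stand-in "photo"

# Find the block whose mean sits just below the overall block mean...
hash_size = 8
bh = bw = 256 // hash_size
blocks = photo.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
gaps = blocks.mean() - blocks
candidates = np.where(gaps > 0, gaps, np.inf)     # consider only blocks below the mean
i, j = np.unravel_index(np.argmin(candidates), candidates.shape)

# ...and brighten it just enough to cross the boundary. The shift is typically a
# fraction of one grey level out of 255, invisible to the eye.
delta = gaps[i, j] * (hash_size * hash_size) / (hash_size * hash_size - 1) + 0.01
tweaked = photo.copy()
tweaked[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw] += delta

print("max pixel change:", delta)                                       # typically < 1
print("bits flipped:", hamming_distance(average_hash(photo), average_hash(tweaked)))
```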


