Apple releases FAQ downplaying privacy concerns over new ‘child protection system’ as watchdogs warn of overreach

Apple pushed back against criticism that its new anti-child sexual abuse detection system could be used for “backdoor” surveillance. The company insisted it won’t “accede to any government’s request to expand” the system’s scope.

The plan, announced last week, includes a feature that identifies and blurs sexually explicit images received by children using Apple's 'Messages' app, and another that notifies the company if it detects known Child Sexual Abuse Material (CSAM) in photos uploaded to iCloud.

The announcement sparked instant backlash from digital privacy groups, who said it “introduces a backdoor” into the company’s software that “threatens to undermine fundamental privacy protections” for users, under the guise of child protection. 

In an open letter posted on GitHub and signed by security experts, including former NSA whistleblower Edward Snowden, the groups condemned the “privacy-invasive content scanning technology” and warned that the features have the “potential to bypass any end-to-end encryption.”

After an internal memo reportedly referred to the criticism as the “screeching voices of the minority,” Apple on Monday released an FAQ about its ‘Expanded Protections for Children’ system, saying it was designed to apply only to images uploaded to iCloud and not the “private iPhone photo library.” It also will not affect users who have iCloud Photos disabled.

The system, it adds, only works with CSAM image hashes provided by the National Center for Missing and Exploited Children (NCMEC) and “there is no automated reporting to law enforcement, and Apple conducts human review before making a report to NCMEC.”

‘Image hashes’ refers to the use of algorithms to assign a unique ‘hash value’ to an image – likened to a ‘digital fingerprint’ – making it easier for platforms to identify and remove content deemed harmful.
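To illustrate the general idea only – Apple's system reportedly uses a *perceptual* hash called NeuralHash, not the cryptographic hash shown here – a hash function turns an image's bytes into a short fixed-length fingerprint, and a database of fingerprints of known material can then be matched without storing the images themselves. A minimal sketch, with a hypothetical hash database:

```python
import hashlib

def image_hash(data: bytes) -> str:
    # Cryptographic hash: identical bytes always yield the identical fingerprint.
    # (A perceptual hash like NeuralHash would also match visually similar
    # images; SHA-256 is used here purely as a simplified stand-in.)
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of hashes of known harmful images (illustrative bytes)
known_hashes = {image_hash(b"example-known-image-bytes")}

def matches_known(data: bytes) -> bool:
    # Only the fingerprint is compared, never the image content itself.
    return image_hash(data) in known_hashes

print(matches_known(b"example-known-image-bytes"))  # True
print(matches_known(b"some-other-photo"))           # False
```

Because only hashes are exchanged, platforms can share such databases (as NCMEC does) without distributing the underlying images.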

While Apple insists it screens for image hashes “validated to be CSAM” by child safety organizations, digital rights watchdog the Electronic Frontier Foundation (EFF) had previously warned that this would lead to “mission creep” and “overreach.”

“One of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed to create a database of ‘terrorist’ content that companies can contribute to and access for the purpose of banning such content,” the non-profit warned last week, referring to the Global Internet Forum to Counter Terrorism (GIFCT).

Apple countered that, because it “does not add to the set of known CSAM image hashes,” and because the “same set of hashes” is stored in the operating system of every iPhone and iPad, it is “not possible” to use the system to target specific users by “injecting” non-CSAM images into it.

“Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it,” the company vowed in its FAQ.

“We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future,” it added.

However, the company has already been criticized for using “misleading phrasing” to avoid explaining the potential for “false positives” in the system – the “likelihood” of which Apple claims is “less than one in one trillion [incorrectly flagged accounts] per year”.
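A back-of-envelope calculation shows why critics press this point: even taking Apple's stated rate at face value, the expected number of wrongly flagged accounts depends on the size of the user base. The account figure below is an assumption for illustration, not a number Apple has published:

```python
# Apple's stated rate: "less than one in one trillion
# [incorrectly flagged accounts] per year"
p_false_flag_per_account = 1e-12

# Hypothetical user base of 1 billion iCloud accounts (illustrative only)
accounts = 1_000_000_000

# Expected wrongly flagged accounts per year at the stated rate
expected_false_flags_per_year = p_false_flag_per_account * accounts
print(expected_false_flags_per_year)  # 0.001
```

At the claimed rate the expected count stays tiny even at this scale; the dispute is over whether the per-account figure itself can be independently verified.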

Like this story? Share it with a friend!
