Apple Responds To Device Scanning Fears, But Questions Still Remain

Privacy watchdogs around the world were thrown into a frenzy when Apple announced plans to begin scanning its devices for images relating to child abuse and exploitation. The tech giant has responded to these fears, claiming that under no circumstances will it expand the program to give governments access to any data found.

Safeguards in place, but what about precedents?

Apple rejected claims that it would accede to government pressure and make any data available to the powers that be. It maintains instead that the software will seek out only sexual abuse imagery, using a list of banned images provided by the National Center for Missing & Exploited Children and other child safety organisations.

The list is final, Apple insists, and the same list will be used to scan every iPhone and iPad, meaning individual users cannot be targeted. Furthermore, Apple stated: “We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future.”

The “however” to that statement comes in the shape of concessions Apple has made in the past. It removed thousands of apps from the App Store in China to appease the government there, and moved Chinese users’ data to state-run telecom servers. In countries where encrypted phone calls are not allowed, it sells iPhones without FaceTime.

Apple says that no information will be shared with the company itself or with law enforcement, but it has yet to explain how the tool’s focus will remain exclusively on sexually explicit images.

How it’s going to work (allegedly)

The new tools include two features. One is called “communication safety” and uses on-device machine learning to identify and blur sexually explicit images received by children in the Messages app. It will also send a notification to a guardian if a child aged 12 or younger views or sends such images.
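
Apple has not published an implementation for this feature, but the flow it describes can be pictured roughly as follows. This is a minimal Swift sketch only: the types, the `isExplicit` classifier, and the helper functions are all hypothetical stand-ins, not Apple’s actual API.

```swift
import Foundation

// Hypothetical sketch of the "communication safety" flow described above.
// Every type and function here is an assumed stand-in for illustration.

struct IncomingImage {
    let pixelData: Data
}

struct ChildAccount {
    let age: Int
    let guardianNotificationsEnabled: Bool
}

// Stand-in for the on-device machine-learning classifier (assumed).
func isExplicit(_ image: IncomingImage) -> Bool {
    // A real implementation would run an on-device ML model here.
    return false
}

func blurForDisplay(_ image: IncomingImage) {
    // Obscure the image with a blur before it is shown in Messages.
}

func notifyGuardian(of account: ChildAccount) {
    // Send the parental notification described by Apple.
}

func handleIncomingImage(_ image: IncomingImage, for account: ChildAccount) {
    guard isExplicit(image) else { return } // Non-explicit images display normally.
    blurForDisplay(image)
    // Per Apple's description, guardians are notified only for children
    // aged 12 or younger.
    if account.age <= 12 && account.guardianNotificationsEnabled {
        notifyGuardian(of: account)
    }
}
```

The key design point is that everything above runs on the device itself; nothing about the classification leaves the phone, which is the basis of Apple’s privacy claim for this feature.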

The second tool is programmed to detect known “banned” images when they are uploaded to iCloud. Apple will be notified if such images are detected, and authorities will be alerted once Apple verifies that the material exists.
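
Again, Apple has not released the matching code. The sketch below shows the general shape of the logic as described, assuming a simple perceptual-hash lookup against the banned list; the `PerceptualHash` format, the `loadBannedHashDatabase` helper, and the threshold value are all assumptions, not Apple’s system.

```swift
import Foundation

// Hypothetical sketch of the iCloud matching step described above, assuming
// a plain perceptual-hash lookup. Apple's real system adds cryptographic
// protections and human review; names and values here are illustrative only.

typealias PerceptualHash = String

// Stand-in for the banned-image hash list supplied by child-safety
// organisations (assumed format).
func loadBannedHashDatabase() -> Set<PerceptualHash> {
    return []
}

let knownBannedHashes = loadBannedHashDatabase()

// Assumed: a single match is not enough; an account is flagged only
// after the number of matches crosses a threshold.
let matchThreshold = 10

func shouldNotifyApple(uploadedImageHashes: [PerceptualHash]) -> Bool {
    let matchCount = uploadedImageHashes.filter { knownBannedHashes.contains($0) }.count
    // Only past the threshold is Apple notified; authorities are alerted
    // only after Apple verifies that the material exists.
    return matchCount >= matchThreshold
}
```

Note that matching against a fixed list of known images is what distinguishes this tool from the Messages feature: it does not attempt to classify new content, only to recognise hashes already in the database.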

Now that the system has been explained in more detail, are you more comfortable with Apple’s proposal?
