Apple has always taken pride in its security, and the iPhone is considered one of the safest smartphones on the market for data privacy. In line with that reputation, the tech giant recently announced a new feature that detects child exploitation images and videos as part of its campaign to protect children and fight child sexual abuse.
As good as the intention sounds, some security advocates are worried it could actually affect users' data privacy. Here's what we know about the new feature so far.
New Apple iOS 15 Feature to Strengthen Child Protection
Apple announced that it has expanded its protections for children to limit the spread of Child Sexual Abuse Material (CSAM). The tech company aims to protect children from predators who use any means of communication to recruit and exploit them.
Three changes will be rolling out later this year to help with this goal, according to The Verge.
The first of the three changes involves Apple's Search and Siri. If a user searches for anything related to child sexual abuse, Apple will intervene to explain that interest in such topics is harmful and problematic, and will point them to partner resources where they can get help.
Apple is also providing additional resources to help both children and parents stay safe online and get help with unsafe situations.
The two other updates are the ones receiving more scrutiny and backlash.
One change adds a parental control option to Messages, obscuring sexually explicit pictures for users under 18 and sending parents an alert if a child 12 or under views or sends these types of photos, The Verge explained.
The other feature scans images and videos stored in iCloud Photos to find CSAM and report it to Apple moderators, who can then pass it on to the National Center for Missing and Exploited Children (NCMEC). The feature is specifically designed to protect user privacy while finding illegal content; however, critics think it could become a security backdoor.
Is Apple Snooping Through Your Messages and Photos?
Critics have raised concerns about whether these features could be manipulated and abused.
Apple has stood firm, saying it designed the features to keep the tools from being twisted, CNET said. The system does not actually scan the photos themselves; rather, it checks for matches between hash codes. The hash database is stored on the phone, so nothing is sent to the cloud for this check. Apple also noted that because the matching is done on the device, security researchers can more easily audit how it works.
If users are concerned about their baby's bathtub photos accidentally tripping the system, Apple said the likelihood of a false positive is less than one in 1 trillion per year. The hash codes are checked against a database of known child exploitation videos and photos, and should a match be found, the flagged content is reviewed by an actual human before a report is made to NCMEC, Apple explained.
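Conceptually, the on-device check amounts to comparing a fingerprint of each photo against a local set of known fingerprints. The short Swift sketch below illustrates only that idea: Apple's actual system relies on its NeuralHash perceptual hash and a private set intersection protocol rather than the plain SHA-256 set lookup shown here, and the function and database names are hypothetical.

```swift
import Foundation
import CryptoKit

// Hypothetical on-device database of known-image fingerprints (hex strings).
// In Apple's system these entries are derived from NCMEC-supplied material;
// only hashes ship with the OS, never the images themselves.
let knownHashes: Set<String> = [
    // entries omitted in this sketch
]

/// Returns true if the photo's fingerprint appears in the local database.
/// The comparison happens entirely on the device; no image data is uploaded
/// at this step.
func matchesKnownDatabase(_ imageData: Data) -> Bool {
    let digest = SHA256.hash(data: imageData)                     // fingerprint the bytes
    let hex = digest.map { String(format: "%02x", $0) }.joined()  // hex-encode for lookup
    return knownHashes.contains(hex)
}

// Example: arbitrary bytes standing in for a photo; matches nothing here.
print(matchesKnownDatabase(Data([0x00, 0x01, 0x02])))   // false
```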
As for Messages, the feature is designed so that Apple never has access to the messages themselves, but it does promptly alert the designated parent should anything come up.
Apple maintains that it always keeps its users' privacy in mind. The new features are Apple's answer to the child exploitation problem happening globally. The company has received praise for the updates, even as some critics argue the system could still be exploited.
These updates will be rolled out later this year across the Apple ecosystem with the new iOS 15, iPadOS 15, watchOS 8, and macOS Monterey.