“Privacy is a fundamental right”: Apple goes all out to prove it’s not trying to steal your data
Privacy has become a central part of Apple’s identity in recent years, arguably catalysed by the company’s clash with the FBI in 2016 over access to the San Bernardino gunman’s iPhone.
Against a backdrop of continued tensions between governments and tech companies over encryption, and developing questions around machine learning and facial recognition, Apple’s stance has solidified into a mantra: Privacy is a fundamental human right.
Apple has now made a further move to emphasise its approach to user privacy, substantially overhauling the privacy section of its website. The company has also published a white paper specifically dedicated to Face ID security, providing details about the facial recognition system’s abilities and limitations.
For context, when Apple first revealed the iPhone X, a number of voices questioned the device’s security measures. US senator Al Franken, for example, asked Tim Cook to “offer clarity” on how Face ID would impact users’ privacy. Others questioned the extent to which third-party apps could use Apple’s technology to their own ends.
In the white paper, Apple answers the latter question: third-party apps can ask the user to authenticate themselves using Face ID, but the apps receive only a yes-or-no result telling them whether that authentication succeeded. They can’t access Face ID itself or any data associated with a person’s face. This suggests that third-party apps won’t be able to use Face ID to build their own biometric tools.
In terms of security, we already knew the neural network behind Face ID works in a secure enclave in the phone’s A11 Bionic chip, and that enrolled infrared images and mathematical representations of a user’s face do not leave the device. The white paper goes into more detail about how this setup is able to respond to changes in a person’s face – such as growing a beard or changing makeup.
Face ID augments its stored mathematical representation over time: “If Face ID fails to recognise you, but the match quality is higher than a certain threshold and you immediately follow the failure by entering your passcode, Face ID takes another capture and augments its enrolled Face ID data with the newly calculated mathematical representation.
“This new Face ID data is discarded after a finite number of unlocks and if you stop matching against it. These augmentation processes allow Face ID to keep up with dramatic changes in your facial hair or makeup use, while minimising false acceptance.”
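The augmentation policy the white paper describes can be sketched as a simple state machine. The following is a toy model, not Apple’s code: the thresholds, the capture representation and the unlock budget (`MAX_UNUSED_UNLOCKS`) are hypothetical placeholders, since Apple does not publish those values.

```python
MATCH_THRESHOLD = 0.90      # score needed to unlock (hypothetical value)
AUGMENT_THRESHOLD = 0.75    # near-miss quality floor (hypothetical value)
MAX_UNUSED_UNLOCKS = 50     # unlocks before unused augmented data expires (hypothetical)

class FaceIDStore:
    """Toy model of the augmentation policy described in the white paper."""

    def __init__(self):
        # Augmented representations stored alongside the enrolled data,
        # each paired with a remaining-unlocks lifetime.
        self.augmented = []  # list of [representation, remaining_unlocks]

    def on_unlock_attempt(self, score, capture, passcode_entered=False):
        """Return True if the device unlocks via Face ID."""
        if score >= MATCH_THRESHOLD:
            return True
        # Near miss: if the user immediately follows the failure with the
        # passcode, the new mathematical representation is stored.
        if score >= AUGMENT_THRESHOLD and passcode_entered:
            self.augmented.append([capture, MAX_UNUSED_UNLOCKS])
        return False

    def on_successful_unlock_with(self, representation):
        """Refresh augmented data you still match against; age out the rest."""
        for entry in self.augmented:
            if entry[0] == representation:
                entry[1] = MAX_UNUSED_UNLOCKS  # still matching: keep it alive
            else:
                entry[1] -= 1                  # unused this unlock
        self.augmented = [e for e in self.augmented if e[1] > 0]
```

The key property this models is the one the paper emphasises: a near-miss alone never loosens the match, because augmentation only happens after the user proves their identity with the passcode, and augmented data that stops matching is eventually discarded.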
The paper notes that the system’s neural networks may be updated over time, and that the iPhone X can automatically re-process stored enrollment images against the updated networks – meaning users don’t need to re-enroll Face ID. It also emphasises that the neural networks were trained on a “representative group of people” to account for different genders, races and ages. Apple has not elaborated further, but the comment was likely a response to claims, dating back to 2015, that the Apple Watch’s sensors did not work reliably on tattooed or darker skin.
While Apple is being scrupulous about the limits of Face ID, Harmit Kambo, campaigns director of advocacy group Privacy International, told Alphr that citizens “need to consider the possible consequences of normalising the use of facial recognition for authenticating identity.
“The technology is developing at a breakneck speed, before we have had time to consider the profound ethical and legal questions about how we should allow our digitised faces to be recognised by our electronic devices, and by companies, by the police, and by CCTV in public spaces.
“Facial recognition technology is increasingly turning us into walking ID cards, so we should be very wary of gaining a small amount of digital convenience, when we face a serious risk of losing so much of our privacy and freedom.”
Differential privacy technology and tracking prevention
Aside from Face ID, another notable part of Apple’s new privacy push is the wider rollout of its so-called differential privacy technology. This comes with the release of macOS High Sierra, which includes an update to Safari that allows the browser to quietly collect your data to identify problematic websites.
The aim of the data capture is to detect sites that drain too much power or crash the browser by using too much memory. Apple’s privacy-savvy version of this system gathers the information without taking any personally identifying data, meaning it should be able to create large-scale reports on memory-hogging sites without putting any individual user’s privacy at risk.
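The intuition behind differential privacy can be illustrated with the classic randomised-response mechanism: each device adds noise to its own report before sending it, so no single report reveals what that user actually did, yet the noise cancels out in aggregate. This is a minimal sketch of the general idea, not Apple’s actual algorithm (which uses more sophisticated sketching techniques); the truth probability `p_truth` is an arbitrary illustrative value.

```python
import random

def randomized_response(saw_problem: bool, p_truth: float = 0.75) -> bool:
    """Report whether a site caused a problem, with plausible deniability.

    With probability p_truth the device answers honestly; otherwise it
    answers uniformly at random, so any individual report could be noise.
    """
    if random.random() < p_truth:
        return saw_problem
    return random.random() < 0.5

def estimate_true_rate(reports, p_truth: float = 0.75) -> float:
    """Invert the noise over many reports.

    observed_rate = p_truth * true_rate + (1 - p_truth) * 0.5,
    so solving for true_rate recovers the population-level statistic.
    """
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth
```

Run over thousands of devices, the aggregate estimate converges on the true rate of problematic sites even though no single report is trustworthy on its own – which is exactly the trade-off that lets Apple build large-scale reports without collecting identifying data.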
As TechCrunch notes, there’s no separate opt-in for the new data collection. It is grouped in with the rest of Apple’s device analytics, which can be switched on via a single opt-in box.
The Safari 11 update additionally comes with measures that squash cross-site cookie tracking. Apple calls this Intelligent Tracking Prevention (ITP), and it uses machine learning to classify tracking domains and impose a strict 24-hour limit on how long their persistent cookies can be used across sites.
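The core of the 24-hour rule is simple to express: once a domain is classified as a tracker, its cookies only work in a cross-site context within a day of the user’s last direct interaction with that domain. A minimal sketch of that policy check, with hypothetical function names (this is not Safari’s implementation):

```python
from datetime import datetime, timedelta

# Window during which a classified tracker's cookies still work cross-site.
COOKIE_WINDOW = timedelta(hours=24)

def can_use_cookies_cross_site(last_interaction: datetime, now: datetime) -> bool:
    """Return True if a tracking domain's cookies are still usable in a
    third-party context, per the 24-hour rule described in the article."""
    return now - last_interaction <= COOKIE_WINDOW
```

The practical effect is that a site the user actually visits keeps working normally, while an ad network embedded across many sites loses its ability to follow the user around once the window lapses.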
Advertisers have, predictably, been none too pleased with the move. Six leading agencies recently came together to accuse Apple of a “heavy-handed approach” that could affect the “infrastructure of the modern internet”.
Apple responded saying: “Ad-tracking technology has become so pervasive that it is possible for ad tracking companies to recreate the majority of a person’s web browsing history. This information is collected without permission and is used for ad re-targeting, which is how ads follow people around the internet.”
Regardless of whether Apple’s growing emphasis on privacy is an elaborate PR move, it is indeed having an effect on the internet’s infrastructure, and one that is rippling into a number of other industries.
At a time when advances in machine learning are heralding whole new levels of data gathering and analysis, Apple has identified privacy as a marketable quality – one that it intends to press.