History shows that every major overhaul or step forward in technology has raised questions about the integrity of user data. In most scenarios, critical information is analyzed on the wire, supposedly anonymously, to improve services, or in some cases data is deleted from the servers as soon as the required result is achieved. Privacy has always been a core concern of any developing technology, primarily because these advancements demand user patterns to improve and sustain better overall outcomes, and additionally because of the recent surge in digital theft.
At their I/O event this year, Google mentioned that privacy violations have been kept minimal in their newest products by not individualizing the data collected. In other words, neither a third party nor Google itself should be able to link specific content back to a particular user. However, they're still collecting a huge amount of data to refine their services, including advertisements. Apple, on the other hand, has always been careful while addressing these privacy and security concerns. They've previously achieved a high degree of data obscurity mostly by confining their ecosystem with a strict set of conditions and limitations. But in order to compete and stand strong in the market, machine learning and artificial intelligence need to be core factors, and Apple acknowledged those aspects at their developer conference a few days back. So how are they dealing with privacy and user data aggregation amid these modern evolutions?
Understanding Differential Privacy and Apple’s Implementations
Apple has adopted a complex statistical method known as “differential privacy”. Craig Federighi, Senior VP of Software Engineering at Apple, in fact paused while discussing new updates to walk through the abstract mathematics underlying this approach, leaving almost every attendee awestruck. Differential privacy, at its core, appends random characters or “unnecessary noise” to individual entries, which prevents Apple from tracing a specific account based on, for instance, the number of times the poop (or chocolate, sure) emoji has been used. The whole concept does, however, involve gathering a large amount of data to scrutinize usage and train deep neural networks to function considerably better over time. Apple has always been reluctant to collect user data, but it has to, given the computing advancements now being embedded in its systems and applications. Differential privacy mitigates these risks to some extent by never letting servers directly store an account’s personal dataset.
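Apple hasn’t published the exact mechanism it uses, but the noise-before-collection idea can be illustrated with classic randomized response, one of the standard building blocks of local differential privacy. In this sketch (all names are illustrative, not Apple’s API), each device flips its true answer with some probability before reporting it, yet the aggregate rate can still be recovered from many noisy reports:

```python
import random

def randomized_response(true_value: bool, p: float = 0.75) -> bool:
    """Report the true bit with probability p; otherwise report a fair
    coin flip. Any single report is deniable, so the server cannot
    trace an individual answer back to a specific account."""
    if random.random() < p:
        return true_value
    return random.random() < 0.5

def estimate_true_rate(reports: list, p: float = 0.75) -> float:
    """Invert the noise over many reports:
    E[observed rate] = p * q + (1 - p) * 0.5, solved for the true rate q."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p) * 0.5) / p
```

Run over a large population, the estimate converges to the true usage rate (say, of a particular emoji) even though every individual report may be a lie.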
How Is iOS 10 Utilizing It?
There’s a notable risk involved, though: the more queries you submit, the more information you’re sending out. Because Apple collects individual data before camouflaging it, even one tiny leak can ruin the flow. Moreover, more complex algorithms will demand considerably more noise, leading to a significant trade-off between accuracy and privacy. Currently, differential privacy is rooted in four crucial iOS 10 areas: iMessage enhancements, predictive text, Spotlight deep link suggestions, and Lookup Hints in Notes. Do note that the redesigned Photos app does not use differential privacy for its advanced computer vision features.
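The noise-versus-accuracy trade-off can be made concrete with the textbook Laplace mechanism (again a generic illustration, not Apple’s disclosed implementation): the noise added to a count scales as sensitivity divided by the privacy parameter epsilon, so stronger privacy (smaller epsilon) directly means a less accurate answer.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample a Laplace(0, scale) variate via inverse-CDF sampling."""
    u = random.random() - 0.5
    sign = -1.0 if u < 0 else 1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Laplace mechanism: noise scale = sensitivity / epsilon.
    A smaller epsilon (stronger privacy guarantee) injects more noise,
    so the released count drifts further from the truth."""
    return true_count + laplace_noise(sensitivity / epsilon)
```

Averaged over many releases, the error at epsilon = 0.1 is roughly a hundred times larger than at epsilon = 10, which is exactly the trade-off developers have to balance.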
Balancing the Right Amount of Privacy and Accuracy
iMessage has been dramatically uplifted with differential privacy safeguards applied to “emoji suggestions”, which will have to accumulate a sizeable chunk of data to churn out better predictions. Being crowdsourced, these predictions will depend on how other people are replacing words with emojis. The same principle applies to the smarter predictive text and Lookup Hints in the Notes app. Spotlight, on the other hand, works at a global level, and developers will be able to submit user data, making privacy protection an extremely crucial element. Applications can make use of these submissions to come up with, say, better music recommendations, but those will be based on aggregated data instead of what you personally have been listening to, which is the case in iOS 9. Developers, however, will have to work within a formula called the “privacy budget” that balances accuracy and privacy. If a developer opts for stronger privacy, the end result will be less accurate, and vice versa. They need to follow Apple’s stated guidelines to execute machine learning beneficially. To get a better understanding of this concept, head over to this link.
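Apple hasn’t detailed how its privacy budget is accounted for, but the general idea in the differential privacy literature is sequential composition: every query spends some epsilon, the spends add up, and once the total budget is exhausted further queries must be refused. A toy sketch of that bookkeeping (class and method names are hypothetical):

```python
class PrivacyBudget:
    """Toy budget tracker using basic sequential composition:
    total privacy loss is the sum of epsilon spent per query.
    Apple's actual accounting scheme is not public; this only
    illustrates why more queries erode the privacy guarantee."""

    def __init__(self, total_epsilon: float):
        self.remaining = total_epsilon

    def spend(self, epsilon: float) -> bool:
        """Attempt to charge a query against the budget.
        Returns False (query refused) once the budget is exhausted."""
        if epsilon > self.remaining:
            return False
        self.remaining -= epsilon
        return True
```

This is also why the trade-off is unavoidable: a developer who wants many accurate (high-epsilon) queries burns through the budget quickly, while low-epsilon queries preserve the budget at the cost of noisier answers.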
That’s fine, but is it any good?
The prime question that remains, though, is “how good will this all be?” That’s what really matters to the user in the end. If it doesn’t perform satisfactorily within that set of constraints, there’s no point in implementing these algorithms. The short answer is that it won’t work as flawlessly as you might expect initially, but things will eventually get better as more users begin submitting queries. Apple is still miles behind companies like Google, Facebook, or even Amazon when it comes to collecting user data. The Cupertino giant is progressing but just isn’t ready yet to sacrifice privacy in favor of modern advancements. They’re adjusting tactics and coming up with better solutions instead of going all out, which is a good thing. It will definitely be tough for them to compete, but in the hunt for success, they might cook up the next best recipe with machine learning and privacy as ingredients.