Apple's Mysterious New 'Trust Score' For iPhone Users Leaves Many Unanswered Questions

Like Facebook (and the Chinese Communist Party) before it, Apple is now assigning users of its products a "trust" score based on their call and email habits, The Sun reports. The new ratings were added as part of the latest iOS 12 update, as VentureBeat explains.

Apple’s promise of transparency regarding user data means that any new privacy policy update might reveal that it’s doing something new and weird with your data.

[...]

Alongside yesterday’s releases of iOS 12, tvOS 12, and watchOS 5, Apple quietly updated some of its iTunes Store terms and privacy disclosures, including one standout provision: It’s now using an abstracted summary of your phone calls or emails as an anti-fraud measure.

The provision appears in the iTunes Store and privacy windows of iOS and tvOS devices. An Apple spokesperson clarified that the score is meant to stop unauthorized iTunes purchases, but as VentureBeat explains, the trust score is unusual for several reasons - not least of which is that users can't make phone calls or send emails on Apple TVs. Indeed, the only thing Apple customers can say for sure is that the company's disclosure leaves many unanswered questions.


Aside from the obvious inconsistencies surrounding the Apple TV, it's also unclear how recording and tracking the number of calls or emails made from an iPhone, iPad, or iPod touch will help Apple verify a user's identity. One would think, as VentureBeat points out, that Apple could simply rely on serial numbers or SIM cards. Perhaps the company feels that verifying the device isn't enough, and that it needs to go further to make sure the person using the device is its actual owner. Still, exactly how the company will go about accomplishing this is suspiciously unclear.

And what if a user wants to review their trust score? Well, that's too bad, because Apple will refuse to disclose it, even if federal investigators demand it.

Facebook's trust score is intended to separate "credible" reports of flagged posts from the rest. In Apple's case, the trust score is much more nebulous, which raises the question: Is this just a ruse to allow Apple to harvest more valuable user data that it can then monetize without triggering a public backlash? Or is there something more nefarious at play?