Apple Should Be More Transparent About Security

Every few months, it seems, Apple gets embroiled in a security scandal of one sort or another.

It dodged Heartbleed but was hit by the very embarrassing ‘goto fail’ bug. It was called out for not adequately documenting diagnostic tools that could be used to collect data from user devices. Late last year, researchers showed off a method for siphoning data via the charging port of iOS devices. A year ago, a researcher went public with a method for accessing the Apple IDs of developers after, he says, he got no response from Apple. And then there was this week’s celebrity photo hack, which might have been prevented by better securing iCloud backups.

In each of these cases, Apple fixed the vulnerability, released a support note or patched the bug. But in almost all of them, and many others over the years, the company was as opaque as possible about the details of the issue, reluctant to acknowledge it publicly and largely unresponsive to independent security researchers. That opacity leads to misunderstanding and FUD about the extent of the problems and the risks they pose to users.

This needs to change, or the scandals will keep coming.

Apple as a company has shown itself to be deeply concerned with the privacy of its users. It frequently goes against common business wisdom to protect user information from third parties; one example is its refusal to share subscriber information with the publishers of iPad magazines. Apple sources, both past and present employees, have never shared anything with me that indicates the company is interested in anything less than complete user privacy.

But though the company appears on many levels to have the best interests of users at heart, it does not appear to be expending the same kind of deep, detail-oriented effort on security that it is famous for in other product areas. Apple will obsess over the degree of chamfer on a button, but somehow shoots itself in the foot with silly security mistakes over and over again.

I can completely appreciate that Apple doesn’t want to make snap judgments about possible security vulnerabilities. Going public with an issue before there is a fix is a real way to cause more problems than you solve; that’s where the concept of responsible disclosure comes in. But Apple is also not taking full advantage of the resources available to it, namely independent security researchers.

Many of the security researchers I speak to are frustrated by the lack of transparency and communication when it comes to reporting vulnerabilities to Apple, and to getting them fixed.

“I would like to see Apple open the lines of communication between their security engineers and security experts in the field,” says researcher Jonathan Zdziarski. “The only avenue many security researchers feel like they have to get big issues addressed is the public forum. The largest company in the world should have an open communication with the experts who evaluate their software for third parties and governments.”

One practical step Apple could take is to either participate in or establish its own paid bug bounty program — in a public or vetted format. This would compensate hackers and security researchers for finding and reporting bugs to Apple. Many of Apple’s contemporaries already do this, including Microsoft, Yahoo and Google.

Currently, Apple publicly credits people who report vulnerabilities; recently it even began acknowledging members of iPhone jailbreak teams whose discoveries made those jailbreaks possible. But it could certainly afford to incentivize those researchers, or at the very least develop a way to communicate with them more openly and effectively.

And it’s not just external security efforts that Apple should work to improve. The ‘iBrute’ hack, which could fire passwords in rapid succession at one of Apple’s Find My iPhone login interfaces because the endpoint never locked accounts out after repeated failures, is exactly the sort of thing a hard-core security audit would likely have caught. The same goes for the ‘goto fail’ SSL bug, which left users vulnerable to traffic snooping for months.
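The ‘goto fail’ bug, in particular, shows just how small these mistakes can be. The widely circulated excerpt from Apple’s published sslKeyExchange.c, condensed below with elisions marked, contains a single duplicated line that unconditionally jumps past the final signature check while the error code still reads success:

```c
/* Condensed from Apple's published sslKeyExchange.c ("goto fail"). */
static OSStatus
SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa,
                                 SSLBuffer signedParams,
                                 uint8_t *signature, UInt16 signatureLen)
{
    OSStatus err;
    /* ... hash context setup elided ... */

    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;   /* the duplicated line: always jumps, with err == 0 */
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;   /* never reached */

    /* ... the actual signature verification, also never reached ... */

fail:
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;      /* err is still 0: a forged signature "verifies" */
}
```

The iBrute hole is just as easy to reason about. Here is a minimal sketch of the kind of lockout the Find My iPhone endpoint reportedly lacked; this is a hypothetical illustration, not Apple’s code, and the names and thresholds are mine:

```c
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Hypothetical sketch: refuse further login attempts for an account
 * once too many failures pile up inside a time window. The values
 * below are illustrative, not anything Apple has published. */
#define MAX_ATTEMPTS 5
#define WINDOW_SECS  300

typedef struct {
    int    failures;      /* failed attempts in the current window */
    time_t window_start;  /* when the current window began */
} LoginState;

/* Returns true if another attempt should be allowed right now. */
static bool attempt_allowed(LoginState *s)
{
    time_t now = time(NULL);
    if (now - s->window_start > WINDOW_SECS) {
        s->failures = 0;  /* window expired: reset the counter */
        s->window_start = now;
    }
    return s->failures < MAX_ATTEMPTS;
}

/* Call after every failed password check. */
static void record_failure(LoginState *s) { s->failures++; }

int main(void)
{
    LoginState s = { 0, time(NULL) };
    for (int i = 0; i < 8; i++) {
        if (!attempt_allowed(&s)) {
            puts("locked out");   /* guesses 6-8 land here */
            continue;
        }
        record_failure(&s);       /* pretend every guess fails */
        puts("guess allowed");
    }
    return 0;
}
```

Neither fix is exotic. Both bugs are exactly the kind of thing an internal audit, or a well-incentivized outside researcher, tends to catch.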

Why?

There have been many examples of Apple not heeding early warnings from the security community about possible vulnerabilities. We should expect better from Apple, and Apple should expect better of itself. This is a company whose devices are used by hundreds of millions of people, including government employees and world leaders.

The question I’ve been asking myself in the months since the SSL vulnerability debacle has been ‘why?’ Why is a company that is generally so well-rounded operationally and that, like it or not, produces extremely well-liked and complex devices so bad at communicating about security?

The answer I’ve come up with, and this is just a personal theory, is that Apple thinks about security communications the same way it thinks about product communications. In other words, it plays its cards incredibly close to the chest by default. Those tactics have served it well in the consumer products arena, creating a frenzy of attention around the release of new devices and services. And that’s great; I don’t mind a little mystery around products as a consumer, even though my job as a reporter is to figure out what Apple could do next and decide whether that’s important enough to talk about publicly.

But in security, this kind of ivory tower comms strategy is a losing game, especially as smartphones become increasingly information-rich repositories of our personal lives. Security will become a feature as important as any other widget on our future devices, whichever company they come from. As an industry leader, Apple should re-evaluate the way it handles security, both internally and externally. Be open, be communicative, be honest and we’ll all be better off.