Ray Ozzie, a former Chief Technical Officer at Microsoft, has proposed a solution to the FBI's supposed “going dark” problem — the use of secure encryption tools by criminals and terrorists. His solution is a cryptographic back door locked with a private key that the phone manufacturer would maintain and guard with the same fanatically obsessive security with which Apple or Microsoft guards its own update-signing keys.
“Cracking the Crypto War”
Steven Levy, Wired, April 25, 2018
Say the FBI needs the contents of an iPhone. First the feds have to actually get the device and the proper court authorization to access the information it contains — Ozzie's system does not allow the authorities to remotely snatch information. With the phone in its possession, they could then access, through the lock screen, the encrypted PIN and send it to Apple. Armed with that information, Apple would send highly trusted employees into the vault where they could use the private key to unlock the PIN. Apple could then send that no-longer-secret PIN back to the government, which could use it to unlock the device.
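As I understand Levy's description, the flow can be modeled schematically. The toy below uses textbook RSA with tiny primes purely to make the roles concrete — the function names are my own, and a real system would keep the private key in hardened hardware, not in a Python variable:

```python
# Toy model of the escrow flow Levy describes (schematic only).
# Assumption: the device holds only the manufacturer's PUBLIC key;
# the private key never leaves the vault.

# Tiny textbook-RSA keypair standing in for Apple's vault key.
p, q = 61, 53
n = p * q                      # public modulus (baked into every phone)
e = 17                         # public exponent (baked into every phone)
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (stays in the vault)

def device_escrow(pin: int) -> int:
    """The phone encrypts its PIN under the manufacturer's public key."""
    return pow(pin, e, n)

def vault_unlock(escrowed: int) -> int:
    """Inside the vault, the private key recovers the PIN."""
    return pow(escrowed, d, n)

pin = 1234
blob = device_escrow(pin)      # what law enforcement reads off the lock screen
assert blob != pin             # useless without the vault's private key
recovered = vault_unlock(blob) # the step requiring trusted Apple employees
assert recovered == pin
```

Note that every security property of the scheme hangs on the secrecy of `d` — which is exactly the concentration of risk that Graham and Green attack below.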
Robert Graham points out several vulnerabilities in this scheme:
“No, Ray Ozzie Hasn't Solved Crypto Backdoors”
Robert Graham, Errata Security, April 25, 2018
The vault doesn't scale
… The more people and the more often you have to touch the vault, the less secure it becomes. We are talking thousands of requests per day from 100,000 different law enforcement agencies around the world. We are unlikely to protect this against incompetence and mistakes. We are definitely unable to secure this against deliberate attack. …
Cryptography is about people more than math
… How do we know the law enforcement person is who they say they are? How do we know the “trusted Apple employee” can't be bribed? How can the law enforcement agent communicate securely with the Apple employee?
You think these things are theoretical, but they aren't. …
Locked phones aren't the problem
Phones are general purpose computers. That means anybody can install an encryption app on the phone regardless of whatever other security the phone might provide. The police are powerless to stop this. Even if they make such encryption [a] crime, then criminals will still use encryption.
That leads to a strange situation that the only data the FBI will be able to decrypt is that of people who believe they are innocent. Those who know they are guilty will install encryption apps like Signal that have no backdoors.
In the past this was rare, as people found learning new apps a barrier. These days, apps like Signal are so easy even drug dealers can figure out how to use them. …
The FBI isn't necessarily the problem
… Technology is borderless. A solution in the United States that allows “legitimate” law enforcement requests will inevitably be used by repressive states for what we believe would be “illegitimate” law enforcement requests.
Ozzie sees himself as the hero helping law enforcement protect 300 million American citizens. He doesn't see himself [as] what he really is, the villain helping oppress 1.4 billion Chinese, 144 million Russians, and another couple billion living [under] oppressive governments around the world.
Ozzie pretends the problem is political, that he's created a solution that appeases both sides. He hasn't. He's solved the problem we already know how to solve. He's ignored all the problems we struggle with, the problems that we claim make secure backdoors essentially impossible. I've listed some in this post, but there are many more. Any famous person can create a solution that convinces fawning editors at Wired Magazine, but if Ozzie wants to move forward he's going to have to work harder to appease doubting cryptographers.
Matthew Green makes a similar case, even more persuasively (because his rhetoric is less heated until he reaches the climax of his argument).
“A Few Thoughts on Ray Ozzie's ‘Clear’ Proposal”
Matthew Green, A Few Thoughts on Cryptographic Engineering, April 26, 2018
The richest and most sophisticated phone manufacturer in the entire world tried to build a processor that achieved goals similar to those Ozzie requires. And as of April 2018, after five years of trying, they have been unable to achieve this goal — a goal that is critical to the security of the Ozzie proposal as I understand it. …
The reason so few of us are willing to bet on massive-scale key escrow systems is that we've thought about it and we don't think it will work. We've looked at the threat model, the usage model, and the quality of hardware and software that exists today. Our informed opinion is that there's no detection system for key theft, there's no renewability system, HSMs [Hardware Security Modules] are terrifically vulnerable (and the companies largely staffed with ex-intelligence employees), and insiders can be suborned. We're not going to put the data of a few billion people on the line [in] an environment where we believe with high probability that the system will fail.
There's a pretty good consensus among security and legal researchers with a variety of political perspectives. Here are some more examples:
“Ray Ozzie's Proposal: Not a Step Forward”
Steven M. Bellovin, SMBlog, April 25, 2018
“Building on Sand Isn't Stable: Correcting a Misunderstanding of the National Academies Report on Encryption”
Susan Landau, Lawfare, April 25, 2018
An exceptional-access system is not merely a complex mathematical design for a cryptosystem; it is a systems design for a complex engineering task. … An exceptional-access system would have to operate in real time, authenticate multiple law-enforcement agencies … ensure the accuracy of the authentication system and its ability to withstand attacks, and handle frequent updates to hardware, the operating system, phones, and more. The exceptional-access system would have to be flexible enough to handle the varied architectures of different types of phones, security systems and update processes. …
The fundamental difference between building a sound cryptosystem and a secure exceptional-access system is the difference between solving a hard mathematics problem … and producing a sound engineering solution to a difficult systems problem with constantly changing parts and highly active adversaries.
“Ray Ozzie's Key-Escrow Proposal Does Not Solve the Encryption Debate — It Makes It Worse”
Riana Pfefferkorn, Center for Internet and Society, April 26, 2018
The access mechanism Ozzie proposes would act as a kind of tamper-evident seal that, once “broken” by law enforcement, renders the phone unusable. Ozzie touts this as a security feature for the user, not a bug. …
This “feature” alone should consign Ozzie's idea to the rubbish heap of history. … Ozzie's scheme would basically require a self-destruct function in every smartphone sold, anywhere his proposal became law, that would be invoked thousands and thousands of times per year, with no compensation to the owners. That proposal does not deserve to be taken seriously — not in a democracy, anyway. …
It would give the police a way to lean on you to open [your] phone for them. “You can make it easy on yourself by unlocking the phone and giving it to us,” they might say. “But hey, we don't need you. We can go to a judge and get a warrant, and then we can just have Apple unlock it for us. … That would brick it forever, so you couldn't use your phone anymore, even after we gave it back to you eventually. You'd have to go out and buy a new one.”
A security researcher who is also a Panera Bread customer and has a customer account at the Panera Web site discovered a vulnerability that allowed any account holder to download Panera's dossier about any other account holder. He immediately sent an e-mail to firstname.lastname@example.org, but it bounced. He looked up the company's chief of security, who ignored his Twitter, LinkedIn, and e-mail messages until the researcher found a third party to effect a proper introduction. At that point the chief of security explained that he thought the earlier messages had been either a hoax or a scammer's attempt to drum up business.
Once communication had been established, the researcher asked for and received the chief of security's PGP public key and sent him the encrypted version of a full report on the vulnerability. The chief of security did not reply to the researcher's repeated inquiries about whether he had received and successfully decrypted the report, but ultimately declared, “Thank you for the information we are working on a resolution.”
The researcher then checked every month or so to see whether the vulnerability had been fixed. It never was. After a few months, the researcher published the details and called in some prominent reporters in the field (notably Brian Krebs of Krebs on Security and Dissent Doe of DataBreaches.net). Krebs's article on the subject managed to elicit a reaction from Panera: They took their Web site down for a while, pretended to fix the problem, and published a press release saying that the breach affected about ten thousand customers. When the site came back up, an investigative team at Hold Security (prompted by Krebs) found that the breach affected all forty-one million of Panera's account holders, and moreover that Panera had made the same kind of mistake in many other places on its Web site, leaking a lot more data about the company.
“No, Panera Bread Doesn't Take Security Seriously”
Dylan Houlihan, Medium, April 2, 2018
1. We could collectively afford to be more critical of companies when they issue reactionary statements to do damage control. We need to hold them to a higher standard of accountability. I honestly don't know what that looks like for the media, but there has to be a better way to do thorough, comprehensive reporting on this.
2. We need to collectively examine what the incentives are that enabled this to happen. I do not believe it was a singular failure with any particular employee. It's easy to point to certain individuals, but they do not end up in those positions unless that behavior is fundamentally compatible with the broader corporate culture and priorities.
3. If you are a security professional, please, I implore you, set up a basic page describing a non-threatening process for submitting security vulnerability disclosures. Make this process obviously distinct from the, “Hi I think my account is hacked” customer support process. Make sure this is immediately read by someone qualified and engaged to investigate those reports, both technically and practically speaking. You do not need to offer a bug bounty or a reward. Just offering a way to allow people to easily contact you with confidence would go a long way.
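One lightweight way to implement Houlihan's third recommendation is the `security.txt` convention (later standardized as RFC 9116): a small file served at `/.well-known/security.txt` that tells researchers exactly where to send reports. Every address and URL below is a placeholder:

```text
# Served at https://example.com/.well-known/security.txt (RFC 9116).
# All addresses and URLs here are placeholders.
Contact: mailto:security@example.com
Expires: 2026-01-01T00:00:00Z
Policy: https://example.com/security-policy
Preferred-Languages: en
```

The point is not the format but the guarantee behind it: the `Contact` address must reach someone qualified to triage vulnerability reports, not the general customer-support queue.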
“Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains”
Tegjyot Singh Sethi and Mehmed Kantardzic, arXiv, March 23, 2017
Machine learning operates under the assumption of stationarity, i.e. the training and testing distributions are assumed to be identically and independently distributed … . This assumption is often violated in an adversarial setting, as adversaries gain nothing by generating samples which are blocked by a defender's system. …
In an adversarial environment, the accuracy of classification has little significance, if an attacker can easily evade detection by intelligently perturbing the input samples.
Most of the paper deals with strategies for probing black-box deciders that are only accessible as services, through APIs or Web interfaces. The justification for the strategies is more heuristic than theoretical, but the authors give some evidence that they are good enough to generate adversarial examples for a lot of real-world black-box deciders.
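The general shape of such probing strategies can be sketched in a few lines. This toy loop is my illustration of query-based evasion, not the authors' exact algorithm: the `black_box` function stands in for a remote classifier API, and the attacker only observes its accept/reject decisions:

```python
import random

def black_box(x):
    """Stand-in for a remote classifier API: flags inputs whose
    feature sum exceeds a threshold as 'malicious'. The attacker
    cannot see this rule, only the label it returns."""
    return "malicious" if sum(x) > 2.0 else "benign"

def evade(x, step=0.1, budget=200):
    """Greedily perturb one feature at a time, querying the black box
    after each change, until the sample slips past the decision
    boundary or the query budget is exhausted."""
    x = list(x)
    rng = random.Random(0)
    for queries in range(1, budget + 1):
        if black_box(x) == "benign":
            return x, queries          # evasion succeeded
        i = rng.randrange(len(x))
        x[i] -= step                   # small attacker-chosen perturbation
    return None, budget                # budget exhausted

adv, n_queries = evade([1.0, 1.0, 1.0])
assert adv is not None and black_box(adv) == "benign"
```

This is the sense in which classifier accuracy "has little significance": the toy classifier is perfectly accurate on its training distribution, yet a handful of queries suffices to walk a malicious sample across its boundary.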
What I liked most about the paper was the application of “the security mindset.”