The national intelligence services of the United States, the United Kingdom, Australia, Canada, and New Zealand have joined forces to support legislation requiring makers of encryption software to incorporate defects into their products so as to allow surveillance agencies (such as law-enforcement and espionage operations) to seize and decrypt communications between users of the software.
“Statement of Principles on Access to Evidence and Encryption”
Department of Home Affairs, Australian Government, August 29, 2018
The Governments of the Five Eyes encourage information and communications technology service providers to voluntarily establish lawful access solutions to their products and services they operate in our countries. …
Should governments continue to encounter impediments to lawful access to information necessary to aid the protection of the citizens of our countries, we may pursue technological, enforcement, legislative or other measures to achieve lawful access solutions.
Of course, in the United States, any government access to private communications is unlawful, indeed unconstitutional, unless it is supported by a warrant, endorsed by a judge of the relevant jurisdiction, “upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.” Unfortunately, in this context, ‘lawful’ is simply a stylistic variant of ‘government’, not referring to any actual law. The threat to resort to system cracking if backdoor entries to encryption systems aren't provided reinforces this obvious indifference to the rights of citizens and subjects.
“Five-Eyes Intelligence Services Choose Surveillance over Security”
Bruce Schneier, Schneier on Security, September 6, 2018
To put it bluntly, this is reckless and shortsighted. I've repeatedly written about why this can't be done technically, and why trying results in insecurity. But there's a greater principle at stake here: we need to decide, as nations and as a society, to put defense first. We need a “defense dominant” strategy for securing the Internet and everything attached to it.
This is important. Our national security depends on the security of our technologies. Demanding that technology companies add backdoors to computers and communication systems puts us all at risk. We need to understand that these systems are too critical to our society and — now that they can affect the world in a direct physical manner — affect our lives and property as well.
Cory Doctorow, Boing Boing, September 5, 2018
It is impossible to overstate how bonkers the idea of sabotaging cryptography is to people who understand information security. If you want to secure your sensitive data either at rest — on your hard drive, in the cloud, on that phone you left on the train last week and never saw again — or on the wire, when you're sending it to your doctor or your bank or to your work colleagues, you have to use good cryptography. Use deliberately compromised cryptography, that has a back door that only the “good guys” are supposed to have the keys to, and you have effectively no security. You might as well skywrite it as encrypt it with pre-broken, sabotaged encryption. …
Cryptography [is] the basis for all trust and security in the 21st century.
“Botched CIA Communications System Helped Blow Cover of Chinese Agents”
Zach Dorfman, Foreign Policy, August 15, 2018
As a rule of thumb, it is now about three orders of magnitude more difficult to defend against computer and network intrusions than to carry out the intrusions themselves.
It was considered one of the CIA's worst failures in decades: Over a two-year period starting in late 2010, Chinese authorities systematically dismantled the agency's network of agents across the country, executing dozens of suspected U.S. spies. …
Now, nearly eight years later, it appears that the agency botched the communication system it used to interact with its sources, according to five current and former intelligence officials. …
When CIA officers begin working with a new source, they often use an interim covert communications system — in case the person turns out to be a double agent.
The communications system used in China during this period was internet-based and accessible from laptop or desktop computers, two of the officials said.
This interim, or “throwaway,” system, an encrypted digital program, allows for remote communication between an intelligence officer and a source, but it is also separated from the main communication system used with vetted sources, reducing the risk if an asset goes bad.
Although they used some of the same coding, the interim system and the main covert communication platform used in China at this time were supposed to be clearly separated. In theory, if the interim system were discovered or turned over to Chinese intelligence, people using the main system would still be protected — and there would be no way to trace the communication back to the CIA. But the CIA's interim system contained a technical error: It connected back architecturally to the CIA's main covert communications platform. When the compromise was suspected, the FBI and NSA both ran “penetration tests” to determine the security of the interim system. They found that cyber experts with access to the interim system could also access the broader covert communications system the agency was using to interact with its vetted sources, according to the former officials. …
U.S. intelligence officers were also able to identify digital links between the covert communications system and the U.S. government itself, according to one former official — links the Chinese agencies almost certainly found as well. These digital links would have made it relatively easy for China to deduce that the covert communications system was being used by the CIA. In fact, some of these links pointed back to parts of the CIA's own website, according to the former official.
Some security researchers have discovered a new attack on PGP. They have written a paper explaining how it works and plan to publish it tomorrow, but the Electronic Frontier Foundation has learned enough about it that they are sounding an alarm even before the details are public:
“Attention PGP Users: New Vulnerabilities Require You to Take Action Now”
Danny O'Brien and Gennie Gebhart, Deeplinks, Electronic Frontier Foundation, May 13, 2018
A group of European security researchers have released a warning about a set of vulnerabilities affecting users of PGP and S/MIME. EFF has been in communication with this research team, and can confirm that these vulnerabilities pose an immediate risk to those using these tools for email communication, including the potential exposure of the contents of past messages. …
Our advice, which mirrors that of the researchers, is to immediately disable and/or uninstall tools that automatically decrypt PGP-encrypted email.
The story includes links to instructions provided by the EFF on how to temporarily disable the PGP plug-ins for Thunderbird, Apple Mail, and Outlook.

Update (2018-05-14⊺11:34:32-05:00)
The discoverers of the attack now have a Web site up and have published a draft of their paper there:
“Efail: Breaking S/MIME and OpenPGP Email Encryption Using Exfiltration Channels”
Damian Poddebniak, Christian Dresen, Jens Müller, Fabian Ising, Sebastian Schinzel, Simon Friedberger, Juraj Somorovsky, and Jörg Schwenk, May 14, 2018
There are actually two vulnerabilities. One exploits peculiarities, arguably errors, in mail user agents that parse and interpret HTML in messages after they have been decrypted. The other exploits a weakness in the OpenPGP standard: Under certain circumstances, the standard doesn't require integrity checks and doesn't specify what a decryption algorithm should do when an integrity check fails. Consequently, many mail user agents do the wrong thing when they receive a message that has been tampered with.
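The second vulnerability comes down to malleability: without an integrity check, a recipient has no way to tell that a ciphertext has been altered in transit. The real Efail attacks exploit specifics of the CFB and CBC modes used by OpenPGP and S/MIME; the following is only a minimal sketch of the underlying principle, using a toy stream cipher (a SHA-256 counter-mode keystream) rather than either standard's actual construction:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Toy counter-mode keystream: SHA-256(key || counter). Illustrative only;
    # not the CFB mode that OpenPGP actually uses.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"secret key"
plaintext = b"Meet at the safe house at 9pm."
ciphertext = xor(plaintext, keystream(key, len(plaintext)))

# An attacker who knows only the plaintext format (not the key) flips bits
# in the ciphertext so that "9pm" decrypts as "6pm".
i = plaintext.index(b"9")
tampered = bytearray(ciphertext)
tampered[i] ^= ord("9") ^ ord("6")

# The recipient decrypts without any integrity check -- and silently
# accepts the forged message.
decrypted = xor(bytes(tampered), keystream(key, len(plaintext)))
print(decrypted)  # b'Meet at the safe house at 6pm.'
```

A mail client that decrypted this and went on to parse the result as HTML, as the affected clients do, would act on attacker-chosen content with no warning.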
The Electronic Frontier Foundation has a follow-up, and other security authorities are providing quick analysis as well:
“Not So Pretty: What You Need to Know about E-Fail and the PGP Flaw”
Erica Portnoy, Danny O'Brien, and Nate Cardozo, Deeplinks, Electronic Frontier Foundation, May 14, 2018
“Some Notes on eFail”
Robert Graham, Errata Security, May 14, 2018
“New Vulnerabilities in Many PGP and S/MIME Enabled Email Clients”
Matthew Green, Twitter, May 14, 2018
A former Chief Technical Officer at Microsoft has proposed a solution to the FBI's supposed “going dark” problem — the use of secure encryption tools by criminals and terrorists. His solution is to provide a cryptographic back door that would be locked with a private key that the phone manufacturer would maintain and guard with the same fanatically obsessive security with which Apple or Microsoft guards its own update-signing keys.
“Cracking the Crypto War”
Steven Levy, Wired, April 25, 2018
Say the FBI needs the contents of an iPhone. First the feds have to actually get the device and the proper court authorization to access the information it contains — Ozzie's system does not allow the authorities to remotely snatch information. With the phone in its possession, they could then access, through the lock screen, the encrypted PIN and send it to Apple. Armed with that information, Apple would send highly trusted employees into the vault where they could use the private key to unlock the PIN. Apple could then send that no-longer-secret PIN back to the government, who can use it to unlock the device.
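The cryptographic core of the flow Levy describes can be sketched as an ordinary public-key wrapping of the PIN: the device encrypts under the vendor's public key, and only the private key in the vault can recover it. The sketch below uses textbook RSA with tiny primes — deliberately insecure, for exposition only — and all names are illustrative, not Ozzie's actual design:

```python
# Vendor key pair. The private exponent d lives only in the vendor's vault.
p, q = 61, 53
n = p * q                              # modulus, 3233
e = 17                                 # public exponent, shipped in every device
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent, kept in the vault

def device_wrap_pin(pin: int) -> int:
    # At setup time, the phone encrypts its PIN under the vendor's public key.
    return pow(pin, e, n)

def vault_unwrap_pin(wrapped: int) -> int:
    # Only the vault, holding d, can recover the PIN for law enforcement.
    return pow(wrapped, d, n)

wrapped = device_wrap_pin(1234)        # the "encrypted PIN" shown at the lock screen
print(vault_unwrap_pin(wrapped))       # 1234
```

The mathematics here is the easy part; as the critics quoted below argue, the hard part is everything around it — who may enter the vault, how requests are authenticated, and what happens when the process is abused.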
Robert Graham points out several vulnerabilities in this scheme:
“No, Ray Ozzie Hasn't Solved Crypto Backdoors”
Robert Graham, Errata Security, April 25, 2018
The vault doesn't scale
… The more people and the more often you have to touch the vault, the less secure it becomes. We are talking thousands of requests per day from 100,000 different law enforcement agencies around the world. We are unlikely to protect this against incompetence and mistakes. We are definitely unable to secure this against deliberate attack. …
Cryptography is about people more than math
… How do we know the law enforcement person is who they say they are? How do we know the “trusted Apple employee” can't be bribed? How can the law enforcement agent communicate securely with the Apple employee?
You think these things are theoretical, but they aren't. …
Locked phones aren't the problem
Phones are general purpose computers. That means anybody can install an encryption app on the phone regardless of whatever other security the phone might provide. The police are powerless to stop this. Even if they make such encryption [a] crime, then criminals will still use encryption.
That leads to a strange situation that the only data the FBI will be able to decrypt is that of people who believe they are innocent. Those who know they are guilty will install encryption apps like Signal that have no backdoors.
In the past this was rare, as people found learning new apps a barrier. These days, apps like Signal are so easy even drug dealers can figure out how to use them. …
The FBI isn't necessarily the problem
… Technology is borderless. A solution in the United States that allows “legitimate” law enforcement requests will inevitably be used by repressive states for what we believe would be “illegitimate” law enforcement requests.
Ozzie sees himself as the hero helping law enforcement protect 300 million American citizens. He doesn't see himself [as] what he really is, the villain helping oppress 1.4 billion Chinese, 144 million Russians, and another couple billion living [under] oppressive governments around the world.
Ozzie pretends the problem is political, that he's created a solution that appeases both sides. He hasn't. He's solved the problem we already know how to solve. He's ignored all the problems we struggle with, the problems that we claim make secure backdoors essentially impossible. I've listed some in this post, but there are many more. Any famous person can create a solution that convinces fawning editors at Wired Magazine, but if Ozzie wants to move forward he's going to have to work harder to appease doubting cryptographers.
Matthew Green makes a similar case, even more persuasively (because his rhetoric is less heated until he reaches the climax of his argument).
“A Few Thoughts on Ray Ozzie's ‘Clear’ Proposal”
Matthew Green, A Few Thoughts on Cryptographic Engineering, April 26, 2018
The richest and most sophisticated phone manufacturer in the entire world tried to build a processor that achieved goals similar to those Ozzie requires. And as of April 2018, after five years of trying, they have been unable to achieve this goal — a goal that is critical to the security of the Ozzie proposal as I understand it. …
The reason so few of us are willing to bet on massive-scale key escrow systems is that we've thought about it and we don't think it will work. We've looked at the threat model, the usage model, and the quality of hardware and software that exists today. Our informed opinion is that there's no detection system for key theft, there's no renewability system, HSMs [Hardware Security Modules] are terrifically vulnerable (and the companies largely staffed with ex-intelligence employees), and insiders can be suborned. We're not going to put the data of a few billion people on the line [in] an environment where we believe with high probability that the system will fail.
There's a pretty good consensus among security and legal researchers with a variety of political perspectives. Here are some more examples:
“Ray Ozzie's Proposal: Not a Step Forward”
Steven M. Bellovin, SMBlog, April 25, 2018
“Building on Sand Isn't Stable: Correcting a Misunderstanding of the National Academies Report on Encryption”
Susan Landau, Lawfare, April 25, 2018
An exceptional-access system is not merely a complex mathematical design for a cryptosystem; it is a systems design for a complex engineering task. … An exceptional-access system would have to operate in real time, authenticate multiple law-enforcement agencies … ensure the accuracy of the authentication system and its ability to withstand attacks, and handle frequent updates to hardware, the operating system, phones, and more. The exceptional-access system would have to be flexible enough to handle the varied architectures of different types of phones, security systems and update processes. …
The fundamental difference between building a sound cryptosystem and a secure exceptional-access system is the difference between solving a hard mathematics problem … and producing a sound engineering solution to a difficult systems problem with constantly changing parts and highly active adversaries.
“Ray Ozzie's Key-Escrow Proposal Does Not Solve the Encryption Debate — It Makes It Worse”
Riana Pfefferkorn, Center for Internet and Society, April 26, 2018
The access mechanism Ozzie proposes would act as a kind of tamper-evident seal that, once “broken” by law enforcement, renders the phone unusable. Ozzie touts this as a security feature for the user, not a bug. …
This “feature” alone should consign Ozzie's idea to the rubbish heap of history. … Ozzie's scheme would basically require a self-destruct function in every smartphone sold, anywhere his proposal became law, that would be invoked thousands and thousands of times per year, with no compensation to the owners. That proposal does not deserve to be taken seriously — not in a democracy, anyway. …
It would give the police a way to lean on you to open [your] phone for them. “You can make it easy on yourself by unlocking the phone and giving it to us,” they might say. “But hey, we don't need you. We can go to a judge and get a warrant, and then we can just have Apple unlock it for us. … That would brick it forever, so you couldn't use your phone anymore, even after we gave it back to you eventually. You'd have to go out and buy a new one.”
Some of the bureaucrats in charge of the federal government's efforts to recruit and then punish domestic terrorists have been giving public speeches in which they advocate “responsible encryption.” It seems that encryption is an occasionally effective way for American citizens to protect their rights under the First, Fourth, Fifth, and Sixth Amendments against eavesdropping and unwarranted searches and seizures by government officials and their corporate accomplices. The G-men would prefer us to use only encryption systems that register plaintexts, keys, or both either with service providers or specialized escrow companies that can be relied on to yield our protected information to the authorities whenever they demand it.
A researcher at the Stanford Center for Internet and Society lists the ways in which such escrow systems undermine their users' security:
(A) There will be so many requests from counterterrorism and law-enforcement officials that the organization charged with the responsibilities of escrow will find it difficult to manage and restrict the distribution of their own keys:
The exceptional-access decryption key would have to be accessible by far more people than those currently entrusted with a software update signing key. That puts the key at risk, and also makes it harder to detect inappropriate use of the key. … Increasing frequency of use and the number of people with access unavoidably means increasing the risk of human error (such as carelessly storing or leaking the key) or malfeasance (such as an employee releasing the key to an unauthorized outside party in response to extortion or bribery).
(B) The organization charged with the responsibilities of escrow will find it difficult to reliably distinguish authentic requests for access to escrowed information from requests generated by attackers, particularly since counterterrorism and law-enforcement officials are likely to grow impatient with strict authentication procedures and look for ways to bypass them even when making legitimate requests.
(C) Attackers, knowing that a device uses an escrowed-key encryption mechanism, will seek out vulnerabilities related to the implementation of this mechanism:
The information the attacker obtains from the device could then be sold or otherwise exploited. That is, compromised devices would lead to identity theft, intellectual property misappropriation, industrial espionage, and other economic harms to American individuals and businesses. These are the very harms from which phone manufacturers are presently protecting Americans by strengthening their device encryption in recent years. An exceptional-access mandate would not only hurt U.S. smartphone manufacturers and app makers, it would end up taking a toll on other people and industries as well.
The premise is that end-to-end encryption systems are not subject to these particular vulnerabilities because they do not provide the access mechanisms (and so do not contain the hardware or software support) in which the vulnerabilities would be found.
(D) Users who want to protect their information can apply a second level of encryption, using a different key, before turning it over to the application that escrows its key, or use other techniques (such as steganography) to conceal information. Alternatively, such users can switch to apps made in free countries or develop their own, using free-software libraries that are already widely available. Any of these approaches would render the escrowed-key system pointless.
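The layering described in point (D) is trivial to implement. In the minimal sketch below (a toy stream cipher built from a SHA-256 keystream; the key names are illustrative), the user encrypts first with a key of their own, so that when the escrow agent peels off the outer, registered layer, all it recovers is the inner ciphertext:

```python
import hashlib

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher (SHA-256 counter-mode keystream); illustrative only.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

user_key = b"known only to the user"
escrowed_key = b"registered with the escrow agent"

message = b"attack plans"
inner = xor_stream(user_key, message)    # the second layer the user adds first
outer = xor_stream(escrowed_key, inner)  # what the escrowed app actually sends

# The escrow agent can remove the outer layer with the registered key...
recovered = xor_stream(escrowed_key, outer)
# ...but what it recovers is still ciphertext, useless without the user's key.
print(recovered == inner, recovered == message)  # True False
```

Nothing in an escrow mandate can prevent this pre-encryption step, which is why the mandate catches only those who don't bother to take it.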
If the most commonly-used devices or messaging apps are exceptional access-compliant, then not only will the majority of bad actors — the average, unsophisticated criminals — be using weakened encryption, so will the majority of innocent people. By imposing an exceptional-access mandate, law enforcement officials charged with protecting the public would create a world wherein the shrewdest wrongdoers have better security than the innocents they victimize, who, in turn, would by law have worse smartphone and communications security than they do now, leaving them even more vulnerable to those same criminals.
“The Risks of ‘Responsible Encryption’”
Riana Pfefferkorn, Stanford Center for Internet and Society, February 5, 2018