The Department of Justice has indicted James Wolfe, a former aide to the Senate Intelligence Committee. The indictment is based on inferences from detailed and comprehensive surveillance of Wolfe, of several journalists, including Ali Watkins of the New York Times, and of many of their colleagues and friends, including interception of their telephone communications and e-mail and examination of their travel and financial records, and so on.
“Ex-Senate Aide Charged in Leak Case Where Times Reporter's Records Were Seized”
Adam Goldman, Nicholas Fandos, and Katie Benner, The New York Times, June 7, 2018
“Trump's Justice Department Escalates Its Disturbing Crackdown on Leaks by Seizing New York Times Reporter's Phone and Email Records”
Trevor Timm, Freedom of the Press Foundation, June 7, 2018
Some of the exchanges were transmitted through Signal, an application that uses strong end-to-end encryption. The second article speculates that the feds must have acquired these messages by seizing Wolfe's mobile phone and breaking into it.
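The point of end-to-end encryption is that the two endpoints agree on a key that never crosses the network, so an interceptor sitting in the middle sees only useless public values. A minimal sketch of the underlying idea is classic Diffie-Hellman key agreement; note that Signal itself actually uses X3DH and the Double Ratchet over Curve25519, and the toy parameters below are for illustration only, far too weak for real use:

```python
# Toy Diffie-Hellman key agreement, the kind of primitive that
# end-to-end encrypted messengers build on.  Toy parameters --
# never use these in practice.
import secrets

p = 2**127 - 1        # a Mersenne prime; real systems use far larger groups
g = 3                 # toy generator

a = secrets.randbelow(p - 3) + 2      # Alice's private value, never transmitted
b = secrets.randbelow(p - 3) + 2      # Bob's private value, never transmitted

A = pow(g, a, p)      # Alice sends A over the (monitored) wire
B = pow(g, b, p)      # Bob sends B over the (monitored) wire

# Each side combines its own secret with the other's public value;
# both arrive at the same shared key, which never crossed the network.
alice_key = pow(B, a, p)
bob_key = pow(A, b, p)
assert alice_key == bob_key
```

This is why, as the second article speculates, an agency that wants the plaintext must attack an endpoint, for example by seizing and breaking into the phone itself, rather than the traffic in transit.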
In 2016, the Federal Bureau of Investigation felt so strongly that it needed to access the contents of a suspected terrorist's encrypted iPhone that it persuaded the Department of Justice to lean on Apple, threatening to prosecute under the All Writs Act of 1789 unless Apple agreed to develop a tool for breaking into encrypted iPhones and to provide it to the FBI. Apple declined, and eventually the FBI hired a company that had already developed such a tool to do the job for them, thus eliminating the threat against Apple. (The terrorist's iPhone contained nothing of interest.)
This episode struck people as sufficiently stupid and disgusting that the Department of Justice asked its Office of the Inspector General to prepare a report explaining exactly what happened and why. The report is now available (with redactions):
“A Special Inquiry Regarding the Accuracy of FBI Statements concerning Its Capabilities to Exploit an iPhone Seized during the San Bernardino Terror Attack Investigation”
Oversight and Review Division, Office of the Inspector General, U.S. Department of Justice, March 2018
According to the report, one branch of the FBI, the Remote Operations Unit (ROU) of the Operational Technology Division, had already hired another outside company to develop a tool that would break into that iPhone, and this vendor successfully demonstrated the tool on March 16, 2016. However, the ROU didn't tell anyone else in the FBI about this accomplishment, and the separate branch of the FBI that was responsible for the investigation of the suspected terrorist never asked the ROU about it. Part of the reason, perhaps, was that the FBI wanted to establish a legal precedent for bullying Apple and other tech companies into doing its work for it; but another part was that most of the tools that the ROU develops are classified, and using classified tools to acquire key evidence in a criminal case is generally a bad idea, since the discovery process can easily reveal the existence and nature of those tools.
In practice, the Department of Justice frequently uses classified tools to acquire key evidence in criminal cases because they can often get away with it, but it still isn't a good idea, and the FBI shouldn't promote it.
However, the Inspector General's report recommends that the various branches of the FBI shouldn't withhold information about hacking tools from one another and encourages the FBI to complete the reorganization that it has already begun “to consolidate resources to address the ‘Going Dark’ problem and improve coordination between the units that work on computer and mobile devices.”
The Cryptography Fellow at the Stanford Center for Internet and Society points out the foreseeable consequences:
“The Dark Side of the ‘Apple vs. FBI’ OIG Report”
Riana Pfefferkorn, Center for Internet and Society, April 18, 2018
If the OIG report prompts the FBI to give the CEAU [Cryptographic and Electronic Analysis Unit], which focuses on criminal matters, more access to tools developed or acquired by ROU, which focuses on national security matters, that could have a detrimental effect on federal criminal cases. When seeking search and seizure warrants, the FBI may not fully explain to judges that they are asking for authorization to use sophisticated, technological techniques to extract evidence from defendants' devices. In the resulting prosecutions, the government may refuse to disclose information about the classified technique, or even its existence, to defense counsel or experts. That secrecy will impair the court's truth-seeking function as well as the defendant's ability to mount a defense.
What is more, removing the divide between criminal and national security tools could ultimately hurt the FBI, too. If courts do order disclosure of the FBI's techniques in criminal cases, the FBI's national security and intelligence units might decide that they cannot risk using those techniques anymore. That is a significant reason why the wall was there in the first place: to protect those missions. …
It is ironic that the OIG report into the FBI's behavior during Apple vs. FBI may lead to the FBI's criminal investigators achieving that case's objective: getting more capabilities to crack into digital devices.
The ethical and prudential faults in this situation just go on and on: A company that discovers flaws in iPhone security has an ethical responsibility to report those flaws to Apple so that they can be fixed, instead of concealing the vulnerabilities and selling exploitation tools to other parties. The FBI certainly should not be hiring companies to produce such tools. If it does acquire such tools, the FBI also has an ethical responsibility to report the flaws to Apple instead of exploiting them. It also has an ethical responsibility to try to get them declassified before exploiting them, since a domestic law-enforcement organization does not need and should not have national-security clearances, and should not rely on them in day-to-day operations if it does have them.
If the Remote Operations Unit does acquire and exploit classified system-cracking tools, it has a prudential obligation to make its resources available wherever they are needed within the agency and so should not conceal such tools from other branches of the FBI. But the CEAU should not use such tools in criminal investigations, for the reasons that Pfefferkorn explains: Doing so breaks the prosecution of such cases. Indeed, the Department of Justice should not even use evidence acquired through the use of classified system-cracking tools, precisely because judges should exclude such evidence and any inferences based on it.
Our institutions are so thoroughly shot through with unethical, unprofessional, and corrupt misbehavior that it is hard even to figure out where a reform project should begin.
The case that was supposed to determine whether the government can force Microsoft to turn over its users' data stored on servers in a foreign country is effectively over. Both sides have agreed that the case is moot now that the Clarifying Lawful Overseas Use of Data (CLOUD) Act is law and the Department of Justice has procured a warrant under that law.
“What Will Microsoft And Ireland Do with the New CLOUD Act Warrant?”
Albert Gidari, Center for Internet and Society, April 9, 2018
The author raises three possible courses of action for Microsoft: (A) it could try to quash the new warrant somehow; (B) it could rely on the Irish government (possibly prompted by Microsoft) to insist that the United States work through the Mutual Legal Assistance Treaty that is supposed to ensure bilateral cooperation in such cases; or (C) it could just roll over and give up the customer data.
My guess is that Microsoft will choose option C. It has already gotten what it wanted out of this lawsuit: a public-relations boost for its claim to protect users' data, some spiteful retaliation against the Department of Justice, and no real change in its close relations with the NSA, the FBI, and the Department of Homeland Security.
The United States Department of Justice is continuing its doomed quest for an encryption system that simultaneously conceals texts from some people who should not have access to them and reveals them to other people who should not have access to them. They have begun to organize research teams and conferences to discuss ways of forcing or tricking people who want strong encryption into accepting weak encryption instead.
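The structural problem with every such "exceptional access" proposal is the same: any copy of the key that exists for the government's benefit can be used by anyone who obtains it. The toy sketch below (my own illustration, not any specific proposed scheme; the XOR-keystream "cipher" stands in for a real one) shows a key-escrow design in which each session key is also wrapped under an escrow authority's key, so the escrow holder, or whoever steals its key, can read every message:

```python
# Toy sketch of key escrow ("exceptional access").  The session key is
# also encrypted to an escrow authority's key, so whoever holds the
# escrow key -- lawfully or after a breach -- can recover every message.
# The SHA-256 XOR keystream below is a stand-in for a real cipher.
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data against a deterministic keystream derived from key.
    Applying it twice with the same key recovers the input."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(x ^ y for x, y in zip(data, stream))

escrow_key = secrets.token_bytes(32)          # held by the escrow authority

def encrypt_with_escrow(session_key: bytes, plaintext: bytes):
    ciphertext = keystream_xor(session_key, plaintext)
    wrapped = keystream_xor(escrow_key, session_key)   # escrowed copy of the key
    return ciphertext, wrapped

session_key = secrets.token_bytes(32)
ct, wrapped = encrypt_with_escrow(session_key, b"attorney-client privileged")

# The escrow holder never saw session_key, yet recovers the plaintext:
recovered_key = keystream_xor(escrow_key, wrapped)
assert keystream_xor(recovered_key, ct) == b"attorney-client privileged"
```

The escrowed key is a single point of failure for every conversation in the system, which is why cryptographers keep calling these proposals weak encryption no matter how they are packaged.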
The new feature of this story is that some of the researchers who have gone over to the dark side are now identified by name: Ray Ozzie, formerly Chief Technical Officer and Chief Software Architect for (of course) Microsoft Corporation; Stefan Savage, Irwin and Joan Jacobs Chair in Information and Computer Science at the University of California, San Diego; and Ernie Brickell, Chief Security Architect, Intel Corporation.
The presence of Brickell and Ozzie guarantees that users should never trust encryption systems supplied in Intel hardware or as part of the Windows operating system, but should continue to use systems, such as GPG, that are entirely implemented in open-source software.
“Justice Dept. Revives Push to Mandate a Way to Unlock Phones”
Charlie Savage, The New York Times, March 25, 2018