“Ten Years Later, Cory Doctorow's Little Brother Remains Inevitable”
Cory Doctorow, Tor.com, April 26, 2018
We only know how to make one computer (the computer that runs every program) and one internet (the internet that carries any data), and we specifically don't know how to make computers that can run all the programs except for the one that freaks you out … and we don't know how to make an internet that carries all messages except the ones you don't like. …
This is a reality that policymakers, law-enforcement, and the general public [have] spectacularly failed to come to grips with. …
Computers create real problems: harassment, commercial surveillance, state surveillance, corporate malfeasance, malware attacks on embedded systems, and casino tricks to “maximize engagement” at the expense of pleasure and satisfaction. … We can't solve those problems by engaging with computers as we want them to be — only by engaging with them as they truly are.
When the Department of Homeland Security or the Federal Bureau of Investigation wants access to some information that was originally obtained by an intelligence service using some unconstitutional mass-surveillance technique, it sometimes seeks a veneer of legal protection for its actions by making a “collection request” to the Foreign Intelligence Surveillance Court, acting as a “Court of Review.”
Usually the FISC rubber-stamps these applications. (This is hardly surprising, since the proceedings of the FISC are not adversarial — no one is present to represent the interests of the victims of surveillance or to point out violations of the Fourth Amendment or of international law.) Sometimes the court makes a few comments, asks the applicant to rewrite the request, and then rubber-stamps the results.
Since last year, however, when Congress renewed and revised the law authorizing these ludicrous procedures, it created an arrangement under which the court can, if it chooses, invite a “friend of the court” to comment on the proposals it is reviewing, and in particular to deal with any novel or significant question of law that those proposals raise. Typically the job is given to former Department of Justice officials who can be relied on not to introduce undesirable innovations into the cozy arrangements between the FISC and the state-security apparatus.
Evidently the possibility of exposing their requests to the skeptical gaze of these third parties has succeeded in spooking the applicants. Last year, there were three occasions on which the FISC saw fit to invite in a friend of the court, and in each case the government agency that made the application chose to withdraw it instead of allowing anyone else to see it.
Marcy Wheeler speculates on the motives for these withdrawals:
“In 2017, the Government Withdrew Three FISA Collection Requests Rather Than Face an Amicus Review”
“emptywheel”, emptywheel, April 26, 2018
That the government has been withdrawing requests rather than submitting them to the scrutiny of an amicus suggests several things.
First, it may be withdrawing such applications out of reluctance to share details of such techniques even with a cleared amicus, not even one of the three who served as very senior DOJ officials in the past. If that's right, that would reflect some pretty exotic requests …
Second, … past history has shown that the government often finds another way to get information denied by the FISC, and that may have happened with these three requests.
Finally, remember that as part of 702 reauthorization last year, [Senator] Ron Wyden warned that reauthorization should include language preventing the government from demanding that companies provide technical assistance. … Some of these withdrawn requests … may reflect such onerous technical requests.
A former Chief Technical Officer at Microsoft has proposed a solution to the FBI's supposed “going dark” problem — the use of secure encryption tools by criminals and terrorists. His solution is to provide a cryptographic back door that would be locked with a private key that the phone manufacturer would maintain and guard with the same fanatically obsessive security with which Apple or Microsoft guards its own update-signing keys.
“Cracking the Crypto War”
Steven Levy, Wired, April 25, 2018
Say the FBI needs the contents of an iPhone. First the feds have to actually get the device and the proper court authorization to access the information it contains — Ozzie's system does not allow the authorities to remotely snatch information. With the phone in its possession, they could then access, through the lock screen, the encrypted PIN and send it to Apple. Armed with that information, Apple would send highly trusted employees into the vault where they could use the private key to unlock the PIN. Apple could then send that no-longer-secret PIN back to the government, who can use it to unlock the device.
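The flow Levy describes is, at bottom, public-key escrow: the phone stores its PIN encrypted under a manufacturer key whose private half never leaves the vault. Here is a minimal sketch using textbook RSA (tiny primes, no padding, insecure by design; all the names and parameters are mine, not Ozzie's):

```python
# Toy model of the escrow flow Levy describes, using textbook RSA.
# Tiny primes and no padding: protocol illustration only, never real use.

p, q = 61, 53
n = p * q                   # public modulus, baked into every phone
e = 17                      # public exponent, also public
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)         # private exponent, locked in the vendor's vault

def device_escrow(pin: int) -> int:
    """The phone stores its PIN encrypted under the vendor's public key."""
    assert 0 <= pin < n
    return pow(pin, e, n)

def vault_unlock(blob: int) -> int:
    """Only the vault, which holds d, can recover the PIN from the blob."""
    return pow(blob, d, n)

blob = device_escrow(1234)  # what police read off the seized phone
pin = vault_unlock(blob)    # what the vendor's "trusted employees" return
```

The sketch makes the critics' point visible: every phone's security now depends on a single value of `d`, and every decryption requires a human trip into the vault.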
Robert Graham points out several vulnerabilities in this scheme:
“No, Ray Ozzie Hasn't Solved Crypto Backdoors”
Robert Graham, Errata Security, April 25, 2018
The vault doesn't scale
… The more people and the more often you have to touch the vault, the less secure it becomes. We are talking thousands of requests per day from 100,000 different law enforcement agencies around the world. We are unlikely to protect this against incompetence and mistakes. We are definitely unable to secure this against deliberate attack. …
Cryptography is about people more than math
… How do we know the law enforcement person is who they say they are? How do we know the “trusted Apple employee” can't be bribed? How can the law enforcement agent communicate securely with the Apple employee?
You think these things are theoretical, but they aren't. …
Locked phones aren't the problem
Phones are general purpose computers. That means anybody can install an encryption app on the phone regardless of whatever other security the phone might provide. The police are powerless to stop this. Even if they make such encryption [a] crime, then criminals will still use encryption.
That leads to a strange situation that the only data the FBI will be able to decrypt is that of people who believe they are innocent. Those who know they are guilty will install encryption apps like Signal that have no backdoors.
In the past this was rare, as people found learning new apps a barrier. These days, apps like Signal are so easy even drug dealers can figure out how to use them. …
The FBI isn't necessarily the problem
… Technology is borderless. A solution in the United States that allows “legitimate” law enforcement requests will inevitably be used by repressive states for what we believe would be “illegitimate” law enforcement requests.
Ozzie sees himself as the hero helping law enforcement protect 300 million American citizens. He doesn't see himself [as] what he really is, the villain helping oppress 1.4 billion Chinese, 144 million Russians, and another couple billion living [under] oppressive governments around the world.
Ozzie pretends the problem is political, that he's created a solution that appeases both sides. He hasn't. He's solved the problem we already know how to solve. He's ignored all the problems we struggle with, the problems that we claim make secure backdoors essentially impossible. I've listed some in this post, but there are many more. Any famous person can create a solution that convinces fawning editors at Wired Magazine, but if Ozzie wants to move forward he's going to have to work harder to appease doubting cryptographers.
Matthew Green makes a similar case, even more persuasively (because his rhetoric is less heated until he reaches the climax of his argument).
“A Few Thoughts on Ray Ozzie's ‘Clear’ Proposal”
Matthew Green, A Few Thoughts on Cryptographic Engineering, April 26, 2018
The richest and most sophisticated phone manufacturer in the entire world tried to build a processor that achieved goals similar to those Ozzie requires. And as of April 2018, after five years of trying, they have been unable to achieve this goal — a goal that is critical to the security of the Ozzie proposal as I understand it. …
The reason so few of us are willing to bet on massive-scale key escrow systems is that we've thought about it and we don't think it will work. We've looked at the threat model, the usage model, and the quality of hardware and software that exists today. Our informed opinion is that there's no detection system for key theft, there's no renewability system, HSMs [Hardware Security Modules] are terrifically vulnerable (and the companies largely staffed with ex-intelligence employees), and insiders can be suborned. We're not going to put the data of a few billion people on the line [in] an environment where we believe with high probability that the system will fail.
There's a pretty good consensus among security and legal researchers with a variety of political perspectives. Here are some more examples:
“Ray Ozzie's Proposal: Not a Step Forward”
Steven M. Bellovin, SMBlog, April 25, 2018
“Building on Sand Isn't Stable: Correcting a Misunderstanding of the National Academies Report on Encryption”
Susan Landau, Lawfare, April 25, 2018
An exceptional-access system is not merely a complex mathematical design for a cryptosystem; it is a systems design for a complex engineering task. … An exceptional-access system would have to operate in real time, authenticate multiple law-enforcement agencies … ensure the accuracy of the authentication system and its ability to withstand attacks, and handle frequent updates to hardware, the operating system, phones, and more. The exceptional-access system would have to be flexible enough to handle the varied architectures of different types of phones, security systems and update processes. …
The fundamental difference between building a sound cryptosystem and a secure exceptional-access system is the difference between solving a hard mathematics problem … and producing a sound engineering solution to a difficult systems problem with constantly changing parts and highly active adversaries.
“Ray Ozzie's Key-Escrow Proposal Does Not Solve the Encryption Debate — It Makes It Worse”
Riana Pfefferkorn, Center for Internet and Society, April 26, 2018
The access mechanism Ozzie proposes would act as a kind of tamper-evident seal that, once “broken” by law enforcement, renders the phone unusable. Ozzie touts this as a security feature for the user, not a bug. …
This “feature” alone should consign Ozzie's idea to the rubbish heap of history. … Ozzie's scheme would basically require a self-destruct function in every smartphone sold, anywhere his proposal became law, that would be invoked thousands and thousands of times per year, with no compensation to the owners. That proposal does not deserve to be taken seriously — not in a democracy, anyway. …
It would give the police a way to lean on you to open [your] phone for them. “You can make it easy on yourself by unlocking the phone and giving it to us,” they might say. “But hey, we don't need you. We can go to a judge and get a warrant, and then we can just have Apple unlock it for us. … That would brick it forever, so you couldn't use your phone anymore, even after we gave it back to you eventually. You'd have to go out and buy a new one.”
Security researchers have discovered a way to hack a widely used model of electronic keys, adopted by more than forty-two thousand hotels in one hundred and sixty-six countries.
The researchers reported the vulnerability to the manufacturer about a year ago, and earlier this year the manufacturer provided customers with patches for the central server software. The firmware on each lock also needs to be upgraded by someone who is physically present at the lock. There's no way to determine how many of the locks have received the upgrade.
“Hackers Built a ‘Master Key’ for Millions of Hotel Rooms”
Zack Whittaker, Zero Day, April 25, 2018
“The Lebowski Theorem”
Joscha Bach, Twitter, April 14, 2018
The Lebowski Theorem: No superintelligent AI is going to bother with a task that is harder than hacking its own reward function.
What does “personal style” mean when the Echo Look gives you fashion advice, and can tell you what you like but not why you like it?
“Style Is an Algorithm”
Kyle Chayka, Racked, April 17, 2018
If you know the source of the suggestion, then you might give it a chance and see if it meshes with your tastes. In contrast, we know the machine doesn't care about us, nor does it have a cultivated taste of its own; it only wants us to engage with something it calculates we might like. This is boring. …
We can decide to become a little more analog. I imagine a future in which our clothes, music, film, art, books come with stickers like organic farmstand produce: Algorithm Free. …
“Echo” is a good name for Amazon's device because it creates an algorithmic feedback loop in which nothing original emerges.
Alexa, how do I look?
You look derivative, Kyle.
In 2016, the Federal Bureau of Investigation felt so strongly that it needed to access the contents of a suspected terrorist's encrypted iPhone that it persuaded the Department of Justice to lean on Apple, threatening to prosecute under the All Writs Act of 1789 unless Apple agreed to develop a tool for breaking into encrypted iPhones and to provide it to the FBI. Apple declined, and eventually the FBI hired a company that had already developed such a tool to do the job for them, thus eliminating the threat against Apple. (The terrorist's iPhone contained nothing of interest.)
This episode struck people as sufficiently stupid and disgusting that the Department of Justice asked its Office of the Inspector General to prepare a report explaining exactly what happened and why. The report is now available (with redactions):
“A Special Inquiry Regarding the Accuracy of FBI Statements concerning Its Capabilities to Exploit an iPhone Seized during the San Bernardino Terror Attack Investigation”
Oversight and Review Division, Office of the Inspector General, U.S. Department of Justice, March 2018
According to the report, one branch of the FBI, the Remote Operations Unit (ROU) of the Operational Technology Division, had already hired another outside company to develop a tool that would break into that iPhone, and this vendor successfully demonstrated the tool on March 16, 2016. However, the ROU didn't tell anyone else in the FBI about this accomplishment, and the separate branch of the FBI that was responsible for the investigation of the suspected terrorist never asked the ROU about it. That silence arose partly, perhaps, because the FBI wanted to establish a legal precedent for bullying Apple and other tech companies into doing its work for it, but also because most of the stuff that the ROU develops is classified, and using classified tools to acquire key evidence in a criminal case is generally a bad idea, since the discovery process can easily reveal the existence and nature of those tools.
In practice, the Department of Justice frequently uses classified tools to acquire key evidence in criminal cases because they can often get away with it, but it still isn't a good idea, and the FBI shouldn't promote it.
However, the Inspector General's report recommends that the various branches of the FBI not withhold information about hacking tools from one another, and it encourages the FBI to complete the reorganization that it has already begun “to consolidate resources to address the ‘Going Dark’ problem and improve coordination between the units that work on computer and mobile devices.”
The Cryptography Fellow at the Stanford Center for Internet and Society points out the foreseeable consequences:
“The Dark Side of the ‘Apple vs. FBI’ OIG Report”
Riana Pfefferkorn, Center for Internet and Society, April 18, 2018
If the OIG report prompts the FBI to give the CEAU [Cryptographic and Electronic Analysis Unit], which focuses on criminal matters, more access to tools developed or acquired by ROU, which focuses on national security matters, that could have a detrimental effect on federal criminal cases. When seeking search and seizure warrants, the FBI may not fully explain to judges that they are asking for authorization to use sophisticated, technological techniques to extract evidence from defendants' devices. In the resulting prosecutions, the government may refuse to disclose information about the classified technique, or even its existence, to defense counsel or experts. That secrecy will impair the court's truth-seeking function as well as the defendant's ability to mount a defense.
What is more, removing the divide between criminal and national security tools could ultimately hurt the FBI, too. If courts do order disclosure of the FBI's techniques in criminal cases, the FBI's national security and intelligence units might decide that they cannot risk using those techniques anymore. That is a significant reason why the wall was there in the first place: to protect those missions. …
It is ironic that the OIG report into the FBI's behavior during Apple vs. FBI may lead to the FBI's criminal investigators achieving that case's objective: getting more capabilities to crack into digital devices.
The ethical and prudential faults in this situation just go on and on: A company that discovers flaws in iPhone security has an ethical responsibility to report those flaws to Apple so that they can be fixed, instead of concealing the vulnerabilities and selling exploitation tools to other parties. The FBI certainly should not be hiring companies to produce such tools. If it does acquire such tools, the FBI also has an ethical responsibility to report the flaws to Apple instead of exploiting them. It also has an ethical responsibility to try to get them declassified before exploiting them, since a domestic law-enforcement organization does not need and should not have national-security clearances and should not rely on them in day-to-day operations if it does have them.
If the Remote Operations Unit does acquire and exploit classified system-cracking tools, it has a prudential obligation to make its resources available wherever they are needed within the agency and so should not conceal such tools from other branches of the FBI. But the CEAU should not use such tools in criminal investigations, for the reasons that Pfefferkorn explains: Doing so breaks the prosecution of such cases. Indeed, the Department of Justice should not even use evidence acquired through the use of classified system-cracking tools, precisely because judges should exclude such evidence and any inferences based on it.
Our institutions are so thoroughly shot through with unethical, unprofessional, and corrupt misbehavior that it is hard even to figure out where a reform project should begin.
A corporate portrait of Palantir, which collects and mines data about people.
“Palantir Knows Everything about You”
Peter Waldman, Lizette Chapman, and Jordan Robertson, Bloomberg, April 19, 2018
“‘Eternal Flaming Wheelbarrow Full of Cash’ Picked as Global War on Terror Memorial”
“Dirty”, The Duffel Blog, April 16, 2018
“Our veterans deserve a memorial that accurately captures the spirit of their war,” said Park Service spokesman Tim Taylor. “And I think we've really nailed it with this design.”
The approved design will incorporate elements of other famous memorials, most notably a gas-powered eternal flame similar to one at President John F. Kennedy's grave, in nearby Arlington National Cemetery. However, the GWOT memorial's eternal flame will burn piles of real U.S. currency to reflect the enormous expense of waging war against an abstract concept, and visiting dignitaries, instead of laying a wreath at the site, will be instructed to honor GWOT veterans by ceremoniously shoveling stacks of cash into the flames, officials said.
The tire of the wheelbarrow will be deflated, to reflect the American experience of becoming mired in an impossible position with no exit strategy or means of withdrawal. One handle of the wheelbarrow will be broken, to symbolize how unwieldy the campaign has been for the military leadership tasked with directing the war effort.
The wheelbarrow will also be adorned with a yellow ribbon bumper sticker, recalling the tremendous public support for the Global War on Terror, provided it didn't require any effort or personal sacrifice. …
The memorial, which will honor veterans of the wars in Iraq and Afghanistan, as well as veterans of U.S. intervention in Libya, Syria, Yemen, Somalia, Niger, Mali, Mauritania, Algeria, and [Redacted], will be placed just outside the National Mall, where it will likely attract public attention only when necessary or convenient for political points.
The security of computer systems, particularly those used in voting machines, is so inadequate that it would be prudent to switch over to paper ballots and paper voter-registration lists.
“American Elections Are Too Easy to Hack. We Must Take Action Now”
Bruce Schneier, The Guardian, April 18, 2018
For some reason, Schneier doesn't argue for switching over to paper for the process of collating, summing, and tabulating election results, although most of his observations about the need for an independently auditable paper trail apply to those steps in the election process as well.
A number of high-profile technology companies have recently composed and adopted a pledge to implement and improve defensive measures for computer and network security.
“Cybersecurity Tech Accord”
ABB, ARM, Avast, Bitdefender, BT, CA Technologies, Cisco, Cloudflare, DataStax, Dell, DocuSign, Facebook, Fastly, FireEye, F-Secure, GitHub, Guardtime, HP, Hewlett Packard Enterprise, Intuit, Juniper Networks, LinkedIn, Microsoft, Nielsen, Nokia, Oracle, RSA, SAP, Stripe, Symantec, Telefónica, Tenable, Trend Micro, and VMware, April 17, 2018
Malicious actors, with motives ranging from criminal to geopolitical, have inflicted economic harm, put human lives at risk, and undermined the trust that is essential to an open, free, and secure internet. Attacks on the availability, confidentiality, and integrity of data, products, services, and networks have demonstrated the need for constant vigilance, collective action, and a renewed commitment to cybersecurity.
Protecting our online environment is in everyone's interest. Therefore we — as enterprises that create and operate online technologies — promise to defend and advance its benefits for society. Moreover, we commit to act responsibly, to protect and empower our users and customers, and thereby to improve the security, stability, and resilience of cyberspace.
To this end, we are adopting this Accord and the principles below:
1. WE WILL PROTECT ALL OF OUR USERS AND CUSTOMERS EVERYWHERE. …
2. WE WILL OPPOSE CYBERATTACKS ON INNOCENT CITIZENS AND ENTERPRISES FROM ANYWHERE.
▪ We will protect against tampering with and exploitation of technology products and services during their development, design, distribution and use.
▪ We will not help governments launch cyberattacks against innocent citizens and enterprises from anywhere.
3. WE WILL HELP EMPOWER USERS, CUSTOMERS AND DEVELOPERS TO STRENGTHEN CYBERSECURITY PROTECTION. …
4. WE WILL PARTNER WITH EACH OTHER AND WITH LIKEMINDED GROUPS TO ENHANCE CYBERSECURITY.
Observing these principles will require some extreme adjustments in corporate culture and business models for some of these companies. I hope it happens, but I'm not expecting anything.
The Canadian province of Nova Scotia maintains a public database of government documents that have been released in response to freedom-of-information requests and provides a Web interface to it. A nineteen-year-old Canadian student who was interested in learning about a labor dispute involving teachers in the province found some relevant files in that database but had difficulty searching for the ones he wanted. Since the Web pages for all of the documents had easily predictable URLs, he wrote a script to run through the URLs and download all of the documents, intending to go through them off line with better search tools.
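A script like the one described needs nothing more sophisticated than URL arithmetic. Here is a minimal sketch, assuming a made-up URL pattern and ID range (the real Nova Scotia site's layout is not reproduced here):

```python
# Sketch of enumerating sequentially numbered document URLs.
# The base URL and the ID range are hypothetical stand-ins.
from urllib.request import urlretrieve

BASE = "https://example.gov.ns.ca/foi/document/{id}.pdf"

def candidate_urls(start: int, stop: int) -> list[str]:
    """Build the list of URLs to fetch, one per document ID."""
    return [BASE.format(id=i) for i in range(start, stop)]

def download_all(start: int, stop: int) -> None:
    """Fetch each document, saving it under its own file name."""
    for url in candidate_urls(start, stop):
        urlretrieve(url, url.rsplit("/", 1)[-1])
```

That this dozen-line loop over published pages could be charged as “unauthorized use of a computer” is exactly what makes the case alarming.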
It turns out that about two hundred fifty of the seven thousand documents in the database contained personally identifiable information that the provincial government had failed to remove before putting the documents on line.
Naturally, it's not the government that is in trouble as a result of this blunder. When the authorities discovered that the student had downloaded these published documents, they charged him with “unauthorized use of a computer.” He now faces up to ten years in prison.
He lives at home with his parents and younger siblings. The police staged a home invasion, tore up the house, confiscated the student's computers and gear, his father's work computers and cell phone, and his brother's computer, arrested his brother on the street, and detained and questioned his thirteen-year-old sister in a police car.
“Teen Charged in Nova Scotia Government Breach Says He Had ‘No Malicious Intent’”
Jack Julian, CBC News, April 16, 2018
Now that Google has learned to scrutinize Android apps, refusing to distribute most of the apps that contain malware through the Google Play store, makers of malware targeted at specific institutions and groups have learned to postpone their malware downloads until after the apps have been installed and configured. That way, Google doesn't get the opportunity to detect the malware beforehand, and the innocent-appearing app can acquire all the privileges it needs to download and activate the malware once the target's defenses are down.
“Fake Android Apps Used for Targeted Surveillance Found in Google Play”
Zack Whittaker, Zero Day, April 16, 2018
Dental-insurance companies are big fans of network-connected toothbrushes and will send them out as freebies — repeatedly and insistently.
“Our Dental Insurance Sent Us ‘Free’ Internet-Connected Toothbrushes. And This Is What Happened Next”
Wolf Richter, Wolf Street, April 14, 2018
The author's family eventually figured out that you can use the toothbrush, and even switch on the electricity so that the brush head vibrates automatically, without activating the network connection, provided you're careful to switch off Bluetooth on your phone before brushing your teeth. Now, however, they worry about the next step in the process:
We're expecting a series of emails that start out gently, and every two weeks or so get increasingly emphatic, telling us that we better start setting up the Internet connection to our toothbrushes and start sending our data to the cloud.
What's next? The day when we cannot get dental insurance without internet-connected toothbrushes. …
For now, our household is still able to at least partially block this intrusion. But there will be a day when we will be forced to surrender our data to get health insurance, drive a car, or have a refrigerator and a thermostat in the house. This is where this is going. Why? Because data is where the money is. And because many consumers are embracing it.
“Ex-FBI Director Comey in New Book Says Trump Is ‘Unethical and Untethered to Truth,’ Demanded Loyalty Like a Mafia Boss”
Associated Press, April 12, 2018
Well, I suppose it had to happen sooner or later. If you spend seventy years electing one corrupt war criminal after another to the presidency of the United States, eventually you're bound to wind up with one who is unethical.
OK, just one more post about Facebook, and then I'm swearing off for at least two weeks.
One of the problems with knowledge claims about future events is that the causal chains that lead to those events often include decisions that people haven't made yet, decisions that in turn depend on the outcomes of contingent events that haven't yet occurred. Facebook is offering a new product that gets around this epistemological difficulty by waving crystalline neural networks at it.
“Facebook Uses Artificial Intelligence to Predict Your Future Actions for Advertisers, Says Confidential Document”
Sam Biddle, The Intercept, April 13, 2018
Instead of merely offering advertisers the ability to target people based on demographics and consumer preferences, Facebook instead offers the ability to target them based on how they will behave, what they will buy, and what they will think. These capabilities are the fruits of a self-improving, artificial intelligence-powered prediction engine, first unveiled by Facebook in 2016 and dubbed “FBLearner Flow.”
One slide in the document touts Facebook's ability to “predict future behavior,” allowing companies to target people on the basis of decisions they haven't even made yet. This would, potentially, give third parties the opportunity to alter a consumer's anticipated course. …
[Law professor Frank Pasquale] told The Intercept that Facebook's behavioral prediction work is “eerie” and worried how the company could turn algorithmic predictions into “self-fulfilling prophecies,” since “once they've made this prediction they have a financial interest in making it true.” That is, once Facebook tells an advertising partner you're going to do some thing or other next month, the onus is on Facebook to either make that event come to pass, or show that they were able to help effectively prevent it (how Facebook can verify to a marketer that it was indeed able to change the future is unclear).
Of course, such a prediction system can't operate transparently. If there is any way for targets to become aware of the predictions that are made about their future behavior, the predictions themselves enter the causal chain that results in the future decisions, thus undermining the basis for the predictions. To take the simplest and most extreme case, what happens if a Facebook user resolves to do the opposite of whatever FBLearner Flow predicts?
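The “do the opposite” resolution is a diagonalization argument, and a few lines make it concrete: against a contrarian user who learns the prediction, every predictor is wrong by construction. The two-choice setup below is mine, purely for illustration:

```python
# A user who always does the opposite of what is predicted about them
# defeats any predictor that announces its prediction.

CHOICES = ("buy", "skip")

def contrarian(prediction: str) -> str:
    """Do whatever the predictor said you wouldn't."""
    return "skip" if prediction == "buy" else "buy"

# No matter which prediction the engine emits, it comes out false.
assert not any(contrarian(p) == p for p in CHOICES)
```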
It occurs to me that the perfect use for this tool would be to predict which companies' advertising managers are gullible enough to be deceived by this hokum and which ones will decide to spend their advertising budgets in less carnivalesque ways. Then Facebook could perhaps develop a slicker pitch to alter the anticipated course of the second group of marks.
One downside to the emergence of user control of data collection and access as a political meme and substitute for reasoned argument is the likely countermove from the surveillance industry: conflating the user's right to privacy with the corporation's responsibility for confidentiality. Surveillance is unethical and irresponsible even when the corporation carefully manages third-party access to the dossiers it compiles.
“When the Business Model Is the Privacy Violation”
Arvind Narayanan, Freedom to Tinker, April 12, 2018
In other situations, the intended use is the privacy violation. The most prominent example is the tracking of our online and offline habits for targeted advertising. This business model is exactly what people object to, for a litany of reasons: targeting is creepy, manipulative, discriminatory, and reinforces harmful stereotypes. The data collection that enables targeted advertising involves an opaque infrastructure to which it's impossible to give meaningfully informed consent. …
In response to privacy laws, companies have tried to find technical measures that obfuscate the data but allow them [to] carry on with the surveillance business as usual. But that's just privacy theater. Technical steps that don't affect the business model are of limited effectiveness, because the business model is fundamentally at odds with privacy; this is in fact a zero-sum game. …
Privacy advocates should recognize that framing a concern about data use practices as a privacy problem is a double-edged sword. Privacy can be a convenient label for a set of related concerns, but it gives industry a way to deflect attention from deeper ethical questions by interpreting privacy narrowly as confidentiality.
“Every Positive Integer Is a Sum of Three Palindromes”
Javier Cilleruelo, Florian Luca, and Lewis Baxter, arXiv, June 17, 2017
The authors provide an algorithm for finding the palindromic addends but don't include any source code. The algorithm works for any integer base greater than or equal to 5.
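The paper's constructive algorithm is intricate, but the theorem is easy to check empirically for small numbers. Here is a minimal brute-force sketch in base 10 (this is not the authors' algorithm, just a way to verify instances of the claim; zero is counted as a palindrome):

```python
def is_palindrome(n, base=10):
    """Return True if n's digits in the given base read the same both ways (0 counts)."""
    digits = []
    while n:
        digits.append(n % base)
        n //= base
    return digits == digits[::-1]

def three_palindromes(n, base=10):
    """Brute-force search for palindromes a, b, c with a + b + c == n."""
    pals = [p for p in range(n + 1) if is_palindrome(p, base)]
    pal_set = set(pals)
    for a in pals:
        for b in pals:
            if a + b > n:          # pals is sorted, so no larger b can work
                break
            if (n - a - b) in pal_set:
                return a, b, n - a - b
    return None  # never reached for positive n, if the theorem holds

# Example: decompose 2018 into three base-10 palindromes.
print(three_palindromes(2018))
```

The quadratic search is fine for small n; the point of the paper's algorithm is that it runs in time polynomial in the number of digits rather than in n itself.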
The case that was supposed to determine whether the government can force Microsoft to turn over its users' data stored on servers in a foreign country is effectively over. Both sides have agreed that the case is moot now that the Clarifying Lawful Overseas Use of Data Act is law and the Department of Justice has procured a warrant under that law.
“What Will Microsoft And Ireland Do with the New CLOUD Act Warrant?”
Albert Gidari, Center for Internet and Society, April 9, 2018
The author raises several possible courses of action for Microsoft: it could try to quash the warrant somehow; it could rely on the Irish government (possibly prompted by Microsoft itself) to insist that the United States work through the Mutual Legal Assistance Treaty that is supposed to ensure bilateral cooperation in such cases; or it could simply roll over and give up the customer data.
My guess is that Microsoft will choose the third option. It has already gotten what it wanted out of this lawsuit: a public-relations boost for its claim to protect users' data, some spiteful retaliation against the Department of Justice, and no real change in its close relations with the NSA, the FBI, and the Department of Homeland Security.
If you use the Internet or interact regularly with people who do, companies like Google and Facebook have compiled dossiers on you, regardless of whether you have ever set up accounts with them or used their services.
“Facebook Is Tracking Me Even Though I'm Not on Facebook”
Daniel Kahn Gillmor, Free Future, American Civil Liberties Union, April 5, 2018
Nearly every Website you visit that has a “Like” button is actually encouraging your browser to tell Facebook about your browsing habits. Even if you don't click on the “Like” button, displaying it requires your browser to send a request to Facebook's servers for the “Like” button itself. That request includes information mentioning the name of the page you are visiting and any Facebook-specific cookies your browser might have collected. (See Facebook's own description of this process.) …
This makes it possible for Facebook to create a detailed picture of your browsing history — even if you've never even visited Facebook directly, let alone signed up for a Facebook account.
Think about most of the web pages you've visited — how many of them don't have a “Like” button? If you administer a website and you include a “Like” button on every page, you're helping Facebook to build profiles of your visitors, even those who have opted out of the social network. …
The profiles that Facebook builds on non-users don't necessarily include so-called “personally identifiable information” (PII) like names or email addresses. But they do include fairly unique patterns. Using Chromium's NetLog dumping, I performed a simple five-minute browsing test last week that included visits to various sites — but not Facebook. In that test, the PII-free data that was sent to Facebook included information about which news articles I was reading, my dietary preferences, and my hobbies.
Given the precision of this kind of mapping and targeting, “PII” isn't necessary to reveal my identity. How many vegans examine specifications for computer hardware from the ACLU's offices while reading about Cambridge Analytica? Anyway, if Facebook combined that information with the “web bug” from the email mentioned above — which is clearly linked to my name and e-mail address — no guesswork would be required.
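The mechanism here is ordinary HTTP: rendering the embedded button means fetching it from Facebook's servers, and that request automatically carries the address of the page you are reading (in the Referer header) along with any Facebook cookies. Roughly like this (the endpoint path and cookie names are illustrative, not Facebook's actual ones):

```http
GET /plugins/like.php?href=https://example-news-site.com/some-article HTTP/1.1
Host: www.facebook.com
Referer: https://example-news-site.com/some-article
Cookie: tracking_id=...; session=...
```

Nothing about this requires a Facebook account; the cookie just has to be stable enough to stitch the requests together.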
If you want to target advertising effectively or persecute people for their political views or social status, building the dossiers at the vertices of the social graph is only the beginning. You can make many more reliable inferences if you identify and label the edges of the graph and study not only your target nodes' neighbors, but also their neighbors' neighbors.
“Stanford Researchers Find That Friends of Friends Reveal Hidden Online Traits”
Tom Abate, Stanford News, April 5, 2018
Researchers who have studied social media relationships have found that we tend to friend people of roughly our own age, race and political belief. … These traits are easily and accurately inferred from friendship studies. …
But not all unknown traits are easy to predict using friend studies. Gender, for instance, exhibits what researchers call weak homophily in online contexts. …
The group's new research shows that it's possible to infer certain concealed traits — gender being the first — by studying the friends of our friends.
“Chinese Government Forces Residents to Install Surveillance App with Awful Security”
Joseph Cox, Motherboard, April 9, 2018
JingWang scans for specific files stored on the device, including HTML, text, and images, by comparing the phone's contents to a list of MD5 hashes. …
JingWang also sends a device's phone number, device model, MAC address, unique IMEI number, and metadata of any files found in external storage that it deems dangerous to a remote server. …
As for handling that data, … JingWang exfiltrated data without any sort of encryption, instead transferring it all in plaintext. The app updates are not digitally signed either, meaning they could be swapped for something else without a device noticing.
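Hash-blocklist scanning of the kind described is trivial to implement. A minimal sketch in Python (the function names and blocklist are hypothetical; the actual app is an Android binary, and this is just the technique, not its code):

```python
import hashlib
import os

def md5_of_file(path, chunk_size=65536):
    """Compute the MD5 digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def scan_for_flagged_files(root, flagged_hashes):
    """Walk a directory tree and return paths whose MD5 appears in the blocklist."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if md5_of_file(path) in flagged_hashes:
                matches.append(path)
    return matches
```

Note that MD5 is long broken as a cryptographic hash, and — per the article — the app then shipped its findings to the server in plaintext, so the whole pipeline is as sloppy as it is invasive.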
“There Once Was a Singer of Old”
“worddevourer”, RNG+Decision Engine, January 2018
“Don't Worry about the Ethics of Self-Driving Cars”
Cathy O'Neil, Bloomberg View, April 6, 2018
The problem arises from the subtlety of most algorithmic failures. Nobody, especially not the people being assessed, will ever know exactly why they didn't get that job or that credit card. The code is proprietary. It's typically not well understood, even by the people who build it. There's no system of appeal and often no feedback to improve decision-making over time. The failures could be getting worse and we wouldn't know it.
A while ago, journalists were writing about how good Silicon Valley companies are with software and how surprisingly bad they are with hardware such as drones and spaceships. I think that's dead wrong. Not because startups have been building great delivery drones, but because there's absolutely no reason to think they're doing much better with software. We simply don't know how to look for their failures.
And here we have it:
While Zuckerberg claimed that major transparency efforts are on the company's horizon, he seemed dismissive of users' concerns about their privacy. The recent movement to #DeleteFacebook, he said, had “no meaningful impact” on the company or Facebook usage.
“Facebook knows so much about you,” he added, “because you chose to share it with your friends and put it on your profile.”
“Mark Zuckerberg: ‘It Was My Mistake’ Facebook Compromised Data of 87 Million Users”
Sarah Emerson, Motherboard, April 4, 2018
Facebook's actions and policy changes are about tightening its control over access to the dossiers it compiles, which are now the company's intellectual property and primary business asset. Facebook has zero interest in its users' so-called “concerns about their privacy” and is not impressed by the feeble attempts of a few rabble-rousers to impede the juggernaut.
Just to drive home the point, Facebook's Chief Technology Officer recently conceded in the company blog that “malicious actors” have acquired “most” Facebook users' profile information. (Naturally, he buries the lede in the seventh paragraph of the post.)
“An Update on Our Plans to Restrict Data Access on Facebook”
Mike Schroepfer, Facebook Newsroom, April 4, 2018
Until today, people could enter another person's phone number or email address into Facebook search to help find them. … However, malicious actors have also abused these features to scrape public profile information by submitting phone numbers or email addresses they already have through search and account recovery. Given the scale and sophistication of the activity we've seen, we believe most people on Facebook could have had their public profile scraped in this way.
“Your Own Devices Will Give the Next Cambridge Analytica Far More Power to Influence Your Vote”
Justin Hendrix and David Carroll, MIT Technology Review, April 2, 2018
Though it's not clear if Cambridge Analytica's behavioral profiling and microtargeting had any measurable effect on the 2016 US election, these technologies are advancing quickly — faster than academics can study their effects and certainly faster than policymakers can respond. The next generation of such firms will almost certainly deliver on the promise. …
In the next few years, … we'll see the convergence of multiple disciplines, including data mining, artificial intelligence, psychology, marketing, economics, and experiential design theory. These methods will combine with an exponential increase in the number of surveillance sensors we introduce into our homes and communities, from voice assistants to internet-of-things devices that track people as they move through the day. Our devices will get better at detecting facial expressions, interpreting speech, and analyzing psychological signals.
In other words, the machines will know us better tomorrow than they do today. They will certainly have the data. While a General Data Protection Regulation is about to take effect in the European Union, the US is headed in the opposite direction. Facebook may have clamped down on access to its data, but there is more information about citizens on the market than ever before … not to mention all the data sloshing around thanks to hacks and misuse.
The “exponential increase in the number of surveillance sensors we introduce into our homes and communities” is already well along. We're past the knee of the curve and climbing the shaft of the hockey stick. Soon the only constraints will be bandwidth and network congestion, as a trillion cameras, microphones, and sensors all try to deliver their data in real time to the marketers, propagandists, spies, and law-enforcement teams poised in eager expectation.
A security researcher who is also a Panera Bread customer and has a customer account at the Panera Web site discovered a vulnerability that allowed any account holder to download Panera's dossier about any other account holder. He immediately sent an e-mail to the company's security address, but it bounced. He looked up the company's chief of security, who ignored his Twitter, LinkedIn, and e-mail messages until the researcher found a third party to effect a proper introduction. At that point the chief of security explained that he had thought the earlier messages were either a hoax or a scammer's attempt to drum up business.
Once communication had been established, the researcher asked for and received the chief of security's PGP public key and sent him the encrypted version of a full report on the vulnerability. The chief of security did not reply to the researcher's repeated inquiries about whether he had received and successfully decrypted the report, but ultimately declared, “Thank you for the information we are working on a resolution.”
The researcher then checked every month or so to see whether the vulnerability had been fixed. It never was. After a few months, the researcher published the details and called in some prominent reporters in the field (notably Brian Krebs of Krebs on Security and Dissent Doe of DataBreaches.net). Krebs's article on the subject managed to elicit a reaction from Panera: the company took its Web site down for a while, pretended to fix the problem, and published a press release saying that the breach affected about ten thousand customers. When the site came back up, an investigative team at Hold Security (prompted by Krebs) found that the breach affected all forty-one million of Panera's account holders, and moreover that Panera had made the same kind of mistake in many other places on its Web site, leaking a lot more data about the company.
“No, Panera Bread Doesn't Take Security Seriously”
Dylan Houlihan, Medium, April 2, 2018
1. We could collectively afford to be more critical of companies when they issue reactionary statements to do damage control. We need to hold them to a higher standard of accountability. I honestly don't know what that looks like for the media, but there has to be a better way to do thorough, comprehensive reporting on this.
2. We need to collectively examine what the incentives are that enabled this to happen. I do not believe it was a singular failure with any particular employee. It's easy to point to certain individuals, but they do not end up in those positions unless that behavior is fundamentally compatible with the broader corporate culture and priorities.
3. If you are a security professional, please, I implore you, set up a basic page describing a non-threatening process for submitting security vulnerability disclosures. Make this process obviously distinct from the “Hi, I think my account is hacked” customer support process. Make sure this is immediately read by someone qualified and engaged to investigate those reports, both technically and practically speaking. You do not need to offer a bug bounty or a reward. Just offering a way to allow people to easily contact you with confidence would go a long way.
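One lightweight convention for exactly this — a draft standard at the time, later published as RFC 9116 — is a security.txt file served at a well-known URL. A sketch, with placeholder addresses:

```
# Served at https://example.com/.well-known/security.txt
Contact: mailto:security@example.com
Encryption: https://example.com/pgp-key.txt
Policy: https://example.com/vulnerability-disclosure-policy
```

Had Panera published something like this, the researcher's first e-mail would have had somewhere to land.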
Now that Siri, Alexa, Cortana, and their friends are pretty well established as commonplace services in homes, apartments, and hotel rooms, and people have demonstrated their willingness to accept and rely on devices with always-on microphones and cameras, the companies that make them are sneaking more weasel words into their nominal commitments to user privacy.
The peg for this story is the reporter's discovery of patent applications, filed by Amazon and Google, for using the data generated by continuous monitoring of the always-on mikes to target advertising more accurately, to determine people's moods, to infer their state of health, to find out whether a child is up to some minor mischief (and generate an appropriate reprimand), and so on. As the reporter points out, companies often generate patent applications like these regardless of whether they have any intention of using the technology (and, indeed, regardless of whether the technology would actually work). On the other hand, such documents reveal how the big surveillance capitalism companies are thinking about the future of their products and express in a more genuine and sincere way the companies' attitudes towards the privacy of their users.
“Hey, Alexa, What Can You Hear? And What Will You Do with It?”
Sapna Maheshwari, The New York Times, March 31, 2018
An active user of Internet services decided to take advantage of offers by Google and Facebook to provide him with copies of his dossier. They were much more comprehensive and diverse in their sources than he expected. Not surprisingly, they included a lot of files, photographs, and e-mail messages that he “deleted,” including, for instance, his PGP private key.
His dossier at Google ran to 5.5 gigabytes, and the one that Facebook compiled was 600 megabytes.
“Are You Ready? Here Is All the Data Facebook and Google Have on You”
Dylan Curran, The Guardian, March 30, 2018