A researcher in artificial intelligence has collected some leading indicators of a decline in the use of deep-learning systems and in the irrational exuberance of potential consumers.
“AI Winter Is Well on Its Way”
Filip Piękniewski, Piekniewski's Blog, May 29, 2018
“Deep Learning: A Critical Appraisal”
Gary Marcus, arXiv, January 2018
In thinking about algorithms, it is essential to have good notations for expressing them, for conveying the ideas of the algorithms from one human mind to another clearly, unambiguously, and without unnecessary psychological strain or trauma. We also want our expressions of those algorithms to be executable, and we even care a little bit about making the work of translating from our programming notations into sequences of machine instructions efficient and straightforward, but that's of secondary importance. Well-designed programming languages put the interests of the human beings who have to write and read programs ahead of the interests of the creators of compilers and interpreters.
However, not many programming languages are well-designed. Most language creators are preoccupied with their implementations and with some particular feature set that they can implement extraordinarily efficiently. They don't give much thought to design and are oblivious to the issues that a skillful designer of notations takes into account. Nor do they pay much attention to the history of programming languages, to the lessons that long and painful experience should already have taught us or to the brilliant insights of designers of languages that are now obsolete and forgotten.
We now have immense code libraries that should never be used (or, worse, re-used) because it's too difficult to debug them, to adapt them, to prove their correctness, or to estimate their resource use, even asymptotically and in general terms. Careless programming is the principal cause of this catastrophe, but I allocate some of the blame to poorly designed programming languages, which make it almost impossible to express algorithms accurately and intelligibly.
In the early years of C++, its creator, Bjarne Stroustrup, built the language up from C primarily by accretion. Initially, C++ was implemented as a preprocessor that produced standard C, and the first novel elements of the C++ language were the features that introduced object orientation: classes, methods, and inheritance. The preprocessor converted these into standard C typedefs and functions.
Once the project got rolling, though, Stroustrup started adding other features, accommodating first requests from users and then suggestions from other language proponents. If they seemed like good ideas that could be easily implemented, they went into the language, making C++ more powerful and, in some ways, more expressive, but also more difficult to work with and to understand. Stroustrup didn't give enough thought to the interactions among these new features or to their effects on the ability of programmers, particularly new programmers, to write and read C++ code. By the time C++ was first standardized, the language standard was six times as long as the one that defined C and full of bizarre corner cases, unspecified and undefined behaviors, and opportunities for misinterpretation by implementers seeking short cuts. The design of the language was a mess, and it has been a mess ever since.
C++ has, of course, continued to evolve, and ingenious contributors have constantly proposed extensions, improvements, and features of all kinds. However, Stroustrup now opposes most of these suggestions, even when he thinks that they might be good ideas in principle, because he knows that the language is already too large, too complicated, too hard to learn, and too hard to use effectively as a way of expressing algorithms. Most of the proposals would make it even worse in these respects.
Here are two of his contributions (to the C++17 and C++20 working groups, respectively) making this point:
“Thoughts about C++17”
Bjarne Stroustrup, May 15, 2015
It seems to be a popular pastime to condemn C++ for being a filthy mess caused by rampant design-by-committee. This has been suggested repeatedly since before the committee was founded, but I feel the situation is now far worse. C++ is larger now (especially when we consider the standard library). That, and the variety of current proposals make the accusation credible.
“Remember the Vasa!”
Bjarne Stroustrup, March 6, 2018
We are on a path to disaster through enthusiasm and design-by-committee …
C++17 did little to make our foundation more solid, regular, and complete. Instead, it added significant surface complexity and increased the number of features people need to learn. C++ could crumble under the weight of these — mostly not quite fully-baked — proposals. We should not spend most [of] our time creating increasingly complicated facilities for experts, such as ourselves.
We need a reasonably coherent language that can be used by “ordinary programmers” whose main concern is to ship great applications on time. We now have about 150 cooks; that's not a good way to get a tasty and balanced meal.
We are on the path to something that could destroy C++. We must get off that path!
Stroustrup's repentance probably came too late to save C++, but perhaps the next generation of language designers can learn from his tragedy.
Amazon is now claiming that the mishap reported here, in which an Echo recorded a random chunk of household conversation and e-mailed it to a third party, resulted from a cascade of four misinterpretations of elements of the conversation: Echo misheard something as a wake word, something else as a "send message" request, something else again as the recipient's name, and yet another thing as a confirmation.
“Amazon Explains How Alexa Recorded a Private Conversation and Sent It to Another User”
Tom Warren, The Verge, May 24, 2018
Each Echo maintains a log of its operations, and one tech-savvy user decided to look through this log to find out how often the device wakes itself up “accidentally.” The answer turns out to be “several times a day, for no obvious reason.”
“Yes, Alexa Is Recording Mundane Details of Your Life, and It's Creepy as Hell”
Rachel Metz, MIT Technology Review, May 25, 2018
I started wondering: what is it picking up on at my house when we're not talking to it directly?
So I checked my Alexa history (you can do that through the “settings” portion of the Amazon Alexa smartphone app) to see what kinds of things it recorded without my knowledge.
That's when the hairs on the back of my neck started to stand up. …
It's heard me complain to my dad about something work-related, chide my toddler about eating dinner, and talk to my husband — the kinds of normal, everyday things you say at home when you think no one else is listening. …
I invited Alexa into our living room to make it easier to listen to Pandora and occasionally check the weather, not to keep a log of intimate family details or record my kid saying “Mommy, we going car” and forward it to Amazon's cloud storage.
My guess is that the sampling is not really accidental, but reflects Amazon's desire to collect additional data about its customers. I suppose that the primary goal is to improve the Echo's voice recognition by getting a large enough data set for the machine-learning techniques to work a little more reliably. On the other hand, Amazon has many other possible uses for such a collection. The fact that the Echo sometimes mishears something as its wake word provides a convenient cover story.
An Amazon Echo “accidentally” recorded a couple's private conversation in their home and e-mailed the recording to one of the husband's employees. Amazon investigated and found an explanation that supposedly satisfied the company engineers, but did not divulge that explanation either to the couple or to the general public, instead asserting that they had “determined this to be an extremely rare occurrence” and that “Amazon takes privacy very seriously.”
“Woman Says Her Amazon Device Recorded Private Conversation, Sent It Out to Random Contact”
Gary Horcher, KIRO-TV, May 24, 2018
“IDEA — Nonverbal Algorithm Assembly Instructions”
Sándor P. Fekete, Sebastian Moor, and Sebastian Stiller, IDEA, May 24, 2018
We have now reached the point at which it is foolish to register at most corporate Web sites, not just because they will send spam to the e-mail address you provide, but also because registration implies acceptance of the site's terms of service.
“Registering for Things on the Internet Is Dangerous These Days”
Chris Siebenmann, Chris's Wiki, May 24, 2018
In the old days, terms of service were not all that dangerous and often existed only to cover the legal rears of the service you were registering with. Today, this is very much not the case … Most ToSes will have you agreeing that the service can mine as much data from you as possible and sell it to whoever it wants. Beyond that, many ToSes contain additional nasty provisions like forced arbitration, perpetual broad copyright licensing for whatever you let them get their hands on (including eg your profile picture), and so on. …
The corollary to this is that you should assume that anyone who requires registration before giving you access to things when this is not actively required by how their service works is trying to exploit you. For example, “register to see this report” should be at least a yellow and perhaps a red warning sign. My reaction is generally that I probably don't really need to read it after all.
“Economic Predictions with Big Data: The Illusion of Sparsity”
Domenico Giannone, Michele Lenza, and Giorgio E. Primiceri, Federal Reserve Bank of New York, April 2018
The tl;dr version:
“Economic Predictions with Big Data: The Illusion of Sparsity”
Domenico Giannone, Michele Lenza, and Giorgio E. Primiceri, Liberty Street Economics, May 21, 2018
Seeking to explain why predictive economic models perform so poorly when applied to cases outside of their training set, the authors generate and study a large number of variant models for six economic phenomena (two in macroeconomics, two in microeconomics, and two in finance). Some of these models are sparse, in the sense that they posit that their predictions should depend on a small number of variables in the input data (the ones with the greatest predictive power); others are dense, allowing for dependence on many input variables.
Dense models are prone to overfitting. To prevent this, the training process identifies variables for which the training set provides only weak information and constrains their weights to be small so that their contributions to the models' predictions are limited (but usually nonzero).
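This kind of shrinkage can be sketched with ridge regression as a stand-in for the authors' actual estimation machinery (the variable counts, coefficients, and penalty below are my own illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# A "dense" setting: 50 candidate predictors, only three strong ones.
n, p = 200, 50
X = rng.normal(size=(n, p))
true_w = np.zeros(p)
true_w[:3] = [2.0, -1.5, 1.0]          # the strong predictors
y = X @ true_w + rng.normal(scale=1.0, size=n)

# Ridge regression: penalizing large weights shrinks the coefficients
# of weakly informative variables toward (but not exactly to) zero,
# so every variable keeps a limited, usually nonzero, contribution.
lam = 10.0
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

print(np.abs(w_hat[:3]).min())   # strong predictors keep large weights
print(np.abs(w_hat[3:]).max())   # weak predictors are shrunk, but nonzero
```

A sparse method would instead force most of those 47 weak coefficients exactly to zero; the dense approach keeps them all in play, just quietly.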
The predictions of sparse models are easier to interpret because they generate simpler causal explanations. In dense models, it often turns out that very many factors contribute to the prediction so that the causal explanations are muddled and vary more from one instance to another.
The authors found that most of the economic phenomena that they tried to model actually have complex causal explanations, which is why the sparse models that economists have traditionally favored don't yield accurate predictions.
“Speculative Execution, Variant 4: Speculative Store Bypass”
Jann Horn, Monorail, Project Zero, February 6, 2018
“Side-Channel Vulnerability Variants 3a and 4”
United States Computer Emergency Readiness Team, May 22, 2018
“Spectre Chip Security Vulnerability Strikes Again; Patches Incoming”
Steven J. Vaughn-Nichols, Zero Day, May 22, 2018
A professional software developer describes how he came to write software that helped the United States Army kill people. His first-person account is followed by a few similar anecdotes from other developers and observers and by some lessons about how to avoid killing people with your software.
“Don't Get Distracted”
Caleb Thompson, November 16, 2017
The project owner conveniently left out its purpose when explaining the goals. I conveniently didn't focus too much on that part. It was great pay for me at the time. It was a great project. Maybe I just didn't want to know what it would be used for. I got distracted.
“An O(N) Sorting Algorithm: Machine Learning Sorting”
Hanqing Zhao and Yuehan Luo, arXiv, May 11, 2018
The authors propose a new method for sorting a gigantic array of arbitrary values in linear time: Select a fixed number (say 1000) of values from the array and sort them. Using these values as a training set, train a three-layer neural network to estimate the position in the sorted array that any given value will occupy. Set up an array of buckets equal in size to the original array. Feed each value in the array into the neural network and put it in the bucket corresponding to the network's prediction of the value's position in the sorted array. A linear-time amount of post-processing can now ensure that every value is in a bucket that is within a fixed distance of its position in the sorted array. Apply insertion sort on the almost-sorted values in the array of buckets to build the actual sorted array. Since insertion sort runs in linear time on almost-sorted arrays, the whole process, including the training of the neural network, takes linear time.
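The pipeline can be sketched in a few lines. Here I stand in for the paper's three-layer network with a much simpler learned rank predictor (interpolation over the sorted sample), which plays the same role of estimating each value's final position:

```python
import numpy as np

def ml_style_sort(a, n_sample=1000):
    # Sketch of the paper's pipeline; the "model" here is interpolation
    # over a sorted sample rather than their three-layer neural network.
    a = np.asarray(a, dtype=float)
    n = len(a)
    # 1. Select a fixed number of values and sort them (the training set).
    sample = np.sort(np.random.choice(a, size=min(n_sample, n), replace=False))
    # 2. "Train" a rank predictor: map each value to an estimated position
    #    via the sample's empirical distribution.
    est_rank = np.interp(a, sample, np.linspace(0, n - 1, len(sample)))
    # 3. Place every value in the bucket its predicted rank indicates.
    almost_sorted = a[np.argsort(est_rank, kind="stable")]
    # 4. Insertion sort, which is linear-time on almost-sorted input.
    out = almost_sorted.tolist()
    for i in range(1, n):
        v, j = out[i], i - 1
        while j >= 0 and out[j] > v:
            out[j + 1] = out[j]
            j -= 1
        out[j + 1] = v
    return out

print(ml_style_sort([3.0, 1.0, 2.0], n_sample=3))  # → [1.0, 2.0, 3.0]
```

For well-behaved distributions the rank estimates land close to the truth, so step 4 does little work; the paper's bet is that a small network gives similarly good estimates where simple interpolation would not.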
I wouldn't have thought of that one.
Next month in arXiv: Adversarial sorting examples.
Stock photos of models portraying professionals reflect the stereotypes, misconceptions, or, um, imaginative design concepts of art directors.
“People Are Sharing Hilariously Bad Stock Photos of Their Jobs”
“Ilona”, BoredPanda, May 15, 2018
Some security researchers have discovered a new attack on PGP. They have written a paper explaining how it works and plan to publish it tomorrow, but the Electronic Frontier Foundation has learned enough about it that they are sounding an alarm even before the details are public:
“Attention PGP Users: New Vulnerabilities Require You to Take Action Now”
Danny O'Brien and Gennie Gebhart, Deeplinks, Electronic Frontier Foundation, May 13, 2018
A group of European security researchers have released a warning about a set of vulnerabilities affecting users of PGP and S/MIME. EFF has been in communication with this research team, and can confirm that these vulnerabilities pose an immediate risk to those using these tools for email communication, including the potential exposure of the contents of past messages. …
Our advice, which mirrors that of the researchers, is to immediately disable and/or uninstall tools that automatically decrypt PGP-encrypted email.
The story includes links to instructions provided by the EFF on how to temporarily disable the PGP plug-ins for Thunderbird, Apple Mail, and Outlook.

Update (2018-05-14⊺11:34:32-05:00)
The discoverers of the attack now have a Web site up and have published a draft of their paper there:
“Efail: Breaking S/MIME and OpenPGP Email Encryption Using Exfiltration Channels”
Damian Poddebniak, Christian Dresen, Jens Müller, Fabian Ising, Sebastian Schinzel, Simon Friedberger, Juraj Somorovsky, and Jörg Schwenk, May 14, 2018
There are actually two vulnerabilities. One exploits peculiarities, arguably errors, in mail user agents that parse and interpret HTML in messages after they have been decrypted. The other exploits a weakness in the OpenPGP standard: Under certain circumstances, the standard doesn't require integrity checks and doesn't specify what a decryption algorithm should do when an integrity check fails. Consequently, many mail user agents do the wrong thing when they receive a message that has been tampered with.
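A toy illustration of why a missing integrity check is dangerous (this is a bare XOR keystream, not OpenPGP's actual CFB mode, but the malleability is the same in kind): an attacker who knows a stretch of plaintext can flip ciphertext bits to produce a chosen change in the decrypted message, and decryption alone never notices.

```python
import os

# With any XOR-based stream encryption and no integrity check, flipping
# a ciphertext bit flips exactly the corresponding plaintext bit.
key_stream = os.urandom(32)

def xor_crypt(data, ks):
    return bytes(a ^ b for a, b in zip(data, ks))

plaintext = b"Send the report to alice@example"
ciphertext = xor_crypt(plaintext, key_stream)

# Attacker knows the message starts with "Send" and wants "Burn" instead.
delta = bytes(a ^ b for a, b in zip(b"Send", b"Burn"))
tampered = bytes(c ^ d for c, d in zip(delta, ciphertext)) + ciphertext[4:]

print(xor_crypt(tampered, key_stream))  # → b'Burn the report to alice@example'
```

An integrity check (a MAC, or OpenPGP's modification detection code) would make the tampering detectable; the standard's failure to require one, and to specify what to do when it fails, is the weakness Efail exploits.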
The Electronic Frontier Foundation has a follow-up, and other security authorities are providing quick analysis as well:
“Not So Pretty: What You Need to Know about E-Fail and the PGP Flaw”
Erica Portnoy, Danny O'Brien, and Nate Cardozo, Deeplinks, Electronic Frontier Foundation, May 14, 2018
“Some Notes on eFail”
Robert Graham, Errata Security, May 14, 2018
“New Vulnerabilities in Many PGP and S/MIME Enabled Email Clients”
Matthew Green, Twitter, May 14, 2018
“As If Nuremberg Never Happened”
Peter van Buren, The American Conservative, March 19, 2018
Nothing will say more about who we are, across three American administrations — one that demanded torture, one that covered it up, and one that seeks to promote its bloody participants — than whether Gina Haspel becomes director of the CIA. …
Gina Haspel is now eligible for the CIA directorship because Barack Obama did not prosecute anyone for torture; he merely signed an executive order banning it in the future. He did not hold any truth commissions, and ensured that almost all government documents on the torture program remain classified. He did not prosecute the CIA officials who destroyed videotapes of the torture scenes. …
Unless Congress awakens to confront this nightmare and deny Gina Haspel's nomination as director of the CIA, torture will have transformed us and so it will consume us. Gina Haspel is a torturer. We are torturers. It is as if Nuremberg never happened.
“objecthub,” GitHub, May 5, 2018
LispKit is a framework for building Lisp-based extension and scripting languages for macOS applications. LispKit is fully written in the programming language Swift. LispKit implements a core language based on the R7RS (small) Scheme standard. It is extensible, allowing the inclusion of new native libraries written in Swift and of new libraries written in Scheme, as well as custom modifications of the core environment, which consists of a compiler, a virtual machine, and the core libraries.
It's free software, under the Apache 2.0 license.
This seems to be a kind of macOS analogue of GNU Guile.
An attempt to identify and explain the ethical preconditions for replacing social policies with algorithmic models. It's incomplete, but the questions that are included are relevant and salient, and the cautionary tales and links are thought-provoking.
“Math Can't Solve Everything: Questions We Need to Be Asking Before Deciding an Algorithm Is the Answer”
Jamie Williams and Lena Gunn, Deeplinks, Electronic Frontier Foundation, May 7, 2018
A surprising amount of research in artificial intelligence, and particularly in the field of machine learning, is being carried out by people who don't understand what they are doing, and yielding software that behaves in ways that are impossible to explain or understand. As a result, much of the work is difficult or impossible to reproduce or confirm.
“AI Researchers Allege that Machine Learning Is Alchemy”
Matthew Hutson, Science, May 3, 2018
Many years ago, the Church of Scientology created a one-act play featuring a conversation between an intrepid newspaper reporter and a disgruntled ex-Scientologist. The character of the ex-Scientologist was based on a real person, who had written a debunking paper that the church wished to discredit. The intrepid newspaper reporter was Lois Lane of the Daily Planet, and the play also featured her co-workers Clark Kent and Jimmy Olsen.
An agent of the Federal Bureau of Investigation obtained a draft of this play and added it to the FBI's extensive collection of documents relating to Scientology. Last year, an investigative journalist submitted a request for those documents to the FBI under the Freedom of Information Act, and the FBI has gradually released a few of them, including the drama.
The FBI redacted the names of Lois Lane and Clark Kent, citing privacy concerns.
“The FBI Redacted the Names of DC Comic Book Characters to Protect Their Non-Existent Privacy”
Dell Cameron, Gizmodo, April 30, 2018
“Kryptonians are entitled to just as much privacy as other Americans.”
“Drive-By Rowhammer Attack Uses GPU to Compromise an Android Phone”
Dan Goodin, Ars Technica, May 3, 2018
At least eight new variants of the Spectre vulnerability have been discovered and will be surfacing soon. One was discovered by Google's Project Zero team, which notoriously publishes the vulnerabilities they discover after ninety days, regardless of whether patches have been found. For that one, time's up on Monday, May 7.
Some of the vulnerabilities are more consequential or more easily exploited than others. One is reported to cause a serious problem for host systems running virtual machines: Malware running on a VM can break into the host or into other VMs on the same host.
“Exclusive: Spectre-NG — Multiple New Intel CPU Flaws Revealed, Several Serious”
Jürgen Schmidt, c't, May 3, 2018
One of the Spectre-NG flaws simplifies attacks across system boundaries to such an extent that we estimate the threat potential to be significantly higher than with Spectre. Specifically, an attacker could launch exploit code in a virtual machine (VM) and attack the host system from there — the server of a cloud hoster, for example. Alternatively, it could attack the VMs of other customers running on the same server. Passwords and secret keys for secure data transmission are highly sought-after targets on cloud systems and are acutely endangered by this gap. Intel's Software Guard Extensions (SGX), which are designed to protect sensitive data on cloud servers, are also not Spectre-safe.
Although attacks on other VMs or the host system were already possible in principle with Spectre, the real-world implementation required so much prior knowledge that it was extremely difficult. However, the aforementioned Spectre-NG vulnerability can be exploited quite easily for attacks across system boundaries, elevating the threat potential to a new level. Cloud service providers such as Amazon or Cloudflare and, of course, their customers are particularly affected.
The authors provide summary descriptions of three proposed defensive strategies for training black-box deciders that block attempts to find adversarial examples: “adversarial training” (including adversarial examples in training sets), “defensive distillation” (“smoothing the model's decision surface” in the hope of eliminating the abrupt discontinuities that adversarial examples exploit), and “gradient masking” (flattening the gradients in the vicinity of a successfully classified training object so that it is computationally difficult for the software that finds adversarial examples to explore that space productively).
The authors' assessment is that the first two methods are whack-a-mole games in which new adversarial examples just pop up in previously unexplored parts of the input space, while the third doesn't work at all. It papers over exploitable weaknesses, but would-be attackers can simply develop their own models without gradient masking and find adversarial examples against those models; those examples will be equally effective against the gradient-masked decider.
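For concreteness, here is the kind of gradient-guided search that these defenses try to frustrate, reduced to a toy linear classifier (the model, dimension, and step size are all my own illustrative choices; real attacks apply the same sign-of-gradient step to deep networks, with much smaller perturbations):

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy linear "classifier" over 100-dimensional inputs.
w = rng.normal(size=100)

def predict(x):
    return 1 if x @ w > 0 else 0

def fgsm(x, eps):
    # Fast gradient sign method: step every coordinate by eps in the
    # direction that pushes the logit toward the wrong side.
    direction = -np.sign(w) if predict(x) == 1 else np.sign(w)
    return x + eps * direction

x = rng.normal(size=100)
if predict(x) == 0:          # make sure the clean input is class 1
    x = -x
x_adv = fgsm(x, eps=0.5)     # eps exaggerated here so the flip is certain

print(predict(x), predict(x_adv))
```

Gradient masking flattens the local gradient so this search stalls; the attacker's workaround, as the authors note, is to run the search against a substitute model whose gradients are intact.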
“Is Attacking Machine Learning Easier Than Defending It?”
Ian Goodfellow and Nicolas Papernot, cleverhans-blog, February 15, 2017
The authors conclude with some possible explanations of the easy availability, robustness, and persistence of adversarial examples:
Adversarial examples are hard to defend against because it is hard to construct a theoretical model of the adversarial example crafting process. Adversarial examples are solutions to an optimization problem that is non-linear and non-convex for many ML models, including neural networks. Because we don't have good theoretical tools for describing the solutions to these complicated optimization problems, it is very hard to make any kind of theoretical argument that a defense will rule out a set of adversarial examples.
From another point of view, adversarial examples are hard to defend against because they require machine learning models to produce good outputs for every possible input. Most of the time, machine learning models work very well but only work on a very small amount of all the many possible inputs they might encounter.
The authors don't mention one explanation that I find particularly plausible. A trained deep neural network maps each possible input to a decision or a classification. The space of possible inputs is typically immense. I imagine the decision function that the network implements as carving this input space into regions, each region containing the inputs that will be classified in the same way. (The regions may or may not be simply connected; that doesn't matter.) Instead of cleanly separating the space into blocks with easily described shapes, the regions have extremely irregular boundaries that curl around one another and thread through one another and break one another up in complicated ways. The number of dimensions of the space is huge, so that the Euclidean distance between an arbitrarily chosen point inside one region and the nearest point that is inside some other (arbitrarily chosen) region is likely to be small, since there are so many directions in which to search. Evolution drives animal brains to make decisions and classifications that are not only mostly accurate but also intelligible (and energy-efficient). Training a deep neural network doesn't impose these additional constraints and so yields network configurations that implement decision functions that are much more likely to carve up their input spaces in these irremediably intricate ways.
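A back-of-the-envelope way to see the role of dimensionality (my own toy, using a random linear boundary in place of a neural network's intricate decision surface): the per-coordinate nudge needed to push a typical point across the boundary is the margin divided by the sum of the weight magnitudes, which shrinks roughly like one over the square root of the dimension.

```python
import numpy as np

rng = np.random.default_rng(0)

# For a random linear boundary w.x = 0 and a random point x, the smallest
# uniform per-coordinate perturbation that crosses the boundary has size
# |w.x| / sum(|w_i|).  Averaged over trials, this shrinks as d grows:
# many dimensions means many directions in which tiny nudges add up.
def mean_step_to_boundary(d, trials=200):
    steps = []
    for _ in range(trials):
        w = rng.normal(size=d)
        x = rng.normal(size=d)
        steps.append(abs(x @ w) / np.abs(w).sum())
    return float(np.mean(steps))

for d in (10, 100, 10_000):
    print(d, mean_step_to_boundary(d))
```

In a million-dimensional input space, the same arithmetic says that an imperceptibly small change per coordinate can carry a point a long way toward the nearest foreign region.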
A snapshot of the state of research on adversarial examples at the time of publication (February 2017). It's partial, but there are a lot of links that look useful.
“Attacking Machine Learning with Adversarial Examples”
Ian Goodfellow, Nicolas Papernot, Sandy Huang, Yan Duan, Pieter Abbeel, and Jack Clark, OpenAI, February 24, 2017
In many contexts, it would be foolish to trust software decision systems that cannot explain or justify their decisions. However, the structure of neural networks seems to preclude explanations that use concepts and categories that are sufficiently high-level to be intelligible to human beings.
One approach to making black-box deciders more trustworthy is to make their training sets less noisy, so that they more accurately reflect the actual goals and interests of the human users of the system. One difficulty is that human beings are not very accurate in assessing complex situations and determining what the “right” solution should be.
Researchers at OpenAI propose to provide human trainers with two AI assistants, one to find the best justification for a decision and the other to challenge and rebut that justification. Before pronouncing on each training example, the human trainer listens to a debate between these two systems and decides which of them is right. The theory is that the AIs can dumb down their descriptions of the situation to the point where even a human being can judge it accurately.
“AI Safety via Debate”
Geoffrey Irving and Dario Amodei
OpenAI, May 3, 2018
One approach to aligning AI agents with human goals and preferences is to ask humans at training time which behaviors are safe and useful. While promising, this method requires humans to recognize good or bad behavior; in many situations an agent's behavior may be too complex for a human to understand, or the task itself may be hard to judge or demonstrate. Examples include environments with very large, non-visual observation spaces — for instance, an agent that acts in a computer security-related environment, or an agent that coordinates a large set of industrial robots.
How can we augment humans so that they can effectively supervise advanced AI systems? One way is to take advantage of the AI itself to help with the supervision, asking the AI (or a separate AI) to point out flaws in any proposed action. To achieve this, we reframe the learning problem as a game played between two agents, where the agents have an argument with each other and the human judges the exchange. Even if the agents have a more advanced understanding of the problem than the human, the human may be able to judge which agent has the better argument (similar to expert witnesses arguing to convince a jury). …
There are some fundamental limitations to the debate model that may require it to be improved or augmented with other methods. Debate does not attempt to address issues like adversarial examples or distributional shift — it is a way to get a training signal for complex goals, not a way to guarantee robustness of such goals.
It's sad that the designers and advocates of this method automatically frame it as a possible way to overcome some of the deficiencies of human trainers rather than as a way of overcoming the opacity and inexplicability of deep neural networks, which is one of the fundamental flaws of black-box deciders and a key reason that they can't be trusted in cases where explicability is crucial. To my mind, a debate between opposing AIs would be even more useful during testing and in real-world use than at the training stage. Such a debate might expose a valid high-level rationale for accepting or rejecting the output of a black-box decider, and also might reveal that no such rationale exists.