
Topic: #black-box-deciders

Economic Models That Posit Simple Causal Explanations Predict Poorly

2018-05-23⊺07:25:04-05:00

“Economic Predictions with Big Data: The Illusion of Sparsity”
Domenico Giannone, Michele Lenza, and Giorgio E. Primiceri, Federal Reserve Bank of New York, April 2018
https://www.newyorkfed.org/medialibrary/media/research/staff_reports/sr847.pdf

The tl;dr version:

“Economic Predictions with Big Data: The Illusion of Sparsity”
Domenico Giannone, Michele Lenza, and Giorgio E. Primiceri, Liberty Street Economics, May 21, 2018
http://libertystreeteconomics.newyorkfed.org/2018/05/economic-predictions-with-big-data-the-illusion-of-sparsity.html

Seeking to explain why predictive economic models perform so poorly when applied to cases outside of their training set, the authors generate and study a large number of variant models for six economic phenomena (two in macroeconomics, two in microeconomics, and two in finance). Some of these models are sparse, in the sense that they posit that their predictions should depend on a small number of variables in the input data (the ones with the greatest predictive power); others are dense, allowing for dependence on many input variables.

Dense models are prone to overfitting. To prevent this, the training process identifies variables for which the training set provides only weak information and constrains their weights to be small so that their contributions to the models' predictions are limited (but usually nonzero).

The predictions of sparse models are easier to interpret because they generate simpler causal explanations. In dense models, it often turns out that very many factors contribute to the prediction so that the causal explanations are muddled and vary more from one instance to another.

The authors found that most of the economic phenomena that they tried to model actually have complex causal explanations, which is why the sparse models that economists have traditionally favored don't yield accurate predictions.
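
To make the sparse/dense distinction concrete, here is a minimal sketch of my own — plain lasso versus ridge regression in scikit-learn, not the authors' Bayesian machinery — on synthetic data in which every predictor contributes a little. The lasso penalty produces a sparse model by driving most weights to exactly zero; the ridge penalty produces a dense model by shrinking the weights without zeroing them out, much as the training process described above does. Comparing the two out-of-sample scores is a toy version of the model-comparison exercise in the paper.

    # A rough illustration, not the authors' method: compare a sparse model (lasso)
    # with a dense, shrinkage-based model (ridge) on synthetic data in which many
    # predictors each contribute a little to the outcome.
    import numpy as np
    from sklearn.linear_model import Lasso, Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n, p = 200, 100
    X = rng.normal(size=(n, p))
    true_coefs = rng.normal(scale=0.3, size=p)       # "dense" truth: every variable matters a bit
    y = X @ true_coefs + rng.normal(scale=1.0, size=n)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    sparse_model = Lasso(alpha=0.1).fit(X_train, y_train)   # drives most weights to exactly zero
    dense_model = Ridge(alpha=10.0).fit(X_train, y_train)   # shrinks weights but keeps them nonzero

    print("nonzero weights, lasso:", int(np.sum(sparse_model.coef_ != 0)))
    print("out-of-sample R^2, sparse (lasso):", sparse_model.score(X_test, y_test))
    print("out-of-sample R^2, dense (ridge): ", dense_model.score(X_test, y_test))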

#economics #models #black-box-deciders

An Ethics Checklist for Black-Box Deciders

2018-05-07⊺16:51:40-05:00

An attempt to identify and explain the ethical preconditions for replacing social policies with algorithmic models. It's incomplete, but the questions that are included are relevant and salient, and the cautionary tales and links are thought-provoking.

“Math Can't Solve Everything: Questions We Need to Be Asking Before Deciding an Algorithm Is the Answer”
Jamie Williams and Lena Gunn, Deeplinks, Electronic Frontier Foundation, May 7, 2018
https://www.eff.org/deeplinks/2018/05/math-cant-solve-everything-questions-we-need-be-asking-deciding-algorithm-answer

#black-box-deciders #ethics-in-daily-life #algorithms

Three Attempted Defenses against Adversarial Examples

2018-05-04⊺11:50:03-05:00

The authors provide summary descriptions of three proposed defensive strategies for training black-box deciders that block attempts to find adversarial examples: “adversarial training” (including adversarial examples in training sets), “defensive distillation” (“smoothing the model's decision surface” in the hope of eliminating the abrupt discontinuities that adversarial examples exploit), and “gradient masking” (flattening the gradients in the vicinity of a successfully classified training object so that it is computationally difficult for the software that finds adversarial examples to explore that space productively).

The authors' assessment is that the first two methods are whack-a-mole games in which new adversarial examples just pop up in previously unexplored parts of the input space, while the third doesn't work at all. It papers over exploitable weaknesses, but would-be attackers can simply develop their own models without gradient masking and find adversarial examples against those models; those examples will be equally effective against the gradient-masked decider.
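
For concreteness, here is a sketch of how the adversarial examples used in adversarial training are commonly crafted. This is my own illustration of the fast gradient sign method, not code from the post, and the tiny untrained network is only a stand-in for a real classifier.

    # A minimal sketch of the fast gradient sign method (FGSM); the untrained
    # model below is a stand-in, not a real classifier.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
    loss_fn = nn.CrossEntropyLoss()

    x = torch.rand(1, 1, 28, 28, requires_grad=True)   # stand-in "image"
    y = torch.tensor([3])                              # its (supposed) correct label

    loss_fn(model(x), y).backward()                    # gradient of the loss w.r.t. the pixels

    epsilon = 0.05                                     # perturbation budget per pixel
    x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)  # step in the loss-increasing direction

    print("original prediction:   ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())

Against a trained classifier and a real image, a perturbation this small very often changes the predicted label; with the untrained stand-in above, it may or may not.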

“Is Attacking Machine Learning Easier Than Defending It?”
Ian Goodfellow and Nicolas Papernot, cleverhans-blog, February 15, 2017
http://www.cleverhans.io/security/privacy/ml/2017/02/15/why-attacking-machine-learning-is-easier-than-defending-it.html

The authors conclude with some possible explanations of the easy availability, robustness, and persistence of adversarial examples:

Adversarial examples are hard to defend against because it is hard to construct a theoretical model of the adversarial example crafting process. Adversarial examples are solutions to an optimization problem that is non-linear and non-convex for many ML models, including neural networks. Because we don't have good theoretical tools for describing the solutions to these complicated optimization problems, it is very hard to make any kind of theoretical argument that a defense will rule out a set of adversarial examples.

From another point of view, adversarial examples are hard to defend against because they require machine learning models to produce good outputs for every possible input. Most of the time, machine learning models work very well but only work on a very small amount of all the many possible inputs they might encounter.

The authors don't mention one explanation that I find particularly plausible. A trained deep neural network maps each possible input to a decision or a classification. The space of possible inputs is typically immense. I imagine the decision function that the network implements as carving this input space into regions, each region containing the inputs that will be classified in the same way. (The regions may or may not be simply connected; that doesn't matter.) Instead of cleanly separating the space into blocks with easily described shapes, the regions have extremely irregular boundaries that curl around one another and thread through one another and break one another up in complicated ways. The number of dimensions of the space is huge, so that the Euclidean distance between an arbitrarily chosen point inside one region and the nearest point that is inside some other (arbitrarily chosen) region is likely to be small, since there are so many directions in which to search. Evolution drives animal brains to make decisions and classifications that are not only mostly accurate but also intelligible (and energy-efficient). Training a deep neural network doesn't impose these additional constraints and so yields network configurations that implement decision functions that are much more likely to carve up their input spaces in these irremediably intricate ways.
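
One back-of-the-envelope calculation of my own supports this picture, at least for the simplest case of a linear decision function w·x: a nudge of only 0.01 per coordinate, pointed along sign(w), shifts the score by 0.01 times the sum of the |w_i|, a quantity that grows with the number of dimensions even though the nudge itself stays imperceptible.

    # Sketch: in high dimensions, an imperceptibly small per-coordinate nudge can
    # move a linear decision score a long way toward (and across) the boundary.
    import numpy as np

    rng = np.random.default_rng(0)
    epsilon = 0.01                                   # per-coordinate perturbation
    for d in (10, 1_000, 100_000):
        w = rng.normal(size=d)                       # weights of a linear decision function
        shift = epsilon * np.sum(np.abs(w))          # worst-case change in the score w·x
        print(f"d = {d:>7}: score shift from a 0.01-per-coordinate nudge = {shift:.1f}")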

#adversarial-examples #neural-networks #black-box-deciders

An Overview of Research on Adversarial Examples

2018-05-04⊺10:58:49-05:00

A snapshot of the state of research on adversarial examples at the time of publication (February 2017). It's partial, but there are a lot of links that look useful.

“Attacking Machine Learning with Adversarial Examples”
Ian Goodfellow, Nicolas Papernot, Sandy Huang, Yan Duan, Pieter Abbeel, and Jack Clark, OpenAI, February 24, 2017
https://blog.openai.com/adversarial-example-research/

#adversarial-examples #machine-learning #black-box-deciders

Complex AI Decision-Making through Debates

2018-05-04⊺10:42:16-05:00

In many contexts, it would be foolish to trust software decision systems that cannot explain or justify their decisions. However, the structure of neural networks seems to preclude explanations that use concepts and categories that are sufficiently high-level to be intelligible to human beings.

One approach to making black-box deciders more trustworthy is to make their training sets less noisy, so that they more accurately reflect the actual goals and interests of the human users of the system. In complex decision-making, one difficulty is that human beings are not very accurate in assessing complex situations and determining what the “right” solution should be.

Researchers at OpenAI propose to provide human trainers with two AI assistants, one to find the best justification for a decision and the other to challenge and rebut that justification. Before pronouncing on each training example, the human trainer listens to a debate between these two systems and decides which of them is right. The theory is that the AIs can dumb down their descriptions of the situation to the point where even a human being can judge it accurately.

“AI Safety via Debate”
Geoffrey Irving and Dario Amodei
OpenAI, May 3, 2018
https://blog.openai.com/debate/

One approach to aligning AI agents with human goals and preferences is to ask humans at training time which behaviors are safe and useful. While promising, this method requires humans to recognize good or bad behavior; in many situations an agent's behavior may be too complex for a human to understand, or the task itself may be hard to judge or demonstrate. Examples include environments with very large, non-visual observation spaces — for instance, an agent that acts in a computer security-related environment, or an agent that coordinates a large set of industrial robots.

How can we augment humans so that they can effectively supervise advanced AI systems? One way is to take advantage of the AI itself to help with the supervision, asking the AI (or a separate AI) to point out flaws in any proposed action. To achieve this, we reframe the learning problem as a game played between two agents, where the agents have an argument with each other and the human judges the exchange. Even if the agents have a more advanced understanding of the problem than the human, the human may be able to judge which agent has the better argument (similar to expert witnesses arguing to convince a jury). …

There are some fundamental limitations to the debate model that may require it to be improved or augmented with other methods. Debate does not attempt to address issues like adversarial examples or distributional shift — it is a way to get a training signal for complex goals, not a way to guarantee robustness of such goals.

It's sad that the designers and advocates of this method automatically frame it as a possible way to overcome some of the deficiencies of human trainers rather than as a way of overcoming the opacity and inexplicability of deep neural networks, which is one of the fundamental flaws of black-box deciders and a key reason that they can't be trusted in cases where explicability is crucial. To my mind, a debate between opposing AIs would be even more useful during testing and in real-world use than at the training stage. Such a debate might expose a valid high-level rationale for accepting or rejecting the output of a black-box decider, and also might reveal that no such rationale exists.
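
To make that suggestion concrete, a test-time debate check might look something like the following skeleton. Everything in it is hypothetical: the decider, advocate, critic, and judge parameters stand in for systems that the post does not describe.

    # A hypothetical skeleton (mine, not OpenAI's) of a debate wrapper applied at
    # decision time rather than at training time: the deployed black-box decider
    # proposes an answer, two auxiliary models argue for and against it, and a
    # human judge accepts or rejects the answer on the strength of the arguments.
    from typing import Callable, Optional

    def debated_decision(x,
                         decider: Callable,      # the black-box decider itself
                         advocate: Callable,     # argues that the decision is right
                         critic: Callable,       # argues that it is wrong
                         judge: Callable,        # a human, seeing only the arguments
                         rounds: int = 3) -> Optional[object]:
        decision = decider(x)
        transcript = []
        for _ in range(rounds):
            transcript.append(("advocate", advocate(x, decision, transcript)))
            transcript.append(("critic", critic(x, decision, transcript)))
        # Accept the decision only if the human judge finds the advocate's case stronger.
        return decision if judge(transcript) == "advocate" else None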

#black-box-deciders #artificial-intelligence #machine-learning

Facebook and the Problem of Free Will

2018-04-13⊺22:36:34-05:00

OK, just one more post about Facebook, and then I'm swearing off for at least two weeks.

One of the problems with knowledge claims about future events is that the causal chains that lead to those events often include decisions that people haven't made yet, decisions that in turn depend on the outcomes of contingent events that haven't yet occurred. Facebook is offering a new product that gets around this epistemological difficulty by waving crystalline neural networks at it.

“Facebook Uses Artificial Intelligence to Predict Your Future Actions for Advertisers, Says Confidential Document”
Sam Biddle, The Intercept, April 13, 2018
https://theintercept.com/2018/04/13/facebook-advertising-data-artificial-intelligence-ai/

Instead of merely offering advertisers the ability to target people based on demographics and consumer preferences, Facebook instead offers the ability to target them based on how they will behave, what they will buy, and what they will think. These capabilities are the fruits of a self-improving, artificial intelligence-powered prediction engine, first unveiled by Facebook in 2016 and dubbed “FBLearner Flow.”

One slide in the document touts Facebook's ability to “predict future behavior,” allowing companies to target people on the basis of decisions they haven't even made yet. This would, potentially, give third parties the opportunity to alter a consumer's anticipated course. …

[Law professor Frank Pasquale] told The Intercept that Facebook's behavioral prediction work is “eerie” and worried how the company could turn algorithmic predictions into “self-fulfilling prophecies,” since “once they've made this prediction they have a financial interest in making it true.” That is, once Facebook tells an advertising partner you're going to do some thing or other next month, the onus is on Facebook to either make that event come to pass, or show that they were able to help effectively prevent it (how Facebook can verify to a marketer that it was indeed able to change the future is unclear).

Of course, such a prediction system can't operate transparently. If there is any way for targets to become aware of the predictions that are made about their future behavior, the predictions themselves enter the causal chain that results in the future decisions, thus undermining the basis for the predictions. To take the simplest and most extreme case, what happens if a Facebook user resolves to do the opposite of whatever FBLearner Flow predicts?

It occurs to me that the perfect use for this tool would be to predict which companies' advertising managers are gullible enough to be deceived by this hokum and which ones will decide to spend their advertising budgets in less carnivalesque ways. Then Facebook could perhaps develop a slicker pitch to alter the anticipated course of the second group of marks.

#Facebook #black-box-deciders #prediction-systems

Explainable AI versus Justifiable AI

2018-03-13⊺15:30:36-05:00

In some circumstances, it is ill-advised, even dangerous, to rely on black-box deciders, because they cannot explain their decisions. But in some circumstances it is also ill-advised, even dangerous, to rely on AI decision systems that do explain their decisions, because their explanations are inevitably phony, simplistic, misguided, or out of touch with reality. A weaker criterion of adequacy based on experience in dealing with unreliable decision systems such as imperfect human beings may be more suitable.

“Justifiable AI”
Carlos Bueno, Ribbonfarm, March 13, 2018
https://www.ribbonfarm.com/2018/03/13/justifiable-ai/

There are many efforts to design AIs that can explain their reasoning. I suspect they are not going to work out. We have a hard enough time explaining the implications of regular science, and the stuff we call AI is basically pre-scientific. There's little theory or causation, only correlation. We truly don't know how they work. And yet we can't help anthropomorphizing the damn things. Expecting a glorified syllogism to stand up on its hind legs and explain its corner cases is laughable. …

Asking for “just so” narrative explanations from AI is not going to work. Testimony is a preliterate tradition with well-known failure modes even within our own species. Think about it this way: do you really want to unleash these things on the task of optimizing for convincing excuses?

AI that can be grasped intuitively would be a good thing, if for no other reason than to help us build better ones. … But the real issue is not that AIs must be explainable, but justifiable.

#artificial-intelligence #black-box-deciders #trust

It Only Takes One Pixel

2018-03-12⊺14:54:16-05:00

Changing only one (carefully selected) pixel in an image can cause black-box deciders to misclassify the image in an astonishing number of cases. The authors develop a method that makes fewer assumptions about the mechanics of the deciders that it is trying to fool than other methods of constructing adversarial examples.

“One Pixel Attack for Fooling Deep Neural Networks”
Jiawei Su, Danilo Vasconcellos Vargas, and Kouichi Sakurai, arXiv, February 22, 2018
https://arxiv.org/pdf/1710.08864.pdf
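
The search itself is conceptually simple. Here is a rough sketch of my own of the underlying idea, not the authors' differential-evolution implementation: try single-pixel edits until the classifier's label flips. The classify parameter stands in for a real model.

    # Sketch of a one-pixel attack by random search; the actual paper uses
    # differential evolution, which explores the same space more efficiently.
    import numpy as np

    def one_pixel_attack(image, classify, trials=1000, seed=0):
        """image: float array of shape (height, width, channels) with values in [0, 1];
        classify: function mapping such an array to a label."""
        rng = np.random.default_rng(seed)
        original_label = classify(image)
        height, width, channels = image.shape
        for _ in range(trials):
            candidate = image.copy()
            row, col = rng.integers(height), rng.integers(width)
            candidate[row, col] = rng.random(channels)    # overwrite a single pixel
            if classify(candidate) != original_label:
                return candidate, (row, col)              # adversarial image found
        return None, None                                 # no label flip within budget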

#black-box-deciders #adversarial-examples #neural-networks

Serious Adversaries

2018-03-07⊺16:39:31-06:00

“Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains”
Tegjyot Singh Sethi and Mehmed Kantardzic, arXiv, March 23, 2017
https://arxiv.org/pdf/1703.07909.pdf

Machine learning operates under the assumption of stationarity, i.e. the training and testing distributions are assumed to be identically and independently distributed … . This assumption is often violated in an adversarial setting, as adversaries gain nothing by generating samples which are blocked by a defender's system. …

In an adversarial environment, the accuracy of classification has little significance, if an attacker can easily evade detection by intelligently perturbing the input samples.

Most of the paper deals with strategies for probing black-box deciders that are only accessible as services, through APIs or Web interfaces. The justification for the strategies is more heuristic than theoretical, but the authors give some evidence that they are good enough to generate adversarial examples for a lot of real-world black-box deciders.
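
As a crude sketch of the general idea, not one of the paper's specific strategies, an attacker who can query the deployed classifier only through its prediction API might blend a blocked sample toward a known-benign one until the predicted label flips; the query_api parameter below stands in for such a service.

    # Sketch of exploratory probing against a classifier exposed as a service;
    # blocked and benign are numeric feature vectors (e.g. NumPy arrays), and
    # query_api(sample) returns only the predicted label.
    def evade_by_blending(blocked, benign, query_api, steps=50):
        target_label = query_api(benign)              # e.g. "legitimate" rather than "malicious"
        for i in range(1, steps + 1):
            alpha = i / steps
            candidate = (1 - alpha) * blocked + alpha * benign
            if query_api(candidate) == target_label:
                return candidate, alpha               # the smallest blend that evades detection
        return None, None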

What I liked most about the paper was the application of “the security mindset.”

#adversarial-examples #security-mindset #black-box-deciders

Functions Implemented by Neural Networks Are Discontinuous

2018-03-07⊺15:52:21-06:00

This is the paper that introduced the term “adversarial examples” and initiated the systematic study and construction of such examples.

“Intriguing Properties of Neural Networks”
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus, arXiv, February 19, 2014
https://arxiv.org/pdf/1312.6199.pdf

When carefully selected adversarial inputs are presented to image classifiers implemented as deep neural networks and trained using machine-learning techniques, the classifiers fail badly, because the functions from inputs to outputs that they implement are highly discontinuous. Such adversarial inputs are not difficult to generate and are not dependent on particular training data or learning regimens, since the same adversarial examples are misclassified by image classifiers trained on different data under different learning regimens.

The authors present a second, possibly related discovery: Even the pseudo-neurons (“units”) that are close to the output layer usually don't carry semantic information individually; the contributions of linear combinations of such units are indistinguishable in nature from the contributions of the units individually, so any semantic information that is present emerges holistically from the entire layer.

These results suggest that the deep neural networks that are learned by backpropagation have nonintuitive characteristics and intrinsic blind spots, whose structure is connected to the data distribution in a non-obvious way. …

If the network can generalize well, how can it be confused by these adversarial negatives, which are indistinguishable from the regular examples? A possible explanation is that the set of adversarial negatives is of extremely low probability, and thus is never (or rarely) observed in the test set, yet it is dense (much like the rational numbers), and so it is found near … virtually every test case.
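
For reference, the construction in the paper is, as I read it, a box-constrained optimization problem: given a trained classifier f, an input image x, and a target label l different from f(x), find the smallest perturbation r such that

    minimize     ||r||_2
    subject to   f(x + r) = l
                 x + r ∈ [0, 1]^m

The authors approximate this in practice with a box-constrained L-BFGS search, minimizing c |r| + loss(x + r, l) for a suitably chosen constant c.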

#adversarial-examples #black-box-deciders #neural-networks

Sheep in Unusual Places

2018-03-02⊺13:57:33-06:00

Black-box classifiers are prone to hasty generalizations from training data. If you train your neural network on a lot of pictures depicting sheep on grassy hills, the network learns to posit sheep whenever it sees a grassy hill.

“Do Neural Nets Dream of Electric Sheep?”
Janelle Shane, Postcards from the Frontiers of Science, March 2, 2018
http://aiweirdness.com/post/171451900302/do-neural-nets-dream-of-electric-sheep

Are neural networks just hyper-vigilant, finding sheep everywhere? No, as it turns out. They only see sheep where they expect to see them. They can find sheep easily in fields and mountainsides, but as soon as sheep start showing up in weird places, it becomes obvious how much the algorithms rely on guessing and probabilities.

Bring sheep indoors, and they're labeled as cats. Pick up a sheep (or a goat) in your arms, and they're labeled as dogs.

Paint them orange, and they become flowers.

On the other hand, as Shane observes, the Microsoft Azure classifier is hypervigilant about giraffes (“due to a rumored overabundance of giraffes in the original dataset”).

#black-box-deciders #neural-networks #adversarial-examples

Neural Networks as Function Simulators

2018-02-28⊺12:02:27-06:00

Some of the limitations of black-box deciders become more obvious and intuitive when one recognizes the machine-learning algorithms behind them as software tools for approximating functions, using large data sets and statistical tools for calibration.

“The Delusions of Neural Networks”
Giacomo Tesio, Medium, January 18, 2018
https://medium.com/@giacomo_59737/the-delusions-of-neural-networks-f7085d47edb6

The key points:

(A) Neural networks simulate functions and are calibrated statistically, with the assistance of large data sets comprising known argument-value pairs.

(B) Like other simulations, neural networks sometimes yield erroneous or divergent results. The functions they actually compute are usually not mathematically equal to the functions they simulate.

(C) The reason for this is that the calibration process uses only a finite number of argument-value pairs, whereas the function that the neural network is designed to simulate computes values for infinitely many arguments, or at least for many, many more arguments than are used in the calibration. (Otherwise, the simulation would be useless.) The data used in the calibration are compatible with many, many more functions than the one that the neural network is designed to simulate. The probability that calibrating the neural network results in its computing a function that is mathematically equal to the one it is designed to simulate is negligible — for practical purposes, it is zero.

(D) The problem of determining how accurately a neural network simulates the function it is designed to simulate is undecidable. There is no general algorithm to answer questions of this form. As a result, neural networks are usually validated not by proving their correctness but by empirical measurement: We apply them to arguments not used in the calibration process and compare the values they compute to the values that the functions they are designed to simulate associate with the same test arguments. When they match in a large enough percentage of cases, we pronounce the simulation a success. (A small sketch after this list illustrates this validation step and its limits.)

(E) However, these test arguments are not generated at random, but are drawn from the same “natural” sources as the data used in the calibration of the network. The success of the simulation depends on this bias: Unless the test arguments are sufficiently similar to the data used in the calibration, the probability that the computed values will match is again negligible. This would essentially never happen if the test arguments were randomly selected.

(F) Consequently, the process of validating a neural network does not prove that it is unbiassed. On the contrary: in order to be pronounced valid, a neural network must simulate the biasses of the data set used in the calibration.

(G) In principle, it would be possible for independent judges to confirm that the data set is free from forms of bias that constitute discrimination against some protected class of persons and to provide strong empirical evidence that the function actually computed by the neural network does not actually introduce such a bias. In practice, this confirmation process would be prohibitively expensive and time-consuming.

(H) Neural networks can be used, and often are used, to simulate unknown functions. In those cases, there would be no way for a panel of independent judges even to begin the process of confirming freedom from discriminatory bias, because no one even knows whether the function that the neural network is designed to simulate exemplifies such a bias.
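
To illustrate points (D) and (E), here is a small sketch of my own, not Tesio's: a network calibrated to simulate the function that maps x to [sin(3x) > 0] should agree with it almost perfectly on test arguments drawn from the same range as the calibration data, and should fall to roughly coin-flip agreement on arguments drawn from outside that range.

    # Sketch: empirical validation of a neural-network "simulation" holds up only
    # for test arguments drawn from the same source as the calibration data.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def true_function(x):                                # the function being simulated
        return (np.sin(3 * x) > 0).astype(int).ravel()

    rng = np.random.default_rng(0)
    x_calibrate = rng.uniform(0, 4, size=(4000, 1))      # calibration arguments
    x_test_in = rng.uniform(0, 4, size=(1000, 1))        # test arguments, same source
    x_test_out = rng.uniform(8, 12, size=(1000, 1))      # test arguments from elsewhere

    net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
    net.fit(x_calibrate, true_function(x_calibrate))

    print("agreement on in-range test arguments:   ", net.score(x_test_in, true_function(x_test_in)))
    print("agreement on out-of-range test arguments:", net.score(x_test_out, true_function(x_test_out)))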

#black-box-deciders #neural-networks #simulation

The Opacity of Black-Box Metrics

2018-02-16⊺16:13:24-06:00

This week, I've been reading The Tyranny of Metrics, a new book by the historian Jerry Z. Muller of the Catholic University of America. One of the themes of the book is that metrics lose their reliability when they are transparently tied to rewards. For example, a hospital might decide to give bonuses to surgeons whose operations have a higher rate of success, as measured by the percentage of those operations after which the patient survives for at least thirty days. The idea is to improve the overall quality and performance of surgical operations in the hospital by motivating surgeons to do better work. In practice, however, what often happens is that surgeons refuse to take on high-risk patients or arrange for their patients' post-op caretakers to use heroic measures to keep them alive for at least thirty-one days. The metrics award higher scores to the surgeons who successfully game the system, and they receive their bonuses but the overall quality and performance of surgical operations do not, in fact, increase as a result. The metric has lost any reliability it once had as a measure of overall quality and performance.

It occurs to me that, as black-box deciders take over the job of assessing the performance of workers and deciding which of them should receive bonuses, the opacity of the decision systems may block this loss of reliability, by making it much more difficult, perhaps impossible, for the workers to game the system. If there is no explanation for the black-box decider's assessments, there is no way for the workers to infer that any particular tactic will change those assessments in their favor.

Of course, this also means that there is no way for managers to devise rational policies for improving the work of their staff. Because the black-box deciders are opaque and their judgements inexplicable and unaccountable, there is no way to distinguish policy changes that will have positive results (as assessed by the black-box decider) from those that will have negative results.

#black-box-deciders #metrics #opacity

Diminishing Returns from Deep Learning

2018-02-12⊺11:00:39-06:00

An overview of the recent achievements, acknowledged limitations, and plausible extensions of multi-level neural networks suggests that this approach to artificial intelligence is nearly played out and must be supplemented by alternative approaches in order to make further progress.

In section 3, the author identifies ten “limits on the scope of deep learning,” including some that I would consider critical and ineradicable (see section 3.5, “Deep Learning Thus Far Is Not Sufficiently Transparent,” and section 3.9, “Deep Learning Thus Far Works Well as an Approximation, But Its Answers Often Cannot Be Fully Trusted”).

“Deep Learning: A Critical Appraisal”
Gary Marcus, arXiv, January 2018
https://arxiv.org/ftp/arxiv/papers/1801/1801.00631.pdf

The transparency issue, as yet unsolved, is a potential liability when using deep learning for problem domains like financial trades or medical diagnosis, in which human users might like to understand how a given system made a given decision. … Such opacity can also lead to serious issues of bias.

None of Marcus's proposals for supplementing machine learning addresses either the transparency problem or the problem posed by adversarial examples.

#machine-learning #black-box-deciders #neural-networks

YouTube Gone Wild

2018-02-03⊺08:51:58-06:00

In the absence of explicit guidance from the user, the black-box decider inside YouTube that chooses and queues up the videos that it fancies you'll be most interested in seeing next tends to make recommendations that are progressively more bizarre and disturbing. Perhaps it has learned something about human nature, but more likely its selections are the video-recommender analogue of the luridly colored fantasy images that a black-box classifier constructs when directed to search for the pixel pattern that maximizes its response to a given search term such as “octopus” or “mouth” or “waterfall”.

This writer suspects that the black-box decider's behavior reflects something sinister in its programming. It turns out that many of the bizarre and disturbing videos that the decider eventually queues up, not too surprisingly, are pro-Trump ads and videos of right-wing loons promoting conspiracy theories.

“‘Fiction Is Outperforming Reality’: How YouTube's Algorithm Distorts Truth”
Paul Lewis, The Guardian, February 2, 2018
https://www.theguardian.com/technology/2018/feb/02/how-youtubes-algorithm-distorts-truth

#black-box-deciders #YouTube #recommendation-systems

Applying Black Box Deciders to Surveillance Data

2018-01-27⊺09:19:11-06:00

Software tools for searching immense quantities of surveillance data are increasingly relying on black-box deciders to extract and summarize search results.

“Artificial Intelligence Is Going to Supercharge Surveillance”
James Vincent, The Verge, January 23, 2018
https://www.theverge.com/2018/1/23/16907238/artificial-intelligence-surveillance-cameras-security

For experts in surveillance and AI, the introduction of these sorts of capabilities is fraught with potential difficulties, both technical and ethical. And, as is often the case in AI, these two categories are intertwined. It's a technical problem that machines can't understand the world as well as humans do, but it becomes an ethical one when we assume they can and let them make decisions for us. …

Even if we manage to fix the biases in these automated systems, that doesn't make them benign, says ACLU policy analyst Jay Stanley. He says that changing CCTV cameras from passive into active observers could have a huge chilling effect on civil society.

“We want people to not just be free, but to feel free. And that means that they don't have to worry about how an unknown, unseen audience may be interpreting or misinterpreting their every movement and utterance,” says Stanley. “The concern is that people will begin to monitor themselves constantly, worrying that everything they do will be misinterpreted and bring down negative consequences on their life.”

#surveillance #black-box-deciders #chilling-effects


This work is licensed under a Creative Commons Attribution-ShareAlike License.


John David Stone (havgl@unity.homelinux.net)

created June 1, 2014 · last revised December 10, 2018