Unity

Topic: #artificial-intelligence

Deep-Learning Hype Is Evaporating

2018-05-30⊺10:36:03-05:00

A researcher in artificial intelligence has collected some leading indicators of a decline in the use of deep-learning systems and in the irrational exuberance of potential consumers.

“AI Winter Is Well on Its Way”
Filip Piękniewski, Piekniewski's Blog, May 29, 2018
https://blog.piekniewski.info/2018/05/28/ai-winter-is-well-on-its-way/

Update (2018-05-30⊺10:54:47-05:00): Piękniewski cites this paper, which gives a much more specific and detailed account of the weaknesses and limitations of deep learning.

“Deep Learning: A Critical Appraisal”
Gary Marcus, arXiv, January 2018
https://arxiv.org/ftp/arxiv/papers/1801/1801.00631.pdf

#artificial-intelligence #deep-learning #hype

An Intelligibility Crisis in Machine Learning

2018-05-07⊺14:54:59-05:00

A surprising amount of research in artificial intelligence, and particularly in the field of machine learning, is being carried out by people who don't understand what they are doing, and yielding software that behaves in ways that are impossible to explain or understand. As a result, much of the work is difficult or impossible to reproduce or confirm.

“AI Researchers Allege that Machine Learning Is Alchemy”
Matthew Hutson, Science, May 3, 2018
https://www.sciencemag.org/news/2018/05/ai-researchers-allege-machine-learning-alchemy

#artificial-intelligence #machine-learning #intelligibility-crisis

Complex AI Decision-Making through Debates

2018-05-04⊺10:42:16-05:00

In many contexts, it would be foolish to trust software decision systems that cannot explain or justify their decisions. However, the structure of neural networks seems to preclude explanations that use concepts and categories that are sufficiently high-level to be intelligible to human beings.

One approach to making black-box deciders more trustworthy is to make their training sets less noisy, so that they more accurately reflect the actual goals and interests of the human users of the system. In complex decision-making, one difficulty is that human beings are not very accurate in assessing complex situations and determining what the “right” solution should be.

Geoffrey Irving and Dario Amodei, researchers at OpenAI, propose to provide human trainers with two AI assistants, one to find the best justification for a decision and the other to challenge and rebut that justification. Before pronouncing on each training example, the human trainer listens to a debate between these two systems and decides which of them is right. The theory is that the AIs can dumb down their descriptions of the situation to the point where even a human being can judge it accurately.
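
Sketched in Python, the protocol might look something like this; the debater objects and the human-judge callback are hypothetical stand-ins of my own, not anything from the OpenAI post:

    # Hypothetical interfaces: `pro` and `con` are debater objects and
    # `human_judge` is a callback; none of these names are OpenAI's API.

    def label_example(example, pro, con, human_judge, rounds=3):
        """Decide one training example by debate: `pro` proposes and
        defends an answer, `con` attacks it, a human picks the winner."""
        answer = pro.propose(example)
        transcript = [("pro", answer)]
        for _ in range(rounds):
            transcript.append(("con", con.rebut(example, transcript)))
            transcript.append(("pro", pro.defend(example, transcript)))
        # The human never inspects the raw example, only the two agents'
        # simplified arguments about it.
        return answer if human_judge(transcript) == "pro" else None

The human's verdict on the exchange, not the raw example, becomes the training signal.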

“AI Safety via Debate”
Geoffrey Irving and Dario Amodei, OpenAI, May 3, 2018
https://blog.openai.com/debate/

One approach to aligning AI agents with human goals and preferences is to ask humans at training time which behaviors are safe and useful. While promising, this method requires humans to recognize good or bad behavior; in many situations an agent's behavior may be too complex for a human to understand, or the task itself may be hard to judge or demonstrate. Examples include environments with very large, non-visual observation spaces — for instance, an agent that acts in a computer security-related environment, or an agent that coordinates a large set of industrial robots.

How can we augment humans so that they can effectively supervise advanced AI systems? One way is to take advantage of the AI itself to help with the supervision, asking the AI (or a separate AI) to point out flaws in any proposed action. To achieve this, we reframe the learning problem as a game played between two agents, where the agents have an argument with each other and the human judges the exchange. Even if the agents have a more advanced understanding of the problem than the human, the human may be able to judge which agent has the better argument (similar to expert witnesses arguing to convince a jury). …

There are some fundamental limitations to the debate model that may require it to be improved or augmented with other methods. Debate does not attempt to address issues like adversarial examples or distributional shift — it is a way to get a training signal for complex goals, not a way to guarantee robustness of such goals.

It's sad that the designers and advocates of this method automatically frame it as a possible way to overcome some of the deficiencies of human trainers rather than as a way of overcoming the opacity and inexplicability of deep neural networks, which is one of the fundamental flaws of black-box deciders and a key reason that they can't be trusted in cases where explicability is crucial. To my mind, a debate between opposing AIs would be even more useful during testing and in real-world use than at the training stage. Such a debate might expose a valid high-level rationale for accepting or rejecting the output of a black-box decider, and also might reveal that no such rationale exists.
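
Concretely, the test-time gate I have in mind might look like this (the same hypothetical interfaces as in the sketch above):

    def vet_decision(black_box, example, pro, con, judge, rounds=2):
        """Accept a black-box decision only if the debater defending it
        wins the exchange; otherwise flag it for human review."""
        decision = black_box(example)
        transcript = [("pro", pro.justify(example, decision))]
        for _ in range(rounds):
            transcript.append(("con", con.rebut(example, transcript)))
            transcript.append(("pro", pro.defend(example, transcript)))
        if judge(transcript) == "pro":
            return decision, transcript   # a rationale the judge endorsed
        return None, transcript           # no defensible rationale found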

#black-box-deciders #artificial-intelligence #machine-learning

Extreme Data Compression in Decision Making

2018-03-13⊺16:17:07-05:00

A fable about the use of Big Data in human institutions.

“Hyperlogloglog”
Carlos Bueno, December 2016
http://carlos.bueno.org/2016/12/hyperlogloglog.html

The fundamental strategy for dealing with large amounts of data was compression. Huge streams of numbers were converted by various clever tricks into streams tiny enough for humans to handle, who then decided what to do. If you really think about it … the entire purpose of data-driven decision-making is to compress ungodly infinitudes of numbers down to a single bit of decision: yes or no. …

The Hyperlogloglog was the size of a small housepet and was modeled on the human brain. It was capable of handling unlimited amounts of input data via the simple technique of immediately throwing it away.
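
The joke lands harder if you know the real algorithm the title puns on: HyperLogLog estimates the number of distinct items in a stream from a few kilobytes of state, by remembering only the longest run of leading zero bits seen in each of m hash buckets. A toy version in Python (the parameter choices here are merely illustrative, and the small-cardinality corrections of the real algorithm are omitted):

    import hashlib

    class TinyHyperLogLog:
        def __init__(self, p=10):
            self.p = p
            self.m = 1 << p              # number of registers
            self.reg = [0] * self.m

        def add(self, item):
            digest = hashlib.sha1(str(item).encode()).digest()
            h = int.from_bytes(digest[:8], "big")   # 64-bit hash
            idx = h & (self.m - 1)       # low p bits choose a register
            rest = h >> self.p           # remaining 64 - p bits
            rank = (64 - self.p) - rest.bit_length() + 1
            self.reg[idx] = max(self.reg[idx], rank)

        def estimate(self):
            alpha = 0.7213 / (1 + 1.079 / self.m)   # standard bias correction
            return alpha * self.m ** 2 / sum(2.0 ** -r for r in self.reg)

    # Typical error is around 1.04 / sqrt(m), here roughly 3 percent,
    # from about a kilobyte of registers.
    hll = TinyHyperLogLog()
    for i in range(1_000_000):
        hll.add(i)
    print(round(hll.estimate()))         # prints something near 1,000,000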

#data-compression #artificial-intelligence #humor

Explainable AI versus Justifiable AI

2018-03-13⊺15:30:36-05:00

In some circumstances, it is ill-advised, even dangerous, to rely on black-box deciders, because they cannot explain their decisions. But in some circumstances it is also ill-advised, even dangerous, to rely on AI decision systems that do explain their decisions, because their explanations are inevitably phony, simplistic, misguided, or out of touch with reality. A weaker criterion of adequacy based on experience in dealing with unreliable decision systems such as imperfect human beings may be more suitable.

“Justifiable AI”
Carlos Bueno, Ribbonfarm, March 13, 2018
https://www.ribbonfarm.com/2018/03/13/justifiable-ai/

There are many efforts to design AIs that can explain their reasoning. I suspect they are not going to work out. We have a hard enough time explaining the implications of regular science, and the stuff we call AI is basically pre-scientific. There's little theory or causation, only correlation. We truly don't know how they work. And yet we can't help anthropomorphizing the damn things. Expecting a glorified syllogism to stand up on its hind legs and explain its corner cases is laughable. …

Asking for “just so” narrative explanations from AI is not going to work. Testimony is a preliterate tradition with well-known failure modes even within our own species. Think about it this way: do you really want to unleash these things on the task of optimizing for convincing excuses?

AI that can be grasped intuitively would be a good thing, if for no other reason than to help us build better ones. … But the real issue is not that AIs must be explainable, but justifiable.

#artificial-intelligence #black-box-deciders #trust

Paradigms of Artificial Intelligence Programming Is Now Free Software

2018-02-27⊺11:20:27-06:00

Peter Norvig has made his brilliant textbook on classical methods of artificial intelligence available on GitHub under the MIT license, along with all of the source code.

“Lisp Code for the Textbook ‘Paradigms of Artificial Intelligence Programming’”
Peter Norvig, GitHub, February 27, 2018
https://github.com/norvig/paip-lisp

Read and enjoy!

#artificial-intelligence #LISP #free-books

This work is licensed under a Creative Commons Attribution-ShareAlike License.

John David Stone (havgl@unity.homelinux.net)

created June 1, 2014 · last revised December 10, 2018