Archive for May 2018

Deep-Learning Hype Is Evaporating

2018-05-30⊺10:36:03-05:00

A researcher in artificial intelligence has collected some leading indicators of a decline in the use of deep-learning systems and in the irrational exuberance of potential consumers.

“AI Winter Is Well on Its Way”
Filip Piękniewski, Piekniewski's Blog, May 29, 2018
https://blog.piekniewski.info/2018/05/28/ai-winter-is-well-on-its-way/

Update (2018-05-30⊺10:54:47-05:00): Piękniewski cites this paper, which gives a much more specific and detailed account of the weaknesses and limitations of deep learning.

“Deep Learning: A Critical Appraisal”
Gary Marcus, arXiv, January 2018
https://arxiv.org/ftp/arxiv/papers/1801/1801.00631.pdf

#artificial-intelligence #deep-learning #hype

Design and Implementation of Programming Languages

2018-05-29⊺11:47:25-05:00

In thinking about algorithms, it is essential to have good notations for expressing them, for conveying the ideas of the algorithms from one human mind to another clearly, unambiguously, and without unnecessary psychological strain or trauma. We also want our expressions of those algorithms to be executable, and we even care a little bit about making the work of translating from our programming notations into sequences of machine instructions efficient and straightforward, but that's of secondary importance. Well-designed programming languages put the interests of the human beings who have to write and read programs ahead of the interests of the creators of compilers and interpreters.

However, not many programming languages are well-designed. Most language creators are preoccupied with their implementations and with some particular feature set that they can implement extraordinarily efficiently. They don't give much thought to design and are oblivious to the issues that a skillful designer of notations takes into account. Nor do they pay much attention to the history of programming languages, to the lessons that long and painful experience should already have taught us or to the brilliant insights of designers of languages that are now obsolete and forgotten.

We now have immense code libraries that should never be used (or, worse, re-used) because it's too difficult to debug them, to adapt them, to prove their correctness, or to estimate their resource use, even asymptotically and in general terms. Careless programming is the principal cause of this catastrophe, but I allocate some of the blame to poorly designed programming languages, which make it almost impossible to express algorithms accurately and intelligibly.

The malefactors who created the worst of these languages (Perl, PHP, and JavaScript, let's say, and their spiritual ancestor PL/I) are mostly unrepentant, but one of them has learned to regret the follies of his youth and has sought redemption by working on the ISO standards committees for the language he created, trying to forestall the well-intentioned efforts of the numerous innovators populating those committees to make the same mistakes that he himself made in the early years of the language and to compound and exacerbate the weaknesses that his erstwhile carelessness introduced in the first place.

In the early years of C++, its creator, Bjarne Stroustrup, built the language up from C primarily by accretion. Initially, C++ was implemented as a preprocessor that produced standard C, and the first novel elements of the C++ language were the features that introduced object orientation: classes, methods, and inheritance. The preprocessor converted these into standard C typedefs and functions.

Once the project got rolling, though, Stroustrup started adding other features, accommodating first requests from users and then suggestions from other language proponents. If they seemed like good ideas that could be easily implemented, they went into the language, making C++ more powerful and, in some ways, more expressive, but also more difficult to work with and to understand. Stroustrup didn't give enough thought to the interactions among these new features or to their effects on the ability of programmers, particularly new programmers, to write and read C++ code. By the time C++ was first standardized, the language standard was six times as long as the one that defined C and full of bizarre corner cases, unspecified and undefined behaviors, and opportunities for misinterpretation by implementers seeking short cuts. The design of the language was a mess, and it has been a mess ever since.

C++ has, of course, continued to evolve, and ingenious contributors have constantly proposed extensions, improvements, and features of all kinds. However, Stroustrup now opposes most of these suggestions, even when he thinks that they might be good ideas in principle, because he knows that the language is already too large, too complicated, too hard to learn, and too hard to use effectively as a way of expressing algorithms. Most of the proposals would make it even worse in these respects.

Here are two of his contributions (to the C++17 and C++20 working groups, respectively) making this point:

“Thoughts about C++17”
Bjarne Stroustrup, May 15, 2015
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4492.pdf

It seems to be a popular pastime to condemn C++ for being a filthy mess caused by rampant design-by-committee. This has been suggested repeatedly since before the committee was founded, but I feel the situation is now far worse. C++ is larger now (especially when we consider the standard library). That, and the variety of current proposals make the accusation credible.

“Remember the Vasa!”
Bjarne Stroustrup, March 6, 2018
http://open-std.org/JTC1/SC22/WG21/docs/papers/2018/p0977r0.pdf

We are on a path to disaster through enthusiasm and design-by-committee …

C++17 did little to make our foundation more solid, regular, and complete. Instead, it added significant surface complexity and increased the number of features people need to learn. C++ could crumble under the weight of these — mostly not quite fully-baked — proposals. We should not spend most [of] our time creating increasingly complicated facilities for experts, such as ourselves.

We need a reasonably coherent language that can be used by “ordinary programmers” whose main concern is to ship great applications on time. We now have about 150 cooks; that's not a good way to get a tasty and balanced meal.

We are on the path to something that could destroy C++. We must get off that path!

Stroustrup's repentance probably came too late to save C++, but perhaps the next generation of language designers can learn from his tragedy.

#C++ #Bjarne-Stroustrup #programming-language-design

The Amazon Echo Samples Household Activity

2018-05-27⊺08:53:49-05:00

Amazon is now claiming that the mishap reported here, in which an Echo recorded a random chunk of household conversation and e-mailed it to a third party, resulted from a cascade of four misinterpretations of elements of the conversation: Echo misheard something as a wake word, something else as a “send message” request, something else again as the recipient's name, and yet another thing as a confirmation.

“Amazon Explains How Alexa Recorded a Private Conversation and Sent It to Another User”
Tom Warren, The Verge, May 24, 2018
https://www.theverge.com/2018/5/24/17391898/amazon-alexa-private-conversation-recording-explanation

Each Echo maintains a log of its operations, and one tech-savvy user decided to look through this log to find out how often the device wakes itself up “accidentally.” The answer turns out to be “several times a day, for no obvious reason.”

“Yes, Alexa Is Recording Mundane Details of Your Life, and It's Creepy as Hell”
Rachel Metz, MIT Technology Review, May 25, 2018
https://www.technologyreview.com/s/611216/yes-alexa-is-recording-mundane-details-of-your-life-and-its-creepy-as-hell/

I started wondering: what is it picking up on at my house when we're not talking to it directly?

So I checked my Alexa history (you can do that through the “settings” portion of the Amazon Alexa smartphone app) to see what kinds of things it recorded without my knowledge.

That's when the hairs on the back of my neck started to stand up. …

It's heard me complain to my dad about something work-related, chide my toddler about eating dinner, and talk to my husband — the kinds of normal, everyday things you say at home when you think no one else is listening. …

I invited Alexa into our living room to make it easier to listen to Pandora and occasionally check the weather, not to keep a log of intimate family details or record my kid saying “Mommy, we going car” and forward it to Amazon's cloud storage.

My guess is that the sampling is not really accidental, but reflects Amazon's desire to collect additional data about its customers. I suppose that the primary goal is to improve the Echo's voice recognition by getting a large enough data set for the machine-learning techniques to work a little more reliably. On the other hand, Amazon has many other possible uses for such a collection. The fact that the Echo sometimes mishears something as its wake word provides a convenient cover story.

#home-surveillance #Amazon-Echo #false-positives

Network-Connected Always-On Microphone Surprises Portland Couple for Some Reason

2018-05-24⊺22:15:47-05:00

An Amazon Echo “accidentally” recorded a couple's private conversation in their home and e-mailed the recording to one of the husband's employees. Amazon investigated and found an explanation that supposedly satisfied the company engineers, but did not divulge that explanation either to the couple or to the general public, instead asserting that they had “determined this to be an extremely rare occurrence” and that “Amazon takes privacy very seriously.”

Uh huh.

“Woman Says Her Amazon Device Recorded Private Conversation, Sent It Out to Random Contact”
Gary Horcher, KIRO-TV, May 24, 2018
https://www.kiro7.com/news/local/woman-says-her-amazon-device-recorded-private-conversation-sent-it-out-to-random-contact/755507974

#home-surveillance #Amazon-Echo #Internet-of-Things

How Graphical Programming Languages Look to Me

2018-05-24⊺09:44:12-05:00

“IDEA — Nonverbal Algorithm Assembly Instructions”
Sándor P. Fekete, Sebastian Morr, and Sebastian Stiller, IDEA, May 24, 2018
https://idea-instructions.com

#algorithms #programming-languages #humor

Registering Acknowledges Terms of Service

2018-05-24⊺09:08:29-05:00

We have now reached the point at which it is foolish to register at most corporate Web sites, not just because they will send spam to the e-mail address you provide, but also because registration implies acceptance of the site's terms of service.

“Registering for Things on the Internet Is Dangerous These Days”
Chris Siebenmann, Chris's Wiki, May 24, 2018
https://utcc.utoronto.ca/~cks/space/blog/tech/DangerousRegistration

In the old days, terms of service were not all that dangerous and often existed only to cover the legal rears of the service you were registering with. Today, this is very much not the case … Most ToSes will have you agreeing that the service can mine as much data from you as possible and sell it to whoever it wants. Beyond that, many ToSes contain additional nasty provisions like forced arbitration, perpetual broad copyright licensing for whatever you let them get their hands on (including eg your profile picture), and so on. …

The corollary to this is that you should assume that anyone who requires registration before giving you access to things when this is not actively required by how their service works is trying to exploit you. For example, “register to see this report” should be at least a yellow and perhaps a red warning sign. My reaction is generally that I probably don't really need to read it after all.

#data-mining #spam #decline-and-fall

Economic Models That Posit Simple Causal Explanations Predict Poorly

2018-05-23⊺07:25:04-05:00

“Economic Predictions with Big Data: The Illusion of Sparsity”
Domenico Giannone, Michele Lenza, and Giorgio E. Primiceri, Federal Reserve Bank of New York, April 2018
https://www.newyorkfed.org/medialibrary/media/research/staff_reports/sr847.pdf

The tl;dr version:

“Economic Predictions with Big Data: The Illusion of Sparsity”
Domenico Giannone, Michele Lenza, and Giorgio E. Primiceri, Liberty Street Economics, May 21, 2018
http://libertystreeteconomics.newyorkfed.org/2018/05/economic-predictions-with-big-data-the-illusion-of-sparsity.html

Seeking to explain why predictive economic models perform so poorly when applied to cases outside of their training set, the authors generate and study a large number of variant models for six economic phenomena (two in macroeconomics, two in microeconomics, and two in finance). Some of these models are sparse, in the sense that they posit that their predictions should depend on a small number of variables in the input data (the ones with the greatest predictive power); others are dense, allowing for dependence on many input variables.

Dense models are prone to overfitting. To prevent this, the training process identifies variables for which the training set provides only weak information and constrains their weights to be small so that their contributions to the models' predictions are limited (but usually nonzero).
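
The contrast is easy to see in a toy regression. Here is a minimal Python sketch (my own illustration on synthetic data, not anything from the paper, whose models are more sophisticated): a lasso fit stands in for a sparse model, driving most coefficients exactly to zero, while a ridge fit stands in for a dense model, shrinking weakly informative coefficients toward zero without eliminating them.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Synthetic data: 200 observations, 50 candidate predictors, but only
# three of them actually drive the outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
true_coefs = np.zeros(50)
true_coefs[:3] = [2.0, -1.5, 1.0]
y = X @ true_coefs + rng.normal(scale=0.5, size=200)

# Sparse model: the L1 penalty drives most coefficients exactly to zero,
# so the fitted model "explains" y with a handful of variables.
sparse_model = Lasso(alpha=0.1).fit(X, y)

# Dense model: the L2 penalty shrinks weakly informative coefficients
# toward zero but leaves nearly all of them nonzero.
dense_model = Ridge(alpha=10.0).fit(X, y)

print("nonzero coefficients, sparse model:",
      int(np.sum(np.abs(sparse_model.coef_) > 1e-8)))
print("nonzero coefficients, dense model:",
      int(np.sum(np.abs(dense_model.coef_) > 1e-8)))
```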

The predictions of sparse models are easier to interpret because they generate simpler causal explanations. In dense models, it often turns out that very many factors contribute to the prediction so that the causal explanations are muddled and vary more from one instance to another.

The authors found that most of the economic phenomena that they tried to model actually have complex causal explanations, which is why the sparse models that economists have traditionally favored don't yield accurate predictions.

#economics #models #black-box-deciders

Still More Spectre Variants

2018-05-22⊺11:23:52-05:00

Almost all processors speculatively pre-execute a load instruction when they anticipate that any store instructions that precede it will not affect the contents of the memory location from which the value is loaded. The pre-execution is cancelled and discarded if this condition turns out to be false. Like other kinds of speculative execution, this one turns out to have side effects that can be detected and exploited by attackers to exfiltrate data from memory locations to which they should not have access.

“Speculative Execution, Variant 4: Speculative Store Bypass”
Jann Horn, Monorail, Project Zero, February 6, 2018
https://bugs.chromium.org/p/project-zero/issues/detail?id=1528

“Side-Channel Vulnerability Variants 3a and 4”
United States Computer Emergency Readiness Team, May 22, 2018
https://www.us-cert.gov/ncas/alerts/TA18-141A

“Spectre Chip Security Vulnerability Strikes Again; Patches Incoming”
Steven J. Vaughan-Nichols, Zero Day, May 22, 2018
https://www.zdnet.com/article/spectre-chip-security-vulnerability-strikes-again-patches-incoming

#spectre #hardware-design #security

How Unethical Software Gets Written

2018-05-16⊺10:27:13-05:00

A professional software developer describes how he came to write software that helped the United States Army kill people. His first-person account is followed by a few similar anecdotes from other developers and observers, and it concludes with some lessons about how to avoid killing people with your software.

“Don't Get Distracted”
Caleb Thompson, November 16, 2017
https://www.calebthompson.io/talks/dont-get-distracted/

The project owner conveniently left out its purpose when explaining the goals. I conveniently didn't focus too much on that part. It was great pay for me at the time. It was a great project. Maybe I just didn't want to know what it would be used for. I got distracted.

#ethics-in-daily-life #software-development #war

Machine Learning Sorting

2018-05-15⊺11:52:14-05:00

“An O(N) Sorting Algorithm: Machine Learning Sorting”
Hanqing Zhao and Yueban Luo, arXiv, May 11, 2018
https://arxiv.org/pdf/1805.04272.pdf

The authors propose a new method for sorting a gigantic array of arbitrary values in linear time: Select a fixed number (say, 1000) of values from the array and sort them. Using these values as a training set, train a three-layer neural network to estimate the position in the sorted array that any given value will occupy. Set up an array of buckets equal in size to the original array. Feed each value in the array into the neural network and put it in the bucket corresponding to the network's prediction of the value's position in the sorted array. A linear-time amount of post-processing can then ensure that every value is in a bucket within a fixed distance of its position in the sorted array. Apply insertion sort to the almost-sorted values in the array of buckets to build the actual sorted array. Since insertion sort runs in linear time on almost-sorted arrays, the whole process, including the training of the neural network, takes linear time.
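
Here is a rough Python sketch of the scheme (my own illustration, not the authors' code); for brevity, an empirical position estimate built from the sorted sample stands in for the paper's three-layer neural network, whose only job is to predict roughly where each value lands.

```python
import random
import bisect

def ml_style_sort(values, sample_size=1000):
    """Sketch of the bucket-by-predicted-position idea.

    The paper trains a small neural network on a sorted sample to
    predict each value's final position; here a rank estimate built
    from the sorted sample plays that role.
    """
    n = len(values)
    if n <= 1:
        return list(values)

    # "Training": sort a fixed-size sample of the array.
    sample = sorted(random.sample(values, min(sample_size, n)))

    def predicted_position(v):
        # Fraction of the sample below v, scaled to the array length.
        rank = bisect.bisect_left(sample, v)
        return min(n - 1, int(rank / len(sample) * n))

    # Distribute values into buckets keyed by predicted position.
    buckets = [[] for _ in range(n)]
    for v in values:
        buckets[predicted_position(v)].append(v)

    # Concatenating the buckets yields an almost-sorted array if the
    # position estimator is roughly accurate.
    nearly_sorted = [v for bucket in buckets for v in bucket]

    # Insertion sort cleans up; it runs in linear time when every
    # element is within a bounded distance of its final position.
    for i in range(1, n):
        v = nearly_sorted[i]
        j = i - 1
        while j >= 0 and nearly_sorted[j] > v:
            nearly_sorted[j + 1] = nearly_sorted[j]
            j -= 1
        nearly_sorted[j + 1] = v
    return nearly_sorted
```

Calling ml_style_sort on a list of a million uniform random floats returns them in ascending order; how close the running time comes to linear depends, of course, on how well the position estimator matches the distribution of the data.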

I wouldn't have thought of that one.

Next month in arXiv: Adversarial sorting examples.

#algorithms #machine-learning #connections

Professionals Depicted by Models

2018-05-15⊺09:08:34-05:00

Stock photos of models portraying professionals reflect the stereotypes, misconceptions, or, um, imaginative design concepts of art directors.

“People Are Sharing Hilariously Bad Stock Photos of Their Jobs”
“Ilona”, BoredPanda, May 15, 2018
https://www.boredpanda.com/funny-bad-stock-photos-of-jobs-badstockphotosofmyjob/

#humor #stock-photos

PGP Vulnerability Discovered: Turn Off Automatic Decryption Until Patches Are Released

2018-05-14⊺10:28:40-05:00

Some security researchers have discovered a new attack on PGP. They have written a paper explaining how it works and plan to publish it tomorrow, but the Electronic Frontier Foundation has learned enough about it that they are sounding an alarm even before the details are public:

“Attention PGP Users: New Vulnerabilities Require You to Take Action Now”
Danny O'Brien and Gennie Gebhart, Deeplinks, Electronic Frontier Foundation, May 13, 2018
https://www.eff.org/deeplinks/2018/05/attention-pgp-users-new-vulnerabilities-require-you-take-action-now

A group of European security researchers have released a warning about a set of vulnerabilities affecting users of PGP and S/MIME. EFF has been in communication with this research team, and can confirm that these vulnerabilities pose an immediate risk to those using these tools for email communication, including the potential exposure of the contents of past messages. …

Our advice, which mirrors that of the researchers, is to immediately disable and/or uninstall tools that automatically decrypt PGP-encrypted email.

The story includes links to instructions provided by the EFF on how to temporarily disable the PGP plug-ins for Thunderbird, Apple Mail, and Outlook.

Update (2018-05-14⊺11:34:32-05:00)

The discoverers of the attack now have a Web site up and have published a draft of their paper there:

“Efail: Breaking S/MIME and OpenPGP Email Encryption Using Exfiltration Channels”
Damian Poddebniak, Christian Dresen, Jens Müller, Fabian Ising, Sebastian Schinzel, Simon Friedberger, Juraj Somorovsky, and Jörg Schwenk, May 14, 2018
https://efail.de/efail-attack-paper.pdf

There are actually two vulnerabilities. One exploits peculiarities, arguably errors, in mail user agents that parse and interpret HTML in messages after they have been decrypted. The other exploits a weakness in the OpenPGP standard: Under certain circumstances, the standard doesn't require integrity checks and doesn't specify what a decryption algorithm should do when an integrity check fails. Consequently, many mail user agents do the wrong thing when they receive a message that has been tampered with.

The Electronic Frontier Foundation has a follow-up, and other security authorities are providing quick analysis as well:

“Not So Pretty: What You Need to Know about E-Fail and the PGP Flaw”
Erica Portnoy, Danny O'Brien, and Nate Cardozo, Deeplinks, Electronic Frontier Foundation, May 14, 2018
https://www.eff.org/deeplinks/2018/05/not-so-pretty-what-you-need-know-about-e-fail-and-pgp-flaw-0

“Some Notes on eFail”
Robert Graham, Errata Security, May 14, 2018
https://blog.erratasec.com/2018/05/some-notes-on-efail.html

“New Vulnerabilities in Many PGP and S/MIME Enabled Email Clients”
Matthew Green, Twitter, May 14, 2018
https://mobile.twitter.com/matthew_d_green/status/995989254143606789

#Pretty-Good-Privacy #privacy #communications-security

Rewarding an Unrepentant Torturer

2018-05-10⊺07:56:13-05:00

“As If Nuremberg Never Happened”
Peter van Buren, The American Conservative, March 19, 2018
http://theamericanconservative.com/articles/gina-haspel-as-if-nuremberg-never-happened

Nothing will say more about who we are, across three American administrations — one that demanded torture, one that covered it up, and one that seeks to promote its bloody participants — than whether Gina Haspel becomes director of the CIA. …

Gina Haspel is now eligible for the CIA directorship because Barack Obama did not prosecute anyone for torture; he merely signed an executive order banning it in the future. He did not hold any truth commissions, and ensured that almost all government documents on the torture program remain classified. He did not prosecute the CIA officials who destroyed videotapes of the torture scenes. …

Unless Congress awakens to confront this nightmare and deny Gina Haspel's nomination as director of the CIA, torture will have transformed us and so it will consume us. Gina Haspel is a torturer. We are torturers. It is as if Nuremberg never happened.

#torture #Central-Intelligence-Agency #war

A Scheme-Based Extension Language Framework for Mac OS

2018-05-09⊺12:52:27-05:00

“Swift LispKit”
“objecthub,” GitHub, May 5, 2018
https://github.com/objecthub/swift-lispkit

LispKit is a framework for building Lisp-based extension and scripting languages for macOS applications. LispKit is fully written in the programming language Swift. LispKit implements a core language based on the R7RS (small) Scheme standard. It is extensible, allowing the inclusion of new native libraries written in Swift, of new libraries written in Scheme, as well as custom modifications of the core environment consisting of a compiler, a virtual machine as well as the core libraries.

It's free software, under the Apache 2.0 license.

This seems to be a kind of Mac OS analogue of GNU Guile.

#Scheme #Mac-OS #programming-languages

An Ethics Checklist for Black-Box Deciders

2018-05-07⊺16:51:40-05:00

An attempt to identify and explain the ethical preconditions for replacing social policies with algorithmic models. It's incomplete, but the questions that are included are relevant and salient, and the cautionary tales and links are thought-provoking.

“Math Can't Solve Everything: Questions We Need to Be Asking Before Deciding an Algorithm Is the Answer”
Jamie Williams and Lena Gunn, Deeplinks, Electronic Frontier Foundation, May 7, 2018
https://www.eff.org/deeplinks/2018/05/math-cant-solve-everything-questions-we-need-be-asking-deciding-algorithm-answer

#black-box-deciders #ethics-in-daily-life #algorithms

An Intelligibility Crisis in Machine Learning

2018-05-07⊺14:54:59-05:00

A surprising amount of research in artificial intelligence, and particularly in the field of machine learning, is being carried out by people who don't understand what they are doing, and yielding software that behaves in ways that are impossible to explain or understand. As a result, much of the work is difficult or impossible to reproduce or confirm.

“AI Researchers Allege that Machine Learning Is Alchemy”
Matthew Hutson, Science, May 3, 2018
https://www.sciencemag.org/news/2018/05/ai-researchers-allege-machine-learning-alchemy

#artificial-intelligence #machine-learning #intelligibility-crisis

Super-Redacted

2018-05-07⊺14:18:02-05:00

Many years ago, the Church of Scientology created a one-act play featuring a conversation between an intrepid newspaper reporter and a disgruntled ex-Scientologist. The character of the ex-Scientologist was based on a real person, who had written a debunking paper that the church wished to discredit. The intrepid newspaper reporter was Lois Lane of the Daily Planet, and the play also features her co-workers Clark Kent and Jimmy Olsen.

An agent of the Federal Bureau of Investigation obtained a draft of this play and added it to the FBI's extensive collection of documents relating to Scientology. Last year, an investigative journalist submitted a request for those documents to the FBI under the Freedom of Information Act, and the FBI has gradually released a few of them, including the drama.

The FBI redacted the names of Lois Lane and Clark Kent, citing privacy concerns.

“The FBI Redacted the Names of DC Comic Book Characters to Protect Their Non-Existent Privacy”
Dell Cameron, Gizmodo, April 30, 2018
https://gizmodo.com/the-fbi-redacted-the-names-of-dc-comic-book-characters-1825658114

“Kryptonians are entitled to just as much privacy as other Americans.”

#Freedom-of-Information-Act #Federal-Bureau-of-Investigation #humor

Rowhammer on Android

2018-05-04⊺14:30:03-05:00

Security researchers have figured out how to conduct a Rowhammer attack on random-access memory using the GPUs in Android phones. The attack is implemented in JavaScript, so a malicious Web page can launch the attack as soon as the target loads the page into the browser on their phone.

“Drive-By Rowhammer Attack Uses GPU to Compromise an Android Phone”
Dan Goodin, Ars Technica, May 3, 2018
https://arstechnica.com/information-technology/2018/05/drive-by-rowhammer-attack-uses-gpu-to-compromise-an-android-phone/

#Rowhammer #Android

New Spectre Variants Discovered

2018-05-04⊺14:21:46-05:00

At least eight new variants of the Spectre vulnerability have been discovered and will be surfacing soon. One was discovered by Google's Project Zero team, which notoriously publishes the vulnerabilities they discover after ninety days, regardless of whether patches have been found. For that one, time's up on Monday, May 7.

Some of the vulnerabilities are more consequential or more easily exploited than others. One is reported to cause a serious problem for host systems running virtual machines: Malware running on a VM can break into the host or into other VMs on the same host.

“Exclusive: Spectre-NG — Multiple New Intel CPU Flaws Revealed, Several Serious”
Jürgen Schmidt, c't, May 3, 2018
https://www.heise.de/ct/artikel/Exclusive-Spectre-NG-Multiple-new-Intel-CPU-flaws-revealed-several-serious-4040648.html

One of the Spectre-NG flaws simplifies attacks across system boundaries to such an extent that we estimate the threat potential to be significantly higher than with Spectre. Specifically, an attacker could launch exploit code in a virtual machine (VM) and attack the host system from there — the server of a cloud hoster, for example. Alternatively, it could attack the VMs of other customers running on the same server. Passwords and secret keys for secure data transmission are highly sought-after targets on cloud systems and are acutely endangered by this gap. Intel's Software Guard Extensions (SGX), which are designed to protect sensitive data on cloud servers, are also not Spectre-safe.

Although attacks on other VMs or the host system were already possible in principle with Spectre, the real-world implementation required so much prior knowledge that it was extremely difficult. However, the aforementioned Spectre-NG vulnerability can be exploited quite easily for attacks across system boundaries, elevating the threat potential to a new level. Cloud service providers such as Amazon or Cloudflare and, of course, their customers are particularly affected.

#spectre #virtual-machines #cloud

Three Attempted Defenses against Adversarial Examples

2018-05-04⊺11:50:03-05:00

The authors provide summary descriptions of three proposed defensive strategies for training black-box deciders that block attempts to find adversarial examples: “adversarial training” (including adversarial examples in training sets), “defensive distillation” (“smoothing the model's decision surface” in the hope of eliminating the abrupt discontinuities that adversarial examples exploit), and “gradient masking” (flattening the gradients in the vicinity of a successfully classified training object so that it is computationally difficult for the software that finds adversarial examples to explore that space productively).
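
To make the first of these concrete, here is a toy sketch of adversarial training (my own illustration, not the authors' code), using a plain logistic-regression classifier instead of a deep network: at each training step it crafts fast-gradient-sign perturbations of the inputs and adds them to the training set.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary classification data: two Gaussian blobs in 20 dimensions.
n, d = 500, 20
X = np.vstack([rng.normal(-1.0, 1.0, size=(n // 2, d)),
               rng.normal(+1.0, 1.0, size=(n // 2, d))])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

w, b = np.zeros(d), 0.0
lr, epsilon = 0.1, 0.3   # learning rate; size of adversarial perturbation

for step in range(200):
    # Craft adversarial versions of the training points with the fast
    # gradient sign method: perturb each input in the direction that
    # most increases the loss of the current model.
    p = sigmoid(X @ w + b)
    grad_x = np.outer(p - y, w)          # d(loss)/d(input) for logistic loss
    X_adv = X + epsilon * np.sign(grad_x)

    # Train on the union of clean and adversarial examples.
    X_aug = np.vstack([X, X_adv])
    y_aug = np.concatenate([y, y])
    p_aug = sigmoid(X_aug @ w + b)
    w -= lr * (X_aug.T @ (p_aug - y_aug)) / len(y_aug)
    b -= lr * np.mean(p_aug - y_aug)

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1))
print("training accuracy after adversarial training:", accuracy)
```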

The authors' assessment is that the first two methods are whack-a-mole games in which new adversarial examples just pop up in previously unexplored parts of the input space, while the third doesn't work at all. It papers over exploitable weaknesses, but would-be attackers can simply develop their own models, without gradient masking, and find adversarial examples against those models; those examples turn out to be equally effective against the gradient-masked decider.

“Is Attacking Machine Learning Easier Than Defending It?”
Ian Goodfellow and Nicolas Papernot, cleverhans-blog, February 15, 2017
http://www.cleverhans.io/security/privacy/ml/2017/02/15/why-attacking-machine-learning-is-easier-than-defending-it.html

The authors conclude with some possible explanations of the easy availability, robustness, and persistence of adversarial examples:

Adversarial examples are hard to defend against because it is hard to construct a theoretical model of the adversarial example crafting process. Adversarial examples are solutions to an optimization problem that is non-linear and non-convex for many ML models, including neural networks. Because we don't have good theoretical tools for describing the solutions to these complicated optimization problems, it is very hard to make any kind of theoretical argument that a defense will rule out a set of adversarial examples.

From another point of view, adversarial examples are hard to defend against because they require machine learning models to produce good outputs for every possible input. Most of the time, machine learning models work very well but only work on a very small amount of all the many possible inputs they might encounter.

The authors don't mention one explanation that I find particularly plausible. A trained deep neural network maps each possible input to a decision or a classification. The space of possible inputs is typically immense. I imagine the decision function that the network implements as carving this input space into regions, each region containing the inputs that will be classified in the same way. (The regions may or may not be simply connected; that doesn't matter.) Instead of cleanly separating the space into blocks with easily described shapes, the regions have extremely irregular boundaries that curl around one another and thread through one another and break one another up in complicated ways. The number of dimensions of the space is huge, so that the Euclidean distance between an arbitrarily chosen point inside one region and the nearest point that is inside some other (arbitrarily chosen) region is likely to be small, since there are so many directions in which to search. Evolution drives animal brains to make decisions and classifications that are not only mostly accurate but also intelligible (and energy-efficient). Training a deep neural network doesn't impose these additional constraints and so yields network configurations that implement decision functions that are much more likely to carve up their input spaces in these irremediably intricate ways.

#adversarial-examples #neural-networks #black-box-deciders

An Overview of Research on Adversarial Examples

2018-05-04⊺10:58:49-05:00

A snapshot of the state of research on adversarial examples at the time of publication (February 2017). It's partial, but there are a lot of links that look useful.

“Attacking Machine Learning with Adversarial Examples”
Ian Goodfellow, Nicolas Papernot, Sandy Huang, Yan Duan, Pieter Abbeel, and Jack Clark, OpenAI, February 24, 2017
https://blog.openai.com/adversarial-example-research/

#adversarial-examples #machine-learning #black-box-deciders

Complex AI Decision-Making through Debates

2018-05-04⊺10:42:16-05:00

In many contexts, it would be foolish to trust software decision systems that cannot explain or justify their decisions. However, the structure of neural networks seems to preclude explanations that use concepts and categories that are sufficiently high-level to be intelligible to human beings.

One approach to making black-box deciders more trustworthy is to make their training sets less noisy, so that they more accurately reflect the actual goals and interests of the human users of the system. In complex decision-making, one difficulty is that human beings are not very accurate in assessing complex situations and determining what the “right” solution should be.

Researchers at OpenAI propose to provide human trainers with two AI assistants, one to find the best justification for a decision and the other to challenge and rebut that justification. Before pronouncing on each training example, the human trainer listens to a debate between these two systems and decides which of them is right. The theory is that the AIs can dumb down their descriptions of the situation to the point where even a human being can judge it accurately.

“AI Safety via Debate”
Geoffrey Irving and Dario Amodei
OpenAI, May 3, 2018
https://blog.openai.com/debate/

One approach to aligning AI experts with human goals and preferences is to ask humans at training time which behaviors are safe and useful. While promising, this method requires humans to recognize good or bad behavior; in many situations an agent's behavior may be too complex for a human to understand, or the task itself may be hard to judge or demonstrate. Examples include environments with very large, non-visual observation spaces — for instance, an agent that acts in a computer security-related environment, or an agent that coordinates a large set of industrial robots.

How can we augment humans so that they can effectively supervise advanced AI systems? One way is to take advantage of the AI itself to help with the supervision, asking the AI (or a separate AI) to point out flaws in any proposed action. To achieve this, we reframe the learning problem as a game played between two agents, where the agents have an argument with each other and the human judges the exchange. Even if the agents have a more advanced understanding of the problem than the human, the human may be able to judge which agent has the better argument (similar to expert witnesses arguing to convince a jury). …

There are some fundamental limitations to the debate model that may require it to be improved or augmented with other methods. Debate does not attempt to address issues like adversarial examples or distributional shift — it is a way to get a training signal for complex goals, not a way to guarantee robustness of such goals.

It's sad that the designers and advocates of this method automatically frame it as a possible way to overcome some of the deficiencies of human trainers rather than as a way of overcoming the opacity and inexplicability of deep neural networks, which is one of the fundamental flaws of black-box deciders and a key reason that they can't be trusted in cases where explicability is crucial. To my mind, a debate between opposing AIs would be even more useful during testing and in real-world use than at the training stage. Such a debate might expose a valid high-level rationale for accepting or rejecting the output of a black-box decider, and also might reveal that no such rationale exists.

#black-box-deciders #artificial-intelligence #machine-learning

This work is licensed under a Creative Commons Attribution-ShareAlike License.

John David Stone (havgl@unity.homelinux.net)

created June 1, 2014 · last revised December 10, 2018