Topic: #sabotage

Adversarial Reprogramming of Deep Neural Networks

2018-07-12⊺10:34:53-05:00

Some researchers at Google Brain have discovered a technique by which a black-box decider that has been successfully trained for one task can be used to perform an unrelated computation: embed the inputs for that computation in the input to the decider, and extract the result of the unrelated computation from the decider's output.

One of the proof-of-concept experiments that the paper describes uses an ImageNet classifier to recognize handwritten numerals. The inputs for the numeral-recognition problem are small images (twenty-eight pixels high and twenty-eight pixels wide), and the task is to determine which of the ten decimal numerals each input represents. Normally the classifier takes much larger, full-color images as inputs and outputs a tag identifying what's in the picture, chosen from a fixed list of a thousand tags. Numerals aren't included in that list, so the classifier never outputs a numeral. It's not designed to be a recognizer for handwritten numerals.
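To make the setup concrete, here is a rough sketch of that embedding step. It is not taken from the paper: it assumes a 224-by-224, three-channel input, a size that many ImageNet classifiers expect (the sizes in the paper vary with the model), and it treats every pixel outside the central square as freely choosable.

    import numpy as np

    DIGIT = 28    # height and width of a handwritten-numeral image
    FULL = 224    # an assumed ImageNet input size; the paper's models vary

    def embed_digit(digit, surround):
        """Place a 28x28 grayscale digit at the center of a large image
        whose remaining pixels come from the freely chosen pattern."""
        image = surround.copy()
        top = (FULL - DIGIT) // 2
        # replicate the grayscale digit across the three color channels
        image[top:top + DIGIT, top:top + DIGIT, :] = digit[:, :, None]
        return image

    # example: an arbitrary surrounding pattern and a blank digit
    surround = np.random.rand(FULL, FULL, 3)
    digit = np.zeros((DIGIT, DIGIT))
    big_image = embed_digit(digit, surround)    # shape (224, 224, 3)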

But the classifier can be co-opted. The researchers took the first ten tags from the ImageNet tag list and associated them with numerals (tench ↦ 0, goldfish ↦ 1, etc.). Then they set up an optimization problem: find the pattern of pixels making up a large image so as to maximize the classifier's success in “interpreting” the images that result when each small image from the training set for the numeral-recognition task is embedded at the center of the large image. An interpretation counts as correct, for this purpose, if the classifier returns the tag that is mapped to the correct numeral.

The pixel pattern that emerges from this optimization problem looks like video snow; it doesn't have any human-recognizable elements. When one of the small handwritten numerals is embedded at the center, the image looks to a human being like a white handwritten numeral in a small black square surrounded by this random-looking video snow. But if the numeral is a 9, the classifier thinks that the image looks very like an ostrich, whereas if it's a 3, the classifier thinks that it depicts a tiger shark.

Note that the classifier is not being retrained here and isn't doing anything that it wouldn't do right out of the box. The “training” step is just finding the solution to the optimization problem: What pattern of pixels will most effectively trick the classifier into doing the computation we want it to do when the input data for our problem is embedded into that pattern of pixels?
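Here is a minimal sketch of that optimization, assuming gradient access to a frozen, pretrained torchvision ResNet-50 standing in for the classifier, with illustrative sizes and hyperparameters; none of these specifics come from the paper. The only thing gradient descent is allowed to adjust is the pixel pattern surrounding the embedded digit.

    import torch
    import torch.nn.functional as F
    from torchvision import models

    # A frozen, pretrained ImageNet classifier; it is never retrained.
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    for p in model.parameters():
        p.requires_grad_(False)

    FULL, DIGIT = 224, 28
    top = (FULL - DIGIT) // 2

    # Mask that is zero over the central square where the digit sits, so
    # the optimizer can never touch the digit itself.
    mask = torch.ones(1, 3, FULL, FULL)
    mask[:, :, top:top + DIGIT, top:top + DIGIT] = 0

    # The pixel pattern being optimized; one pattern is shared by all inputs.
    W = torch.zeros(1, 3, FULL, FULL, requires_grad=True)
    optimizer = torch.optim.Adam([W], lr=0.05)    # illustrative hyperparameters

    def step(digits, labels):
        """One gradient step on a batch of 28x28 digits whose labels (0-9)
        also index the ImageNet tags mapped to them (tench = 0, goldfish = 1, ...)."""
        batch = torch.zeros(digits.size(0), 3, FULL, FULL)
        batch[:, :, top:top + DIGIT, top:top + DIGIT] = digits.unsqueeze(1)
        inputs = batch + mask * torch.tanh(W)     # digit in the middle, pattern around it
        logits = model(inputs)                    # ordinary ImageNet logits, 1000 tags
        loss = F.cross_entropy(logits, labels)    # reward only the mapped tag
        optimizer.zero_grad()
        loss.backward()                           # gradients flow only into W
        optimizer.step()
        return loss.item()

Repeating this step over batches of training digits drives the shared pattern toward one that makes the classifier emit the mapped tag for whatever digit is embedded at the center.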

The researchers call the optimized pixel patterns “adversarial programs.”

Besides the numeral-recognition task, the researchers were also able to trick ImageNet classifiers — six different ones, in fact — into doing two other classification tasks, just by finding optimal pixel patterns — adversarial programs — in which to embed the input data.

“Adversarial Reprogramming of Neural Networks”
Gamaleldin F. Elsayed, Ian Goodfellow, and Jascha Sohl-Dickstein, arXiv, June 28, 2018
https://arxiv.org/pdf/1806.11146.pdf

#adversarial-reprogramming #adversarial-examples #ImageNet #sabotage

Second-Stage Malware Downloading

2018-04-16⊺13:48:34-05:00

Now that Google has learned to scrutinize Android apps, refusing to distribute most of the apps that contain malware through the Google Play store, makers of malware targeted at specific institutions and groups have learned to postpone their malware downloads until after the apps have been installed and configured. That way, Google doesn't get the opportunity to detect the malware beforehand, and the innocent-appearing app can acquire all the privileges it needs to download and activate the malware once the target's defenses are down.

“Fake Android Apps Used for Targeted Surveillance Found in Google Play”
Zack Whittaker, Zero Day, April 16, 2018
https://www.zdnet.com/article/fake-android-apps-used-for-targeted-surveillance-found-in-google-play

#android-malware #sabotage

Wired on Adversarial Examples

2018-03-12⊺14:38:06-05:00

“AI Has a Hallucination Problem That's Proving Tough to Fix”
Tom Simonite, Wired, March 9, 2018
https://www.wired.com/story/ai-has-a-hallucination-problem-thats-proving-tough-to-fix

It's not clear how to protect the deep neural networks fueling innovations in consumer gadgets and automated driving from sabotage by hallucination.

#adversarial-examples #neural-networks #sabotage

This work is licensed under a Creative Commons Attribution-ShareAlike License.

John David Stone (havgl@unity.homelinux.net)

created June 1, 2014 · last revised December 10, 2018