“It's Time for an RSS Revival”
Brian Barrett, Wired, March 30, 2018
Well, probably not, but it sure would be a better way for someone fleeing Facebook to assemble a newsfeed than just switching over to Google News.
Eight vignettes from a foreseeable future.
Cory Doctorow, this., August 17, 2017
Bruce Schneier provides a nice overview of the mechanics of surveillance capitalism and expresses the hope that government regulation will bring it under control eventually, even though he doesn't expect Congress to produce any such regulation “anytime soon.”
“It's Not Just Facebook. Thousands of Companies Are Spying On You”
Bruce Schneier, CNN.com, March 26, 2018
Schneier also offers another solution, which likewise strikes me as wishful thinking:
One of the responses to the Cambridge Analytica scandal is that people are deleting their Facebook accounts. It's hard to do right, and doesn't do anything about the data that Facebook collects about people who don't use Facebook. But it's a start. The market can put pressure on these companies to reduce their spying on us, but it can only do that if we force the industry out of its secret shadows.
Schneier advances this idea so diffidently and undercuts it so thoroughly with his qualifications that I find it difficult to take this passage seriously. #DeleteFacebook has become a meme, and that's a vaguely hopeful sign, but the account deleters are not going to exert any significant market pressure unless they become at least as numerous as the thousands of new users who join Facebook every day.
If you packed away your Elf on the Shelf surveillance kit with the Christmas decorations, but still feel the need to let your kids know that they have no privacy whatever, you'll be relieved to know that now any religious holiday can be used as a pretext for spying on the little ones. For Easter, there's “Peep on a Perch,” a plushie shaped like a marshmallow Peep, sold with a book that explains life in a Total Information Awareness regime to toddlers.
“Peep on a Perch (Peeps)”
Random House, February 13, 2018
Start a new Easter tradition!
No word yet on availability dates for “Golem on the Gueridon” (Yom Kippur), “Hajji on the Highboy” (Ramadan), and “Aillen on the Ottoman” (Samhain).
“The Cambridge Analytica Con”
Yasha Levine, The Baffler, March 21, 2018
What Cambridge Analytica is accused of doing — siphoning people's data, compiling profiles, and then deploying that information to influence them to vote a certain way — Facebook and Silicon Valley giants like Google do every day, indeed, every minute we're logged on, on a far greater and more invasive scale.
Today's internet business ecosystem is built on for-profit surveillance, behavioral profiling, manipulation and influence. That's the name of the game. It isn't just Facebook or Cambridge Analytica or even Google. It's Amazon. It's eBay. It's Palantir. It's Angry Birds. It's Movie Pass. It's Lockheed Martin. It's every app you've ever downloaded. Every phone you bought. Every program you watched on your on-demand cable TV package.
All of these games, apps, and platforms profit from the concerted siphoning up of all data trails to produce profiles for all sorts of micro-targeted influence ops in the private sector. …
Silicon Valley of course keeps a tight lid on this information, but you can get a glimpse of the kinds of data our private digital dossiers contain by trawling through their patents. Take, for instance, a series of patents Google filed in the mid-2000s for its Gmail-targeted advertising technology. The language, stripped of opaque tech jargon, revealed that just about everything we enter into Google's many products and platforms — from email correspondence to Web searches and internet browsing — is analyzed and used to profile users in an extremely invasive and personal way. Email correspondence is parsed for meaning and subject matter. Names are matched to real identities and addresses. Email attachments — say, bank statements or testing results from a medical lab — are scraped for information. Demographic and psychographic data, including social class, personality type, age, sex, political affiliation, cultural interests, social ties, personal income, and marital status[,] is extracted. In one patent, I discovered that Google apparently had the ability to determine if a person was a legal U.S. resident or not. It also turned out you didn't have to be a registered Google user to be snared in this profiling apparatus. All you had to do was communicate with someone who had a Gmail address. …
The enormous commercial interest that political campaigns have shown in social media has earned them privileged attention from Silicon Valley platforms in return. Facebook runs a separate political division specifically geared to help its customers target and influence voters.
The company even allows political campaigns to upload their own lists of potential voters and supporters directly into Facebook's data system. So armed, digital political operatives can then use those people's social networks to identify other prospective voters who might be supportive of their candidate — and then target them with a whole new tidal wave of ads.
Both of the Establishment parties have been using surveillance companies' dossiers to target their propaganda since at least 2008 and now sink tens of millions, perhaps hundreds of millions, of dollars into such projects in every election cycle. So it's not too likely that we're going to see Congress regulate technology companies in any way that would interfere with the smooth operation of the mechanism or even slightly alienate the power brokers. Zuckerberg is pleading with Congress to pass regulatory legislation because he is now confident that he is ready to play the game of regulatory capture and will be better at it than most of his competitors.
Inside Higher Ed ran an opinion piece today complaining about Facebook's attempt to shift the blame for the unlicensed transfer of personal data about its users onto the Cambridge University senior research associate who nominally made the agreement with Facebook:
“Facebook's Professor Problem”
Mark Bartholomew, Inside Higher Ed, March 28, 2018
The best practices of academia need to find more purchase at Facebook. For studies on humans, it is necessary in the university setting to obtain informed consent. As a private business, Facebook is not obligated to comply with this standard, and it doesn't. Instead, it need only make sure that the terms of any potential human experimentation are covered under its capacious and unreadable terms of service.
By contrast, in the realm of academic research, scientists cannot wave a bunch of impenetrable legalese under a test subject's nose and receive a blank check to do what they want. Moreover, university institutional review boards act as a safeguard, making sure that even when consent is informed, the benefits of any proposed research outweigh the costs to the participants. University IRBs need to make sure they fulfill their responsibilities when it comes to experimenting on social media users.
More importantly, it is time that Facebook starts following academics' best practices rather than use them for cover.
Although Bartholomew identifies a significant ethical failure on Facebook's part, that particular failure isn't the one at the heart of the current controversy and doesn't fully explain what the academic involved did wrong. Aleksandr Kogan's principal ethical offense was his participation in a money- and data-laundering scheme. He received money from Cambridge Analytica and used it to pay participants in his mostly fake research project, the users of his personality-quiz app, in exchange for which they gave Kogan full access to their Facebook profiles and those of their “Facebook friends.” Kogan collected the data and passed it back to Cambridge Analytica. He provided the cover, the false front, for what was basically Cambridge Analytica's straightforward purchase of parts of Facebook's dossiers on some of their users.
Neither Cambridge Analytica nor Facebook wanted to acknowledge publicly that the purpose of the project was to improve the targeting of political propaganda to gullible American Facebook users. To conceal this purpose, Cambridge Analytica concocted the cover story and hired Kogan to implement it.
Kogan claims that he didn't know anything about what Cambridge Analytica was doing with the data he shared with them but simply felt that they were entitled to use that data however they liked, since they had paid for it. But I doubt he's that stupid.
Researchers have discovered another vulnerability in the Spectre family. Like Spectre, it exposes side effects of speculative execution of instructions. Whereas Spectre extracted the information from the cache, the new attack, called BranchScope, extracts it from the directional predictor, a component of the branch-prediction unit inside the processor.
The directional predictor is updated each time a conditional branch instruction is executed, even speculatively, and maintains an estimate of the likelihood that the condition will be true the next time the same conditional branch instruction is executed. It leaks information because its state is modified during speculative execution and not restored afterward if the speculative execution path turns out not to have been the correct one.
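The entries of a directional predictor are conventionally built from two-bit saturating counters, which can be modeled in a few lines. The sketch below is an illustrative software model of the principle only (the class and function names are mine), not the actual hardware table layout, which the BranchScope authors had to reverse-engineer:

```python
# Toy model of one directional-predictor entry: a 2-bit saturating counter.
# Illustrates the principle behind BranchScope, not the real microarchitecture.

class TwoBitCounter:
    """States 0-1 predict 'not taken'; states 2-3 predict 'taken'."""
    def __init__(self, state=2):
        self.state = state

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        # The counter saturates at 0 and 3. Crucially, it is updated even
        # for speculatively executed branches and is never rolled back.
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

# A victim branch whose direction depends on a secret bit leaves the
# counter in a state that an attacker sharing the same predictor entry
# can later observe.
def victim(counter, secret_bit):
    counter.update(secret_bit == 1)

def attacker_probe(counter):
    # The attacker primes the counter to a known weak state beforehand,
    # lets the victim run, then reads the resulting prediction.
    return counter.predict()

c = TwoBitCounter(state=2)      # primed to 'weakly taken'
victim(c, 0)                    # secret bit 0: branch not taken
print(attacker_probe(c))        # prediction flipped to 'not taken'
```

In the real attack the attacker cannot read the counter directly; instead he executes his own branch mapped to the same predictor entry and infers the state from whether it mispredicts. The model above compresses that measurement step into a direct read.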
“BranchScope: A New Side-Channel Attack on Directional Branch Predictor”
Dmitry Evtyushkin, Ryan Riley, Nael Abu-Ghazaleh, and Dmitry Ponomarev, Proceedings of the 23rd ACM International Conference on Architectural Support for Programming Languages and Operating Systems, March 24, 2018
“As Predicted, More Branch Prediction Processor Attacks Are Discovered”
Peter Bright, Ars Technica, March 26, 2018
You might not expect that giving the Facebook app on your Android phone permission to read your contact list would also allow Facebook to transcribe all the metadata from all the calls and text messages in your phone's entire call history. But it did, at least until Google deprecated version 4.0 of the Android API — which was about five months ago.
“Facebook Scraped Call, Text Message Data for Years from Android Phones”
Sean Gallagher, Ars Technica, March 24, 2018
The United States Department of Justice is continuing its doomed quest for an encryption system that simultaneously conceals texts from some people who should not have access to them and reveals them to other people who should not have access to them. They have begun to organize research teams and conferences to discuss ways of forcing or tricking people who want strong encryption into accepting weak encryption instead.
The new feature of this story is that some of the researchers who have gone over to the dark side are now identified by name: Ray Ozzie, formerly Chief Technical Officer and Chief Software Architect for (of course) Microsoft Corporation; Stefan Savage, Irwin and Joan Jacobs Chair in Information and Computer Science at the University of California, San Diego; and Ernie Brickell, Chief Security Architect, Intel Corporation.
The presence of Brickell and Ozzie means that users should never trust encryption systems supplied in Intel hardware or as part of the Windows operating system, but should continue to use systems, such as GPG, that are implemented entirely in open-source software.
“Justice Dept. Revives Push to Mandate a Way to Unlock Phones”
Charlie Savage, The New York Times, March 25, 2018
“My Cow Game Extracted Your Facebook Data”
Ian Bogost, The Atlantic, March 22, 2018
Facebook has vowed to audit companies that have collected, shared, or sold large volumes of data in violation of its policy, but the company cannot close the Pandora's box it opened a decade ago, when it first allowed external apps to collect Facebook user data. That information is now in the hands of thousands, maybe millions of people.
An Oxford lecturer in international development prescribes what needs to be done in order to restore privacy to Internet users.
“‘Cambridge Analytica’: Surveillance Is the DNA of the Platform Economy”
Ivan Manokha, Open Democracy, March 23, 2018
The current social mobilization against Facebook resembles the actions of activists who, in opposition to neoliberal globalization, smash a McDonald's window during a demonstration.
What we need is a total redefinition of the right to privacy (which was codified as a universal human right in 1948, long before the Internet), to guarantee its respect, both offline and online.
What we need is a body of international law that will provide regulations and oversight for the collection and use of data.
What is required is an explicit and concise formulation of terms and conditions which, in a few sentences, will specify how users' data will be used.
It is important to seize the opportunity presented by the Cambridge Analytica scandal to push for these more fundamental changes.
But the Cambridge Analytica scandal provides no such opportunity. The Snowden revelations (2013) were the last, best opportunity, and at that time we looked at the facts and decided not to do anything about them. The current reaction to Cambridge Analytica is just some extremely faint and transient buyer's remorse, amplified by a few politicians who assumed for years that their opponents didn't understand technology well enough to turn it to their advantage.
“Poem: I Lik the Form”
O. Westin, Micro SF/F, March 21, 2018
Forty-five Republicans and ten Democrats. *Sigh.* The Democrats were Coons of Delaware, Cortez Masto of Nevada, Donnelly of Indiana, Heitkamp of North Dakota, Jones of Alabama (*sigh*), Manchin of West Virginia, Menendez of New Jersey, Nelson of Florida, Reed of Rhode Island, and Whitehouse of Rhode Island.
“15 Years after the Invasion of Iraq, Here Are the Dems Who Just Voted for Endless War in Yemen”
Sarah Lazare, In These Times, March 20, 2018
The House of Representatives has now passed, and the Senate is on the verge of passing, the Clarifying Lawful Overseas Use of Data (CLOUD) Act, institutionalizing and giving notional legal cover to warrantless surveillance programs, both inside the United States and in other countries, both by American national-security and law-enforcement agencies and, if the governments agree, by their counterparts in dozens of other countries. It explicitly grants such agencies access to “the contents of a wire or electronic communication and any record or any other information” about a target of investigation.
Congress is working this week on a massive budget bill. The CLOUD Act was embedded in the House version of that bill so as to ensure its passage. Microsoft, Facebook, Google, and Apple are on record as supporting it, apparently because it would save them the cost of repeatedly litigating government demands for their users' information. Under the CLOUD Act, the grounds for such litigation would be removed, and those companies could simply yield up that information as soon as the government(s) requested it.
A coalition of advocates for privacy, civil liberties, and human rights, headed by the American Civil Liberties Union, is opposing the bill, but is unlikely to be able to block it.
“S.2383 – CLOUD Act”
Library of Congress, February 6, 2018
“H.R.4943 – CLOUD Act”
Library of Congress, February 6, 2018
“Tech Companies' Letter of Support for Senate CLOUD Act”
Apple, Facebook, Google, Microsoft, and Oath, Data Law, February 6, 2018
“CLOUD Act Coalition Letter”
CLOUD Act Coalition, American Civil Liberties Union, March 12, 2018
“A New Backdoor around the Fourth Amendment: The CLOUD Act”
David Ruiz, Deeplinks, Electronic Frontier Foundation, March 13, 2018
The CLOUD Act allows the president to enter an executive agreement with a foreign nation known for human rights abuses. Using its CLOUD Act powers, police from that nation inevitably will collect Americans' communications. They can share the content of those communications with the U.S. government under the flawed “significant harm” test. The U.S. government can use that content against these Americans. A judge need not approve the data collection before it is carried out. At no point need probable cause be shown. At no point need a search warrant be obtained.
This is wrong. … The backdoor proposed in the CLOUD Act violates our Fourth Amendment right to privacy by granting unconstitutional access to our private lives online.
“Congress Could Sneak a Bill Threatening Global Privacy into Law”
Rhett Jones, Gizmodo, March 15, 2018
“House Staples Extraterritorial Search Permissions onto 2,232-Page Budget Bill; Passes It”
Tim Cushing, Techdirt, March 22, 2018
Surveillance is essential to Facebook's business model. It collects and compiles enormous amounts of personal data on its users (and non-users), and it sells to its customers — advertisers, academics, political operatives, and others — the privilege of creating applications that collect and compile still more personal data.
In theory, Facebook doesn't actually sell its dossiers to its customers. It only licenses the data, or the right to collect data, retaining control over any further dissemination so as to maintain its ownership of its most valuable intellectual property. In practice, Facebook has no effective means of preventing its customers from copying and distributing any data they have legitimately obtained. The licenses that it relies on turn out to be quite difficult to enforce.
In 2014, a senior research associate at Cambridge University, Aleksandr Kogan, wrote a Facebook app called “thisisyourdigitallife.” Superficially, it was a personality quiz, but the people who signed up to take it gave Kogan permission to access their Facebook profiles and the Facebook profiles of the people they had friended. Facebook approved this arrangement but stipulated that the data that Kogan collected be used solely for the purpose of academic research.
Kogan agreed to this stipulation and proceeded to collect millions of Facebook profiles through the app. Instead of mining the data at Cambridge, however, he set up a company called Global Science Research and carried out his supposedly academic research there. Global Science Research had a million-dollar contract with another company, SCL Group. One of SCL's subsidiaries, SCL Elections, had recently secured funding to set up a new corporation, Cambridge Analytica, to explore the use of data-mining techniques to find reliable correlations between the personalities and “likes” of individual Facebook users on one hand and their political views and behaviors on the other. Because Kogan's research was funded, at least in part, by Cambridge Analytica, he apparently saw nothing wrong with sharing with his employers the data on which his research was based.
It's quite possible that sharing this data with a commercial enterprise violated Kogan's understanding with Facebook. It may also be a violation of UK data-protection laws, because Kogan asked the people who used his app only for their permission to collect and study their personal data, not for permission to share it with (or sell it to) third parties.
However, the only thing that prevented Cambridge Analytica from obtaining the same data directly from Facebook is that the license would probably have cost them much more money. Nothing in Facebook's notoriously lax, mutable, and labyrinthine privacy policies would have obstructed such a transaction if the price was right. Facebook's dossiers are their principal product, and selling access to them is their principal source of revenue.
Facebook now claims that Kogan and Cambridge Analytica have violated its terms of service and has closed their Facebook accounts. Lawsuits and threats of lawsuits are now flying in all directions, and some members of Congress are threatening to launch terrifying inquisitions into the monstrous abuse of the American electoral process that Cambridge Analytica supposedly perpetrated with the assistance of Kogan's data. However, there are now so many unlicensed copies of the data that there is no way to ensure that all of them will ever be erased, or even located. Now that arbitrarily large amounts of data can be copied quickly and inexpensively, and now that multiple backups of valuable data are the norm, the idea of restricting the distribution of data through licensing is a non-starter. It can't possibly work.
There's another reason why the lawsuits and the fulminations of members of Congress are idle, from the point of view of ordinary Facebook users (and non-users): Surveillance is essential to Facebook's business model. If Facebook stopped collecting and compiling personal data and erased its current stores, it would quickly go bankrupt. But once the dossiers exist, it is inevitable that they will be copied and disseminated, and once they are copied and disseminated, it is impossible ever to recover and destroy all of the copies, data-protection and privacy laws notwithstanding.
Instead (as Mark Zuckerberg's Facebook post on this subject makes clear), Facebook will continue to build up massive dossiers as fast as it can and will continue to use the information in those dossiers as it sees fit. The steps that Zuckerberg describes as “protecting users' data” are all designed to protect Facebook's proprietary interest in everyone's personal data, to prevent or at least obstruct the propagation of the dossiers to unworthy outsiders.
“Suspending Cambridge Analytica and SCL Group from Facebook”
Paul Grewal, Facebook Newsroom, March 16, 2018
“How Trump Consultants Exploited the Facebook Data of Millions”
Matthew Rosenberg, Nicholas Confessore, and Carole Cadwalladr, The New York Times, March 17, 2018
“Cambridge Analytica Responds to Facebook Announcement”
Cambridge Analytica, March 17, 2018
“‘I Made Steve Bannon's Psychological Warfare Tool’: Meet the Data War Whistleblower”
Carole Cadwalladr, The Guardian, March 18, 2018
“Cambridge Analytica's Ad Targeting Is the Reason Facebook Exists”
Jason Koebler, Motherboard, March 19, 2018
Though Cambridge Analytica's specific use of user data to help a political campaign is something we haven't publicly seen on this scale before, it is exactly the type of use that Facebook's platform is designed for, has facilitated for years, and continues to facilitate every day. At its core, Facebook is an advertising platform that makes almost all of its money because it and the companies that use its platform know so much about you.
Facebook continues to be a financially successful company precisely because its platform has enabled the types of person-specific targeting that Cambridge Analytica did. …
“The incentive is to extract every iota of value out of users,” Hartzog [Woodrow Hartzog, Professor of Law and Computer Science at Northeastern University] said. “The service is built around those incentives. You have to convince people to share as much information as possible so you click on as many ads as possible and then feel good about doing it. This is the operating ethos for the entire social internet.”
“Facebook's Surveillance Machine”
Zeynep Tufekci, The New York Times, March 19, 2018
Billions of dollars are being made at the expense of our public sphere and our politics, and crucial decisions are being made unilaterally, and without recourse or accountability.
“Then Why Is Anyone Still on Facebook?”
Wolf Richter, Wolf Street, March 20, 2018
So now there's a hue and cry in the media about Facebook, put together by reporters who are still active on Facebook and who have no intention of quitting Facebook. There has been no panicked rush to “delete” accounts. There has been no massive movement to quit Facebook forever. Facebook does what it does because it does it, and because it's so powerful that it can do it. A whole ecosystem around it depends on the consumer data it collects. …
Yes, there will be the usual ceremonies … CEO Zuckerberg may get to address the Judiciary Committee in Congress. The questions thrown at him for public consumption will be pointed. But behind the scenes, away from the cameras, there will be the usual backslapping between lawmakers and corporations. Publicly, there will be some wrist-slapping and some lawsuits, and all this will be settled and squared away in due time. Life will go on. Facebook will continue to collect the data because consumers continue to surrender their data to Facebook voluntarily. And third parties will continue to have access to this data. …
People who are still active on Facebook cannot be helped. They should just enjoy the benefits of having their lives exposed to the world and serving as a worthy tool and resource for corporate interests, political shenanigans, election manipulators, jealous exes, and other facts of life.
“Facebook Sued by Investors over Voter-Profile Harvesting”
Christie Smythe and Kartikay Mehrotra, Bloomberg Technology, March 20, 2018
“The Researcher Who Gave Cambridge Analytica Facebook Data on 50 Million Americans Thought It Was ‘Totally Normal’”
Kaleigh Rogers, Motherboard, March 21, 2018
Kogan said he was under the impression that what he was doing was completely normal.
“What was communicated to me strongly was that thousands and maybe tens of thousands of apps were doing the exact same thing and that this was a pretty normal use case and a normal situation for usage of Facebook data,” Kogan said.
“Facebook's Mark Zuckerberg Vows to Bolster Privacy amid Cambridge Analytica Crisis”
Sheera Frenkel and Kevin Roose, The New York Times, March 21, 2018
“It's Too Late”
Jason Koebler, Motherboard, March 21, 2018
We're starting to see a new genre of advice columns, featuring instructions on how to use some piece of modern technology safely, even though it's impossible to really use it safely: the users' understanding of how it works and what they want to accomplish with it is flatly incompatible with its design and with the business model of its maker and licensor.
The journalists who write in this genre are people who know better than to try to use the technology, but use it anyway because their jobs require it and because they know that their readers are going to use it as well, even those who also know better than to try.
“The Motherboard Guide to Using Facebook Safely”
Lorenzo Franceschi-Bicchierai, Motherboard, March 21, 2018
You can't really stop all collection. In fact, even if you leave Facebook (or have never been part of the social network), the company is still gathering data on you and building a shadow profile in case you ever join. …
Facebook's entire existence is predicated on tracking and collecting information about you. If that concept makes you feel creeped out, then perhaps you should quit it. But if you are willing to trade that off for using a free service to connect with friends, there's still some steps you can take to limit your exposure.
It is possible to fool face-recognition (FR) systems into misidentifying one person A as some specified other person B by projecting a pattern of infrared light onto A's face when the recognizer's camera photographs it, creating a customized adversarial example. Since light in the near infrared can be detected by surveillance cameras but not by human eyes, other people cannot detect the masquerade, even at close range. To project the light patterns, researchers had person A wear a baseball cap with tiny infrared LEDs tucked up under the bill.
“Invisible Mask: Practical Attacks on Face Recognition with Infrared”
Zhe Zhou, Di Tang, Xiaofeng Wang, Weili Han, Xiangyu Liu, and Kehuan Zhang, arXiv, March 13, 2018
In this paper, we present the first approach that makes it possible to apply [an] automatically-identified, unique adversarial example to [a] human face in an inconspicuous way [that is] completely invisible to human eyes. As a result, the adversary masquerading as someone else will be able to walk on the street, without any noticeable anomaly to other individuals[,] but appearing to be a completely different person to the FR system behind surveillance cameras.
Dan Geer explains the social, political, and security risks of programmatically displacing manual processes and alternative algorithmic designs with interdependent, standardized, or centralized technologies. Such monoliths may or may not be fragile, but their only failure modes are catastrophic.
Daniel E. Geer, Jr., Hoover Institution, February 7, 2018
If an algorithm cannot be verified then do not trust it.
To be precise, algorithms derived from machine learning must never be trusted unless the “Why?” of decisions those algorithms make can be usefully examined on demand. This dictum of “interrogatability” may or may not be effectively design-assured while there is still time to do so — that is, to do so pre-dependence. Once the chance to design-assure interrogatability is lost — that is to say once dependence on a non-interrogatable algorithm is consummated — going back to non-self-modifying algorithms will prove to be costly, if even possible. …
The central thesis of this essay is that an accessible, continuously exercised analog option is essential to the national security and to the inclusionary polity we hold dear. …
As a matter of national security, keeping non-technical exits open requires action and it requires it now. It will not happen by itself, and it will never again be as cheap or feasible as it is now. Never again will national security and individual freedom jointly share a call for the same initiative at the same time. In a former age, Dostoevsky told us, “The degree of civilization in a society can be judged by entering its prisons.” From this point on, that judgement will be passed on how well we preserve a full life for those opting out of digitalization. There is no higher embodiment of national security than that.
It is now becoming commonplace for security offices at colleges and universities to monitor the social-media accounts of members of the College community for potential threats, crimes, and miscellaneous troublemaking. Sometimes they outsource the work to specialist companies (such as Social Sentinel, which curates and customizes a list of several thousand words whose appearance in posts can trigger investigations).
“Big Brother: College Edition”
Jeremy Bauer-Wolf, Inside Higher Ed, December 21, 2017
“Social Media Monitoring: Beneficial or Big Brother?”
Amy Rock, Campus Safety Magazine, March 12, 2018
“University Police Surveil Student Social Media in Attempt to Make Campus Safer”
Ryne Weiss, Foundation for Individual Rights in Education, March 16, 2018
Put yourself in the shoes of a student on campus. What would you do if you're aware that anything you post may be flagged by the school administration or police for containing one of the keywords in Social Sentinel's library of harm? Do you make the decision to tweet less? Do you restrict your posts to friends only? It seems hard to imagine how you could moderate your tweets to avoid thousands of words when you have no idea what they are.
And assume you do get flagged and questioned by police. Many people would probably change their behavior. And while people might want to be mindful of what they post publicly online, fear of police and their school monitoring them and misinterpreting their messages shouldn't be something students have to navigate. …
The free exchange of ideas on campus is an invaluable and irreplaceable part of the ideal college experience, and the chilling effect of student social media surveillance actively undermines that.
It appears that many researchers in machine learning, including some who profess to be scientists, are not keeping proper records of their experiments. Even with the assistance of version-control systems, they often fail to write down which versions of code libraries they are using, where their data sets come from and what they contain, how they massaged and cleaned their data sets, and what tweaks they made to their algorithms and to the configuration and initialization of their networks.
They redesign their experiments on the fly, interrupt and restart them, cherry-pick results from various runs, and reuse partially trained neural networks as starting points for subsequent experiments without properly documenting the process.
As a result, machine learning as a discipline is now facing a devastating crisis: researchers cannot reproduce one another's experiments, or even their own, and so cannot confirm their results.
“The Machine Learning Reproducibility Crisis”
Pete Warden, Pete Warden's Blog, March 19, 2018
In many real-world cases, the research[er] won't have made notes or remember exactly what she did, so even she won't be able to reproduce the model. Even if she can, the frameworks the model code depend[s] on can change over time, sometimes radically, so she'd need to also snapshot the whole system she was using to ensure that things work. I've found ML researchers to be incredibly generous with their time when I've contacted them for help reproducing model results, but it's often [a] months-long task even with assistance from the original author.
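Most of these problems can be headed off with one cheap habit: writing a machine-readable manifest next to every run. Here is a minimal sketch of the idea; the field names and file layout are my own, not Warden's:

```python
import hashlib
import json
import platform
import subprocess
import sys

def experiment_manifest(config: dict, data_path: str) -> dict:
    """Capture everything needed to rerun this experiment later."""
    with open(data_path, "rb") as f:
        data_hash = hashlib.sha256(f.read()).hexdigest()
    try:
        commit = subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True).strip()
    except Exception:
        commit = "unknown"          # not in a git checkout
    return {
        "python": sys.version,
        "platform": platform.platform(),
        "git_commit": commit,
        "data_sha256": data_hash,   # detects silent data-set edits
        "config": config,           # seeds, hyperparameters, tweaks
    }

# stand-in data file so the sketch is self-contained
with open("train.csv", "w") as f:
    f.write("x,y\n1,2\n")

manifest = experiment_manifest({"seed": 42, "lr": 0.01}, "train.csv")
with open("manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```

A manifest like this doesn't snapshot the whole system, as Warden recommends, but it at least records which system would need snapshotting.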
A research team at Boston University has discovered a technique for partially encrypting messages so as to make decryption extremely expensive, but not impossible. They call their partial-encryption system “cryptographic crumpling.” The computation required to decrypt a message prepared by this technique is useless in the decryption of any other message, so there are no economies of scale — the decryptor must pay the high computational price all over again for each new message.
The researchers offer their system as a way of resolving the “second crypto wars” between government officials, who insist that the makers of all commercial-grade encryption software must provide back doors for law-enforcement and national-security agencies, and privacy advocates, who insist that only strong, end-to-end encryption will protect their rights. The researchers argue that their system would allow well-funded government agencies to access the partially encrypted data in exceptional cases, but would force those agencies to choose their targets so carefully that the privacy rights of ordinary users would not be significantly affected.
But this proposal doesn't really accommodate either side. Government officials say they need to decrypt messages that could contain evidence of crime or terrorism regardless of how many such messages there are, and so would not be content with a system in which their budget constrains their ability to decrypt. And privacy advocates would surely note that if government officials with legitimate interests in the contents of communications were able to perform the decryptions, so too would corporations bent on industrial espionage, hostile foreign governments, and even well-funded hacking teams. A back door works equally well for everyone who has the resources to open it.
Such failed attempts at compromise reinforce the conclusion (which most security analysts reached long ago) that the requirements of government officials and privacy advocates are incompatible.
“Cryptographic Crumpling: The Encryption ‘Middle Ground’ for Government Surveillance”
Charlie Osborne, Zero Day, March 19, 2018
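The white paper's actual construction involves Diffie-Hellman parameters and hash-based puzzles; purely to illustrate the economics, here is a toy version of the same idea. Each message gets a fresh random key, all but a few bits of which are disclosed, so anyone willing to brute-force the hidden bits can decrypt that one message, and only that one. (The hash-counter cipher, the 12-bit work factor, and the known-prefix check are all simplifications of mine.)

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Hash-counter stream cipher (a stand-in for a real cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def crumple_encrypt(msg: bytes, hidden_bits: int = 12):
    key = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))
    # disclose the key with its low `hidden_bits` bits zeroed out
    partial = (int.from_bytes(key, "big") >> hidden_bits) << hidden_bits
    return ct, partial.to_bytes(16, "big")

def crumple_decrypt(ct, partial_key, hidden_bits=12, prefix=b"MSG:"):
    base = int.from_bytes(partial_key, "big")
    for guess in range(1 << hidden_bits):        # paid anew per message
        key = (base | guess).to_bytes(16, "big")
        pt = bytes(a ^ b for a, b in zip(ct, keystream(key, len(ct))))
        if pt.startswith(prefix):
            return pt
    return None
```

Because the key is random per message, the brute-force work buys nothing toward any other message; scale `hidden_bits` up far enough and the same sketch becomes something only a well-funded agency could afford to run, which is exactly the researchers' pitch.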
The people of the United States are neither strongly committed to the numerous wars that our military is waging nor strongly opposed to them. We are barely aware of them and prefer not to think about them.
“America's Phony War”
William J. Astore, TomDispatch, March 15, 2018
The definition of twenty-first-century phony war, on the other hand, is its lack of clarity, its lack of purpose, its lack of any true imperative for national survival (despite a never-ending hysteria over the “terrorist threat”). The fog it produces is especially disorienting. Americans today have little idea “why we fight” … Meanwhile, with such a lack of national involvement and accountability, there's no pressure for the Pentagon or the rest of the national security state to up its game; there's no one even to point out that wherever the U.S. military has gone into battle in these years, yet more terror groups have subsequently sprouted like so many malignant weeds. Bureaucracy and mediocrity go unchallenged; massive boosts in military spending reward incompetency and the creation of a series of quagmire-like “generational” wars.
Security researchers at CTS Labs have audited the hardware design and software configuration of some recent processors manufactured by Advanced Micro Devices (AMD). The audit turned up thirteen serious vulnerabilities. CTS Labs has prepared a white paper that lists and analyzes these vulnerabilities and demonstrates each one with proof-of-concept code. The researchers have sent copies of the white paper to “AMD, select security companies that can develop mitigations, and the U.S. regulators.” They published a redacted version of the white paper that omits all of the demonstrations and any parts of the analysis that they thought would be too helpful to malicious attackers.
To achieve the preconditions for any of these vulnerabilities, attackers would need to have root privileges on the machine they wanted to exploit. Even so, the vulnerabilities are serious, because they make it possible to install malware in system components that are normally inaccessible. Rebooting the computer, rolling back to a recovery image, or even reinstalling the operating system would have no effect on malware stored in those components. Depending on the local network configuration, the reported vulnerabilities may also make it easier for the attacker to break into other systems and to acquire root privileges on them.
The white paper asserts that AMD introduced two of the vulnerabilities into its chipset by outsourcing much of the design and implementation of one of the subsystems (“Promontory”) to another chip manufacturer, ASMedia:
The Promontory chipset is powered by an internal microcontroller that manages the chip's various hardware peripherals. Its built-in USB controller is primarily based on ASMedia ASM1142, which in turn is based on the company's older ASM1042. In our assessment, these controllers, which are commonly found on motherboards made by Taiwanese OEMs, have sub-standard security and no mitigation against exploitation. They are plagued with security vulnerabilities in both firmware and hardware, allowing attackers to run arbitrary code inside the chip, or to re-flash the chip with permanent malware. This, in turn, could allow for firmware-based malware that has full control over the system, yet is notoriously difficult to detect or remove. Such malware could manipulate the operating system through Direct Memory Access (DMA), while remaining resilient against most endpoint security products.
Specifically, the researchers discovered two sets of “hidden manufacturer backdoors,” some in the firmware and some in the hardware, any one of which provides an avenue for the introduction of malware into the Promontory processor.
“Severe Security Advisory on AMD Processors”
CTS Labs, AMD Flaws, March 2018
“Clarification about the Recent Vulnerabilities”
CTS Labs, March 2018
“A Raft of Flaws in AMD Chips Makes Bad Hacks Much, Much Worse”
Dan Goodin, Ars Technica, March 13, 2018
“Researchers Say AMD Processors Have Serious Vulnerabilities and Backdoors”
Lorenzo Franceschi-Bicchierai, Motherboard, March 13, 2018
Now, in preparation for the European Union's General Data Protection Regulation, PayPal has published the list of these third-party service providers and, er, other business partners.
“List of Third Parties (Other Than PayPal Customers) with Whom Personal Information May Be Shared”
PayPal, January 1, 2018
Dare you to read to the end.
Some librarians at UCLA compared twelve thousand preprint articles from arXiv with the versions of the same articles that were ultimately published in academic journals and found that they were practically the same, except that the preprints were available to the scholarly community much sooner (typically six months earlier).
“Comparing Published Scientific Journal Articles to Their Pre-print Versions”
Martin Klein, Peter Broadwell, Sharon E. Farb, and Todd Grappone, arXiv, April 18, 2016
Within the boundaries of our corpus, there are no significant differences in aggregate between pre-prints and their corresponding final versions. In addition, the vast majority of pre-prints (90% – 95%) are published by the open access pre-print service first and later by a commercial publisher.
“Research Shows that Published Versions of Papers in Costly Academic Titles Add Almost Nothing to the Freely-Available Preprints They Are Based On”
Glyn Moody, Techdirt, March 13, 2018
Libraries should not be paying for expensive subscriptions to academic journals, but simply providing access to the equivalent preprints, which offer almost identical texts free of charge … Researchers should concentrate on preprints, and forget about journals. Of course, that means that academic institutions must do the same when it comes to evaluating the publications of scholars applying for posts.
The kicker: The paper in which the UCLA librarians reported their results was published last month in the International Journal on Digital Libraries … behind a paywall … twenty-two months after the preprint appeared at arXiv.
A fable about the use of Big Data in human institutions.
Carlos Bueno, December 2016
The fundamental strategy for dealing with large amounts of data was compression. Huge streams of numbers were converted by various clever tricks into streams tiny enough for humans to handle, who then decided what to do. If you really think about it … the entire purpose of data-driven decision-making is to compress ungodly infinitudes of numbers down to a single bit of decision: yes or no. …
The Hyperlogloglog was the size of a small housepet and was modeled on the human brain. It was capable of handling unlimited amounts of input data via the simple technique of immediately throwing it away.
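The joke lands harder if you know the real algorithm it crumples: HyperLogLog genuinely does throw nearly all of its input away, keeping only the largest run of leading zero bits seen in each hash bucket, and still estimates distinct counts remarkably well. A compact sketch (the parameters are mine, and the small-range corrections are omitted):

```python
import hashlib

class HyperLogLog:
    """Estimate the number of distinct items while storing almost nothing."""

    def __init__(self, p: int = 10):
        self.p = p
        self.m = 1 << p                  # number of tiny registers
        self.registers = [0] * self.m

    def add(self, item) -> None:
        h = int.from_bytes(
            hashlib.sha256(str(item).encode()).digest()[:8], "big")
        idx = h >> (64 - self.p)                  # first p bits pick a register
        rest = h & ((1 << (64 - self.p)) - 1)     # the rest is thrown away...
        rank = (64 - self.p) - rest.bit_length() + 1  # ...except its leading zeros
        self.registers[idx] = max(self.registers[idx], rank)

    def count(self) -> float:
        alpha = 0.7213 / (1 + 1.079 / self.m)
        harmonic = sum(2.0 ** -r for r in self.registers)
        return alpha * self.m * self.m / harmonic
```

A kilobyte of registers, unlimited input, estimates typically within a few percent: the Hyperlogloglog's "simple technique of immediately throwing it away" is barely an exaggeration.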
In some circumstances, it is ill-advised, even dangerous, to rely on black-box deciders, because they cannot explain their decisions. But in some circumstances it is also ill-advised, even dangerous, to rely on AI decision systems that do explain their decisions, because their explanations are inevitably phony, simplistic, misguided, or out of touch with reality. A weaker criterion of adequacy based on experience in dealing with unreliable decision systems such as imperfect human beings may be more suitable.
Carlos Bueno, Ribbonfarm, March 13, 2018
There are many efforts to design AIs that can explain their reasoning. I suspect they are not going to work out. We have a hard enough time explaining the implications of regular science, and the stuff we call AI is basically pre-scientific. There's little theory or causation, only correlation. We truly don't know how they work. And yet we can't [help] anthropomorphizing the damn things. Expecting a glorified syllogism to stand up on its hind legs and explain its corner cases is laughable. …
Asking for “just so” narrative explanations from AI is not going to work. Testimony is a preliterate tradition with well-known failure modes even within our own species. Think about it this way: do you really want to unleash these things on the task of optimizing for convincing excuses?
AI that can be grasped intuitively would be a good thing, if for no other reason than to help us build better ones. … But the real issue is not that AIs must be explainable, but justifiable.
“Arizona's Anti-BDS Statute Lands Arizona State University in Federal Court”
Adam Steinbaugh, Foundation for Individual Rights in Education, March 12, 2018
Earlier this month, the Council on American-Islamic Relations filed a lawsuit against Arizona State University on behalf of Hatem Bazian, a Berkeley lecturer and chair of American Muslims for Palestine, who was invited to speak at ASU by the university's Muslim Students Association. The agreement provided to him by ASU contained a provision — required by Arizona state law — demanding that he affirm he will not boycott Israel. Bazian's planned presentation concerned the “Boycott, Divestment, and Sanctions” (BDS) movement targeting Israel.
Arizona's statute prohibits any “public entity” from entering into any “contract with a company to acquire or dispose of services … unless the contract includes a written certification that the company is not currently engaged in, and agrees for the duration of the contract to not engage in, a boycott of Israel.” The statute broadly defines “boycott,” in turn, to include not simply refusing to engage in business, but undertaking “other actions that are intended to limit commercial relations with Israel.”
The article reproduces the offending contract and includes some plausible legal argumentation explaining why the legislative language “contract with a company to acquire or dispose of services” really does apply to a contract with an individual to give a talk.
Arizona is only one of twenty-four states, including Iowa, that have passed ridiculous and patently unconstitutional legislation of this kind. Pulling this crap is going to get us into several different kinds of trouble:
“State Anti-BDS Laws Are Hitting Unintended Targets and Nobody's Happy”
Ron Kampeas, The Times of Israel, October 24, 2017
Although accurate image captioning is strictly more difficult than image classification, and can produce a larger variety of results, the strategies that have been developed for constructing adversarial examples against image classifiers can be adapted to image-captioning systems as well.
“Show-and-Fool: Crafting Adversarial Examples for Neural Image Captioning”
Hongge Chen, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, and Cho-Jui Hsieh, arXiv, December 6, 2017
Changing only one (carefully selected) pixel in an image can cause black-box deciders to misclassify the image in an astonishing number of cases. The authors develop a method that makes fewer assumptions about the mechanics of the deciders that it is trying to fool than other methods of constructing adversarial examples.
“One Pixel Attack for Fooling Deep Neural Networks”
Jiawei Su, Danilo Vasconcellos Vargas, and Kouichi Sakurai, arXiv, February 22, 2018
“AI Has a Hallucination Problem That's Proving Tough to Fix”
Tom Simonite, Wired, March 9, 2018
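The mechanics are easy to demonstrate even without a neural network. Against a toy linear two-class "classifier" (the weights are invented for the demo; class 1 responds very strongly to pixel 0), a blind random search over single-pixel changes, in the spirit of the paper's differential-evolution attack, finds a one-pixel flip:

```python
import random

# hand-built toy classifier over a flat 16-pixel image
# (weights invented for the demo, not taken from the paper)
W = [[1.0] * 16,
     [100.0] + [-1.0] * 15]

def predict(img):
    scores = [sum(w * x for w, x in zip(row, img)) for row in W]
    return (0 if scores[0] >= scores[1] else 1), scores

def one_pixel_attack(img, target, trials=500):
    """Black-box random search: change exactly one pixel per attempt."""
    best = None
    for _ in range(trials):
        candidate = img[:]
        candidate[random.randrange(len(img))] = random.random()
        label, scores = predict(candidate)
        margin = scores[target] - scores[1 - target]
        if best is None or margin > best[0]:
            best = (margin, candidate)
        if label == target:
            return candidate        # success with a single changed pixel
    return best[1]                  # otherwise, the best attempt found
```

The attacker never inspects the weights, which is the point of calling these black-box deciders: probing the outputs is enough.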
One of the main difficulties in determining the origins and intentions of network-based attacks on the security of systems is that many attackers deliberately try to mislead analysts and often succeed. Here's a detailed description of a case in which the attackers attempted to fly a false flag but (probably) did not deceive the analysts.
“The Devil's in the Rich Header”
“GReAT”, SecureList, March 8, 2018
The case is the attack against some of the servers used to organize and run the 2018 Winter Olympics in Pyeongchang. Some of the malware files installed by the attackers had been compiled with Microsoft Visual Studio and so contained metadata headers for the Microsoft linker to process. But at least one of the metadata headers had been replaced at some point after the linked binary executable had been produced. The replacement header came from a much earlier version of Microsoft Visual Studio that couldn't possibly have been used to produce the executable (which referred to dynamically linked libraries that didn't exist at the time of the earlier version). It was, however, an exact duplicate of a header on a file from a previously known malware package, one that the analysts had already attributed to an attack team they called “Lazarus.” The file that originally bore that header performed a similar function, but in a more limited way that used fewer system calls.
The existence of the fake Rich header from Lazarus samples in the new OlympicDestroyer samples indicates an intricate false flag operation designed to attribute this attack to the Lazarus group. The attacker's knowledge of the Rich header is complemented by their gamble that a security researcher would discover it and use it for attribution. … This newly published research consolidates the theory that blaming the Lazarus group for the attack was part of the attacker's strategy.
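The Rich header itself is undocumented; its format was reverse-engineered by the community. As a sketch of what the analysts are decoding: the block is a run of little-endian dwords XOR-masked with a checksum key, bracketed by a masked “DanS” start marker and a plaintext “Rich” marker followed by the key, and the payload is a list of (tool id, use count) pairs — exactly the fingerprint that can be copied from one binary to another:

```python
import struct

DANS = 0x536E6144  # the bytes 'DanS', read as a little-endian dword

def extract_rich_header(pe_bytes: bytes):
    """Recover the (tool id, use count) pairs and the XOR key, if present."""
    end = pe_bytes.find(b"Rich")
    if end == -1:
        return None
    key = struct.unpack_from("<I", pe_bytes, end + 4)[0]
    decoded = []
    pos = end - 4
    while pos >= 0:                       # unmask dwords walking backwards
        value = struct.unpack_from("<I", pe_bytes, pos)[0] ^ key
        if value == DANS:
            break
        decoded.append(value)
        pos -= 4
    decoded.reverse()
    while decoded and decoded[0] == 0:    # padding after the 'DanS' marker
        decoded.pop(0)
    return list(zip(decoded[0::2], decoded[1::2])), key
```

Comparing these pair lists across samples — or, as here, noticing a header whose tool ids predate the libraries the binary links against — is what lets analysts connect, or refuse to connect, two pieces of malware.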
The Wikimedia Foundation, which acts as the legal interface to Wikipedia, receives a few hundred requests each year to alter or delete the contents of Wikipedia pages. They have just released their eighth semi-annual transparency report. It's interesting to see how they deal with such requests.
They investigate the requests that claim that copyright has been violated. Between July and December 2017, there were twelve of these. Two turned out to be genuine, and Wikipedia removed the copyrighted material. The rest were either mistaken claims or instances of fair use.
They got one request to remove a page, citing the 2014 decision of the Court of Justice of the European Union establishing that European citizens have a “right to be forgotten,” based on the EU's principles of personal privacy. The Wikimedia Foundation did not grant this request, but doesn't give any of the details of the case, noting only that requests of this type “negatively impact the free exchange of information in the public interest.”
Finally, they received 343 other requests to alter or delete content. They didn't grant any of these. Instead, they replied to the requesters by noting that Wikipedia is a wiki so that anyone can alter or delete content:
Our first action is to refer requesters to experienced volunteers who can explain project policies and provide them with assistance.
“Wikimedia Releases Eighth Transparency Report”
Jim Buatti, Leighanna Mixter, and Aeryn Palmer, Wikimedia Blog, March 6, 2018
“Wikimedia's Transparency Report: Guys, We're a Wiki, Don't Demand We Take Stuff Down”
Mike Masnick, Techdirt, March 9, 2018
As a proponent and practitioner of technological skepticism, the practice of assessing technological innovations and judging whether they will make me any wiser, better, happier, or more helpful to others before deciding whether or not to adopt them, I'm gratified to see other people thinking along the same lines and trying to organize the outraged victims of thoughtlessly misdesigned technology.
“There Are No Guardrails on Our Privacy Dystopia”
David Golumbia and Chris Gilliard, Motherboard, March 9, 2018
Tech companies … have demonstrated that they are neither capable nor responsible enough to imagine what harms their technologies may do. If there is any hope for building digital technology that does not include an open door to wolves, recent experience has demonstrated that this must include robust engagement from the non-technical — expert and amateur alike — not just in response to the effects of technologies, but to the proposed functions of those technologies in the first place.
“What If Designers Took a Hippocratic Oath?”
Sanjena Sathian, Point Taken, PBS, January 1, 2016
What are we really looking at? A next-generation consumer advocacy battle, one in which a victory depends not on class action lawsuits or government oversight but on popular awareness and education.
“Hackers Can Use Cortana to Open Websites on Windows 10 Even If Your PC Is Locked”
Tristan Greene, The Next Web, March 7, 2018
A pair of independent researchers yesterday uncovered a particularly worrisome security vulnerability in Microsoft's Windows 10. If your PC's OS was installed with default settings this could affect you.
The simple “hack” involves activating Cortana via voice command to open websites on a PC that's been locked.
Well, duh. This was completely obvious from the beginning to any Windows 10 user who glanced at the page describing the settings for Cortana. One of the options is “Use Cortana even when my device is locked.” Microsoft turned this on by default because it wants to listen in on Windows 10 users even when the users try to lock their PCs. The “researchers” “uncovered” this feature by noticing that it was there and trying it out. This scarcely qualifies as a “hack,” or even as a “‘hack.’”
It seems unlikely that Microsoft will regard this routine surveillance feature as “worrisome.” From the user's point of view, it is of course a gigantic security hole. Since the user doesn't own Windows, however, that point of view is essentially irrelevant. The real owner, Microsoft, has already expressed its point of view by creating the feature and making sure that it's on by default. That's the end of the story.
“Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains”
Tegjyot Singh Sethi and Mehmed Kantardzic, arXiv, March 23, 2017
Machine learning operates under the assumption of stationarity, i.e. the training and testing distributions are assumed to be identically and independently distributed … . This assumption is often violated in an adversarial setting, as adversaries gain nothing by generating samples which are blocked by a defender's system. …
In an adversarial environment, the accuracy of classification has little significance, if an attacker can easily evade detection by intelligently perturbing the input samples.
Most of the paper deals with strategies for probing black-box deciders that are only accessible as services, through APIs or Web interfaces. The justification for the strategies is more heuristic than theoretical, but the authors give some evidence that they are good enough to generate adversarial examples for a lot of real-world black-box deciders.
What I liked most about the paper was the application of “the security mindset.”
This 2005 paper is a kind of precursor to the current literature about adversarial examples: It shows how to modify an e-mail that a naive Bayesian spam filter correctly classifies as spam so as to induce the same filter to misclassify it as ham. The modification consists in replacing a small number of words.
Daniel Lowd and Christopher Meek, ACM Conference on Knowledge Discovery and Data Mining, August 2005
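The attack is simple to reconstruct against a toy filter. Given a table of per-word log-likelihood ratios (the vocabulary and numbers below are invented for the demo), greedily swapping the spammiest words for the most ham-like word in the vocabulary drives the score under the filter's threshold in only a few substitutions. (Lowd and Meek's actual algorithm first probes the filter as a black box to learn these weights; the loop below assumes they are already known.)

```python
# per-word log-likelihood ratios: positive = spammy, negative = hammy
# (vocabulary and numbers invented for the demo)
LLR = {"viagra": 3.0, "free": 2.0, "money": 1.5, "now": 0.5,
       "the": 0.0, "project": -1.5, "report": -1.8, "meeting": -2.0}

def spam_score(words):
    """Naive-Bayes-style score: above the threshold means spam."""
    return sum(LLR.get(w, 0.0) for w in words)

def good_word_attack(words, threshold=0.0):
    """Greedily swap the spammiest word for the hammiest one available."""
    words = list(words)
    hammiest = min(LLR, key=LLR.get)
    changes = 0
    while spam_score(words) > threshold:
        i = max(range(len(words)), key=lambda j: LLR.get(words[j], 0.0))
        if LLR.get(words[i], 0.0) <= LLR[hammiest]:
            return None, changes        # no improving swap left
        words[i] = hammiest
        changes += 1
    return words, changes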
This is the paper that introduced the term “adversarial examples” and initiated the systematic study and construction of such examples.
“Intriguing Properties of Neural Networks”
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus, arXiv, February 19, 2014
When carefully selected adversarial inputs are presented to image classifiers implemented as deep neural networks and trained using machine-learning techniques, the classifiers fail badly, because the functions from inputs to outputs that they implement are highly discontinuous. Such adversarial inputs are not difficult to generate and are not dependent on particular training data or learning regimens, since the same adversarial examples are misclassified by image classifiers trained on different data under different learning regimens.
The authors present a second, possibly related discovery: Even the pseudo-neurons (“units”) that are close to the output layer usually don't carry semantic information individually; the contributions of linear combinations of such units are indistinguishable in nature from the contributions of the units individually, so any semantic information that is present emerges holistically from the entire layer.
These results suggest that the deep neural networks that are learned by backpropagation have nonintuitive characteristics and intrinsic blind spots, whose structure is connected to the data distribution in a non-obvious way. …
If the network can generalize well, how can it be confused by these adversarial negatives, which are indistinguishable from the regular examples? Possible explanation is that the set of adversarial negatives is of extremely low probability, and thus is never (or rarely) observed in the test set, yet it is dense (much like the natural numbers), and so it is found near … virtually every test case.
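For a linear model the effect is easy to feel concretely: the loss gradient with respect to the input is just the weight vector, so nudging every coordinate a small epsilon against the predicted class flips the label while barely changing the input. A toy logistic model with invented weights (the paper itself uses L-BFGS on deep networks; this is only the underlying intuition):

```python
import math

W = [0.8, -1.2, 0.5, 2.0]   # invented weights
B = -0.3

def prob(x):
    """P(class 1 | x) for a logistic model."""
    z = sum(w * xi for w, xi in zip(W, x)) + B
    return 1.0 / (1.0 + math.exp(-z))

def adversarial(x, eps=0.25):
    """Shift each coordinate eps along the sign of the gradient,
    in the direction that pushes the score across the boundary."""
    direction = -1.0 if prob(x) >= 0.5 else 1.0
    return [xi + direction * eps * (1.0 if w > 0 else -1.0)
            for xi, w in zip(x, W)]
```

No coordinate moves more than eps, yet the predicted class changes — a miniature of the "dense, low-probability pockets" the quotation describes.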
In a search of 970,898 live “smart” contracts on the Ethereum blockchain, using a new tool for formal verification, some researchers found 34,200 that have serious bugs resulting in (a) money (well, Ether cryptocurrency) held in escrow becoming permanently inaccessible to all parties; (b) money becoming available on demand to any Ethereum user; or (c) any Ethereum user being able to terminate the contract (again leaving all escrowed money inaccessible). Many of the buggy contracts were duplicates, but there were 2,365 non-duplicate bugs in the contracts examined.
“Finding the Greedy, Prodigal, and Suicidal Contracts at Scale”
Ivica Nikolić, Aashish Kolluri, Ilya Sergey, Prateek Saxena, and Aquinas Hobor, arXiv, February 16, 2018
“Secret Surveillance and the Legacy of Torture Have Paralyzed the USS Cole Bombing Trial at Guantánamo”
Shilpa Jindia, The Intercept, March 5, 2018
Last month, a judge at Guantánamo Bay suspended indefinitely the trial of Abd al-Rahim al-Nashiri, paralyzing one of the most high-profile cases to go before the island prison's military commissions system. The February 16 decision ended a monthslong standoff with defense lawyers who claimed that they could not do their work for fear of government surveillance. …
Nashiri's entire civilian defense team resigned last October, citing an irresolvable ethical conflict: They did not believe that they could meet with their client and work on the case without being spied on by U.S. government agencies. Because of the byzantine rules governing classified materials at Guantánamo, the lawyers still can't explain exactly why they believe this to be the case to the public or to their client.
The reason that the lawyers believe that they can't meet their client without being spied on is that in June they received a memo from the military supervisor of all of the Guantánamo detainees' legal teams, Brigadier General John Baker, saying that he could no longer assure them that they could meet their client anywhere inside the concentration camp without being miked, monitored, and recorded. The lawyers did some checking and confirmed Baker's suspicions, though they can't say how because it's classified.
Taking off from the controversial keylogger implemented in Cascading Style Sheets, this article surveys the various security holes that Web authors sometimes open up by incautiously borrowing or linking to CSS that they have not inspected and vetted.
“Third Party CSS Is Not Safe”
Jake Archibald, February 27, 2018
“The Secret Sharer: Measuring Unintended Neural Network Memorization and Extracting Secrets”
Nicholas Carlini, Chang Liu, Jernej Kos, Úlfar Erlingsson, and Dawn Song, arXiv, February 22, 2018
Given access to a fully trained black-box decider, it is surprisingly easy to recover personally identifiable information (such as Social Security numbers and credit-card information) that was present in its training set. This paper works out some of the methods and suggests adding noise to the training data, as in differential-privacy schemes, as a solution.
The neural network's implicit memorization of information in its training data is not due to overfitting and occurs even if additional validation is carried out during the learning process specifically to stop the training before overfitting occurs.
The secrets of a black-box decider need not be extracted by brute-force testing of all possible secrets. The authors propose a more efficient algorithm that organizes the search with a priority queue of partially determined secrets. (The priority is the total entropy of the components of the secret posited so far; a candidate is complete when all of its components have been filled in.)
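The search is essentially Dijkstra's algorithm over digit prefixes, ranked by the model's cumulative log-loss. In this runnable sketch the "model" is a stand-in that assigns the memorized digits an anomalously high probability — which is exactly the signature the attack exploits; the planted secret and the probabilities are mine:

```python
import heapq
import math

SECRET = "30571"            # planted for the demo

def next_digit_logprob(prefix: str, d: str) -> float:
    """Stand-in for the trained model's next-token probability."""
    if len(prefix) < len(SECRET) and d == SECRET[len(prefix)]:
        return math.log(0.5)         # memorized digit: anomalously likely
    return math.log(0.5 / 9)         # the other digits share the rest

def extract_secret(length: int = 5) -> str:
    """Pop partial secrets cheapest-first; the first completed one wins."""
    heap = [(0.0, "")]
    while heap:
        cost, prefix = heapq.heappop(heap)
        if len(prefix) == length:
            return prefix            # lowest total log-loss completion
        for d in "0123456789":
            heapq.heappush(
                heap, (cost - next_digit_logprob(prefix, d), prefix + d))
```

Against this distribution the search pops only a handful of prefixes rather than scoring all 10^5 candidate secrets; that gap is the attack's efficiency.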
Black-box classifiers are prone to hasty generalizations from training data. If you train your neural network on a lot of pictures depicting sheep on grassy hills, the network learns to posit sheep whenever it sees a grassy hill.
“Do Neural Nets Dream of Electric Sheep?”
Janelle Shane, Postcards from the Frontiers of Science, March 2, 2018
Are neural networks just hyper-vigilant, finding sheep everywhere? No, as it turns out. They only see sheep where they expect to see them. They can find sheep easily in fields and mountainsides, but as soon as sheep start showing up in weird places, it becomes obvious how much the algorithms rely on guessing and probabilities.
Bring sheep indoors, and they're labeled as cats. Pick up a sheep (or a goat) in your arms, and they're labeled as dogs.
Paint them orange, and they become flowers.
On the other hand, as Shane observes, the Microsoft Azure classifier is hypervigilant about giraffes (“due to a rumored overabundance of giraffes in the original dataset”).
Most Internet users long ago surrendered control of their devices through their Web browsers, way back in the era when the common offenses were dropping cookies on the disk, providing ActiveX controls, and using MARQUEE tags. Nowadays we just allow Web servers to do whatever the hell they want with our devices. We try to ignore the advertising and clean up the malware and ransomware installations afterwards, as best we can.
However, a substantial number of holdouts still run ad blockers that constrain the malicious activities of Web providers, or more usually the ad-network operators that pay them for access. Recently, ad blockers have tried to prevent the execution of scripts for mining cryptocurrency by blacklisting the domains of known offenders. But now the miners are dynamically coining domain names from which to download the ads containing the mining scripts.
NoScript could still protect us, though. If we were interested.
“Who Is Stealing My Power III: An Adnetwork Company Case Study”
Zhang Zaifeng, Netlab 360, February 24, 2018
“Ad Network Circumvents Blockers to Hijack Browsers for Cryptocurrency Mining”
Charlie Osborne, Zero Day, March 2, 2018
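Domain-generation algorithms of this kind are trivial to write, which is why blacklists lose: the script and the servers it phones home to both derive the same fresh names from a shared seed and the date. A generic sketch (the seed and naming scheme here are invented; Netlab 360's report describes the particular scheme they observed):

```python
import hashlib
from datetime import date

def dga_domains(seed: str, day: date, count: int = 5):
    """Coin `count` deterministic throwaway domains for a given day."""
    domains = []
    for i in range(count):
        material = f"{seed}-{day.isoformat()}-{i}".encode()
        digest = hashlib.md5(material).hexdigest()
        domains.append(digest[:12] + ".com")
    return domains
```

By the time yesterday's names are blacklisted, today's batch is already registered and live; blocking has to move from domain names to behavior — or, as noted above, to script blocking.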
An essay by a public intellectual reflecting on the value of privacy and pointing out that many people prefer it to constant social interaction. This retrospective view, bordering on denialism, is surely one of the last expressions of the values that prevailed in the era before total and inevitable surveillance.
“Luxuriating in Privacy”
Sarah Perry, ribbonfarm, March 1, 2018
Privacy is wonderful in and of itself, and privacy keeps the peace.
Yes. And its disappearance is a reflection of the prevalence of total war.
The senior director for research and assessment of the Association of American Colleges and Universities indignantly defends her chosen profession against the disparaging critique published in the New York Times last Sunday.
“What Assessment Is Really About”
Kate Drezek McConnell, Inside Higher Ed, March 1, 2018
It turns out that the flagship product of her fifteen years' work in college-level assessment is a collection of sixteen “rubrics,” tendentiously entitled VALUE (Valid Assessment of Learning in Undergraduate Education).
For the uninitiated, rubrics are simply an explicit articulation of (1) faculty expectations of students vis-à-vis their learning, as well as (2) descriptions of what student work looks like at progressively higher levels of performance.
That is to say, a rubric is a learning objective together with a sequence of descriptions of observable student behaviors that supposedly characterize levels of performance on that learning objective. The levels are helpfully numbered from 1 to 4 (with an additional option, numbered 0, to represent failure to achieve even “baseline” performance) so that any one student's levels of performance on the various elements of, say, the “critical thinking” rubric, can be readily added together to obtain a critical-thinking score, which in turn can then be compared to other students' critical-thinking scores and to the same student's critical-thinking scores at earlier and later times, and so on. VALUE currently comprises sixteen of these rubrics, and I imagine that the next step would be to add together a student's scores on all sixteen to obtain a clear, objective measurement of … well, of something … I'll let Drezek McConnell explain:
Philosophically, pedagogically, and methodologically, VALUE is designed to afford faculty the opportunity to flex their creative muscles and capture evidence that the curriculum they own and the courses they teach do indeed promote students' development of the very learning outcomes that are essential to a liberal, and liberating, education.
Far from a reductionist tool, research has demonstrated that the VALUE rubrics empower faculty members to help translate the learning that takes place when a student completes an assignment they crafted, one that aligns with and promotes disciplinary knowledge, and — at its best — gives students not just the requisite skills for the single assignment, but also advances the ultimate purpose of college teaching: long-term retention of knowledge, skills and abilities and the ability to transfer those skills to a completely new or novel situation.
Ah, yes. Rubrics empower me to help translate what my students actually say and do into … what? Into a number, of course. No way that this process would convert me into a reductionist tool! Research has demonstrated that it advances the ultimate purpose of college teaching!