The Senate Committee on Commerce, Science, and Transportation called in some of the honchos in the Transportation Security Administration to ask a few pointed questions about the “Quiet Skies” program, under which the TSA dispatches teams of air marshals to surveil people who fidget too much in airports or get glassy-eyed waiting for their flights to be called. The witnesses boasted that they had monitored five thousand suspicious-looking passengers and confirmed that not one of them posed a threat to anyone's safety.
Given this perfect track record, the TSA plans to continue the program and to re-educate the air marshals who have complained about its pointlessness. It's not really pointless if it contributes to the oppressive atmosphere of modern American airports and helps to assure passengers that every move they make is monitored by armed law-enforcement officers. That sense of living in a police state is America's strongest defense against terrorism.
“TSA Says ‘Quiet Skies’ Surveillance Snared Zero Threats”
Jana Winter, The Boston Globe, August 3, 2018
Federal air marshals have closely monitored about 5,000 US citizens on domestic flights in recent months under the controversial “Quiet Skies” program, but none were deemed so suspicious that they required further scrutiny …
The TSA defended the program, said it would continue, and announced plans to better educate and communicate with members of the Federal Air Marshal Service …
“TSA Admits ‘Quiet Skies’ Surveillance Program Is Useless, Promises to Continue Engaging in Useless Surveillance”
Tim Cushing, Techdirt, August 10, 2018
CARTS, HORSES IN EMBARRASSING MIXUP, SAY TSA OFFICIALS
Cart-horse confusion expected to continue for the foreseeable future
So few terrorists now travel by air in the United States that the Transportation Security Administration has taken to placing teams of air marshals on flights to monitor the behavior of entirely innocuous persons. Judging from the name of the program (“Quiet Skies”), I imagine it's make-work or perhaps practice.
“TSA Is Tracking Regular Travelers Like Terrorists in Secret Surveillance Program”
Jana Winter, The Boston Globe, July 28, 2018
Federal air marshals have begun following ordinary US citizens not suspected of a crime or on any terrorist watch list and collecting extensive information about their movements and behavior under a new domestic surveillance program that is drawing criticism from within the agency.
The previously undisclosed program, called “Quiet Skies,” specifically targets travelers who “are not under investigation by any agency and are not in the Terrorist Screening Data Base,” according to a Transportation Security Administration bulletin in March. …
TSA officials, in a written statement to the Globe, broadly defended the agency's efforts to deter potential acts of terror. But the agency declined to discuss whether Quiet Skies has intercepted any threats, or even to confirm that the program exists.
Release of such information “would make passengers less safe,” spokesman James Gregory said in the statement.
Already under Quiet Skies, thousands of unsuspecting Americans have been subjected to targeted airport and inflight surveillance, carried out by small teams of armed, undercover air marshals, government documents show. The teams document whether passengers fidget, use a computer, have a “jump” in their Adam's apple or a “cold penetrating stare,” among other behaviors, according to the record.
Having long established the privilege of forcing air travellers to submit to unconstitutional searches and seizures, the TSA has apparently decided to extend its exemption from the rule of law to spy on people who use their phones in airports, study their reflections in store windows, wait to the end of the boarding process to get on the plane, or previously travelled on an international flight. Better watch your step.
“Tech's ‘Dirty Secret’: App Developers Sift Through Your Gmail”
Douglas MacMillan, Stocks Newsfeed, July 2, 2018
But the Internet giant continues to let hundreds of outside software developers scan the inboxes of millions of Gmail users who signed up for email-based services offering shopping price comparisons, automated travel-itinerary planners or other tools. Google does little to police those developers, who train the computers — and, in some cases, employees — to read their users' emails …
Letting employees read user emails has become “common practice” for companies that collect this type of data, says Thede Loder, the former chief technology officer at eDataSource Inc. … He says engineers at eDataSource occasionally reviewed emails when building and improving software algorithms.
“Some people might consider that to be a dirty secret,” says Mr. Loder. “It's kind of reality.”
“A Chinese-Style Digital Dystopia Isn't As Far Away As We Think”
Matt Stoller, Buzzfeed, June 27, 2018
We accept price discrimination all the time; going to the movies and getting a senior discount is price discrimination. But in that case, the decision of how to discriminate is done by class; it is publicly posted; and everyone accepts that, in this case, seniors get a discount. It is a public decision to discriminate.
Discriminating on an individual level is different and allows for powerful exploitation and manipulation of the citizen. In areas with first-degree price discrimination, like car insurance or credit cards, there are often gender- or race-based pricing choices. With increasing datafication of society, we can see this increasingly organized to the level of the individual.
An airline could, for instance, analyze your email for the words “death in the family” and “travel,” look at your credit limit, and then offer you a price based on this information. Or imagine a group of companies putting together a common list of troublemakers, perhaps negative online reviewers or commenters or consumers who frequently return items. All of a sudden, for no obvious reason, someone who returns an item to one store might find that prices on a host of socially [essential] goods have [gone] up.
Corporations generally deny they do anything like this or even that they can. But …
We are now in a totally unregulated world of lawless web giants who operate as the core infrastructure for our society. They can use their data and power to discriminate and exploit, and the strategy now for companies like AT&T is to emulate them, or die. And the deep links that intelligence agencies have with these giants suggest that this power can, with a flip of a few switches, be easily weaponized by the state.
“Alexa, When's My Next Class? This University Is Giving Out Amazon Echo Dots”
Elizabeth Weise, USA Today, June 20, 2018
Not to mention the problem of Alexa “simply overhearing” otherwise private information spoken aloud by anyone within microphone range …
Starting this fall, some students at Northeastern University in Boston will be given the option of getting an Echo Dot smart speaker linked to their university accounts. They'll be able to ask Amazon's Alexa what time their classes are, how much money's left on their food card and even how much they owe the bursar's office.
The program gives students instant access to information they would have to call or go online for, as well as taking pressure off the school's offices. It also makes Amazon's digital assistant a go-to source for a generation who will inhabit a world in which talking to computers is commonplace and who will soon have paychecks to spend.
At the same time, it raises questions about security and privacy for young adults living in close quarters, often on their own for the first time. …
Alexa can't differentiate between different people's voices, so a prying roommate could be an issue, said Paul Bischoff, a privacy advocate with Comparitech.com, a security and privacy review site.
“There's also the problem of third parties simply overhearing otherwise private information spoken aloud by Alexa,” he said.
Practically all video games, apps, consoles, and platforms now collect location data, contact lists, and biometric data on players and sell it to advertisers.
“Privacy in Gaming”
N. Cameron Russell, Joel R. Reidenberg, and Sumyung Moon, Center on Law and Information Policy, Fordham Law School, March 19, 2018
There are currently many different ways that game companies collect data from users, including through hardware (cameras, sensors, and microphones), platform features (social media aspects and abilities for other user-generated content), and tracking technologies (cookies and beacons). Location data and biometric data — like facial, voice, heart rate, weight, skin response, brain activity, and eye-tracking data — is now routinely collected while gaming. In mobile gaming, requests for access to a user's contacts or address book are common. …
There may also be an interrelationship between data collection, game functionalities, and external hardware items like the Apple Watch or the smartphone device. Moreover, gaming companies have business relationships with each other. Data flows extend beyond the game and game console, and game data is often aggregated with external partners and sources. Every game and platform … examined states that game data may be shared with advertising platforms or used for advertising purposes. Although there are some avenues for opt-outs and user choice, users may have difficulty discerning the identities of third party affiliates with whom gaming companies share data even after reading the relevant privacy policies.
The Department of Justice has indicted a former aide to the Senate Intelligence Committee, James Wolfe. The indictment is based on inferences drawn from detailed and comprehensive surveillance of Wolfe and of several journalists, including Ali Watkins of the New York Times, along with many of their colleagues and friends: interception of their telephone communications, e-mail, travel and financial records, and so on.
“Ex-Senate Aide Charged in Leak Case Where Times Reporter's Records Were Seized”
Adam Goldman, Nicholas Fandos, and Katie Benner, The New York Times, June 7, 2018
“Trump's Justice Department Escalates Its Disturbing Crackdown on Leaks by Seizing New York Times Reporter's Phone and Email Records”
Trevor Timm, Freedom of the Press Foundation, June 7, 2018
Some of the exchanges were transmitted through Signal, an application that uses strong end-to-end encryption. The second article speculates that the feds must have acquired these messages by seizing Wolfe's mobile phone and breaking into it.
When the Department of Homeland Security or the Federal Bureau of Investigation wants access to some information that was originally obtained by an intelligence service using some unconstitutional mass-surveillance technique, it sometimes seeks a veneer of legal protection for its actions by making a “collection request” to the Foreign Intelligence Surveillance Court, acting as a “Court of Review.”
Usually the FISC rubber-stamps these applications. (This is hardly surprising, since the proceedings of the FISC are not adversarial — no one is present to represent the interests of the victims of surveillance or to point out violations of the Fourth Amendment or of international law.) Sometimes the court makes a few comments, asks the applicant to rewrite the request, and then rubber-stamps the results.
When Congress renewed and revised the law authorizing these ludicrous procedures last year, however, it created an arrangement under which the court can, if it chooses, invite a "friend of the court" to comment on the proposals it is reviewing, and in particular to deal with any novel or significant question of law that those proposals raise. Typically the job is given to former Department of Justice officials who can be relied on not to introduce undesirable innovations into the cozy arrangements between the FISC and the state-security apparatus.
Evidently the possibility of exposing their requests to the skeptical gaze of these third parties has succeeded in spooking the applicants. Last year, there were three occasions on which the FISC saw fit to invite in a friend of the court, and in each case the government agency that made the application chose to withdraw it instead of allowing anyone else to see it.
Marcy Wheeler speculates on the motives for these withdrawals:
“In 2017, the Government Withdrew Three FISA Collection Requests Rather Than Face an Amicus Review”
"emptywheel", emptywheel, April 26, 2018
That the government has been withdrawing requests rather than submitting them to the scrutiny of an amicus suggests several things.
First, it may be withdrawing such applications out of reluctance to share details of such techniques even with a cleared amicus, not even one of the three who served as very senior DOJ officials in the past. If that's right, that would reflect some pretty exotic requests …
Second, … past history has shown that the government often finds another way to get information denied by the FISC, and that may have happened with these three requests.
Finally, remember that as part of 702 reauthorization last year, [Senator] Ron Wyden warned that reauthorization should include language preventing the government from demanding that companies provide technical assistance. … Some of these withdrawn requests … may reflect such onerous technical requests.
If you use the Internet or interact regularly with people who do, companies like Google and Facebook have compiled dossiers on you, regardless of whether you have ever set up accounts with them or used their services.
“Facebook Is Tracking Me Even Though I'm Not on Facebook”
Daniel Kahn Gillmor, Free Future, American Civil Liberties Union, April 5, 2018
Nearly every Website you visit that has a "Like" button is actually encouraging your browser to tell Facebook about your browsing habits. Even if you don't click on the "Like" button, displaying it requires your browser to send a request to Facebook's servers for the "Like" button itself. That request includes information mentioning the name of the page you are visiting and any Facebook-specific cookies your browser might have collected. (See Facebook's own description of this process.) …
This makes it possible for Facebook to create a detailed picture of your browsing history — even if you've never even visited Facebook directly, let alone signed up for a Facebook account.
Think about most of the web pages you've visited — how many of them don't have a “Like” button? If you administer a website and you include a “Like” button on every page, you're helping Facebook to build profiles of your visitors, even those who have opted out of the social network. …
The profiles that Facebook builds on non-users don't necessarily include so-called “personally identifiable information” (PII) like names or email addresses. But they do include fairly unique patterns. Using Chromium's NetLog dumping, I performed a simple five-minute browsing test last week that included visits to various sites — but not Facebook. In that test, the PII-free data that was sent to Facebook included information about which news articles I was reading, my dietary preferences, and my hobbies.
Given the precision of this kind of mapping and targeting, “PII” isn't necessary to reveal my identity. How many vegans examine specifications for computer hardware from the ACLU's offices while reading about Cambridge Analytica? Anyway, if Facebook combined that information with the “web bug” from the email mentioned above — which is clearly linked to my name and e-mail address — no guesswork would be required.
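The tracking mechanism the excerpt describes can be made concrete with a small sketch. This is a hypothetical illustration, not Facebook's actual code: it builds the headers a browser would attach when fetching an embedded "Like" widget, showing how the Referer header leaks the page being read, along with any Facebook cookie already in the browser.

```python
# Hypothetical sketch of the request a browser issues when rendering a page
# that embeds a Facebook "Like" button. Merely displaying the button fetches
# it from Facebook's servers, and that fetch carries the embedding page's
# URL (Referer) plus any Facebook cookies the browser holds.
# The cookie name and values below are invented for illustration.

def like_button_request(page_url: str, fb_cookies: dict) -> dict:
    """Build the headers a browser would attach to the widget fetch."""
    headers = {
        "Host": "www.facebook.com",
        # Tells Facebook exactly which page you were reading:
        "Referer": page_url,
    }
    if fb_cookies:
        headers["Cookie"] = "; ".join(f"{k}={v}" for k, v in fb_cookies.items())
    return headers

req = like_button_request(
    "https://example.com/articles/vegan-recipes",
    {"datr": "opaque-browser-id"},
)
# The Referer reveals the article being read even if the user never clicks
# the button and has no Facebook account; the cookie, if present, ties the
# visit to a persistent browser identity.
```

Note that no click is required: the leak happens at page-load time, which is why opting out of the social network does not opt you out of the profiling.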
“Chinese Government Forces Residents to Install Surveillance App with Awful Security”
Joseph Cox, Motherboard, April 9, 2018
JingWang scans for specific files stored on the device, including HTML, text, and images, by comparing the phone's contents to a list of MD5 hashes. …
JingWang also sends a device's phone number, device model, MAC address, unique IMEI number, and metadata of any files found in external storage that it deems dangerous to a remote server. …
As for handling that data, … JingWang exfiltrated data without any sort of encryption, instead transferring it all in plaintext. The app updates are not digitally signed either, meaning they could be swapped for something else without a device noticing.
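The file-scanning behavior described above — hashing the phone's files and comparing the digests to a supplied list of MD5 hashes — amounts to a few lines of code. This is a minimal sketch of the technique, not JingWang's actual implementation; the blocklist entry is the well-known MD5 of a test string, standing in for whatever hashes the real app ships with.

```python
import hashlib
import os

# Sketch of MD5-blocklist file scanning as described in the article:
# hash every file under a directory tree and flag any file whose digest
# appears on the blocklist. The single entry below is the MD5 of
# "The quick brown fox jumps over the lazy dog", used as a placeholder.
BLOCKLIST = {"9e107d9d372bb6826bd81d3542a419d6"}

def md5_of(path: str) -> str:
    """Compute the MD5 digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root: str) -> list:
    """Return the paths of all files under root whose MD5 is blocklisted."""
    flagged = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if md5_of(path) in BLOCKLIST:
                flagged.append(path)
    return flagged
```

That the app then transmits its findings in plaintext, over unauthenticated update channels, means the surveillance apparatus itself is trivially interceptable and forgeable.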
“Your Own Devices Will Give the Next Cambridge Analytica Far More Power to Influence Your Vote”
Justin Hendrix and David Carroll, MIT Technology Review, April 2, 2018
Though it's not clear if Cambridge Analytica's behavioral profiling and microtargeting had any measurable effect on the 2016 US election, these technologies are advancing quickly — faster than academics can study their effects and certainly faster than policymakers can respond. The next generation of such firms will almost certainly deliver on the promise. …
In the next few years, … we'll see the convergence of multiple disciplines, including data mining, artificial intelligence, psychology, marketing, economics, and experiential design theory. These methods will combine with an exponential increase in the number of surveillance sensors we introduce into our homes and communities, from voice assistants to internet-of-things devices that track people as they move through the day. Our devices will get better at detecting facial expressions, interpreting speech, and analyzing psychological signals.
In other words, the machines will know us better tomorrow than they do today. They will certainly have the data. While a General Data Protection Regulation is about to take effect in the European Union, the US is headed in the opposite direction. Facebook may have clamped down on access to its data, but there is more information about citizens on the market than ever before … not to mention all the data sloshing around thanks to hacks and misuse.
The “exponential increase in the number of surveillance sensors we introduce into our homes and communities” is already well along. We're past the knee of the curve and climbing the shaft of the hockey stick. Soon the only constraints will be bandwidth and network congestion, as a trillion cameras, microphones, and sensors all try to deliver their data in real time to the marketers, propagandists, spies, and law-enforcement teams poised in eager expectation.
An active user of Internet services decided to take advantage of offers by Google and Facebook to provide him with copies of his dossier. They were much more comprehensive and diverse in their sources than he expected. Not surprisingly, they included a lot of files, photographs, and e-mail messages that he “deleted,” including, for instance, his PGP private key.
His dossier at Google ran to 5.5 gigabytes, and the one that Facebook compiled was 600 megabytes.
“Are You Ready? Here Is All the Data Facebook and Google Have on You”
Dylan Curran, The Guardian, March 30, 2018
If you packed away your Elf on the Shelf surveillance kit with the Christmas decorations, but still feel the need to let your kids know that they have no privacy whatever, you'll be relieved to know that now any religious holiday can be used as a pretext for spying on the little ones. For Easter, there's “Peep on a Perch,” a plushie shaped like a marshmallow Peep, sold with a book that explains life in a Total Information Awareness regime to toddlers.
“Peep on a Perch (Peeps)”
Random House, February 13, 2018
Start a new Easter tradition!
No word yet on availability dates for “Golem on the Gueridon” (Yom Kippur), “Hajji on the Highboy” (Ramadan), and “Aillen on the Ottoman” (Samhain).
“The Cambridge Analytica Con”
Yasha Levine, The Baffler, March 21, 2018
What Cambridge Analytica is accused of doing — siphoning people's data, compiling profiles, and then deploying that information to influence them to vote a certain way — Facebook and Silicon Valley giants like Google do every day, indeed, every minute we're logged on, on a far greater and more invasive scale.
Today's internet business ecosystem is built on for-profit surveillance, behavioral profiling, manipulation and influence. That's the name of the game. It isn't just Facebook or Cambridge Analytica or even Google. It's Amazon. It's eBay. It's Palantir. It's Angry Birds. It's Movie Pass. It's Lockheed Martin. It's every app you've ever downloaded. Every phone you bought. Every program you watched on your on-demand cable TV package.
All of these games, apps, and platforms profit from the concerted siphoning up of all data trails to produce profiles for all sorts of micro-targeted influence ops in the private sector. …
Silicon Valley of course keeps a tight lid on this information, but you can get a glimpse of the kinds of data our private digital dossiers contain by trawling through their patents. Take, for instance, a series of patents Google filed in the mid-2000s for its Gmail-targeted advertising technology. The language, stripped of opaque tech jargon, revealed that just about everything we enter into Google's many products and platforms — from email correspondence to Web searches and internet browsing — is analyzed and used to profile users in an extremely invasive and personal way. Email correspondence is parsed for meaning and subject matter. Names are matched to real identities and addresses. Email attachments — say, bank statements or testing results from a medical lab — are scraped for information. Demographic and psychographic data, including social class, personality type, age, sex, political affiliation, cultural interests, social ties, personal income, and marital status[,] is extracted. In one patent, I discovered that Google apparently had the ability to determine if a person was a legal U.S. resident or not. It also turned out you didn't have to be a registered Google user to be snared in this profiling apparatus. All you had to do was communicate with someone who had a Gmail address. …
The enormous commercial interest that political campaigns have shown in social media has earned them privileged attention from Silicon Valley platforms in return. Facebook runs a separate political division specifically geared to help its customers target and influence voters.
The company even allows political campaigns to upload their own lists of potential voters and supporters directly into Facebook's data system. So armed, digital political operatives can then use those people's social networks to identify other prospective voters who might be supportive of their candidate — and then target them with a whole new tidal wave of ads.
Both of the Establishment parties have been using surveillance companies' dossiers to target their propaganda since at least 2008 and now sink tens of millions, perhaps hundreds of millions, of dollars into such projects in every election cycle. So it's not too likely that we're going to see Congress regulate technology companies in any way that would interfere with the smooth operation of the mechanism or even slightly alienate the power brokers. Zuckerberg is pleading with Congress to pass regulatory legislation because he is now confident that he is ready to play the game of regulatory capture and will be better at it than most of his competitors.
You might not expect that giving the Facebook app on your Android phone permission to read your contact list would also allow Facebook to transcribe all the metadata from all the calls and text messages in your phone's entire call history. But it did, at least until Google deprecated version 4.0 of the Android API — which was about five months ago.
“Facebook Scraped Call, Text Message Data for Years from Android Phones”
Sean Gallagher, Ars Technica, March 24, 2018
“My Cow Game Extracted Your Facebook Data”
Ian Bogost, The Atlantic, March 22, 2018
Facebook has vowed to audit companies that have collected, shared, or sold large volumes of data in violation of its policy, but the company cannot close the Pandora's box it opened a decade ago, when it first allowed external apps to collect Facebook user data. That information is now in the hands of thousands, maybe millions of people.
An Oxford lecturer in international development prescribes what needs to be done in order to restore privacy to Internet users.
“‘Cambridge Analytica’: Surveillance Is the DNA of the Platform Economy”
Ivan Manokha, Open Democracy, March 23, 2018
The current social mobilization against Facebook resembles the actions of activists who, in opposition to neoliberal globalization, smash a McDonald's window during a demonstration.
What we need is a total redefinition of the right to privacy (which was codified as a universal human right in 1948, long before the Internet), to guarantee its respect, both offline and online.
What we need is a body of international law that will provide regulations and oversight for the collection and use of data.
What is required is an explicit and concise formulation of terms and conditions which, in a few sentences, will specify how users' data will be used.
It is important to seize the opportunity presented by the Cambridge Analytica scandal to push for these more fundamental changes.
But the Cambridge Analytica scandal provides no such opportunity. The Snowden revelations (2013) were the last, best opportunity, and at that time we looked at the facts and decided not to do anything about them. The current reaction to Cambridge Analytica is just some extremely faint and transient buyer's remorse, amplified by a few politicians who assumed for years that their opponents didn't understand technology well enough to turn it to their advantage.
We're starting to see a new genre of advice columns, featuring instructions on how to use some piece of modern technology safely, given of course that it's impossible to really use it safely, since the users' understanding of how it works and what they want to accomplish with it is flatly incompatible with its design and with the business model of its maker and licensor.
The journalists who write in this genre are people who know better than to try to use the technology, but use it anyway because their jobs require it and because they know that their readers are going to use it as well, even those who also know better than to try.
“The Motherboard Guide to Using Facebook Safely”
Lorenzo Franceschi-Bicchierai, Motherboard, March 21, 2018
You can't really stop all collection. In fact, even if you leave Facebook (or have never been part of the social network), the company is still gathering data on you and building a shadow profile in case you ever join. …
Facebook's entire existence is predicated on tracking and collecting information about you. If that concept makes you feel creeped out, then perhaps you should quit it. But if you are willing to trade that off for using a free service to connect with friends, there's still some steps you can take to limit your exposure.
It is now becoming commonplace for security offices at colleges and universities to monitor the social-media accounts of members of the college community for potential threats, crimes, and miscellaneous troublemaking. Sometimes they outsource the work to specialist companies (such as Social Sentinel, which curates and customizes a list of several thousand words whose appearance in posts can trigger investigations).
“Big Brother: College Edition”
Jeremy Bauer-Wolf, Inside Higher Ed, December 21, 2017
“Social Media Monitoring: Beneficial or Big Brother?”
Amy Rock, Campus Safety Magazine, March 12, 2018
“University Police Surveil Student Social Media in Attempt to Make Campus Safer”
Ryne Weiss, Foundation for Individual Rights in Education, March 16, 2018
Put yourself in the shoes of a student on campus. What would you do if you're aware that anything you post may be flagged by the school administration or police for containing one of the keywords in Social Sentinel's library of harm? Do you make the decision to tweet less? Do you restrict your posts to friends only? It seems hard to imagine how you could moderate your tweets to avoid thousands of words when you have no idea what they are.
And assume you do get flagged and questioned by police. Many people would probably change their behavior. And while people might want to be mindful of what they post publicly online, fear of police and their school monitoring them and misinterpreting their messages shouldn't be something students have to navigate. …
The free exchange of ideas on campus is an invaluable and irreplaceable part of the ideal college experience, and the chilling effect of student social media surveillance actively undermines that.
“Hackers Can Use Cortana to Open Websites on Windows 10 Even If Your PC Is Locked”
Tristan Greene, The Next Web, March 7, 2018
A pair of independent researchers yesterday uncovered a particularly worrisome security vulnerability in Microsoft's Windows 10. If your PC's OS was installed with default settings this could affect you.
The simple “hack” involves activating Cortana via voice command to open websites on a PC that's been locked.
Well, duh. This was completely obvious from the beginning to any Windows 10 user who glanced at the page describing the settings for Cortana. One of the options is “Use Cortana even when my device is locked.” Microsoft turned this on by default because it wants to listen in on Windows 10 users even when the users try to lock their PCs. The “researchers” “uncovered” this feature by noticing that it was there and trying it out. This scarcely qualifies as a “hack,” or even as a “‘hack.’”
It seems unlikely that Microsoft will regard this routine surveillance feature as “worrisome.” From the user's point of view, it is of course a gigantic security hole. Since the user doesn't own Windows, however, that point of view is essentially irrelevant. The real owner, Microsoft, has already expressed its point of view by creating the feature and making sure that it's on by default. That's the end of the story.
“Secret Surveillance and the Legacy of Torture Have Paralyzed the USS Cole Bombing Trial at Guantánamo”
Shilpa Jindia, The Intercept, March 5, 2018
Last month, a judge at Guantánamo Bay suspended indefinitely the trial of Abd al-Rahim al-Nashiri, paralyzing one of the most high-profile cases to go before the island prison's military commissions system. The February 16 decision ended a monthslong standoff with defense lawyers who claimed that they could not do their work for fear of government surveillance. …
Nashiri's entire civilian defense team resigned last October, citing an irresolvable ethical conflict: They did not believe that they could meet with their client and work on the case without being spied on by U.S. government agencies. Because of the byzantine rules governing classified materials at Guantánamo, the lawyers still can't explain exactly why they believe this to be the case to the public or to their client.
The reason that the lawyers believe that they can't meet their client without being spied on is that in June they received a memo from the military supervisor of all of the Guantánamo detainees' legal teams, Brigadier General John Baker, saying that he could no longer assure them that they could meet their client anywhere inside the concentration camp without being miked, monitored, and recorded. The lawyers did some checking and confirmed Baker's suspicions, though they can't say how because it's classified.
An essay by a public intellectual reflecting on the value of privacy and pointing out that many people prefer it to constant social interaction. This retrospective view, bordering on denialism, is surely one of the last expressions of the values that prevailed in the era before total and inevitable surveillance.
“Luxuriating in Privacy”
Sarah Perry, ribbonfarm, March 1, 2018
Privacy is wonderful in and of itself, and privacy keeps the peace.
Yes. And its disappearance is a reflection of the prevalence of total war.
A fuller description of the nature and use of social-credit scores in China:
“China's Dystopian Tech Could Be Contagious”
Adam Greenfield, The Atlantic, February 14, 2018
Every Chinese citizen receives a literal, numeric index of their trustworthiness and virtue, and this index unlocks, well, everything. … This one number will determine the opportunities citizens are offered, the freedoms they enjoy, and the privileges they are granted.
This end-to-end grid of social control is still in its prototype stages, but three things are already becoming clear: First, where it has actually been deployed, it has teeth. Second, it has profound implications for the texture of urban life. And finally, there's nothing so distinctly Chinese about it that it couldn't be rolled out anywhere else the right conditions obtain. The advent of social credit portends changes both dramatic and consequential for life in cities everywhere — including the one you might call home.
My guess is that something like this is coming soon to the United States. The infrastructure is already mostly in place. Extrapolating from the current state of affairs, I'd speculate that the first use of social-credit scores in the U.S. will be to manage access to posting on Facebook and Twitter. It would be one of the easier ways to exclude Russian bots and even (after a few months of data collection) Russian identity thieves. After that, new categories of doubleplusungood propaganda will really begin to proliferate, and soon social media will be safely under the control of the established elites, plus a few elderly cat fanciers and cupcake decorators who are innocuous enough to retain the privilege of posting.
It is now common practice for anyone who has a government job and claims to be enforcing the law to use whatever surveillance technology is available to collect data about anyone and everyone. A new bill in Congress would institutionalize this practice (and legitimize it, if it were constitutional, which it is not).
“The CLOUD Act: A Dangerous Expansion of Police Snooping on Cross-Border Data”
Camille Fischer, Deeplinks, Electronic Frontier Foundation, February 8, 2018
The bill creates an explicit provision for U.S. law enforcement … to access “the contents of a wire or electronic communication and any record or any other information” about a person regardless of where they live or where that information is located on the globe. In other words, U.S. police could compel a service provider — like Google, Facebook, or Snapchat — to hand over a user's content and metadata, even if it is stored in a foreign country, without following that foreign country's privacy laws.
Second, the bill would allow the President to enter into “executive agreements” with foreign governments that would allow each government to acquire users' data stored in the other country, without following each other's privacy laws.
The Electronic Frontier Foundation sued the government to obtain the opinions of the Foreign Intelligence Surveillance Court on the requests for (unconstitutional) general warrants against American citizens under section 702 of the Foreign Intelligence Surveillance Act, which notionally authorizes the court to issue specific warrants against non-citizens.
Last week, the FISC released about a third of the opinions that the EFF requested, in heavily redacted form. They show that government agencies, seeking the court's approval for warrantless mass surveillance, also tried repeatedly to sneak in language that would have established even wider collection parameters and even longer data-retention policies. Predictably, the insensate demands for ever more intensive surveillance eventually exceed any prescribed bounds, however weak.
“Newly Released Surveillance Orders Show That Even with Individualized Court Oversight, Spying Powers are Misused”
Aaron Mackey and Andrew Crocker, Deeplinks, Electronic Frontier Foundation, February 7, 2018
Over a period between 15 months and three years, the NSA obtained [without any court authorization] a number of communications of U.S. persons. The precise number of communications is redacted.
Rather than notifying the court that it had destroyed the communications it obtained without authorization, the NSA made an absurd argument in a bid to retain the communications: because the surveillance was unauthorized, the agency's internal procedures that require officials to delete non-relevant communications should not apply. Essentially, because the surveillance was unlawful, the law shouldn't apply and the NSA should get to keep what it had obtained.
The court rejected the NSA's argument. “One would expect the procedures' restrictions on retaining and disseminating U.S. person information to apply most fully to such communications, not, as the government would have it, to fail to apply at all,” the court wrote.
The court went on to say that “[t]here is no persuasive reason to give the [procedures] the paradoxical and self-defeating interpretation advanced by the government.”
The court then ordered the NSA to destroy the communications it had obtained without FISC authorization. … Rather than immediately complying with the order, the NSA asked the FISC once more to allow it to keep the communications.
Again the court rejected the government's arguments. “No lawful benefit can plausibly result from retaining this information, but further violation of law could ensue,” the court wrote. The court then ordered the NSA to not only delete the data, but to provide reports on the status of its destruction “until such time as the destruction process has been completed.”
That was in May 2011. Whether the NSA ever destroyed the data in question, whether it ever filed any of the required reports, and whether any further violations of law have ensued are all secrets. None of the inside parties has chosen to release the answers. Perhaps further lawsuits will yield some information.
Software tools for searching immense quantities of surveillance data are increasingly relying on black-box deciders to extract and summarize search results.
“Artificial Intelligence Is Going to Supercharge Surveillance”
James Vincent, The Verge, January 23, 2018
For experts in surveillance and AI, the introduction of these sorts of capabilities is fraught with potential difficulties, both technical and ethical. And, as is often the case in AI, these two categories are intertwined. It's a technical problem that machines can't understand the world as well as humans do, but it becomes an ethical one when we assume they can and let them make decisions for us. …
Even if we manage to fix the biases in these automated systems, that doesn't make them benign, says ACLU policy analyst Jay Stanley. He says that changing CCTV cameras from passive into active observers could have a huge chilling effect on civil society.
“We want people to not just be free, but to feel free. And that means that they don't have to worry about how an unknown, unseen audience may be interpreting or misinterpreting their every movement and utterance,” says Stanley. “The concern is that people will begin to monitor themselves constantly, worrying that everything they do will be misinterpreted and bring down negative consequences on their life.”
An overview of technology patents for which Facebook has applied, with many imaginative ways for the company to add to your dossier and fill in details of your social graph:
“Facebook Knows How to Track You Using the Dust on Your Camera Lens”
Kashmir Hill and Surya Mattu, Gizmodo, January 11, 2018
Facebook claims that they aren't currently using this tactic. Uh huh.
One filed in 2015 describes a technique that would connect two people through the camera metadata associated with the photos they uploaded. It might assume two people knew each other if the images they uploaded looked like they were titled in the same series of photos — IMG_4605739.jpg and IMG_4605742.jpg, for example — or if lens scratches or dust were detectable in the same spots on the photos, revealing the photos were taken by the same camera.
It would result in all the people you've sent photos to, who then uploaded them to Facebook, showing up in one another's “People You May Know.” It'd be a great way to meet the other people who hired your wedding photographer.
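The filename half of the patent's technique is simple enough to sketch. Here is a rough, hypothetical illustration of the heuristic — the function names and the ten-frame threshold are my own assumptions, not anything disclosed in the patent:

```python
import re

def _photo_index(filename):
    """Extract the numeric sequence index from a camera filename
    like 'IMG_4605739.jpg'; return None if no index is present."""
    m = re.search(r"(\d+)\.\w+$", filename)
    return int(m.group(1)) if m else None

def likely_same_series(file_a, file_b, max_gap=10):
    """Guess that two uploads came from the same shooting session
    when their camera sequence numbers fall close together."""
    a, b = _photo_index(file_a), _photo_index(file_b)
    return a is not None and b is not None and abs(a - b) <= max_gap

# The patent's example filenames are only three frames apart,
# so the heuristic would link the two uploaders.
print(likely_same_series("IMG_4605739.jpg", "IMG_4605742.jpg"))  # True
```

A real implementation would presumably weigh this signal against others (EXIF metadata, lens-dust fingerprints), but the sketch shows how little information two filenames need to share before a social graph can grow an edge.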
“India Will Install Cameras in Classrooms amid a Rise of Surveillance Measures in Asia”
Rosie Perper, Business Insider, January 21, 2018
“The Senate Just Voted to Expand the Warrantless Surveillance of US Citizens”
Daniel Oberhaus, Motherboard, January 18, 2018
On Thursday afternoon, the US Senate voted in favor of the FISA Amendments Reauthorization Act of 2017, a bill that will expand the warrantless surveillance of US citizens. The bill passed by a vote of 65–34, with 43 Republicans and 21 Democrats voting in its favor.
The bill will now go to the White House to be signed into law by President Trump. It reauthorizes FISA Section 702 until 2024.
Among the prominent Democratic senators voting in favor of this patently unconstitutional bill were Tammy Duckworth, Dianne Feinstein, Tim Kaine, Amy Klobuchar, Chuck Schumer, Jeanne Shaheen, and Debbie Stabenow. Nice work, fools.
“Congress Demanded NSA Spying Reform. Instead, They Let You Down”
Zack Whittaker, Zero Day, January 18, 2018
The Electronic Frontier Foundation responded by renewing their determination to pursue lawsuits against warrantless surveillance of Americans notionally justified by section 702 of FISA and to promote and support the development of strong encryption and other protocols and tools to ensure the privacy of documents and communications.
“An Open Letter to Our Community on Congress's Vote to Extend NSA Spying from EFF Executive Director Cindy Cohn”
Cindy Cohn, Deeplinks, Electronic Frontier Foundation, January 18, 2018
We offer this response to the National Security Agency and its allies in Congress: enjoy it while you can because it won't last.
Today's Congressional failure redoubles our commitment to seek justice through the courts and through the development and spread of technology that protects our privacy and security. …
We aim to bring mass surveillance to the Supreme Court. By showcasing the unconstitutionality of the NSA's collect-it-all approach to tapping the Internet, we'll seek to end the dragnet surveillance of millions of innocent people. We know that the wheels of justice turn slowly, especially when it comes to impact litigation against the NSA, but we're in this for the long run.