Topic: #Facebook

Safebook

2018-09-20⊺06:43:01-05:00

A great new way to use Facebook!

“Safebook”
Benjamin Grosser, September 19, 2018
https://bengrosser.com/projects/safebook

Safebook is a browser extension, for Chrome or Firefox, that suppresses all text, images, video, and audio content on the Facebook site, leaving intact the borders around and between panels, the (now blank) menus, drop-down submenus, pop-up windows, and other navigation elements.
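Nothing exotic is required to pull this off. A WebExtension content script that blanks out content while leaving the page chrome in place would do it; here is a minimal sketch of the general approach (my own illustration, not Grosser's actual code, and the selectors are guesses):

    // content.ts — run as a WebExtension content script on facebook.com
    // (declared in the extension's manifest.json). It hides text and media
    // without collapsing their boxes, so the empty interface survives.

    const css = `
      /* make all text invisible while preserving its layout space */
      body, body * { color: transparent !important; }
      /* blank out images, video, and icons without removing their frames */
      img, video, canvas, svg { opacity: 0 !important; }
    `;

    const style = document.createElement("style");
    style.textContent = css;
    document.documentElement.appendChild(style);

    // silence any media elements that are already playing
    document.querySelectorAll<HTMLMediaElement>("video, audio").forEach((el) => {
      el.muted = true;
      el.pause();
    });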

#Facebook #user-interfaces #humor

Facebook Purges Leftist Media Company

2018-08-15⊺08:39:16-05:00

“teleSUR English Removed from Facebook for Second Time”
teleSUR English, August 14, 2018
https://www.telesurtv.net/english/news/TeleSUR-English-Removed-From-Facebook-for-the-Second-Time-20180813-0009.html

“‘Deeply Disturbing’: For Second Time This Year, Facebook Suspends Left-Leaning teleSUR English without Explanation”
Jessica Corbett, Common Dreams, August 14, 2018
https://www.commondreams.org/news/2018/08/14/deeply-disturbing-second-time-year-facebook-suspends-left-leaning-telesur-english

Just another ratchet click, advancing a policy that has already been established for some time:

“Facebook Says It Is Deleting Accounts at the Direction of the U.S. and Israeli Governments”
Glenn Greenwald, The Intercept, December 30, 2017
https://theintercept.com/2017/12/30/facebook-says-it-is-deleting-accounts-at-the-direction-of-the-u-s-and-israeli-governments/

It's not surprising that Facebook finds straightforward reports of events that happen in the world to be “hateful, threatening or obscene.” I often feel that way myself. The difference is that it's not my policy to keep other people from finding out things that I already know.

#Facebook #social-media #news-suppression

Facebook and the Problem of Free Will

2018-04-13⊺22:36:34-05:00

OK, just one more post about Facebook, and then I'm swearing off for at least two weeks.

One of the problems with knowledge claims about future events is that the causal chains that lead to those events often include decisions that people haven't made yet, decisions that in turn depend on the outcomes of contingent events that haven't yet occurred. Facebook is offering a new product that gets around this epistemological difficulty by waving crystalline neural networks at it.

“Facebook Uses Artificial Intelligence to Predict Your Future Actions for Advertisers, Says Confidential Document”
Sam Biddle, The Intercept, April 13, 2018
https://theintercept.com/2018/04/13/facebook-advertising-data-artificial-intelligence-ai/

Instead of merely offering advertisers the ability to target people based on demographics and consumer preferences, Facebook instead offers the ability to target them based on how they will behave, what they will buy, and what they will think. These capabilities are the fruits of a self-improving, artificial intelligence-powered prediction engine, first unveiled by Facebook in 2016 and dubbed “FBLearner Flow.”

One slide in the document touts Facebook's ability to “predict future behavior,” allowing companies to target people on the basis of decisions they haven't even made yet. This would, potentially, give third parties the opportunity to alter a consumer's anticipated course. …

[Law professor Frank Pasquale] told The Intercept that Facebook's behavioral prediction work is “eerie” and worried how the company could turn algorithmic predictions into “self-fulfilling prophecies,” since “once they've made this prediction they have a financial interest in making it true.” That is, once Facebook tells an advertising partner you're going to do some thing or other next month, the onus is on Facebook to either make that event come to pass, or show that they were able to help effectively prevent it (how Facebook can verify to a marketer that it was indeed able to change the future is unclear).

Of course, such a prediction system can't operate transparently. If there is any way for targets to become aware of the predictions that are made about their future behavior, the predictions themselves enter the causal chains that result in the future decisions, thus undermining the basis for the predictions. To take the simplest and most extreme case, what happens if a Facebook user resolves to do the opposite of whatever FBLearner Flow predicts?
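The point can be made with a toy program: a predictor that merely extrapolates a user's past behavior scores exactly zero against a user who sees the prediction and defies it. (This is my own illustration, of course, not anything resembling FBLearner Flow.)

    // contrarian.ts — toy model of a prediction that its target can see.
    // The "predictor" guesses the user's most frequent past choice; the
    // contrarian user then does the opposite of whatever was predicted.

    type Choice = 0 | 1;

    function predict(history: Choice[]): Choice {
      const ones = history.filter((c) => c === 1).length;
      return ones * 2 >= history.length ? 1 : 0; // majority of past behavior
    }

    const history: Choice[] = [1]; // one observed action to start from
    let correct = 0;
    const rounds = 1000;

    for (let i = 0; i < rounds; i++) {
      const guess = predict(history);
      const action: Choice = guess === 1 ? 0 : 1; // defy the prediction
      if (action === guess) correct += 1;
      history.push(action);
    }

    console.log(`accuracy against a contrarian: ${(100 * correct) / rounds}%`); // 0%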

It occurs to me that the perfect use for this tool would be to predict which companies' advertising managers are gullible enough to be deceived by this hokum and which ones will decide to spend their advertising budgets in less carnivalesque ways. Then Facebook could perhaps develop a slicker pitch to alter the anticipated course of the second group of marks.

#Facebook #black-box-deciders #prediction-systems

Facebook Defends Its Data

2018-04-05⊺10:27:45-05:00

And here we have it:

While Zuckerberg claimed that major transparency efforts are on the company's horizon, he seemed dismissive of users' concerns about their privacy. The recent movement to #DeleteFacebook, he said, had “no meaningful impact” on the company or Facebook usage.

“Facebook knows so much about you,” he added, “because you chose to share it with your friends and put it on your profile.”

“Mark Zuckerberg: ‘It Was My Mistake’ Facebook Compromised Data of 87 Million Users”
Sarah Emerson, Motherboard, April 4, 2018
https://motherboard.vice.com/en_us/article/7xdw99/mark-zuckerberg-it-was-my-mistake-facebook-compromised-data-of-87-million-users

Facebook's actions and policy changes are about tightening up their control over access to the dossiers that Facebook compiles, which are now the company's intellectual property and primary business asset. Facebook has zero interest in their users' so-called “concerns about their privacy” and is not impressed by the feeble attempts of a few rabble-rousers to impede the juggernaut.

Just to drive home the point, Facebook's Chief Technology Officer recently conceded in the company blog that “malicious actors” have acquired “most” Facebook users' profile information. (Naturally, he buries the lede in the seventh paragraph of the post.)

“An Update on Our Plans to Restrict Data Access on Facebook”
Mike Schroepfer, Facebook Newsroom, April 4, 2018
https://newsroom.fb.com/news/2018/04/restricting-data-access/

Until today, people could enter another person's phone number or email address into Facebook search to help find them. … However, malicious actors have also abused these features to scrape public profile information by submitting phone numbers or email addresses they already have through search and account recovery. Given the scale and sophistication of the activity we've seen, we believe most people on Facebook could have had their public profile scraped in this way.

#Facebook #intellectual-property #privacy

Google and Facebook Dossiers

2018-04-01⊺08:52:48-05:00

An active user of Internet services decided to take advantage of offers by Google and Facebook to provide him with copies of his dossiers. They were much more comprehensive and diverse in their sources than he expected. Not surprisingly, they included a lot of files, photographs, and e-mail messages that he “deleted,” including, for instance, his PGP private key.

His dossier at Google ran to 5.5 gigabytes, and the one that Facebook compiled was 600 megabytes.

“Are You Ready? Here Is All the Data Facebook and Google Have on You”
Dylan Curran, The Guardian, March 30, 2018
https://www.theguardian.com/commentisfree/2018/mar/28/all-the-data-facebook-google-has-on-you-privacy

#surveillance #Google #Facebook

Non-Remedies for Surveillance Capitalism

2018-03-29⊺17:17:45-05:00

Bruce Schneier provides a nice overview of the mechanics of surveillance capitalism and expresses the hope that government regulation will bring it under control eventually, even though he doesn't expect Congress to produce any such regulation “anytime soon.”

“It's Not Just Facebook. Thousands of Companies Are Spying On You”
Bruce Schneier, CNN.com, March 26, 2018
https://edition.cnn.com/2018/03/26/opinions/data-company-spying-opinion-schneier/index.html

Schneier also offers another solution, which likewise strikes me as wishful thinking:

One of the responses to the Cambridge Analytica scandal is that people are deleting their Facebook accounts. It's hard to do right, and doesn't do anything about the data that Facebook collects about people who don't use Facebook. But it's a start. The market can put pressure on these companies to reduce their spying on us, but it can only do that if we force the industry out of its secret shadows.

Schneier advances this idea so diffidently and undercuts it so thoroughly with his qualifications that I find it difficult to take this passage seriously. #DeleteFacebook has become a meme, and that's a vaguely hopeful sign, but the account deleters are not going to exert any significant market pressure unless they become at least as numerous as the thousands of new users who join Facebook every day.

#surveillance-capitalism #Facebook #Bruce-Schneier

Academic Research as Money and Data Laundering

2018-03-28⊺10:50:27-05:00

Inside Higher Ed ran an opinion piece today complaining about Facebook's attempt to shift the blame for the unlicensed transfer of personal data about its users onto the Cambridge University senior research associate who nominally made the agreement with Facebook:

“Facebook's Professor Problem”
Mark Bartholomew, Inside Higher Ed, March 28, 2018
https://www.insidehighered.com/views/2018/03/28/facebook-using-academic-pedigrees-whitewash-unethical-practices-opinion

The best practices of academia need to find more purchase at Facebook. For studies on humans, it is necessary in the university setting to obtain informed consent. As a private business, Facebook is not obligated to comply with this standard, and it doesn't. Instead, it need only make sure that the terms of any potential human experimentation are covered under its capacious and unreadable terms of service.

By contrast, in the realm of academic research, scientists cannot wave a bunch of impenetrable legalese under a test subject's nose and receive a blank check to do what they want. Moreover, university internal review boards act as a safeguard, making sure that even when consent is informed, the benefits of any proposed research outweigh their costs to the participants. University IRBs need to make sure they fulfill their responsibilities when it comes to experimenting on social media users.

More importantly, it is time that Facebook starts following academics' best practices rather than use them for cover.

Although Bartholomew identifies a significant ethical failure on Facebook's part, that particular failure isn't the one at the heart of the current controversy and doesn't fully explain what the academic involved did wrong. Aleksandr Kogan's principal ethical offense was his participation in a money- and data-laundering scheme. He received money from Cambridge Analytica and used it to pay participants in his mostly fake research project, the users of his personality-quiz app, in exchange for which they gave Kogan full access to their Facebook profiles and those of their “Facebook friends.” Kogan collected the data and passed it back to Cambridge Analytica. He provided the cover, the false front, for what was basically Cambridge Analytica's straightforward purchase of parts of Facebook's dossiers on some of their users.

Neither Cambridge Analytica nor Facebook wanted to acknowledge publicly that the purpose of the project was to improve the targeting of political propaganda to gullible American Facebook users. To conceal this purpose, Cambridge Analytica concocted the cover story and hired Kogan to implement it.

Kogan claims that he didn't know anything about what Cambridge Analytica was doing with the data he shared with them but simply felt that they were entitled to use that data however they liked, since they had paid for it. But I doubt he's that stupid.

#Facebook #Cambridge-Analytica #data-laundering

Facebook Dossiers Are Quite Comprehensive

2018-03-26⊺11:27:17-05:00

You might not expect that giving the Facebook app on your Android phone permission to read your contact list would also allow Facebook to transcribe all the metadata from all the calls and text messages in your phone's entire call history. But it did, at least until Google deprecated version 4.0 of the Android API — which was about five months ago.

“Facebook Scraped Call, Text Message Data for Years from Android Phones”
Sean Gallagher, Ars Technica, March 24, 2018
https://arstechnica.com/information-technology/2018/03/facebook-scraped-call-text-message-data-for-years-from-android-phones

#Facebook #surveillance #Android

When Facebook Became a Platform

2018-03-25⊺09:13:42-05:00

The developer of an early Facebook app (“Cow Clicker”) describes what the platform that Facebook offered developers looked like back then (2007–2010) and how the company promoted its surveillance services to them.

“My Cow Game Extracted Your Facebook Data”
Ian Bogost, The Atlantic, March 22, 2018
https://www.theatlantic.com/technology/archive/2018/03/my-cow-game-extracted-your-facebook-data/556214/

Facebook has vowed to audit companies that have collected, shared, or sold large volumes of data in violation of its policy, but the company cannot close the Pandora's box it opened a decade ago, when it first allowed external apps to collect Facebook user data. That information is now in the hands of thousands, maybe millions of people.

#Facebook #surveillance #platforms

Academic Advocates Ponies for Everyone

2018-03-24⊺09:03:13-05:00

An Oxford lecturer in international development prescribes what needs to be done in order to restore privacy to Internet users.

“‘Cambridge Analytica’: Surveillance Is the DNA of the Platform Economy”
Ivan Manokha, Open Democracy, March 23, 2018
https://www.opendemocracy.net/digitaliberties/ivan-manokha/cambridge-analytica-surveillance-is-dna-of-platform-economy

The current social mobilization against Facebook resembles the actions of activists who, in opposition to neoliberal globalization, smash a McDonald's window during a demonstration.

What we need is a total redefinition of the right to privacy (which was codified as a universal human right in 1948, long before the Internet), to guarantee its respect, both offline and online.

What we need is a body of international law that will provide regulations and oversight for the collection and use of data.

What is required is an explicit and concise formulation of terms and conditions which, in a few sentences, will specify how users' data will be used.

It is important to seize the opportunity presented by the Cambridge Analytica scandal to push for these more fundamental changes.

But the Cambridge Analytica scandal provides no such opportunity. The Snowden revelations (2013) were the last, best opportunity, and at that time we looked at the facts and decided not to do anything about them. The current reaction to Cambridge Analytica is just some extremely faint and transient buyer's remorse, amplified by a few politicians who assumed for years that their opponents didn't understand technology well enough to turn it to their advantage.

#Facebook #surveillance #privacy

Personal Data Collected Through Facebook Leaks to Exploiters

2018-03-21⊺17:02:23-05:00

Surveillance is essential to Facebook's business model. It collects and compiles enormous amounts of personal data on its users (and non-users), and it sells to its customers — advertisers, academics, political operatives, and others — the privilege of creating applications that collect and compile still more personal data.

In theory, Facebook doesn't actually sell its dossiers to its customers. It only licenses the data, or the right to collect data, retaining control over any further dissemination so as to maintain its ownership of its most valuable intellectual property. In practice, Facebook has no effective means of preventing its customers from copying and distributing any data they have legitimately obtained. The licenses that it relies on turn out to be quite difficult to enforce.

In 2014, a senior research associate at Cambridge University, Aleksandr Kogan, wrote a Facebook app called “thisismydigitallife.” Superficially, it was a personality quiz, but the people who signed up to take it gave Kogan permission to access their Facebook profiles and the Facebook profiles of the people they had friended. Facebook approved this arrangement but stipulated that the data that Kogan collected be used solely for the purpose of academic research.

Kogan agreed to this stipulation and proceeded to collect millions of Facebook profiles through the app. Instead of mining the data at Cambridge, however, he set up a company called Global Science Research and carried out his supposedly academic research there. Global Science Research had a million-dollar contract with another company, SCL Group. One of SCL's subsidiaries, SCL Elections, had recently secured funding to set up a new corporation, Cambridge Analytica, to explore the use of data-mining techniques to find reliable correlations between the personalities and “likes” of individual Facebook users on one hand and their political views and behaviors on the other. Because Kogan's research was funded, at least in part, by Cambridge Analytica, he apparently saw nothing wrong with sharing with his employers the data on which his research was based.
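The mechanism that made this possible was the pre-2015 version of Facebook's Graph API, which let an app ask a consenting user not only for the user's own profile fields but also for certain fields of their friends' profiles. A hedged sketch of the kind of request involved (endpoint and field names reconstructed from memory, meant as illustration, not as Kogan's actual code):

    // graph-v1-sketch.ts — illustrative only: Graph API v1.0 and the
    // friends_* permissions it relied on were retired in 2015.

    const ACCESS_TOKEN = "USER_ACCESS_TOKEN"; // granted through the app's login dialog

    async function fetchUserAndFriends() {
      // the consenting user's own profile fields
      const me = await fetch(
        `https://graph.facebook.com/v1.0/me?fields=id,name,likes&access_token=${ACCESS_TOKEN}`
      ).then((r) => r.json());

      // with the (now removed) friends_* permissions, the same call pattern
      // could return selected fields from the user's friends' profiles too
      const friends = await fetch(
        `https://graph.facebook.com/v1.0/me/friends?fields=id,name,likes&access_token=${ACCESS_TOKEN}`
      ).then((r) => r.json());

      console.log(me.name, (friends.data ?? []).length, "friends fetched");
    }

    fetchUserAndFriends();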

It's quite possible that sharing this data with a commercial enterprise violated Kogan's understanding with Facebook. It may also be a violation of UK data-protection laws, because Kogan asked the people who used his app only for their permission to collect and study their personal data, not for permission to share it with (or sell it to) third parties.

However, the only thing that prevented Cambridge Analytica from obtaining the same data directly from Facebook is that the license would probably have cost them much more money. Nothing in Facebook's notoriously lax, mutable, and labyrinthine privacy policies would have obstructed such a transaction if the price was right. Facebook's dossiers are their principal product, and selling access to them is their principal source of revenue.

Facebook now claims that Kogan and Cambridge Analytica have violated its terms of service and has closed their Facebook accounts. Lawsuits and threats of lawsuits are now flying in all directions, and some members of Congress are threatening to launch terrifying inquisitions into the monstrous abuse of the American electoral process that Cambridge Analytica supposedly perpetrated with the assistance of Kogan's data. However, there are now so many unlicensed copies of the data that there is no way to ensure that all of them will ever be erased, or even located. Now that arbitrarily large amounts of data can be copied quickly and inexpensively, and now that multiple backups of valuable data are the norm, the idea of restricting the distribution of data through licensing is a non-starter. It can't possibly work.

There's another reason why the lawsuits and the fulminations of members of Congress are idle, from the point of view of ordinary Facebook users (and non-users): Surveillance is essential to Facebook's business model. If Facebook stopped collecting and compiling personal data and erased its current stores, it would quickly go bankrupt. But once the dossiers exist, it is inevitable that they will be copied and disseminated, and once they are copied and disseminated, it is impossible ever to recover and destroy all of the copies, data-protection and privacy laws notwithstanding.

Instead (as Mark Zuckerberg's Facebook post on this subject makes clear), Facebook will continue to build up massive dossiers as fast as it can and will continue to use the information in those dossiers as it sees fit. The steps that Zuckerberg describes as “protecting users' data” are all designed to protect Facebook's proprietary interest in everyone's personal data, to prevent or at least obstruct the propagation of the dossiers to unworthy outsiders.

“Suspending Cambridge Analytica and SCL Group from Facebook”
Paul Grewal, Facebook Newsroom, March 16, 2018
https://newsroom.fb.com/news/2018/03/suspending-cambridge-analytica/

“How Trump Consultants Exploited the Facebook Data of Millions”
Matthew Rosenberg, Nicholas Confessore, and Carole Cadwalladr, The New York Times, March 17, 2018
https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html

“Cambridge Analytica Responds to Facebook Announcement”
Cambridge Analytica, March 17, 2018
https://www.prnewswire.com/news-releases/cambridge-analytica-responds-to-facebook-announcement-300615626.html

“‘I Made Steve Bannon's Psychological Warfare Tool’: Meet the Data War Whistleblower”
Carole Cadwalladr, The Guardian, March 18, 2018
https://www.theguardian.com/news/2018/mar/17/data-war-whistleblower-christopher-wylie-faceook-nix-bannon-trump

“Cambridge Analytica's Ad Targeting Is the Reason Facebook Exists”
Jason Koebler, Motherboard, March 19, 2018
https://motherboard.vice.com/en_us/article/vbxgzb/cambridge-analytica-facebook-ad-targeting-third-party-apps

Though Cambridge Analytica's specific use of user data to help a political campaign is something we haven't publicly seen on this scale before, it is exactly the type of use that Facebook's platform is designed for, has facilitated for years, and continues to facilitate every day. At its core, Facebook is an advertising platform that makes almost all of its money because it and the companies that use its platform know so much about you.

Facebook continues to be a financially successful company precisely because its platform has enabled the types of person-specific targeting that Cambridge Analytica did. …

“The incentive is to extract every iota of value out of users,” Hartzog [Woodrow Hartzog, Professor of Law and Computer Science at Northeastern University] said. “The service is built around those incentives. You have to convince people to share as much information as possible so you click on as many ads as possible and then feel good about doing it. This is the operating ethos for the entire social internet.”

“Facebook's Surveillance Machine”
Zeynep Tufekci, The New York Times, March 19, 2018
https://www.nytimes.com/2018/03/19/opinion/facebook-cambridge-analytica.html

Billions of dollars are being made at the expense of our public sphere and our politics, and crucial decisions are being made unilaterally, and without recourse or accountability.

“Then Why Is Anyone Still on Facebook?”
Wolf Richter, Wolf Street, March 20, 2018
https://wolfstreet.com/2018/03/20/then-why-is-anyone-still-on-facebook/

So now there's a hue and cry in the media about Facebook, put together by reporters who are still active on Facebook and who have no intention of quitting Facebook. There has been no panicked rush to “delete” accounts. There has been no massive movement to quit Facebook forever. Facebook does what it does because it does it, and because it's so powerful that it can do it. A whole ecosystem around it depends on the consumer data it collects. …

Yes, there will be the usual ceremonies … CEO Zuckerberg may get to address the Judiciary Committee in Congress. The questions thrown at him for public consumption will be pointed. But behind the scenes, away from the cameras, there will be the usual backslapping between lawmakers and corporations. Publicly, there will be some wrist-slapping and some lawsuits, and all this will be settled and squared away in due time. Life will go on. Facebook will continue to collect the data because consumers continue to surrender their data to Facebook voluntarily. And third parties will continue to have access to this data. …

People who are still active on Facebook cannot be helped. They should just enjoy the benefits of having their lives exposed to the world and serving as a worthy tool and resource for corporate interests, political shenanigans, election manipulators, jealous exes, and other facts of life.

“Facebook Sued by Investors over Voter-Profile Harvesting”
Christie Smythe and Kartikay Mehrotra, Bloomberg Technology, March 20, 2018
https://www.bloomberg.com/news/articles/2018-03-20/facebook-sued-by-investors-over-voter-profile-harvesting

“The Researcher Who Gave Cambridge Analytica Facebook Data on 50 Million Americans Thought It Was ‘Totally Normal’”
Kaleigh Rogers, Motherboard, March 21, 2018
https://motherboard.vice.com/en_us/article/ywxgeg/cambridge-analytica-researcher-interview

Kogan said he was under the impression that what he was doing was completely normal.

“What was communicated to me strongly was that thousands and maybe tens of thousands of apps were doing the exact same thing and that this was a pretty normal use case and a normal situation for usage of Facebook data,” Kogan said.

“Facebook's Mark Zuckerberg Vows to Bolster Privacy amid Cambridge Analytica Crisis”
Sheera Frenkel and Kevin Roose, The New York Times, March 21, 2018
https://www.nytimes.com/2018/03/21/technology/facebook-zuckerberg-data-privacy.html

#Facebook #Cambridge-Analytica #data-mining #data-sharing

Universities Submit to Facebook's Surveillance Capitalism

2018-02-08⊺14:30:12-06:00

Silence, peasants! Resistance is futile!

“Please: Let's Be Real about Facebook”
Michael Stoner, Inside Higher Ed, February 8, 2018
https://www.insidehighered.com/blogs/call-action-marketing-and-communications-higher-education/please-let’s-be-real-about-facebook

Let me repeat: it gets results. For that reason — and because so many people use Facebook — it's become integral to higher ed marketing, communications, and advancement strategies. …

Let's agree that the only recourse we have is to get used to having our attention sold or stop using these services. But let's not be shocked that Facebook is doing exactly what it's designed to do.

#surveillance-capitalism #Facebook #marketing-higher-education

Facebook Cares about the Dust on Your Camera Lens

2018-01-26⊺10:18:44-06:00

An overview of technology patents for which Facebook has applied, with many imaginative ways for the company to add to your dossier and fill in details of your social graph:

“Facebook Knows How to Track You Using the Dust on Your Camera Lens”
Kashmir Hill and Surya Mattu, Gizmodo, January 11, 2018
https://gizmodo.com/facebook-knows-how-to-track-you-using-the-dust-on-your-1821030620

One filed in 2015 describes a technique that would connect two people through the camera metadata associated with the photos they uploaded. It might assume two people knew each other if the images they uploaded looked like they were titled in the same series of photos — IMG_4605739.jpg and IMG_4605742.jpg, for example — or if lens scratches or dust were detectable in the same spots on the photos, revealing the photos were taken by the same camera.

It would result in all the people you've sent photos to, who then uploaded them to Facebook, showing up in one another's “People You May Know.” It'd be a great way to meet the other people who hired your wedding photographer.

Facebook claims that they aren't currently using this tactic. Uh huh.
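The filename half of that heuristic is simple enough to sketch. A toy version (mine, not the patent's; the gap threshold is an arbitrary guess):

    // photo-series.ts — toy check for whether two uploaded photos look like
    // neighbours in one camera's IMG_NNNN numbering sequence.

    function imageCounter(filename: string): number | null {
      const m = /^IMG_(\d+)\.jpe?g$/i.exec(filename);
      return m ? parseInt(m[1], 10) : null;
    }

    function likelySameSeries(a: string, b: string, maxGap = 10): boolean {
      const na = imageCounter(a);
      const nb = imageCounter(b);
      return na !== null && nb !== null && Math.abs(na - nb) <= maxGap;
    }

    // the two filenames from the quote above are three frames apart
    console.log(likelySameSeries("IMG_4605739.jpg", "IMG_4605742.jpg")); // true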

#Facebook #surveillance #metadata

This work is licensed under a Creative Commons Attribution-ShareAlike License.

John David Stone (havgl@unity.homelinux.net)

created June 1, 2014 · last revised December 10, 2018