Kindle Notes & Highlights
Read between August 20 - September 14, 2017
This combination of legal shelter through “report and takedown” and dramatically lower costs explains why a model of a tiny employee base relative to the number of users, along with indifference to terms-of-service violations affecting ordinary users, is common among software companies.
The verification process alone may endanger their lives, depending on the severity of the repression in the country. I have seen this happen repeatedly but will not list examples—
Ironically, implementing these slight modifications to the real-name policy may have taken some of the heat off Facebook because LGBTQ communities in Western nations, those in the best position to make noise about their plight, have found ways to work with the company, but non-Western activists and affected communities elsewhere around the world, who have a lot less power vis-à-vis Facebook, continue to suffer.
Many of these companies are quite young and are run by founders who own large amounts of stock. Therefore, the role of individual ideology is greater than it is in an established, traditional company that is fully securitized and subject only to Wall Street considerations.
Somewhat unsurprisingly, social media platforms, which are corporate entities, are far more concerned about intellectual property rights, which corporations care most about and for which they have more legal remedies, than about individual privacy or political agency.25
Governments, too, have increasingly learned to use these mechanisms to silence dissent. They can encourage or even pay crowds to purposefully “report” dissidents to get them banned or at least make them struggle to stay on a platform. In these cases, the authorities count on taking advantage of the thinly staffed, clunky enforcement mechanisms of major platforms. Officials can also directly pressure the companies.
After the human rights backlash, Yahoo’s CEO apologized to Shi’s family. Still, the damage was done.
Major platforms could do a lot better by investing resources and giving more attention to the issue, but their business model, their openness to government pressure, and sometimes their own mindset often work against this.
Perhaps the most important such algorithm for social movements is the one Facebook uses which sorts, prioritizes, and filters everyone’s “news feed” according to criteria the company decides.
Algorithms can also shape social movement tactics as a movement’s content producers adapt or transform their messages to be more algorithm friendly.
On Twitter, which was still sorted chronologically at the time, the topic became dominant among the roughly one thousand people around the world that I follow. Many people were wondering what was going on in Ferguson—even people from other countries were commenting. On Facebook’s algorithmically controlled news feed, however, it was as if nothing had happened.33 I wondered whether it was me: were my Facebook friends just not talking about it?
I tried to override Facebook’s default options to get a straight chronological feed. Some of my friends were indeed talking about Ferguson protests, but the algorithm was not showing the story to me. It was difficult to assess fully, as Facebook keeps switching people back to an algorithmic feed, even if they choose a chronological one.
Instead of news of the Ferguson protests, my own Facebook news feed was dominated by the “ice-bucket challenge,” a worthy cause in which people poured buckets of cold water over their heads and, in some cases, donated to an amyotrophic lateral sclerosis (ALS) charity.
There is no publicly available detailed and exact explanation about how the news feed determines which stories are shown high up on a user’s main Facebook page, and which ones are buried.
Facebook’s decisions in the design of its algorithm have great power, especially because there is a tendency for users to stay within Facebook when they are reading the news, and they are often unaware that an algorithm is determining what they see. In one study, 62.5 percent of users had no idea that the algorithm controlling their feed existed, let alone how it worked.
In a Facebook experiment published in Nature that was conducted on a whopping 61 million people, some randomly selected portion of this group received a neutral message to “go vote,” while others, also randomly selected, saw a slightly more social version of the encouragement: small thumbnail pictures of a few of their friends who reported having voted were shown within the “go vote” pop-up.35 The researchers measured that this slight tweak—completely within Facebook’s control and conducted without the consent or notification of any of the millions of Facebook users—caused about 340,000 additional people to turn out to vote in the 2010 U.S. congressional elections. (The true number may even be higher since the method of matching voter files to Facebook names only
...more
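The logic of such an experiment is a simple difference in turnout rates between the two randomly assigned groups, scaled to the full population. The sketch below uses invented group sizes and counts purely for illustration; these are not the study’s actual figures, and the function name is hypothetical.

```python
# Toy difference-in-proportions estimate for a two-arm "go vote" experiment.
# All numbers below are invented for illustration; they are NOT the actual
# figures from the 61-million-person study.

def turnout_lift(voted_treat, n_treat, voted_ctrl, n_ctrl, population):
    """Estimate the extra voters the treatment would produce across the
    whole population, from the observed difference in turnout rates."""
    rate_treat = voted_treat / n_treat
    rate_ctrl = voted_ctrl / n_ctrl
    return (rate_treat - rate_ctrl) * population

# Hypothetical: one million users per arm, a 0.4-point turnout gap,
# scaled to 61 million users.
extra = turnout_lift(
    voted_treat=404_000, n_treat=1_000_000,
    voted_ctrl=400_000,  n_ctrl=1_000_000,
    population=61_000_000,
)
print(round(extra))  # 244000 with these invented numbers
```

The point is not the specific numbers but the leverage: a tiny per-person effect, multiplied across tens of millions of users, yields a politically meaningful aggregate.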
But “Like” is not a neutral signal. How can one “like” a story about a teenager’s death and ongoing, grief-stricken protests? Understandably, many of my friends were not clicking on the “Like” button for stories about the Ferguson protests, which meant that the algorithm was not being told that this was an important story that my social network was quite interested in.
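The feedback loop described here can be sketched in a few lines. This is a deliberately crude toy model, not Facebook’s actual algorithm (which is not public): it simply contrasts a reverse-chronological stream with a ranking driven only by “Like” counts, to show why a heavily discussed but rarely “liked” story sinks.

```python
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    timestamp: int  # higher = more recent
    likes: int      # the only engagement "signal" this toy model sees

def chronological(posts):
    """Reverse-chronological stream (Twitter at the time): newest first."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_ranked(posts):
    """Toy algorithmic feed: posts with more 'Like' signals rise,
    regardless of how recent or newsworthy they are."""
    return sorted(posts, key=lambda p: p.likes, reverse=True)

posts = [
    Post("ice-bucket challenge", timestamp=1, likes=250),
    Post("Ferguson protests",    timestamp=3, likes=4),
    Post("Ferguson protests",    timestamp=2, likes=2),
]

# Chronological: the protest news appears at the top.
print([p.topic for p in chronological(posts)])
# Engagement-ranked: grief-laden news that few people "like" sinks.
print([p.topic for p in engagement_ranked(posts)])
```

Under the chronological ordering the Ferguson posts lead; under the like-driven ordering the ice-bucket post leads, even though it is the oldest item, because the only signal the ranker consumes is one that users withhold from tragic news.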
A 2015 study suggested that slight changes to search rankings could shift the voting preferences of undecided voters.39
After three million tweets, the national news media started covering the story too, although not until well after the tweets had surged.
It is worth pondering whether, without Twitter’s reverse-chronological stream, which allowed its users to amplify content as they chose, unmediated by an algorithmic gatekeeper, the news of unrest and protests would ever have made it onto the national agenda.43
Just as attracting mass-media attention through stunts came with political costs, playing to the algorithm comes with political costs as well.
When Facebook tweaked its algorithm to punish sites that strove for this particular kind of virality, Upworthy’s traffic suddenly fell by half.46
Algorithmic governance, it appears, is the future, and algorithms are among the new overlords that social movements must grapple with.
This means a world in which social movements can potentially reach hundreds of millions of people after a few clicks without having to garner the resources to challenge or even own mass media, but it also means that their significant and important stories can be silenced by a terms-of-service complaint or by an algorithm. It is a new world for both media and social movements.
Reddit’s design and affordances allow us to explore this question: How does reputation operate when there is little direct connection between a person’s online (Reddit) identity and their offline identity?
As discussed earlier, members of these subreddits, too, developed an internal sense of norms in the service of legitimizing their activities—this is what human communities do.
The position articulated by Reddit managers expresses a grave misunderstanding of how social norms and social movements form in human societies.
Homophily—seeking people who think like you to draw social support—is a universal phenomenon.
Cocooned in his Reddit communities and isolated from mainstream views, Brutsch seemed to think that he had a defensible point of view even after his unmasking, and he took part in interviews that offer fascinating insights into the role of reputation, even in pseudonymous spaces, and into the formation of communities.
Social scientists have long emphasized that “deviance” has no absolute definition;
Digital technologies introduce a range of twists to social mechanisms and dynamics through their affordances.
Critical parameters include whether online and offline identities are tightly or loosely coupled, and whether social interactions online create persistent histories and a reputation that are visible to those with whom users interact.
Social scientists call this the “stranger-on-a-train” effect, describing the way people sometimes open up more to anonymous strangers (in places such as trains) than to the people they see around them every day.
The discussions on YouBeMom range from the mundane to the taboo, and the emotional tone of the threads can range from cathartic to judgmental. It is clear, however, that these conversations could not easily occur without the site’s affordances, which reject both persistent identities and reputation over time.
Arab Spring activists surprised me by mentioning IRC, or Internet Relay Chat, a largely forgotten internet chatroom system that allows for easy and anonymous conversation.
Once, I became the subject of a vast amount of online fury simply because I wrote that I disliked the trend toward larger and larger phones because they were becoming hard to use for many women, like me, who have smaller hands.
Bots are the least of their problems, but even bots cause trouble since hundreds and thousands of hostile mentions clog people’s notifications, absorb their attention, and unnerve them. It is not humanly possible to stare at such threats and to casually brush them off every time. Many times, the threats are directed at people’s children, pets, or relatives, making it harder to just shrug them off even if one has decided to accept personal risk.
As with many of the issues I study, it is difficult to arrive at a coherent, unified normative view or a simple rule, applicable in all cases, that doxing is by itself good or bad. There are always trade-offs. These judgments have to be made in context: whose rights are allowed to trample whose, which ethical values will be protected and which disregarded. Will we protect children’s right to be free of sexual exploitation, or the right of adult men to anonymously gather and exploit?
Is this a form of digital heckler’s veto—when free speech is shut down by organized crowds shouting down speakers they dislike?
Women, minorities, dissidents, and journalists writing about sensitive topics have been especially targeted.
However, defining what constitutes hate speech is difficult—especially if the implementation will be algorithmic or outsourced to low-paid workers.
In theory, Twitter provides a platform on which she can speak that is not controlled by the government. In practice, she has been chased away by an orchestrated campaign.
For him, it was a game, one that was made easier by the internet’s facilitation of pseudonymous identities.
There are no easy solutions to problems raised in this chapter. There is no perfect, ideal platform for social movements. There is no neutrality or impartiality—ethics, norms, identities, and compromise permeate all discussions and choices of design, affordances, policies, and algorithms on online platforms.
This is because seemingly similar outcomes and benchmarks—for example, a protest march attended by a hundred thousand people—do not necessarily signal the same underlying capacity to those in power when they are organized with the aid of digital technology as they do when they are organized without such supporting tools.
Alice Marwick examines how technologists and entrepreneurs signaled status and built brands in the technology industry after the dot-com boom, and Judith Donath has long examined how identity, deception, and signals work online.19
A search in Lexis, a news database, for the phrase “Occupy Wall Street” during the first weeks brings up more international articles about the movement than articles from U.S. news outlets.
These people, who were accountable to nobody but themselves, who hid their faces behind black balaclavas and set fires in trash cans, were catnip for the press—a few dozen could generate more press coverage than hundreds of thousands who had marched peacefully, if forcefully.
The activists were able to craft their own narrative and to resist being trivialized.

