The Reality Game: How the Next Wave of Technology Will Break the Truth
Fake news posts and Twitter trolls were just the beginning. What will happen when misinformation moves from our social media feeds into our everyday lives?

Online disinformation stormed our political process in 2016 and has only worsened since. Yet as Samuel Woolley shows in this urgent book, it may pale in comparison to what's to come: humanlike automated voice systems, machine learning, "deepfake" AI-edited videos and images, interactive memes, virtual reality, and more. These technologies have the power not just to manipulate our politics, but to make us doubt our eyes and ears and even our feelings.

Deeply researched and compellingly written, The Reality Game describes the profound impact these technologies will have on our lives. Each new invention built without regard for its consequences edges us further into this digital dystopia.

Yet Woolley does not despair. Instead, he argues pointedly for a new culture of innovation, one built around accountability and especially transparency. With social media dragging us into a never-ending culture war, we must learn to stop fighting and instead prevent future manipulation. This book shows how we can use our new tools not to control people but to empower them.

272 pages, Hardcover

Published January 7, 2020


About the author

Samuel Woolley

6 books · 21 followers
Dr. Samuel Woolley is a writer, researcher and speaker who specializes in the study of disinformation, emergent technology and life online. He and his colleagues were among the first people to uncover the manipulative political use of social media during political events worldwide. They coined the terms “computational propaganda” and “political bot.”

Woolley’s next book, “The Reality Game: How the Next Wave of Technology Will Break the Truth,” is set to be released in January 2020 by PublicAffairs/Hachette. His previous book, “Computational Propaganda” (2018), is a series of country case studies on how digital tools were used during elections, national disasters and security crises in attempts to manipulate public opinion. It is co-authored with Dr. Philip N. Howard and published by Oxford University Press. Woolley regularly writes publicly on politics and social media for venues including Wired, the Guardian, Motherboard, TechCrunch, Slate and the Atlantic. For his research, he has been featured in the New York Times, the Washington Post, the Wall Street Journal and on NBC’s Today show, PBS’ Frontline and BBC’s News at Ten.

He has worked with numerous academic institutions (Oxford, Stanford, Berkeley), private companies (Alphabet, Deloitte, Allianz), governmental entities (US Senate, UK Parliament, NATO), and civil society groups (German Marshall Fund, Anti-Defamation League, National Endowment for Democracy) to translate the complex empirical impacts of computational propaganda to effects on everyday life. He has given talks and hosted workshops on digital manipulation—as it relates to subjects ranging from policy to vaccination to commerce—at venues including Princeton University, Data and Society, SXSW, BBC Monitoring, and Mishcon de Reya LLP.

Dr. Woolley is a current faculty member in the School of Journalism at the University of Texas at Austin’s Moody College of Communication. He has current and past academic affiliations with the Project on Democracy and the Internet at Stanford University, the Center for Information Technology Research in the Interest of Society (CITRIS) at UC Berkeley, and the Oxford Internet Institute at the University of Oxford. He is the former Director of Research and Co-Founder of the National Science Foundation and European Research Council supported Computational Propaganda Project at the University of Oxford. He is the Founding Director of the Digital Intelligence Lab at the Institute for the Future, a 50-year-old think tank located in the heart of Silicon Valley. He has held research fellowships at the German Marshall Fund of the United States, the Anti-Defamation League, Google Jigsaw, the Tech Policy Lab at the University of Washington, and the Center for Media, Data and Society at Central European University. His research has been supported by large grants from the Hewlett Foundation, the Open Society Foundations, the New Venture Fund for Communications and the Ford Foundation. His research has informed policy in the United States, the United Kingdom and other countries around the world. His PhD is from the University of Washington.

He lives in Austin, Texas with his wife, Samantha, and their dog, Basket. He tweets from @samuelwoolley.

Ratings & Reviews


Community Reviews

5 stars: 32 (19%)
4 stars: 53 (32%)
3 stars: 51 (31%)
2 stars: 16 (9%)
1 star: 9 (5%)
Displaying 1 - 28 of 28 reviews
Sarah · 109 reviews · 2 followers
May 4, 2020
This book certainly has a lot of information about what kinds of things can be, and are about to be, possible to manipulate people’s perceptions of reality on the internet with bots, AI, machine learning, ‘deep fakes’, etc. It explains in good detail what went on in the 2016 election. He makes the point repeatedly and emphatically that developers should plan for misuse of technology and governments should legislate against abuses of same. Well, in the U.S. I just don’t see that happening. We have become a culture of exploitation: get over and make a buck. Samuel Woolley, the author, says a couple of times in his conclusion that all the dire warnings should not leave the reader feeling hopeless, but I don’t see how not. And it is not because of the technology - it’s because of my fellow man.
Moh. Nasiri · 299 reviews · 100 followers
September 28, 2020
Often driven by commercial motives, fake news effectively appeals to people’s desire to think critically.
Old media helped to shore up faith in institutions; new media undermines it.

The media landscape has changed beyond recognition. In the early days of the internet, many celebrated the eclipse of gatekeeping institutions like newspapers and broadcasters. But allowing people to pick their own news sources didn’t boost civic participation – it ate away at trust and created fertile conditions for digital disinformation. This problem has been exacerbated by light-touch regulation and the refusal of social media companies to tackle bots. But machine learning, used alongside human fact-checking, might help us win this battle. 
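The closing suggestion, that machine learning used alongside human fact-checking might win this battle, is usually pictured as a triage pipeline: a model scores each item, confident calls are handled automatically, and the ambiguous middle band is escalated to human fact-checkers. Here is a deliberately toy sketch of that routing idea; the keyword scoring stands in for a real trained classifier, and the cue phrases and thresholds are invented for illustration only:

```python
def score_claim(text, cue_words=("miracle", "shocking", "they don't want you to know")):
    """Toy disinformation score: fraction of (hypothetical) cue phrases present, 0..1.
    In a real system this would be a trained classifier's probability output."""
    t = text.lower()
    hits = sum(1 for w in cue_words if w in t)
    return hits / len(cue_words)

def triage(texts, auto_flag=0.6, auto_pass=0.1):
    """Route items three ways: confidently bad ones are flagged automatically,
    confidently clean ones pass, and the ambiguous middle band goes to humans."""
    routed = {"flagged": [], "passed": [], "human_review": []}
    for t in texts:
        s = score_claim(t)
        if s >= auto_flag:
            routed["flagged"].append(t)
        elif s <= auto_pass:
            routed["passed"].append(t)
        else:
            routed["human_review"].append(t)
    return routed
```

In practice the scoring function would be a trained model and the thresholds would be tuned against the cost of human review, but the human-in-the-loop routing structure is the part the argument relies on.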

Social media has been weaponized. Authoritarian governments around the world control armies of bots and fake accounts. Their purpose? More often than not, it’s to harass journalists and sow doubt about the work of dissidents.

Efforts to dupe and deceive aren’t just about technology. Underneath the torrent of fake news and the manipulation is a complex web of social, economic, and political problems. 

This means that the real issue isn’t social media itself, which can be a force for good, but the misuse of social media. The real issue, then, is how we can stop the misuse of digital tools. The first step is to figure out why social media is so dysfunctional. 

In this book, you’ll also find out: 

why bots aren’t nearly as smart as you might think; 
how fake news capitalizes on people’s desire to think deeply; and 
why unlimited free speech is not the answer to social media’s problems.

Ref: blinkist.com
Daniel · 634 reviews · 83 followers
April 14, 2020
Reality is being manipulated on social media to alter our perception.

1. Bots target susceptible victims and spread fake news, such as the claim that a pizza joint hosted a child prostitution ring run by the Clintons.
2. Bots are mostly controlled by humans and not that smart. AI bots are not too smart either and can be easily detected, for now.
3. Russians and other actors used Cambridge Analytica to target voters.
4. Partial videos can distort reality by omitting what happened before. Deepfakes are the next worry, as they can provide fake evidence of people saying things that they did not.
5. Virtual reality can make it much harder to detect fake news.
6. Social media companies employ third-party contractors to filter out bad content. However, a lot of them are disturbed by what they see. Also, it is often workers in India arbitrating whether something is fake news or hate speech.
7. AI can be used to detect deepfakes, for example by checking whether the speaker blinks.
8. Facebook, Twitter and Google are setting up systems to remove fake news and hate speech. But they will never really do away with their proprietary targeting algorithms, because those are their rice bowls. So lip service will be paid. Also, it is virtually impossible to block live-streamed crimes.
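The blink test in item 7 refers to early deepfake detectors that exploited how rarely synthesized faces blinked. A common building block for blink detection is the eye aspect ratio (EAR) computed over facial landmarks; the sketch below is illustrative only, with made-up landmark coordinates and thresholds, and is not the method of any particular detector:

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered around one eye.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|): vertical openings over horizontal width.
    It collapses toward 0 as the eye closes."""
    p1, p2, p3, p4, p5, p6 = eye
    return (math.dist(p2, p6) + math.dist(p3, p5)) / (2.0 * math.dist(p1, p4))

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks in a per-frame EAR series: a blink is a run of at least
    min_frames consecutive frames with EAR below threshold."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

A real system would extract the six landmarks per eye from each video frame with a face-landmark model and tune the threshold; and since modern deepfakes blink convincingly, this cue alone no longer suffices.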

I think what will happen is that people will learn to get most of their news from mainstream news sources, and rely less on Twitter or Facebook shares. It will just lead to further consolidation of the big news companies. Talk about unintended consequences...
Roberta · 1,181 reviews · 23 followers
February 7, 2020
This book is full of terrifying information about "computational propaganda," how our thoughts and actions are being manipulated and how it is going to get much, much worse. Why am I yawning? Can non-fiction written by a techie be interesting for non-techies to read? See The Cuckoo's Egg.

This book was received as a free ARC from the publisher.

First, I think that the subtitle of this book is an attempt by the publisher to make this book appear timely instead of so last week. A lot of the kinds of disinformation that the author talks about in the book have been around for a while. Disinformation has just gotten bigger and slicker while people haven't gotten any better at handling it. The people who hold elective office and who vote on legislation are mostly over 50. Elected by people who are over 50. Do you know how many of your elected officials have their secretaries print out their email and leave it in their IN box? Their grandchildren set up their FB for them. They know about Twitter now. But that's about it. They will be lucky if they get past the first chapter of this book. And, while we're on that topic, the first chapter is full of "This book will..." So nothing even happens until chapter 2. It could have started with one of the examples of disinformation that is used later in the book to show us a real-life example, instead of just telling us that all would be revealed in a later chapter. Well, revealed to the people who are still reading later.

The same topics are covered, briefly, in part, here and there, making successive chapters feel repetitive. References to other things that the author has written "In a recent paper, Ann Ravel, Hamsini Sridharan, and I..." are so frequent that they contribute to the feeling of repetition.

In the conclusion, the author lists six policy actions for "immediately illuminating problems at the intersection of computational propaganda and campaign finance." I wish that the whole book had been laid out as neatly.

One of my pet peeves: Trolls. Page 28: The author seems to think that it is fairly easy to spot a troll but I know that, out here in the real world, people do not spot trolls. They don't even take the very first step in his process - thinking that there might be a troll. They have to suspect a troll before they will take any of the steps that the author lists. And then they have to know how to take those steps. In my experience, most people don't suspect trolls and most people won't take the steps.
Brendan · 84 reviews · 2 followers
March 1, 2020
I wonder if Goodreads has a major bot problem. Regardless, this is a great yet time-sensitive book. The sooner people read this, the better. What this book covers is happening now, especially with the 2020 elections continuously ramping up. Very well sourced and a very insightful author.
September 17, 2020
Just another liberal book whining about the 2016 election, in my opinion. He thinks traditional media should be the arbiter of truth on the internet while referring to conservative thought as disinformation. I do not recommend this book. I wish I could get my money back.
116 reviews · 70 followers
January 12, 2020
I wish the narrative had been more interesting, even though the facts presented about the use of computational propaganda for political purposes are.
Donna Lake · 2 reviews
January 18, 2022
I found myself getting more and more aggravated towards the end of the book. The author succeeds - for about 3/4 of the book - at concealing his neoliberal bent. The last 1/4 might as well have been a love letter to facebook itself - one big "I can fix him" to corporations like facebook that think they can do business 'the right way'. Spoiler alert: something something master's tools.

The author's idea of preventing political misuse is to make sure bots and other digital assets are *registered and regulated* - instead of simply repealing Citizens United?

He comments on how many of the US's representatives are over 50, but doesn't question why that is.

He mentions how marginalized groups are targets for online malice and manipulation, but conspicuously fails to mention political dissidents - a major target of authoritarian regimes the world over. The insider threat, in short, is largely left unaddressed.

The book gives some valid insight, sandwiched between neoliberal navel-gazing.
Ren · 532 reviews · 7 followers
November 24, 2019
This book was received as an ARC from the publisher in exchange for an honest review.

A thought-provoking piece about where our society is going, and one that should sound a warning both for those in the tech industry and those who use any form of social media. We all need to be more conscious of what we're reading and of how easy advertising is, because no one knows what's out there that could be influencing your opinions. Woolley's deep dissection of how humans view AI and how VR affects our senses is such an informative and intriguing viewpoint, and I'd recommend this to anyone wanting to learn more about cyber propaganda.
Lauren · 514 reviews · 45 followers
February 18, 2020
If you, like me, are somewhat fascinated - but more scared - of the epidemic of "fake news" and other technologies that are emerging to "break the truth," this is a great read. Woolley, whose litany of experience in studying futures, technology, ethics, hacking, and more is just too long to list, presents a deeply researched, thorough, and honest look at what the landscape around all of this scary tech is today, what its potential is for the future, and how we can stop it. It's really helpful to have someone diagnose exactly what the problem is and what our options are - Woolley actually leaves me feeling like there are options and solutions to this seemingly impossible problem.

Woolley covers everything he sees as a threat to the truth - focusing mostly on the potential to game and hack elections, public opinion, and democracy as we know it. He details social media bots, black hat hacking, machine learning, deep learning, artificial intelligence, deepfake videos, unfaked slanted/biased videos, and much more; he gives us an idea of the tech that's currently available (including what's available but not commonly used because it's too expensive, like unleashing an army of AI bots) and what's on the edge of becoming available, and what's pretty far out in the future. Moreover, he presents all of this in a way that's understandable for the complete tech layperson, providing definitions and explanations everywhere along the way.

More importantly, Woolley outlines how social media has changed the game here. We've heard a lot of these problems before - that social media giants were never built with ethics in mind, that simple, basic bots run rampant and artificially hype up posts that serve a certain narrative, that algorithms prioritize posts with more traffic regardless of if they're true or not, and that the industry is pushing towards self-regulation to solve these issues. The book covers exactly how hard it is to root out fake news (or as Woolley calls it, "computational propaganda") and why self-regulation is impossible.

In the "Conclusion" chapter, he presents concrete action items for addressing the problem, in the form of what governments, companies, and individuals can begin to do. This entails first acknowledging that fake news (actually fake news, not when Trump calls CNN "fake news") is a threat, passing FEC and FCC reform to mandate transparency, passing new antitrust acts (can you believe the most recent ones on the books are more than a century old???), and devising both external and internal regulatory systems. Woolley addresses every facet that I can think of in terms of proposing a solution, including that employees tasked with monitoring and reviewing fake news should be full-time employees of the tech companies themselves and not hired by a third-party contracting company like Cognizant, that automatic detection systems should be assessed for racial bias, and that nonprofits hired to fact-check (e.g., YouTube and Wikimedia) should come with a contract that tech companies cannot interfere in their decision-making and should be compensated fairly.

Overall, this is a fantastic read for anyone interested in educating themselves on the epidemic of fake news and what to do about it - which should be everyone.
Wyndy KnoxCarr · 114 reviews · 1 follower
March 2, 2020
Or NOT "break" it as in "damage," but "break" it as in "breaking news?" Truth, Freedom, Tech and Global PTS? Former visiting Tech Research Fellow at Cal’s CITRIS Samuel Woolley’s The Reality Game: How the Next Wave of Technology Will Break the Truth makes us think. Think about truth, distortions, and the global and personal ways the “free” (monetized) internet, commercial, political and social media and increasingly “privatized” institutions and angry individuals affect our minds, bodies and actions. Not get frozen in fear or sliced and diced in confusion; but broken open into a more humane and democratic way of experiencing people, places and minds that can make us more intelligent, empathetic and humane.
According to Woolley, there are “always people behind tech(nology)” with their own biases and algorithms often created for “scale” and profit, not human rights and democracy. In the face of “fake news,” ad sales stoking the internet, altered YouTube videos, bots impersonating humans and a “Wild West” lack of government regulation and corporate responsibility, “computational propaganda” has been his research and writing focus, along with re-popularizing the critical thinking, media and digital literacy the present and future require.
A marvelous and readable in-depth survey of the quest for truth and justice online, Reality Game balances privacy and security with PR firms out to make a buck spreading disinformation among lone rangers, agents provocateurs, savage and/or unsuspecting hordes of international groups and our own (FB and Google) utopian “dictatorships” who think they can dodge responsibility for the very real harms that result from their malaise. AI (Artificial Intelligence), robotic voices, traumatized Silicon Valley hate and porn weeders and troll-responders working on contract are all here.
“… even the most advanced machines and software systems are still tools. They are only as useful as the people, and motives, behind their creation and implementation,” Woolley says. We’re on the brink of cyber-disaster (again) politically in 2020 and have been in general for at least 14 years, but we have the tools and personal will available to us if we will USE them to keep on turning the industry, government and our fellow surfers around, holding them and ourselves accountable and making sure the changes we vote for and speak out about happen. Knowing many good, bad and ugly histories of the Arab Spring, Occupy Movement and other successes as well as journalist stalking, trolling, doxxing, election tampering from Bolsonaro's Brazil back to Florida's "dimpled chads," and forward into "ethical design" by more than "all white and all male" artificial intelligence engineers, Woolley's a young voice of wisdom and fact-finding as well as hope.
As upbeat an analysis as The Reality Game is, we still know the tech industry, corporate and authoritarian governments and random individuals will be hard challenges to take on. But we will not go backwards. We will go forwards together.
My Internet hero, Doug Engelbart, one of the inventors and designers of the mouse, internet networking and other "computational" tools for utopian and humanistic ends; is often overshadowed by millionaire digital and military industrialists’ mass marketing and use of "personal" isolated, product-centered models; but he went for goals like Woolley's as early as 1968. He
"reasoned that because the complexity of the world's problems was increasing, and because any effort to improve the world would require the coordination of groups of people, the most effective way to solve problems was to augment human intelligence and develop ways of building collective intelligence. He believed that the computer, which was at the time thought of only as a tool for automation, would be an essential tool for future knowledge workers to solve such problems. He was a committed, vocal proponent of the development and use of computers and computer networks to help cope with the world's increasingly urgent and complex problems." (Wikipedia, "Douglas Engelbart," 1 March, 2020)
Are we "augmenting" human intelligence or are we radically narrowing, herding, blunting and dumbing-down for speed and greed? Are we using tech to "cope" with or create more global "urgent and complex problems?" Dr. Samuel C. Woolley's a "qualitative" researcher in a quantitative world of bookselling, grant seeking, digital, academic and political "trolls;" but I believe his heart's in the right place, refusing to let the Truth get "broke" as long as he and fellow utopians stay with the upgrades, underlying motivations and "stay woke" to its more nefarious manipulations and distortions, by "man" and machine, as well as its possibilities...
At the beginning of his book, he quotes Betty Reid Soskin, US National Park Service Ranger at the Rosie the Riveter Park site in Richmond, California: “Every generation I know now has to re-create democracy in its time because democracy will never be fixed. It was not intended to. It’s a participatory form of governance [and] we all have the responsibility to form that more perfect union.”
And I add our personal responsibility, too:
“…you better free your mind instead.” John Lennon, “Revolution.”
Hannah Arendt said that Adolf Eichmann's main crime against (himself and his own) humanity was that "he didn't THINK." He “only obeyed orders” and acted like a machine, automaton or object. Going all the way back to Plato, Arendt said, one THINKS, particularly about political actions that will affect others, before taking action, if we are to be human, to be humane.
Before we "like," "delete," “friend,” hit SEND, choose a photo, forward a diatribe, design software or hardware, distort videos, fill in our ballots, stay home on election day or bend algorithms, do we THINK? Especially if we are in the 1%, privileged, powerful, wealthy, racially, sexually or religiously dominant class? The Automation Revolution has overtaken the Industrial Revolution -- where do we go now?

“You say you'll change the constitution
Well you know
We all want to change your head
You tell me it's the institution
Well you know
You better free your mind instead…” © John Lennon, “Revolution” (1968).
408 reviews
May 22, 2020
A thoughtful book for sure, and worthy of anyone’s time in the effort to become more media literate. I found it incomplete, but it is certainly important and a part of the information puzzle. The overarching theme that we should hold the mega social media tech companies more accountable is noteworthy. It’s time for the Googles and Facebooks to be held more responsible for the content they benefit greatly from.
Siobhan · Author of 3 books · 85 followers
January 23, 2020
The Reality Game, subtitled 'How the next wave of technology will break the truth and what we can do about it', is a book about what Woolley and his colleagues have termed 'computational propaganda', or what other people might think of as online disinformation or 'fake news'. Rather than focusing mostly on what has already happened, Woolley tries to sketch out where different areas—deep fakes, VR, machine learning—might go in the future, and then suggests ways that this might be combatted in the fight against this 'computational propaganda' that threatens our sense of what is true and real. The book looks at tools, but also the human side: what people do and the choices they make that affect how these tools and techniques have been developed and are used.

This is a book about technology that is both pessimistic and trying to offer up possible responses, and is not confined to the usual main points of similar books which focus on AI, big data, and the spread of disinformation on Facebook, but also looks at faked videos and how virtualising humans—through voice or otherwise—may be the future of this kind of falsified content. Woolley does occasionally fall back to imagining (or recounting, at one point) plots for Black Mirror, but he uses this as a way to engage the audience with the occasionally dry topic of technological threat to politics. What makes The Reality Game engaging is both the way that Woolley talks with people involved with many different areas of the issues and tech covered, meaning that the book goes beyond his research, and the way that the concepts are clearly laid out and the buzzwords explained, furthering one of Woolley's later points that digital literacy is crucial for the future of reality.

There are a lot of books out now about 'fake news' or various threats that technology poses to politics, democracy, and reality, but The Reality Game is a good one to go to for a clear summary of some of the existing tech, some speculation about where it could go, and discussion of what might need to be done to improve the prospects for 'reality'. The message isn't that all technology is bad, but that humans have been utilising it in bad ways, and that something needs to change.
Stephen · 89 reviews
February 8, 2023
The spread of misinformation is probably the most concerning issue in the world to me right now, because it seems to be informing almost everything else. Naturally, a book that promised to explain how the next wave of technology will break the truth seemed like something that would at least help me to see through some imminent methods of propagandistic deception. The problem here is that the examples all seem to fall into either the category of old news, or that of science fiction scenario. While this book is two years old (and must be forgiven for any lagging discourse about Twitter), it makes constant reference to the fact that trolling chatbots and unscrupulous social media company practices have been well-known phenomena for years.

As an exception, the computer generated imagery discussion is of great relevance, refreshingly focusing on the impact of those images generated not as art, but as news, which is a bit lost in the discourse I’ve read. Yet, when the author discusses it, he’ll often veer into territory that is more speculative than necessary, imagining a distant future where people are able to impersonate others through deepfake-perfected VR, rather than simply adhering to a discussion of how AI imagery as it exists (or existed in 2020) can be used to mislead. The author makes an effort to distinguish between conspiracy theory and critical thinking, but it doesn’t keep the tone from sounding closer to the former at many points. It ultimately reads as a bit of a manifesto by someone who has read a lot about methods of distorting the truth, but whose knowledge of technology does not wholly derive from the real world.
Ahmi Mo · 15 reviews
November 16, 2020
Ask people about AI and many of them will immediately start telling you about a human-like machine that will destroy us all using fancy guns. That's great and all, but remember that while everyone was worried about cancer and AIDS, COVID-19, a flu-like virus, came out of nowhere to do what cancer and AIDS could only dream of.

Before worrying about that human-like machine, how about worrying about the bots that invaded social media years ago and started to alter reality as they please? Or about the advancements that made it possible to fake videos and make them look real? And what about AR and VR, which enable the creation of a fictional world that corporations can use to push you, without you realizing it, to think one way over another?

These are not things to come; they're already here. Many corporations and governments are using them to manipulate the way people think. Even worse, anonymous groups can very easily, and cheaply, create bots to target specific people and alter their behavior.

Imagine an unannounced war where an anonymous group targets a country with false information about the danger of vaccination and why people should stop taking it. Slowly but surely, this country will be destroyed from the inside out without a bullet being fired. In fact, you don't have to imagine, because this is already happening.

Very interesting book that opens your eyes to the danger of the small things. This time, however, these small things are the tiny apps that you may, very easily, ignore.
Synthia Salomon · 836 reviews · 16 followers
September 27, 2020
This book is about "computational propaganda". I read it because I spend a great deal of time teaching propaganda to middle-grade students. My fourth-quarter curriculum is dedicated to Animal Farm. I was hoping to take away more modern understandings of how people use digital tools to manipulate politically. New media undermines faith in institutions that old media helped create. As an English teacher and reader, I find myself fighting against fake news on multiple platforms. In my personal life, people lend credibility to any news I share with them. That in itself is a big responsibility. Fake news appeals to people's desire to think critically, and social media doesn't monitor speech much, which lends itself to a lot of conspiracies. The US has the first known case of bots interfering in the political process. While we may recognize that bots are dumb, that doesn't mean they aren't effective.

Essential Questions:
Should social media companies self regulate?
How should they handle the misuse of their tools?
How can machine learning help us deal with digital disinformation?
What else can we do to restore the democratic process? How do we fight back?
How does one tell the difference between real and fake news?
What does it mean to be a "digital native"?
How does social media addiction affect reading and internal dialogue?

I care about the future of democracy and I find it beneficial (satisfying even) when tweets are flagged for inaccuracies and other harmful content.
Claire · 619 reviews · 6 followers
February 8, 2020
I was not the right audience for this book. The solutions were frequently what tech companies should do to either fix or prevent privacy and human rights abuses, not something I have control over. At best I can use the information to lobby legislators on needed regulation. My needs would have been met with an essay that surveyed the types of intrusion more briefly and had one statement of solutions instead of repeating similar sounding solutions at each chapter.

Readers more technically inclined would no doubt react differently.

Reader savvy was briefly mentioned as one safeguard, but very little space was given to developing it.

While generalizations such as the idea that companies should design "with human rights at the forefront of [their] minds," or that they should factor in human behavior, resonate with me, I fear that they won't overcome the profit motive.
Profile Image for Julia.
4 reviews
April 18, 2022
I got a bit over halfway through the book and had to abandon my reading out of boredom. The writing is centered on teasing the reader: it offers interesting introductions and anecdotes to engage your imagination, then leaves us without any relevant, non-obvious information at all. It is a great starting point on the subject, highlights different points of view, and explains some aspects very well, but in the end the explanations are confusing and involve a lot of speculation instead of concrete information. I usually highlight important information in the non-fiction books I read, and I found it hard to find anything to highlight in this one, as it is very redundant and repetitive. Do I regret reading it? No. Did I enjoy it? Not as much as I'd like to.
Profile Image for Quinns Pheh.
418 reviews12 followers
September 27, 2020
The days of gatekeeping institutions like newspapers and broadcasters are long gone in many parts of the world, all in the hope that people would pick their own news sources, which could encourage civic participation. However, that did not happen; instead, it ate away at trust and created fertile conditions for digital disinformation. This issue has also been exacerbated by poor social media management and companies' lack of will to tackle bots. In the author's view, machine learning used alongside human fact-checking would have a chance of helping us win this battle.
Profile Image for Thijs.
Author 2 books4 followers
February 17, 2021
A book that is mostly about computational/AI propaganda. Woolley gives an interesting and extensive overview of different 'computational propaganda' cases in the present and near future. I would have liked to see some in-depth explanation of why these manipulation games work, and a more nuanced ending on how to deal with computational propaganda instead of (and now I'm simplifying Woolley's pov a little bit) dissolving 'the companies that produced the tools used to game the truth'. Overall it's well written, with good examples, and a nice read.
Profile Image for Lee Barry.
Author 19 books14 followers
January 22, 2022
One could watch the author's book talk to get the gist of the book. I think it is quite good, although already a bit dated. One talk was given in January 2020, and look what happened a year later. I like the idea of "Public Interest Technologists," which we all have to be to some degree. Gone are the days when we could just use the internet and not think about how democracy could be adversely affected. That's hard when you thought the internet was only for promoting democracy.

FYI: Ethical OS
1 review
September 17, 2020
The author skillfully avoids almost any mention of China in a book dedicated to fake news, misinformation, and disinformation. For that I must commend China on its great achievement in disinformation: when people automatically, willingly, deliberately ignore any bad news about China, even when their work requires them to scrutinize the Chinese government's deeds, its work is well done. Bravo to the Chinese government, and one star to an author who kowtows to it.
Profile Image for Jenn Adams.
1,524 reviews5 followers
July 13, 2022

I found this to be both interesting and informative. There was so much that I had only surface-level knowledge of (e.g., deepfakes), but this book delved into some of the technical details as well as the dangers and societal concerns. Honestly, I wonder how much has happened around this topic even just in the two years since it was published.
Profile Image for Loree.
136 reviews
February 3, 2021
This is a book that took me quite a bit of time to get through, but I felt the topic was so important to our democracy that I persevered. I'm glad I did.
If you have ever shared anything on social media you should read this book.
Profile Image for Josh Berthume.
99 reviews3 followers
May 16, 2020
If you are at all interested in the connection between disinformation and technology, this book is for you.
Profile Image for Nestor Ramos.
174 reviews
June 16, 2022
Excellent book about social media, outstanding ideas on how reality is manipulated by those in power, and a lot of examples and tips. An eye-opening book.
