Summary of “How Technology Hijacks People’s Minds — from a Magician and Google’s Design Ethicist”
I recently read an article in The Atlantic by Tristan Harris, a former Product Manager at Google who studies the ethics of how the design of technology influences people’s psychology and behavior. The piece, titled “The Binge Breaker,” covers similar ground to his earlier piece, “How Technology Hijacks People’s Minds — from a Magician and Google’s Design Ethicist.”
Harris is also a leader in the “Time Well Spent” movement, which favors “technology designed to enhance our humanity over additional screen time. Instead of a ‘time spent’ economy where apps and websites compete for how much time they take from people’s lives, Time Well Spent hopes to re-structure design so apps and websites compete to help us live by our values and spend time well.”
Harris’ basic thesis is that “our collective tech addiction” results more from the technology itself than from “personal failings, like weak willpower.” Our smart phones, tablets, and computers seize our brains and control us; hence Harris’ call for a “Hippocratic oath” that implores software designers not to exploit “psychological vulnerabilities.” Harris and his colleague Joe Edelman compare “the tech industry to Big Tobacco before the link between cigarettes and cancer was established: keen to give customers more of what they want, yet simultaneously inflicting collateral damage on their lives.”
[I think this analogy is weak. The tobacco industry made a well-documented effort to addict people to their products while there is no compelling evidence of something similarly sinister regarding software companies. In addition, tobacco will literally kill you while obsession with your smart phone will not.]
The social-scientific grounding for Harris’ insights dates to his time as a member of the Stanford Persuasive Technology Lab. “Run by the experimental psychologist B. J. Fogg, the lab has earned a cult-like following among entrepreneurs hoping to master Fogg’s principles of ‘behavior design’—a euphemism for what sometimes amounts to building software that nudges us toward the habits a company seeks to instill.” As a result:
Harris learned that the most-successful sites and apps hook us by tapping into deep-seated human needs … [and] He came to conceive of them as ‘hijacking techniques’—the digital version of pumping sugar, salt, and fat into junk food in order to induce bingeing … McDonald’s hooks us by appealing to our bodies’ craving for certain flavors; Facebook, Instagram, and Twitter hook us by delivering what psychologists call “variable rewards.” Messages, photos, and “likes” appear on no set schedule, so we check for them compulsively, never sure when we’ll receive that dopamine-activating prize.
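The article contains no code, but a purely illustrative Python sketch may help make the “variable rewards” point concrete: because the payoff arrives on no fixed schedule, any given check of an app might be the one that pays off. Everything below (the hypothetical simulate_checks helper, the 30 percent reward probability, the number of checks) is a made-up assumption for illustration, not anything from Harris, Fogg, or the Atlantic piece.

```python
# Illustrative sketch only: a toy simulation of a variable reward schedule.
# New messages and "likes" arrive unpredictably, so each check of the app
# *might* deliver a reward. All numbers here are invented for illustration.
import random

random.seed(42)  # fixed seed so the example is reproducible

def simulate_checks(num_checks: int, reward_probability: float) -> list[bool]:
    """Return, for each check of the app, whether a 'reward' was waiting."""
    return [random.random() < reward_probability for _ in range(num_checks)]

checks = simulate_checks(num_checks=20, reward_probability=0.3)
for i, rewarded in enumerate(checks, start=1):
    outcome = "new message/like!" if rewarded else "nothing new"
    print(f"check {i:2d}: {outcome}")

# Because the payoff is unpredictable, there is no 'safe' moment to stop
# checking; the schedule itself invites compulsive re-checking.
print(f"\nrewarded on {sum(checks)} of {len(checks)} checks")
```

Run it with different seeds and the rewards land on different checks each time; that unpredictability, rather than the size of any single reward, is what the passage above says keeps us checking compulsively.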
Harris worked on Gmail’s Inbox app and is “quick to note that while he was there, it was never an explicit goal to increase time spent on Gmail.” In fact,
His team dedicated months to fine-tuning the aesthetics of the Gmail app with the aim of building a more ‘delightful’ email experience. But to him that missed the bigger picture: Instead of trying to improve email, why not ask how email could improve our lives—or, for that matter, whether each design decision was making our lives worse?
[In a minimal way, improving email improves our lives. If the program works, allows us to communicate with our friends, etc., then it makes our lives a bit better. Of course email doesn’t directly help us obtain beauty, truth, knowledge, joy, or world peace, but then that seems to be a lot to ask of an email program. Of course I agree that an ethical company shouldn’t try to addict people to their products if those products are indeed harmful.]
Harris makes a great point when he notes that “Never before in history have the decisions of a handful of designers (mostly men, white, living in SF, aged 25–35) working at 3 companies”—Google, Apple, and Facebook—“had so much impact on how millions of people around the world spend their attention … We should feel an enormous responsibility to get this right.”
Google responded to Harris’ concerns. He met with CEO Larry Page, the company organized internal Q&A sessions, and he began to research ways Google could adopt ethical design. But he says he came up against “inertia”: product road maps had to be followed, and fixing tools that were obviously broken took precedence over systematically rethinking services. Despite these problems, he justified his decision to work there with the logic that since Google controls three interfaces through which millions engage with technology—Gmail, Android, and Chrome—the company was the “first line of defense.” Getting Google to rethink those products, as he’d attempted to do, had the potential to transform our online experience.
Harris’ hope is that:
Rather than dismantling the entire attention economy … companies will … create a healthier alternative to the current diet of tech junk food … As with organic vegetables, it’s possible that the first generation of Time Well Spent software might be available at a premium price, to make up for lost advertising dollars. “Would you pay $7 a month for a version of Facebook that was built entirely to empower you to live your life?” Harris says. “I think a lot of people would pay for that.” Like splurging on grass-fed beef, paying for services that are available for free and disconnecting for days (even hours) at a time are luxuries that few but the reasonably well-off can afford. I asked Harris whether this risked stratifying tech consumption, such that the privileged escape the mental hijacking and everyone else remains subjected to it. “It creates a new inequality. It does,” Harris admitted. But he countered that if his movement gains steam, broader change could occur, much in the way Walmart now stocks organic produce. Even Harris admits that when your phone flashes with a new text message, it is often hard to resist and hard to feel that you are in control of the process.
Reflections – I begin with a disclaimer. I know almost nothing about software product design. But I did teach philosophical issues in computer science for many years in the computer science department at UT-Austin, and I have an abiding interest in the philosophy of technology. So let me say a few things.
All technologies have benefits and costs. Air conditioning makes summer endurable, but it can release HFCs into the atmosphere. Splitting the atom unleashes great power that can be used for good or ill. Robots put people out of work. On balance, I find email a great thing, and in general I think technology, which is applied science, has been the primary force for improving the lives of human beings. So my prejudice is to withhold critique of new technology. Nonetheless, the purpose of technology should be to improve our lives, not make us miserable. Obviously.
As for young people considering careers: if you want to make a difference in the world, I can think of no better place to work than one of the world’s high-tech companies. They have the wealth, power, and influence to actually change the world if they see fit. Whether they do so is up to the people who work there. So if you want to change the world, join in the battle.