Tim Harford's Blog, page 83
March 17, 2017
The Problem With Facts
1.
Just before Christmas 1953, the bosses of America’s leading tobacco companies met John Hill, the founder and chief executive of one of America’s leading public relations firms, Hill & Knowlton. Despite the impressive surroundings — the Plaza Hotel, overlooking Central Park in New York — the mood was one of crisis.
Scientists were publishing solid evidence of a link between smoking and cancer. From the viewpoint of Big Tobacco, more worrying was that the world’s most read publication, The Reader’s Digest, had already reported on this evidence in a 1952 article, “Cancer by the Carton”. The journalist Alistair Cooke, writing in 1954, predicted that the publication of the next big scientific study into smoking and cancer might finish off the industry.
It did not. PR guru John Hill had a plan — and the plan, with hindsight, proved tremendously effective. Despite the fact that its product was addictive and deadly, the tobacco industry was able, for decades, to fend off regulation, litigation and the idea in the minds of many smokers that its products were fatal.
So successful was Big Tobacco in postponing that day of reckoning that their tactics have been widely imitated ever since. They have also inspired a thriving corner of academia exploring how the trick was achieved. In 1995, Robert Proctor, a historian at Stanford University who has studied the tobacco case closely, coined the word “agnotology”. This is the study of how ignorance is deliberately produced; the entire field was started by Proctor’s observation of the tobacco industry. The facts about smoking — indisputable facts, from unquestionable sources — did not carry the day. The indisputable facts were disputed. The unquestionable sources were questioned. Facts, it turns out, are important, but facts are not enough to win this kind of argument.
2.
Agnotology has never been more important. “We live in a golden age of ignorance,” says Proctor today. “And Trump and Brexit are part of that.”
In the UK’s EU referendum, the Leave side pushed the false claim that the UK sent £350m a week to the EU. It is hard to think of a previous example in modern western politics of a campaign leading with a transparent untruth, maintaining it when refuted by independent experts, and going on to triumph anyway. That performance was soon to be eclipsed by Donald Trump, who offered wave upon shameless wave of demonstrable falsehood, only to be rewarded with the presidency. The Oxford Dictionaries declared “post-truth” the word of 2016. Facts just didn’t seem to matter any more.
The instinctive reaction from those of us who still care about the truth — journalists, academics and many ordinary citizens — has been to double down on the facts. Fact-checking organisations, such as Full Fact in the UK and PolitiFact in the US, evaluate prominent claims by politicians and journalists. I should confess a personal bias: I have served as a fact checker myself on the BBC radio programme More or Less, and I often rely on fact-checking websites. They judge what’s true rather than faithfully reporting both sides as a traditional journalist would. Public, transparent fact checking has become such a feature of today’s political reporting that it’s easy to forget it’s barely a decade old.
Mainstream journalists, too, are starting to embrace the idea that lies or errors should be prominently identified. Consider a story on the NPR website about Donald Trump’s speech to the CIA in January: “He falsely denied that he had ever criticised the agency, falsely inflated the crowd size at his inauguration on Friday . . . ” It’s a bracing departure from the norms of American journalism, but then President Trump has been a bracing departure from the norms of American politics.
Facebook has also drafted in the fact checkers, announcing a crackdown on the “fake news” stories that had become prominent on the network after the election. Facebook now allows users to report hoaxes. The site will send questionable headlines to independent fact checkers, flag discredited stories as “disputed”, and perhaps downgrade them in the algorithm that decides what each user sees when visiting the site.
We need some agreement about facts or the situation is hopeless. And yet: will this sudden focus on facts actually lead to a more informed electorate, better decisions, a renewed respect for the truth? The history of tobacco suggests not. The link between cigarettes and cancer was supported by the world’s leading medical scientists and, in 1964, the US surgeon general himself. The story was covered by well-trained journalists committed to the values of objectivity. Yet the tobacco lobbyists ran rings round them.
In the 1950s and 1960s, journalists had an excuse for their stumbles: the tobacco industry’s tactics were clever, complex and new. First, the industry appeared to engage, promising high-quality research into the issue. The public were assured that the best people were on the case. The second stage was to complicate the question and sow doubt: lung cancer might have any number of causes, after all. And wasn’t lung cancer, not cigarettes, what really mattered? Stage three was to undermine serious research and expertise. Autopsy reports would be dismissed as anecdotal, epidemiological work as merely statistical, and animal studies as irrelevant. Finally came normalisation: the industry would point out that the tobacco-cancer story was stale news. Couldn’t journalists find something new and interesting to say?
Such tactics are now well documented — and researchers have carefully examined the psychological tendencies they exploited. So we should be able to spot their re-emergence on the political battlefield.
“It’s as if the president’s team were using the tobacco industry’s playbook,” says Jon Christensen, a journalist turned professor at the University of California, Los Angeles, who wrote a notable study in 2008 of the way the tobacco industry tugged on the strings of journalistic tradition.
One infamous internal memo from the Brown & Williamson tobacco company, typed up in the summer of 1969, sets out the thinking very clearly: “Doubt is our product.” Why? Because doubt “is the best means of competing with the ‘body of fact’ that exists in the mind of the general public. It is also the means of establishing a controversy.” Big Tobacco’s mantra: keep the controversy alive.
Doubt is usually not hard to produce, and facts alone aren’t enough to dispel it. We should have learnt this lesson already; now we’re going to have to learn it all over again.
3.
Tempting as it is to fight lies with facts, there are three problems with that strategy. The first is that a simple untruth can beat off a complicated set of facts simply by being easier to understand and remember. When doubt prevails, people will often end up believing whatever sticks in the mind. In 1994, psychologists Hollyn Johnson and Colleen Seifert conducted an experiment in which people read an account of an explosive warehouse fire. The account mentioned petrol cans and paint but later explained that petrol and paint hadn’t been present at the scene after all. The experimental subjects, tested on their comprehension, recalled that paint wasn’t actually there. But when asked to explain facts about the fire (“why so much smoke?”), they would mention the paint. Lacking an alternative explanation, they fell back on a claim they had already acknowledged was wrong. Once we’ve heard an untrue claim, we can’t simply unhear it.
This should warn us not to let lie-and-rebuttal take over the news cycle. Several studies have shown that repeating a false claim, even in the context of debunking that claim, can make it stick. The myth-busting seems to work but then our memories fade and we remember only the myth. The myth, after all, was the thing that kept being repeated. In trying to dispel the falsehood, the endless rebuttals simply make the enchantment stronger.
With this in mind, consider the Leave campaign’s infamous bus-mounted claim: “We send the EU £350m a week.” Simple. Memorable. False. But how to rebut it? A typical effort from The Guardian newspaper was headlined, “Why Vote Leave’s £350m weekly EU cost claim is wrong”, repeating the claim before devoting hundreds of words to gnarly details and the dictionary definition of the word “send”. This sort of fact-checking article is invaluable to a fellow journalist who needs the issues set out and hyperlinked. But for an ordinary voter, the likely message would be: “You can’t trust politicians but we do seem to send a lot of money to the EU.” Doubt suited the Leave campaign just fine.
This is an inbuilt vulnerability of the fact-checking trade. Fact checkers are right to be particular, to cover all the details and to show their working out. But that’s why the fact-checking job can only be a part of ensuring that the truth is heard.
Andrew Lilico, a thoughtful proponent of leaving the EU, told me during the campaign that he wished the bus had displayed a more defensible figure, such as £240m. But Lilico now acknowledges that the false claim was the more effective one. “In cynical campaigning terms, the use of the £350m figure was perfect,” he says. “It created a trap that Remain campaigners kept insisting on jumping into again and again and again.”
Quite so. But not just Remain campaigners — fact-checking journalists too, myself included. The false claim was vastly more powerful than a true one would have been, not because it was bigger, but because everybody kept talking about it.
Proctor, the tobacco industry historian turned agnotologist, warns of a similar effect in the US: “Fact checkers can become Trump’s poodle, running around like an errand boy checking someone else’s facts. If all your time is [spent] checking someone else’s facts, then what are you doing?”
4.
There’s a second reason why facts don’t seem to have the traction that one might hope. Facts can be boring. The world is full of things to pay attention to, from reality TV to your argumentative children, from a friend’s Instagram to a tax bill. Why bother with anything so tedious as facts?
Last year, three researchers — Seth Flaxman, Sharad Goel and Justin Rao — published a study of how people read news online. The study was, on the face of it, an inquiry into the polarisation of news sources. The researchers began with data from 1.2 million internet users but ended up examining only 50,000. Why? Because only 4 per cent of the sample read enough serious news to be worth including in such a study. (The hurdle was 10 articles and two opinion pieces over three months.) Many commentators worry that we’re segregating ourselves in ideological bubbles, exposed only to the views of those who think the same way we do. There’s something in that concern. But for 96 per cent of these web surfers the bubble that mattered wasn’t liberal or conservative, it was: “Don’t bother with the news.”
In the war of ideas, boredom and distraction are powerful weapons. A recent study of Chinese propaganda examined the tactics of the paid pro-government hacks (known as the “50 cent army”, after the amount contributors were alleged to be paid per post) who put comments on social media. The researchers, Gary King, Jennifer Pan and Margaret Roberts, conclude: “Almost none of the Chinese government’s 50c party posts engage in debate or argument of any kind . . . they seem to avoid controversial issues entirely . . . the strategic objective of the regime is to distract and redirect public attention.”
Trump, a reality TV star, knows the value of an entertaining distraction: simply pick a fight with Megyn Kelly, The New York Times or even Arnold Schwarzenegger. Isn’t that more eye-catching than a discussion of healthcare reform?
The tobacco industry also understood this point, although it took a more highbrow approach to generating distractions. “Do you know about Stanley Prusiner?” asks Proctor.
Prusiner is a neurologist. In 1972, he was a young researcher who’d just encountered a patient suffering from Creutzfeldt-Jakob disease. It was a dreadful degenerative condition then thought to be caused by a slow-acting virus. After many years of study, Prusiner concluded that the disease was caused instead, unprecedentedly, by a kind of rogue protein. The idea seemed absurd to most experts at the time, and Prusiner’s career began to founder. Promotions and research grants dried up. But Prusiner received a source of private-sector funding that enabled him to continue his work. He was eventually vindicated in the most spectacular way possible: with a Nobel Prize in Medicine in 1997. In his autobiographical essay on the Nobel Prize website, Prusiner thanked his private-sector benefactors for their “crucial” support: RJ Reynolds, maker of Camel cigarettes.
The tobacco industry was a generous source of research funds, and Prusiner wasn’t the only scientist to receive both tobacco funding and a Nobel Prize. Proctor reckons at least 10 Nobel laureates are in that position. To be clear, this wasn’t an attempt at bribery. In Proctor’s view, it was far more subtle. “The tobacco industry was the leading funder of research into genetics, viruses, immunology, air pollution,” says Proctor. Almost anything, in short, except tobacco. “It was a massive ‘distraction research’ project.” The funding helped position Big Tobacco as a public-spirited industry but Proctor considers its main purpose was to produce interesting new speculative science. Creutzfeldt-Jakob disease may be rare, but it was exciting news. Smoking-related diseases such as lung cancer and heart disease aren’t news at all.
The endgame of these distractions is that matters of vital importance become too boring to bother reporting. Proctor describes it as “the opposite of terrorism: trivialism”. Terrorism provokes a huge media reaction; smoking does not. Yet, according to the US Centers for Disease Control, smoking kills 480,000 Americans a year. This is more than 50 deaths an hour. Terrorists have rarely managed to kill that many Americans in an entire year. But the terrorists succeed in grabbing the headlines; the trivialists succeed in avoiding them.
Tobacco industry lobbyists became well-practised at persuading the media to withhold or downplay stories about the dangers of cigarettes. “That record is scratched,” they’d say. Hadn’t we heard such things before?
Experienced tobacco watchers now worry that Trump may achieve the same effect. In the end, will people simply start to yawn at the spectacle? Jon Christensen, at UCLA, says: “I think it’s the most frightening prospect.”
On the other hand, says Christensen, there is one saving grace. It is almost impossible for the US president not to be news. The tobacco lobby, like the Chinese government, proved highly adept at pointing the spotlight elsewhere. There are reasons to believe that will be difficult for Trump.
5.
There’s a final problem with trying to persuade people by giving them facts: the truth can feel threatening, and threatening people tends to backfire. “People respond in the opposite direction,” says Jason Reifler, a political scientist at Exeter University. This “backfire effect” is now the focus of several researchers, including Reifler and his colleague Brendan Nyhan of Dartmouth.
In one study, conducted in 2011, Nyhan, Reifler and others ran a randomised trial in which parents with young children were either shown or not shown scientific information debunking an imaginary but widely feared link between vaccines and autism. At first glance, the facts were persuasive: parents who saw the myth-busting science were less likely to believe that the vaccine could cause autism. But parents who were already wary of vaccines were actually less likely to say they’d vaccinate their children after being exposed to the facts — despite apparently believing those facts.
What’s going on? “People accept the corrective information but then resist in other ways,” says Reifler. A person who feels anxious about vaccination will subconsciously push back by summoning to mind all the other reasons why they feel vaccination is a bad idea. The fear of autism might recede, but all the other fears are stronger than before.
It’s easy to see how this might play out in a political campaign. Say you’re worried that the UK will soon be swamped by Turkish immigrants because a Brexit campaigner has told you (falsely) that Turkey will soon join the EU. A fact checker can explain that no Turkish entry is likely in the foreseeable future. Reifler’s research suggests that you’ll accept the narrow fact that Turkey is not about to join the EU. But you’ll also summon to mind all sorts of other anxieties: immigration, loss of control, the proximity of Turkey to Syria’s war and to Isis, terrorism and so on. The original lie has been disproved, yet its seductive magic lingers.
The problem here is that while we like to think of ourselves as rational beings, our rationality didn’t just evolve to solve practical problems, such as building an elephant trap, but to navigate social situations. We need to keep others on our side. Practical reasoning is often less about figuring out what’s true, and more about staying in the right tribe.
An early indicator of how tribal our logic can be was a study conducted in 1954 by Albert Hastorf, a psychologist at Dartmouth, and Hadley Cantril, his counterpart at Princeton. Hastorf and Cantril screened footage of a game of American football between the two college teams. It had been a rough game. One quarterback had suffered a broken leg. Hastorf and Cantril asked their students to tot up the fouls and assess their severity. The Dartmouth students tended to overlook Dartmouth fouls but were quick to pick up on the sins of the Princeton players. The Princeton students had the opposite inclination. The researchers concluded that, despite being shown the same footage, the Dartmouth and Princeton students didn’t really see the same events. Each student had his own perception, closely shaped by his tribal loyalties. The title of the research paper was “They Saw a Game”.
A more recent study revisited the same idea in the context of political tribes. The researchers showed students footage of a demonstration and spun a yarn about what it was about. Some students were told it was a protest by gay-rights protesters outside an army recruitment office against the military’s (then) policy of “don’t ask, don’t tell”. Others were told that it was an anti-abortion protest in front of an abortion clinic.
Despite looking at exactly the same footage, the experimental subjects had sharply different views of what was happening — views that were shaped by their political loyalties. Liberal students were relaxed about the behaviour of people they thought were gay-rights protesters but worried about what the pro-life protesters were doing; conservative students took the opposite view. As with “They Saw a Game”, this disagreement was not about the general principles but about specifics: did the protesters scream at bystanders? Did they block access to the building? We see what we want to see — and we reject the facts that threaten our sense of who we are.
When we reach the conclusion that we want to reach, we’re engaging in “motivated reasoning”. Motivated reasoning was a powerful ally of the tobacco industry. If you’re addicted to a product, and many scientists tell you it’s deadly, but the tobacco lobby tells you that more research is needed, what would you like to believe? Christensen’s study of the tobacco public relations campaign revealed that the industry often got a sympathetic hearing in the press because many journalists were smokers. These journalists desperately wanted to believe their habit was benign, making them ideal messengers for the industry.
Even in a debate polluted by motivated reasoning, one might expect that facts will help. Not necessarily: when we hear facts that challenge us, we selectively amplify what suits us, ignore what does not, and reinterpret whatever we can. More facts mean more grist to the motivated reasoning mill. The French dramatist Molière once wrote: “A learned fool is more foolish than an ignorant one.” Modern social science agrees.
On a politically charged issue such as climate change, it feels as though providing accurate information about the science should bring people together. The opposite is true, says Dan Kahan, a law and psychology professor at Yale and one of the researchers on the study into perceptions of a political protest. Kahan writes: “Groups with opposing values often become more polarised, not less, when exposed to scientifically sound information.”
When people are seeking the truth, facts help. But when people are selectively reasoning about their political identity, the facts can backfire.
6.
All this adds up to a depressing picture for those of us who aren’t ready to live in a post-truth world. Facts, it seems, are toothless. Trying to refute a bold, memorable lie with a fiddly set of facts can often serve to reinforce the myth. Important truths are often stale and dull, and it is easy to manufacture new, more engaging claims. And giving people more facts can backfire, as those facts provoke a defensive reaction in someone who badly wants to stick to their existing world view. “This is dark stuff,” says Reifler. “We’re in a pretty scary and dark time.”
Is there an answer? Perhaps there is.
We know that scientific literacy can actually widen the gap between different political tribes on issues such as climate change — that is, well-informed liberals and well-informed conservatives are further apart in their views than liberals and conservatives who know little about the science. But a new research paper from Dan Kahan, Asheley Landrum, Katie Carpenter, Laura Helft and Kathleen Hall Jamieson explores the role not of scientific literacy but of scientific curiosity.
The researchers measured scientific curiosity by asking their experimental subjects a variety of questions about their hobbies and interests. The subjects were offered a choice of websites to read for a comprehension test. Some went for ESPN, some for Yahoo Finance, but those who chose Science were demonstrating scientific curiosity. Scientifically curious people were also happier to watch science documentaries than celebrity gossip TV shows. As one might expect, there’s a correlation between scientific knowledge and scientific curiosity, but the two measures are distinct.
What Kahan and his colleagues found, to their surprise, was that while politically motivated reasoning trumps scientific knowledge, “politically motivated reasoning . . . appears to be negated by science curiosity”. Scientifically literate people, remember, were more likely to be polarised in their answers to politically charged scientific questions. But scientifically curious people were not. Curiosity brought people together in a way that mere facts did not. The researchers muse that curious people have an extra reason to seek out the facts: “To experience the pleasure of contemplating surprising insights into how the world works.”
So how can we encourage curiosity? It’s hard to make banking reform or the reversibility of Article 50 more engaging than football, Game of Thrones or baking cakes. But it does seem to be what’s called for. “We need to bring people into the story, into the human narratives of science, to show people how science works,” says Christensen.
We journalists and policy wonks can’t force anyone to pay attention to the facts. We have to find a way to make people want to seek them out. Curiosity is the seed from which sensible democratic decisions can grow. It seems to be one of the only cures for politically motivated reasoning but it’s also, into the bargain, the cure for a society where most people just don’t pay attention to the news because they find it boring or confusing.
What we need is a Carl Sagan or David Attenborough of social science — somebody who can create a sense of wonder and fascination not just at the structure of the solar system or struggles of life in a tropical rainforest, but at the workings of our own civilisation: health, migration, finance, education and diplomacy.
One candidate would have been Swedish doctor and statistician Hans Rosling, who died in February. He reached an astonishingly wide audience with what were, at their heart, simply presentations of official data from the likes of the World Bank.
He characterised his task as telling people the facts — “to describe the world”. But the facts need a champion. Facts rarely stand up for themselves — they need someone to make us care about them, to make us curious. That’s what Rosling did. And faced with the apocalyptic possibility of a world where the facts don’t matter, that is the example we must follow.
Written for and first published in the Financial Times.
My book “Messy” is available online in the US and UK or in good bookshops everywhere.
March 15, 2017
Society and the profiteroles paradox
Ken is in a restaurant, pondering his choice of dessert. Ice cream, profiteroles or a cheese plate? He’s about to request a scoop of ice cream when the waiter informs him that the profiteroles are off the menu. “I see,” says Ken. “Well, I’ll have the cheese, please.”
Ken’s behaviour is odd enough to be a piece of surrealist comedy. But what seems ludicrous from an individual is easy to imagine in an election. Think of George W Bush as ice cream, Ralph Nader as profiteroles and Al Gore as the cheese plate. If Nader had not been on the menu in the 2000 US presidential election, then Gore would have been president instead of Bush. Since Nader himself was never a serious contender, it seems odd that his presence changed the result. But we’ve grown used to this sort of thing in politics.
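To see the paradox in miniature, here is a toy sketch in Python. The ballot counts are invented for illustration and are not the real 2000 vote totals; the point is only that simple plurality voting can switch between ice cream and cheese depending on whether profiteroles are on the menu.

```python
# A toy sketch of the "profiteroles paradox" (a failure of independence of
# irrelevant alternatives) under simple plurality voting. Ballot counts invented.
from collections import Counter

def plurality_winner(ballots):
    """Each ballot is a ranked list of candidates; count first preferences only."""
    tally = Counter(ballot[0] for ballot in ballots)
    return tally.most_common(1)[0][0]

def drop_candidate(ballots, candidate):
    """Remove a candidate from every ballot, keeping the remaining order."""
    return [[c for c in ballot if c != candidate] for ballot in ballots]

# Invented electorate: 47 voters prefer Bush, 46 prefer Gore,
# and 7 prefer Nader with Gore as their second choice.
ballots = (
    [["Bush", "Gore", "Nader"]] * 47
    + [["Gore", "Bush", "Nader"]] * 46
    + [["Nader", "Gore", "Bush"]] * 7
)

print(plurality_winner(ballots))                           # Bush
print(plurality_winner(drop_candidate(ballots, "Nader")))  # Gore
```

Removing an option that was never going to win changes the outcome between the two options that could have won, which is exactly the behaviour a sensible way of aggregating preferences ought to rule out.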
Still, we might ask: is there a way to assemble individual preferences into social preferences without generating surreal outcomes? That was the first of many big problems studied by the great economist Kenneth Arrow, who died last month at the age of 95. His answer: no.
To understand Arrow’s answer, imagine a society in which every individual has a ranking expressing their preferences over every possible outcome. Let’s say that we can read minds, so we know what each person’s ranking is. All we need is some system for combining those individual rankings into a social ranking that tells us what society as a whole prefers.
Arrow named this putative system a constitution. What properties would we like our constitution to have? It should be comprehensive, giving us an answer no matter what the individual rankings might be. And it shouldn’t fall prey to the profiteroles paradox: if society prefers ice cream to cheese, then whether profiteroles are available or not shouldn’t change that fact.
We want the constitution to reflect people’s preferences in common-sense ways. If everyone expresses the same preference, for example, the constitution should reflect that. And we shouldn’t have a dictator — an individual who is a kind of swing voter, where the constitution reflects only her preferences and ignores everyone else’s.
None of these properties seem particularly stringent — which makes Arrow’s discovery all the more striking. Arrow’s “impossibility theorem” proves that no constitution can satisfy all of them. Any comprehensive constitution will suffer the profiteroles paradox, or arbitrarily ignore individual preferences, or will simply install a dictator. How can this be?
Let me now attempt the nerdiest move in more than 11 years of writing this column. Since there’s no idea in economics more beautiful than Arrow’s impossibility theorem, I’m going to try to outline a proof for you — very sketchily, but you may get the idea.
Imagine that our constitution must deliver a choice between ice cream, profiteroles and cheese. Step one in the proof is to note that there must be a group whose preferences determine whether society as a whole prefers cheese or ice cream — if only because the constitution must respect a unanimous view on this. Call this group the Cheese Group. The Cheese Group might include everyone in society, but maybe it’s a smaller group of swing voters.
The next step is to show that the Cheese Group doesn’t merely swing the decision between cheese and ice cream, but also over profiteroles and any other dessert we might add to the menu. We can show this by creating cases where it’s impossible for the Cheese Group to express a preference between cheese and ice cream without profiteroles being caught in the middle. This means that the Cheese Group actually gets to decide about everything, not just cheese and ice cream.
Finally, having established that the Cheese Group is all-powerful, we show that we can make it smaller without destroying its power. Specifically, we can keep dividing it into pairs of sub-groups, and show that at each division either one of the sub-groups is all-powerful, or the other one is.
In short: we prove that if any group of voters gets to decide one thing, that group gets to decide everything, and we prove that any group of decisive voters can be pared down until there’s only one person in it. That person is the dictator. Our perfect constitution is in tatters.
That’s Arrow’s impossibility theorem. But what does it really tell us? One lesson is to abandon the search for a perfect voting system. Another is to question his requirements for a good constitution, and to look for alternatives. For example, we could have a system that allows people to register the strength of their feeling. What about the person who has a mild preference for profiteroles over ice cream but who loathes cheese? In Arrow’s constitution there’s no room for strong or weak desires, only for a ranking of outcomes. Maybe that’s the problem.
Arrow’s impossibility theorem is usually described as being about the flaws in voting systems. But there’s a deeper lesson under its surface. Voting systems are supposed to reveal what societies really want. But can a society really want anything coherent at all? Arrow’s theorem drives a stake through the heart of the very idea. People might have coherent preferences, but societies cannot. We will always find ourselves choosing ice cream, then switching to cheese because the profiteroles are off.
Written for and first published in the Financial Times.
My book “Messy” is available online in the US and UK or in good bookshops everywhere.
March 10, 2017
Undercover Friday 6
I’ve been on the road, but a few recommendations…
Gameshow: Stephen “Freakonomics” Dubner is having a lot of fun with “Tell Me Something I Don’t Know“, a wonderfully nerdy gameshow podcast. I recorded an episode at 6th and I in Washington DC on Monday. Not sure when it will air, but what a wonderful atmosphere. And I got to meet this remarkable lady.
Podcast episode: I loved Sebastian Mallaby on Macro Musings; fascinating detail on the life of Alan Greenspan. Mallaby’s Greenspan biography is The Man Who Knew (US, UK).
Books: I’ve been reading Daniel Dennett’s Intuition Pumps and Other Tools For Thinking (US, UK) (good, although more philosophical and less practical than I was expecting) and, from Gilovich and Ross, The Wisest One In The Room (US, UK) (popular social psychology; easy to read and plenty of stuff I didn’t know). Both recommended.
Long Read: I’m on the cover of the FT Magazine tomorrow with The Problem With Facts, a feature article on post-truth politics and why fact-checking is such a thankless task. Enjoy!
March 7, 2017
Has Facebook ruined the news?
“Our goal is to build the perfect personalised newspaper for every person in the world,” said Facebook’s Mark Zuckerberg in 2014. This newspaper would “show you the stuff that’s going to be most interesting to you”.
To many, that statement explains perfectly why Facebook is such a terrible source of news. A “fake news” story proclaiming that Pope Francis had endorsed Donald Trump was, according to an analysis from BuzzFeed, the single most successful item of news on Facebook in the three months before the US election. If that’s what the site’s algorithms decide is interesting, it’s far from being a “perfect newspaper”.
It’s no wonder that Zuckerberg found himself on the back foot after Trump’s election. Shortly after his victory, Zuckerberg declared: “I think the idea that fake news on Facebook, which is a very small amount of the content, influenced the election in any way . . . is a pretty crazy idea.” His comment was greeted with a scornful response.
I should confess my own biases here. I despise Facebook for all the reasons people usually despise Facebook (privacy, market power, distraction, fake-smile social interactions and the rest). And, as a loyal FT columnist, I need hardly point out that the perfect newspaper is the one you’re reading right now.
But, despite this, I’m going to stand up for Zuckerberg, who recently posted a 5,700-word essay defending social media. What he says in the essay feels like it must be wrong. But the data suggest that he’s right. Fake news can stoke isolated incidents of hatred and violence. But neither fake news nor the algorithmically driven “filter bubble” is a major force in the overall media landscape. Not yet.
“Fake news” is a phrase that has already been debased. A useful definition is that fake news is an entirely fabricated report presenting itself as a news story. This excludes biased reporting, satire and lies from politicians themselves.
At first glance, such hoaxes appear to be ubiquitous on Facebook. The BuzzFeed analysis finds that the five most popular hoax stories were more successful than the five most popular true stories. (This list of true stories includes the New York Post’s “Melania Trump’s Girl-on-Girl Photos From Racy Shoot Revealed”, a reminder that not all mainstream journalism is likely to win a Pulitzer.)
But hoax stories are less significant than this analysis suggests — partly because Facebook is not the main source of news for Americans (that’s still television news), and partly because true reports will generally be covered in some form by dozens of outlets, which will dilute the popularity of any one version. Each hoax, however, is unique. No wonder the most popular hoaxes outperform the most popular true reports.
In January 2017, two economists, Hunt Allcott and Matthew Gentzkow, published research studying exactly how prevalent fake news had been before the election. Their clever method tested people’s recall of fake news, as compared with true news stories and “placebo” stories — fake fake news, invented by the researchers. People didn’t remember many fake news stories, and claimed to remember quite a few placebos. Overall, there just didn’t seem to be enough fake news to swing the election result — unless it was potent stuff indeed, even in small doses.
“The average voter saw one fake news story before the election,” Gentzkow told me. “That number is a very different picture from what you might get from watching the public discussion.”
Of more concern is that Facebook — and its “most interesting to you” algorithm — simply supplies news that panders to each user’s ideological biases. It’s undoubtedly true that we surround ourselves with people who agree with us on social media. But it’s not clear that Facebook’s algorithm is the biggest problem here. Twitter was politically polarised even in the days when it used no algorithm at all. And newspapers have ideological biases too.
One recent study of online news reading was conducted by Seth Flaxman, Sharad Goel and Justin Rao, who had access to browser data from Microsoft, and used it to examine how people consumed news online. They found a mixed picture: social media did seem to push stories that were further from the centre of the political spectrum, but it also exposed people to a greater variety of ideological viewpoints. That makes sense. Reading the same newspaper every day is a filter bubble too.
Gentzkow studied the contrast between online and offline news using data from 2004-2009, working with fellow economist Jesse Shapiro. They found little evidence then that online news consumption was more polarised than traditional media. But things are changing quickly. “My guess is that segregation is noticeably and meaningfully higher than in the past,” Gentzkow says, “but still quite modest.”
This feels like an important moment. Fake news is not prevalent, but it could become so. Filter bubbles are probably no worse than they have been for decades — but that could change rapidly too.
“A lot ultimately hinges on what the motivations of American voters are,” says Gentzkow. “Do people actually care at all about getting the truth and having accurate information?”
He’s hopeful that, deep down, people watch and read the news because they want to learn about the world. But if what voters really want is to be lied to, then Facebook is the least of our problems.
Written for and first published in the Financial Times.
My book “Messy” is available online in the US and UK or in good bookshops everywhere.
March 6, 2017
Oxford Literary Festival event
March 2, 2017
The real answer to the problem of texting while driving
The UK government is — again — cracking down on driving while using a mobile phone. Tougher sanctions and sharper enforcement will no doubt make some difference. But the real risk of driving while impaired — either drunk, or using a phone — is not the risk of losing your licence. It’s the risk of being in a serious accident. That’s not enough to change the behaviour of some people. What will?
A cardinal rule of behaviour change is: make it easy.
A fine example is the idea of the “designated driver”, the person who stays sober and drives his or her inebriated friends home. It’s a clever concept. The designated driver is the hero, “the life of the party”, who makes it possible for everyone else to drink socially. Friends take turns to be the designated driver, tapping into deep habits of reciprocity. And the question, “who’s the designated driver?” reinforces the social norm that drunk-driving just isn’t right.
What’s the equivalent for texting while driving? It’s not immediately obvious. Distracted driving, like drunk-driving, is dangerous. But the parallel is imperfect because the decision-making process is very different. Having some drinks with friends, knowing I must drive later, is one kind of stupidity. Glancing at a phone which pings at me as I drive across town, then impulsively trying to tap out a reply, is a different kind.
Many of us have deeply ingrained habits of checking our phones and responding to their beeps. That’s not an accidental glitch in the interface: our phones are designed to interrupt us. Ad-funded apps need to attract our attention as often as possible. Public safety demands that we “make it easy” to ignore our phones while driving; the phones themselves want the exact opposite.
Most phones have an “airplane mode”, but not an obvious “drive mode”, despite the fact that your phone is vastly more likely to cause an accident in a car than in a plane. That should change. Smartphones should have, as standard, an easily accessible, well-publicised drive mode. Drive modes do exist, and in the US, the National Highway Traffic Safety Administration has been pushing the idea. But they’re not prominent.
Drive-mode phones might automatically read out text messages, automatically reply to such messages with “sorry, I’m driving”, and send incoming calls directly to voice mail — while allowing drivers to play music and use satellite navigation. In short, drive-mode phones would stop pestering us for our attention.
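As a hypothetical sketch of what that might look like in software, here is one way the dispatch rules could be written; the event names and wording are invented, not drawn from any real phone operating system.

```python
# A hypothetical sketch of the rules a "drive mode" might apply, based on the
# behaviour described above; the event names and messages are invented.
def drive_mode_action(event_kind):
    """Map an incoming event to what a drive-mode phone would do with it."""
    rules = {
        "text_message": "read aloud, then auto-reply: 'Sorry, I'm driving'",
        "incoming_call": "send straight to voicemail, no ringing",
        "music": "allow",
        "satnav": "allow",
    }
    # Anything else waits quietly until the journey is over.
    return rules.get(event_kind, "hold the notification until the drive ends")

print(drive_mode_action("text_message"))
print(drive_mode_action("incoming_call"))
```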
But why aren’t drive modes more popular? Perhaps we’re waiting for a clever marketing campaign: the “designated driver” idea managed to get itself into The Cosby Show and Cheers.
But we also have to recognise the perverse incentives at work. Many of us want to be distracted less by our phones — not just while driving, but in meetings, during conversations, at mealtimes and in the bedroom. The phones themselves want something rather different. Distracted driving is an acute symptom of a wider problem: distracted living.
Written for and first published in the Financial Times.
My book “Messy” is available online in the US and UK or in good bookshops everywhere.
March 1, 2017
What raspberry farms can teach us about inequality
Raspberries are a petit-bourgeois crop, while wheat is a proletarian crop — or so says political scientist James C Scott in his remarkable 1998 book Seeing Like a State (UK) (US). That makes it sound as though Scott is musing on matters of taste. In fact, he’s highlighting the link between what we produce, and the political and economic structures that production makes possible. Wheat is a proletarian crop, says Scott, because it works well on industrial farms. Harvesting can be mechanised. Not so easy with raspberries, which are best cared for on a small farm. They are difficult to grow and pick on an industrial scale.
Such distinctions once mattered a great deal. We associate the invention of agriculture with the rise of ancient states but, as Scott points out in a forthcoming book, Against the Grain, much depends on the crop. Wheat is well-suited to supporting state armies and tax inspectors: it is harvested at a predictable time and can be stored — or confiscated. Cassava works differently. It can be left in the ground and dug up when needed. If some distant king wanted to tax the cassava crop, his armies would have had to find the roots and dig them up one by one. Agriculture made strong states possible, but it was always agriculture based on grain. “History records no cassava states,” he writes.
The technologies we use have always affected who gets what, from the invention of the plough to the creation of YouTube. Economists know this but our analytical tools are not well-suited to distinguishing wheat from raspberries or cassava. The brilliance of gross domestic product is the way it manages to measure all economic activity with the same yardstick — but that is also, of course, its weakness. Nevertheless, we try. Many researchers have examined whether countries with rich endowments of mineral resources — oil, copper, diamonds — tend to do better or worse as a result. The balance of opinion is that there’s a “resource curse”. Why?
Sometimes the problem is obvious enough — for example, natural resources sustained a quarter-century of civil war in Angola, where the government could fund itself with oil while the rebels mined and sold diamonds. Sometimes it’s more subtle: a country that exports a valuable commodity will experience a strengthening of its exchange rate. This makes it harder to sustain any sort of industry that isn’t connected to the commodity itself.
Still, we’ve lacked the statistical tools to paint a compelling picture of these issues, important though they seem to be. Now a new research paper from a team at the Massachusetts Institute of Technology tries to explore how the mixture of products a country produces might influence a critical economic outcome: income inequality. The team includes César Hidalgo, author of Why Information Grows (UK) (US), about whose work I’ve written several times. Over the past few years, Hidalgo has been trying to map what he calls “economic complexity”, using statistical techniques from physics rather than economics.
Complexity isn’t straightforward to measure — is a million dollars of reinsurance more or less complex than a million dollars of liquefied natural gas or a million dollars of computer games? Hidalgo’s method looks at a country’s merchandise exports. Sophisticated economies tend to export many different products, including the most complex. Complex products tend to be exported only by a few economies.
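As a rough illustration, here is a toy sketch of that intuition in Python. The export matrix is invented, and the real measure works from revealed comparative advantage in world trade data and a more elaborate calculation, but the core idea is the same: reward countries whose exports are both diverse and rare.

```python
# A toy sketch of the intuition behind economic complexity measures. The 0/1
# export matrix is invented; the real method starts from revealed comparative
# advantage in world trade data and iterates much further than this.
import numpy as np

# Rows are countries, columns are products; 1 means "this country exports it".
M = np.array([
    [1, 1, 1, 1],   # diversified: exports everything, including the rare products
    [1, 1, 0, 0],
    [1, 0, 0, 0],   # exports only the most ubiquitous product
])

diversity = M.sum(axis=1)   # how many products each country exports
ubiquity = M.sum(axis=0)    # how many countries export each product

# First pass: a country looks more sophisticated if the products it exports
# are, on average, exported by few other countries.
avg_ubiquity_of_exports = (M @ ubiquity) / diversity

print(diversity)                 # [4 2 1]
print(avg_ubiquity_of_exports)   # [1.75 2.5 3.0] -- lower means more complex
```

Even this crude first pass separates the diversified economy from the one that exports only a single, common product; the published index does something subtler with the same matrix.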
In previous work, Hidalgo and colleagues have shown that economic complexity is correlated with wealth, but there are some economies that are spectacularly sophisticated but only modestly wealthy (South Korea is one) while other economies are very rich but not especially sophisticated (such as Qatar). This new analysis finds a relationship between inequality and lack of economic complexity.
Holding other things constant, the simplest economies tend to be the most unequal; the more sophisticated ones tend to be more equal. It’s raspberries and wheat all over again. Or, if you prefer, the difference between a business such as oil (which employs a few people at high wages), textile work (which generates lots of jobs, but at low wages) and making precision components (which requires many skilled and well-paid workers). The oil-based economy will tend to be the most unequal, while the precision-engineering economy will tend to be the most equal.
There are exceptions: Australia’s economy is surprisingly simple thanks to a dependence on natural resources, but not especially unequal. Mexico is an outlier in the other direction, with a sophisticated but unequal economy. This research answers some questions and raises others. There’s a large and unsatisfying literature on the relationship between inequality and growth. Are unequal societies dynamic and entrepreneurial or dysfunctional patron-client states? The MIT study suggests that what’s been missing from these questions is a measure of economic complexity.
And what about financial services? They seem both sophisticated and highly unequal — an exception to the rule? Hidalgo’s data are silent on the topic. But Hidalgo himself isn’t persuaded that banking is particularly complex.
“Most countries have financial services,” he tells me. “But few countries know how to design new microprocessors or new medicines.” By that measure, and others, he thinks financial services are cruder than we tend to think. Perhaps. If so, the City of London has more in common with the oilfields of the North Sea than we are inclined to admit.
Written for and first published in the Financial Times.
My book “Messy” is available online in the US and UK or in good bookshops everywhere.
February 23, 2017
Kenneth Arrow, economist, 1921-2017
Kenneth Arrow, who has died aged 95 at his home in Palo Alto, California, on Tuesday was a towering figure in 20th century economics. In 1972, at the age of 51, he won one of the first Nobel memorial prizes in economics, the youngest winner then or since. Yet even a Nobel Prize understates Arrow’s contribution to economic theory. A brilliant mathematician, he ranged widely, breaking ground in areas that have subsequently yielded many further Nobels, including risk, innovation, health economics and economic growth.
Two achievements are particularly celebrated: his impossibility theorem about the paradoxes of social choice, and his welfare theorems, which formalised the most famous intuition in economics — Adam Smith’s idea that a market produces social good from individual selfishness.
Born in New York on August 23 1921 to immigrant parents, Kenneth Joseph Arrow had his formative experiences shaped by poverty — his businessman father “lost everything” in the Depression. But Arrow flourished at school and received an MA in mathematics from Columbia University at the age of 19. He interrupted his graduate studies to serve as a wartime weather researcher and US Army Air Corps captain.
His doctorate, published in 1951, made up for lost time. The thesis explored the problem of turning individuals’ preferences into a picture of what a society as a whole preferred. Scholars had long known that voting systems could produce perverse results but Arrow went further, showing that the very idea of “what society prefers” was incoherent. He set out four reasonable-sounding requirements for building social preferences from individual ones — and proved that no system could satisfy all four of those requirements.
Arrow then turned to the familiar problem of supply and demand. In a well-functioning market for a single good such as apples, there is an efficient outcome: a price at which the number of apples supplied equals the number of apples demanded.
But that was just one market. It was influenced by the market for pears, for agricultural land, for farm labourers and even for bank loans. Each market pushed and pulled others. What happened when one considered the interactions between every market in the world?
Working at times with the French economist Gérard Debreu, Arrow demonstrated that the intuitions from a single market could be generalised. First, there was a general equilibrium at which prices equalised supply and demand in every market at once. Second, this equilibrium was efficient. And third, any efficient allocation of resources could be reached by redistributing wealth and then letting competitive markets take over. Markets could still fail, but Arrow’s analysis explained the circumstances under which they would succeed.
Alongside such deep theoretical work, Arrow made many contributions to practical economic problems from insurance to healthcare to climate change. On occasion he took an active role on politically contentious issues, and was co-author of the 1997 “Economists’ Statement on Climate Change”, which warned of the dangers of global warming.
He was also noted for his love of gossip and his quick wit. One story tells of Arrow and a colleague waiting for an elevator to take them down, while several passed them going up. The colleague wondered aloud why everyone was going up. The immediate reply: “You’re confusing supply with demand.”
Arrow spent most of his career at Stanford University, apart from an 11-year spell at Harvard. He married Selma Schweitzer in 1947; she died in 2015. He is survived by his sons David and Andrew. He is also survived by his sister Anita, who married Robert Summers, a noted economist and brother of Nobel laureate Paul Samuelson. Her son, Arrow’s nephew, is the former US Treasury secretary Lawrence Summers.
Written for and first published in the Financial Times.
February 22, 2017
How to catch a cheat
Should the rules and targets we set up be precise, clear and sophisticated? Or should they be vague, ambiguous and crude? I used to think that the answer was obvious — who would favour ambiguity over clarity? Now I am not so sure.
Ponder the scandal that engulfed Volkswagen in late 2015, when it emerged that the company had been cheating on US emissions tests. What made such cheating possible was the fact that the tests were absurdly predictable — a series of pre-determined manoeuvres on a treadmill. VW’s vehicles, kitted out with sensors as all modern cars are, were programmed to recognise the choreography of a laboratory test and switch to special testing mode — one that made the engine sluggish and thirsty, but that filtered out pollutants.
The trick was revealed when a non-profit group strapped emissions monitors to VW cars and drove them from San Diego to Seattle. In some ways, that’s a crude test: outside the laboratory, no two journeys can be compared precisely. But the cruder test was also the test that revealed the duplicity.
The VW case seems like a strange one-off. It isn’t. Consider the “stress tests” applied by regulators to large banks. These stress tests are disaster scenarios in which a bank calculates what would happen in particular gloomy situations. But, in 2014, US regulators started to notice that banks had made very specific, narrow bets designed to pay off gloriously in specific stress-test scenarios. There was no commercial logic to these bets — but they certainly made it easier to pass the stress test. VW all over again — with the difference that what the banks were doing was apparently perfectly legal.
If tests and targets can fail because they are too predictable, they can also fail because they are too narrow. A few years ago, UK ambulance services were set a target to respond to life-threatening situations within eight minutes of receiving an emergency call. Managers soon realised that they could hit the target more easily if they replaced a two-person ambulance with an independent pair of paramedics on bikes. And many responses were written down as seven minutes and 59 seconds, but few as eight minutes and one second — suspiciously timely work.
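That pattern suggests one simple way to catch the cheat: look at how recorded times bunch up around the target. Here is a minimal sketch with invented response times; a genuinely smooth process would not pile up in the ten seconds just below the eight-minute mark.

```python
# A minimal sketch (with invented response times, in seconds) of how to spot
# bunching just below a target: count responses recorded in the ten seconds
# under the eight-minute mark and compare with the ten seconds above it.
def bunching_counts(times_in_seconds, target=480, window=10):
    just_under = sum(1 for t in times_in_seconds if target - window <= t < target)
    just_over = sum(1 for t in times_in_seconds if target <= t < target + window)
    return just_under, just_over

recorded = [479, 478, 479, 477, 479, 481, 479, 478, 500, 479, 476, 482]  # invented
print(bunching_counts(recorded))  # (9, 2): a smooth process wouldn't look like this
```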
Perhaps we’d be better off handing over the problem to computers. Armed with a large data set, the computer can figure out who deserves to be rewarded or punished. This is a fashionable idea. As Cathy O’Neil describes in her recent book, Weapons of Math Destruction (UK) (US), such data-driven algorithms are being used to identify which prisoners receive parole and which teachers are sacked for incompetence.
These algorithms aren’t transparent — they’re black boxes, immune from direct scrutiny. The advantage of that is that they can be harder to outwit. But that does not necessarily mean they work well. Consider the accuracy of the recommendations that a website such as Amazon serves up. Sometimes these suggestions are pretty good, but not always. At the moment, Amazon is recommending that I buy a brand of baby bottle cleanser. I’ve no idea why, since all my children are of school age.
A teacher-firing algorithm might look at student test scores at the beginning and end of each school year. If the scores stagnate, the teacher is presumed to be responsible. It’s easy to see how such algorithms can backfire. Partly, the data are noisy. In a data set of 300,000, analysts can pinpoint patterns with great confidence. But with a class of 30, a bit of bad luck can cost a teacher his or her job. And perhaps it isn’t bad luck at all: if the previous year’s teacher somehow managed to fix the test results (it happens), then the new teacher will inherit an impossible benchmark from which to improve.
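To make the small-sample problem concrete, here is an illustrative simulation; the effect size and noise level are invented, and every class in it is taught by an identical, average teacher, so any differences in measured results are pure luck.

```python
# An illustrative simulation (all numbers invented) of why test-score gains are
# a noisy signal for a class of 30: every "class" below is taught by an
# identical, average teacher, yet their measured results vary widely.
import random

random.seed(1)

def observed_gain(class_size, true_effect=5.0, student_noise=15.0):
    """Average measured score gain for one class taught by an average teacher."""
    gains = [true_effect + random.gauss(0, student_noise) for _ in range(class_size)]
    return sum(gains) / class_size

small_classes = [observed_gain(30) for _ in range(1000)]   # 1,000 classes of 30
huge_sample = observed_gain(300_000)                       # one giant data set

print(f"big-sample estimate: {huge_sample:.1f}")           # very close to 5.0
print(f"class-of-30 estimates range from {min(small_classes):.1f} "
      f"to {max(small_classes):.1f}")
# Some classes of 30 look far better or worse than the true effect of 5.0,
# through luck alone.
```

On the big sample the estimate lands almost exactly on the truth; across the classes of 30, identical teachers can look like stars or failures.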
Just like humans, algorithms aren’t perfect. Amazon’s “you might want to buy bottle cleanser” is not a serious error. “You’re fired” might be, which means we need some kind of oversight or appeal process if imperfect algorithms are to make consequential decisions.
Even if an algorithm flawlessly linked a teacher’s actions to the students’ test scores, we should still use it with caution. We rely on teachers to do many things for the students in their class, not just boost their test scores. Rewarding teachers too tightly for test scores encourages them to neglect everything we value but cannot measure.
The economists Oliver Hart and Bengt Holmström have been exploring this sort of territory for decades, and were awarded the 2016 Nobel Memorial Prize in Economics for their pains. But, all too often, politicians, regulators and managers ignore well-established lessons.
In fairness, there often are no simple answers. In the case of VW, transparency was the enemy: regulators should have been vaguer about the emissions test to prevent cheating. But in the case of teachers, more transparency rather than less would help to uncover problems in the teacher evaluation algorithm.
Sometimes algorithms are too simplistic, but on occasions simple rules can work brilliantly. The psychologist Gerd Gigerenzer has assembled a large collection of rules of thumb that perform very well in predicting anything from avalanches to heart attacks. The truth is that the world can be a messy place. When our response is a tidy structure of targets and checkboxes, the problems really begin.
Written for and first published in the Financial Times.
My book “Messy” is available online in the US and UK or in good bookshops everywhere.
February 16, 2017
What makes the perfect office?
In 1923, the father of modern architecture, Le Corbusier, was commissioned by a French industrialist to design some homes for workers in his factory near Bordeaux. Le Corbusier duly delivered brightly hued concrete blocks of pure modernism. The humble factory workers did not take to Le Corbusier’s visionary geometry. They added rustic shutters, pitched roofs, and picket-fenced gardens. And they decorated the gardens in the least modernist way imaginable: with gnomes.
Companies no longer hire star architects to design housing for an industrial workforce. The architects are instead put to work producing the most magazine-shoot-worthy office spaces. A pioneer was the uber-cool advertising agency, Chiat-Day, which in 1993 hired the playful Italian architect Gaetano Pesce to create a New York space for them (hot-lips mural, luminous floor, spring-loaded chairs). Their Los Angeles office (four-storey binoculars, brainstorming pods commandeered from fairground rides) was designed by Frank Gehry, whom Chiat-Day’s boss, Jay Chiat, had spotted before Gehry created the Guggenheim Bilbao and became the most famous architect on the planet.
Jay Chiat believed that design was for the professionals. Give workers control over their own space and they would simply clutter up Frank Gehry’s vision, so Jay Chiat decreed that his employees be given tiny lockers for “their dog pictures, or whatever”.
Now everyone is hiring the high priests of architecture. Google has asked Thomas Heatherwick, creator of the 2012 Olympic torch, to create a new Googleplex. Apple’s new headquarters will be a gigantic glass donut over a mile around, designed by Norman Foster and partners.
The most famous corporate architect was not an architect at all: the late Steve Jobs, the boss of Apple, owned much of the film studio Pixar and stamped his taste all over Pixar’s headquarters. Jobs pored over the finest details, choosing an Arkansas steel mill that produced steel of the perfect hue (bolted, not welded).
Jobs believed that a building could shape the way people interacted with each other, and hit upon the notion that Pixar would have just a single pair of washrooms, just off the main lobby. Every time nature called, there was only one place for the entire company to go, and serendipitous new connections would be made.
But what if all these efforts are basically repeating Le Corbusier’s error? What if the ideal office isn’t the coolest or the most aesthetically visionary? What if the ideal office is the one, dog pictures and gnomes and all, that workers make their own?
In 2010, two psychologists conducted an experiment to test that idea. Alex Haslam and Craig Knight set up simple office spaces where they asked experimental subjects to spend an hour doing simple administrative tasks. Haslam and Knight wanted to understand what sort of office space made people productive and happy, and they tested four different layouts.
Two of the layouts were familiar. One was stripped down – bare desk, swivel chair, pencil, paper, nothing else. Most participants found it rather oppressive. “You couldn’t relax in it,” said one. The other layout was softened with pot plants and tasteful close-up photographs of flowers, faintly reminiscent of Georgia O’Keeffe paintings. Workers got more and better work done there, and enjoyed themselves more.
The next two layouts produced dramatically different outcomes – and yet, photographs of the spaces would offer few clues as to why. They used the same basic elements and the same botanical decorations. But the appearance wasn’t what mattered; what mattered was who got to decide.
In the third and fourth layouts, workers were given the plants and pictures and invited to use them to decorate the space – or not – before they started work. But in the fourth, the experimenter came in after the subject had finished setting everything out to her satisfaction, and then rearranged it all. The office space itself was not much different, but the difference in productivity and job satisfaction was dramatic.
When workers were empowered to design their own space, they had fun and worked hard and accurately, producing 30 per cent more work than in the minimalist office and 15 per cent more than in the decorated office. When workers were deliberately disempowered, their work suffered and of course, they hated it. “I wanted to hit you,” one participant later admitted to an experimenter.
Haslam and Knight have confirmed what other researchers have long suspected – that lack of control over one’s physical environment is stressful and distracting. But this finding stands in stark contrast to the views of those who see office design as too important to be left to the people who work in offices.
At least Le Corbusier had a vision, but many office spaces are ruled instead by an aesthetic that is mean and petty. The Wall Street Journal reported on Kyocera’s clipboard-wielding “inspectors” not only enforcing a clear-desk policy, but pulling open drawers and cabinets, photographing messy contents and demanding improvements. The Australian Financial Review published an 11-page clean-desk manual leaked from the mining giant BHP Billiton; apparently copper and coal cannot be mined if office workers do not respect the limit of one A5 picture frame on each desk. (The frame may display a family photo or an award certificate, but not both.) Haslam and Knight told of a Sydney-based bank that changed the layout of its IT department 36 times in four years at the whim of senior management.
It is unclear why any of this top-down design is thought desirable. Official explanations are often empty or circular: that clean desks are more professional, or look tidier. In some cases, streamlined practices from the production line have been copied mindlessly into general office spaces, where they serve no purpose. Whatever the reason, it is folly. It can be satisfying to straighten up all the pens on your desk; but to order an underling to straighten their own pens is sociopathic.
When the likes of Steve Jobs or Frank Gehry are in charge, we can at least expect a workplace that will look beautiful. But that does not make it functional. Truly creative spaces aren’t constantly being made over for photoshoots in glossy business magazines. Just ask veterans of M.I.T., many of whom will name as their favourite and most creative space a building that didn’t even have a proper name – a building designed in an afternoon and built to last just a couple of years. Building 20 was 200,000 square feet of plywood, cinderblock and asbestos, a squat, dusty firetrap originally designed to accommodate the wartime radar research effort, but which eked out an existence as M.I.T.’s junk-filled attic until 1998.
Building 20 was an unbelievably fertile mess. The successes started with the wartime RadLab, which produced nine Nobel prizes and the radar systems that won the second world war. But the outpouring continued for more than half a century. The first commercial atomic clock; one of the earliest particle accelerators; Harold Edgerton’s iconic high-speed photographs of a bullet passing through an apple – all sprang from Building 20. So did computer hacking and the first arcade video game, Spacewar. So did the pioneering technology companies DEC, BBN, and Bose. Cognitive science was revolutionised in Building 20 by the researcher Jerry Lettvin, while Noam Chomsky did the same for linguistics.
All this happened in the cheapest, nastiest space that M.I.T. could offer. But that was no coincidence. Building 20 was where the university put odd projects, student hobbyists and anything else that didn’t seem to matter – and throwing them together produced new collaborations.
And Building 20’s ugliness was functional. The water pipes and cabling were exposed, running across the ceilings in brackets. Researchers thought nothing of tapping into them for their experimental needs – or, for that matter, of knocking down a wall. When the atomic clock was being developed, the team removed two floors to accommodate it. This was the result not of design but of neglect. In the words of Stewart Brand, author of How Buildings Learn, “nobody cares what you do in there.”
And that was all Building 20’s residents wanted: to be left alone to create, to make whatever mess they wanted to make. When, inevitably, M.I.T. finally replaced Building 20 with a $300m structure designed by Frank Gehry himself, its former residents held a memorial wake. The new building might have been cutting-edge architecture, but one unhappy resident summed up the problem perfectly: “I didn’t ask for it.”
Of course nobody cares what the people who actually do the work might want or need. Chief executives exult in bold architectural statements, and universities find it easier to raise money for new buildings than for research. And so the grand buildings continue to be built, especially by the most profitable companies and the most prestigious seats of learning.
But we’re often guilty of getting cause and effect backwards here, believing that great architecture underpins the success of great universities, or that Google flourishes because of the vibrancy of the helter skelters and ping pong tables in the Googleplex. A moment’s reflection reminds us that the innovation comes first, and the stunt architecture comes later.
Remember that for the first two years of Google’s history, there were no headquarters at all. The company’s founders, Sergey Brin and Larry Page, made the breakthroughs at Stanford University. Then came the cliché of a garage in Menlo Park, with desks made from doors set horizontally across sawhorses. The company grew and grew, into one crude space after another – and with engineers always free to hack things about. One knocked down the wall of his office, decided he didn’t like the results, and rebuilt the wall. That made for an ugly space – but a space that worked for the people who worked in it. The spirit of Building 20 lived on at Google.
So how should the ideal office look? In the most prestigious offices at the most prestigious companies, the ones being photographed by Wired, the answer to that question is: this place should look the way the boss’s pet architect wants it to look.
But Building 20, and Google’s early offices, and some of the great creative spaces all over the world, suggest a very different answer to the same question: how this place looks doesn’t matter.
Back in 1977, the editor of Psychology Today, T George Harris, put his finger on the problem:
“The office is a highly personal tool shop, often the home of the soul… this fact may sound simple, but it eludes most architects… They have a mania for uniformity, in space as in furniture, and a horror over how the messy side of human nature clutters up an office landscape that would otherwise be as tidy as a national cemetery.”
Harris scoured the academic literature for any evidence that good design helped people to get things done, or to be happier in the office. He couldn’t find it. “People suddenly put into ‘good design’ did not seem to wake up and love it,” he wrote. What people love, instead, is the ability to control the space in which they work – even if they end up filling the space with kitsch, or dog photos, or even – shudder – garden gnomes.
Strangely enough, it was Steve Jobs himself – notorious as a dictatorial arbiter of good taste – who came to appreciate this at Pixar. When he unveiled his plan for the single pair of serendipity-inducing uber-bathrooms, he faced a rebellion from pregnant women at Pixar who didn’t want to have to make the long walk ten times a day. Jobs was aghast that people didn’t appreciate the importance of his vision. But then he did something unexpected: he backed down and agreed to install extra bathrooms.
Steve Jobs found other ways to encourage serendipitous interactions. More importantly, he showed that even on a question that mattered deeply to him, junior staff were able to defy him. Milled Arkansas steel be damned: it is the autonomy that really matters.
“The animators who work here are free to – no, encouraged to – decorate their work spaces in whatever style they wish,” explains Pixar’s boss Ed Catmull in his book Creativity, Inc. “They spend their days inside pink dollhouses whose ceilings are hung with miniature chandeliers, tiki huts made of real bamboo, and castles whose meticulously painted, fifteen-foot-high Styrofoam turrets appear to be carved from stone.”
I suspect that there may be a garden gnome in there, too.
The ideas in this article are adapted from my book “Messy”, which is available online in the US and UK or in good bookshops everywhere.