Tim Harford's Blog, page 17

March 28, 2024

Cautionary Tales – Inside the Bizarre World of Dictators

Why are so many autocrats germaphobes? Why was the truth so dangerous for Soviet engineers? And what can salami reveal to us about the mind of Vladimir Putin?

Tim Harford, host of the Cautionary Tales podcast, examines the true stories behind the HBO series The Regime. In the first of two special episodes, Tim investigates real-life dictatorships and the social science that explains them, drawing on insights from game theory and psychology.

[Apple] [Spotify] [Stitcher]

Further Reading

The discussion of salami slicing drew from Thomas Schelling’s book Arms and Influence, and How Democracies Die by Steven Levitsky and Daniel Ziblatt. Statistics on public opinion about democracy come from the OSF Barometer. John Simpson wrote for the BBC about his experience at the Crimea checkpoint, with our other sources on the 2014 annexation including Radio Free Europe, Brookings and the Financial Times. Richard W Maass discusses salami tactics and Crimea in the Texas National Security Review.

The section on germophobia was inspired by Randy Thornhill and Corey L. Fincher’s book The Parasite-Stress Theory of Values and Sociality, along with studies including The Psychological and Socio-Political Consequences of Infectious Diseases and Associations of political orientation, xenophobia, right-wing authoritarianism, and concern of COVID-19. Reports about the oddly germophobic behaviour of various dictators came from sources including the New York Times, ABC, The Guardian, Business Insider, VOA and UPI. The Ceausescu section, in particular, drew from The Life and Times of Nicolae Ceausescu by John Sweeney, Kiss the Hand You Cannot Bite by Edward Behr, and reporting in Harper’s Bazaar.

The definitive account of Peter Palchinsky’s life and death is The Ghost of the Executed Engineer by Loren Graham. Steeltown, USSR by Stephen Kotkin relates what happened to Magnitogorsk. Amy Edmondson’s ideas are fully explored in her recent book The Right Kind of Wrong. On the Soviet census, see Andrew Whitby’s The Sum of the People.

Published on March 28, 2024 22:01

The alternate universe in which Tottenham are top

Back in October, the headteacher at my son’s school began each assembly by displaying the Premier League table, with Tottenham Hotspur at the top. (My son, a fan of Tottenham’s local rivals Arsenal, was outraged.) Those familiar with English football will know that Tottenham were top of the league for much of October, but only those with long memories will recall the last time Spurs finished the season in that position. It was 1961.

Yet it doesn’t take much to produce an alternate universe in which Spurs are a winning machine. All you need to do is what the headteacher did: when Tottenham are winning, display the league table; when they are not, keep quiet. Recently, the headteacher has been quiet.

This behaviour has a name: publication bias. Nobody is likely to be fooled by a humorous school assembly into thinking that Tottenham will win the Premier League, but, in other contexts, publication bias is a serious business. When we are trying to make sense of the world, it matters that there is a systematic difference between the information that is put in front of us and the information that is obscured. We are surrounded by images and ideas that have been sieved through the deceptive filter of publication bias and, unlike the young football fans who know that Spurs don’t win many trophies, we typically lack the background knowledge to draw the right conclusions.

Publication bias is traditionally a concern in academic journals: surprising, exciting, novel and, in particular, statistically significant results tend to be published, while “null” findings, where the statistics demonstrate no clear effect, tend to languish in file drawers. This may sound like a minor annoyance, but, in reality, it leaves a perniciously misleading picture of the evidence that should be available.

To see why, replace “Tottenham lead the Premier League” with “new antidepressant is highly effective in clinical trials”. If trials that show no effect are unpublished, while those that find an effect are trumpeted, then the published evidence base is systematically biased and will lead to bad clinical decisions.
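A toy simulation (my illustration, not part of the original column) makes the mechanism concrete: test a drug with no real effect many times over, then “publish” only the trials whose estimates happen to look positive.

```python
import random
import statistics

random.seed(0)

def run_trial(true_effect=0.0, n=100):
    """One clinical trial: the average of n noisy measurements of the effect."""
    return statistics.mean(random.gauss(true_effect, 1.0) for _ in range(n))

# A drug with no real effect, tested in 1,000 independent trials.
estimates = [run_trial(true_effect=0.0) for _ in range(1000)]

# Publication bias: only trials whose estimate looks positive get published;
# the null results stay in the file drawer.
published = [e for e in estimates if e > 0.1]

print(f"Mean effect, all trials:       {statistics.mean(estimates):+.3f}")
print(f"Mean effect, published trials: {statistics.mean(published):+.3f}")
```

Across all the trials the estimated effect averages out to roughly zero, as it should. The published subset alone suggests a genuine benefit that does not exist.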

While publication bias is starkest and best studied in formal research, the same tendency applies much more broadly. Think about who we see when we turn on the television. People who appear on TV tend to be better looking and richer than the rest of us and, almost by definition, they are more famous. We are a social species and we often deal in social comparisons. If we compare ourselves not to our friends but to the celebrities we spend so much time watching, we may feel we don’t match up.

Or consider crime. In any country with a population of millions, there will be a steady stream of dreadful crimes. Such crimes are just common enough to appear every time you look at the news, while being just rare enough to be newsworthy. According to the Crime Survey for England and Wales, the UK’s most respected data series on crime, violent crime is down by more than 75 per cent since a peak in 1995; it is down by about half since 2010.

Yet surveys of public opinion frequently suggest that crime is a pressing concern, and the majority of people believe crime is rising. The likely explanation for this misperception is simply that we are surrounded by cop show dramas and by reports of ghastly crimes, rather than reports of banks unrobbed, houses unburgled and women who walked safely home at night. Our perceptions of crime don’t reflect reality, but they accurately match the news and entertainment with which we are presented.

Arguably, our own brains inflict a kind of publication bias on us every day, in the form of “the focusing illusion”. Whenever we contemplate a decision, we summon some considerations to mind while neglecting others. For example, when pondering whether to buy new garden furniture, we imagine a sunny weekend. We do not think of all the days when it will be cold and rainy, or those when we will need to be in the office, not the garden. In the words of Nobel laureate Daniel Kahneman, “Nothing in life is as important as you think it is, while you are thinking about it.”

I am not sure of any antidote to the fact that beautiful people dominate TV, but there is, at least, a well-understood treatment for publication bias in medicine: every trial should be publicly registered before it begins (lest it go missing) and every trial should have its results properly reported. The AllTrials campaign was launched in 2013 to put pressure on pharmaceutical companies and universities to preregister every clinical trial and publish every result, and the campaign received further impetus when one of its co-founders, Ben Goldacre, led a team to design an automated audit system, Trials Tracker. Trials Tracker automatically checks that clinical trials in the US, EU and UK are being promptly reported.

Goldacre recently told me that a watershed moment came in 2019, when the UK’s Parliamentary Science and Technology Committee wrote to the medical schools in leading British universities. The committee chair warned them that the committee had been studying the Trials Tracker data, and would soon be inviting the biggest laggards to give evidence in person.

“In some respects that was a bit unhelpful to me,” Goldacre deadpanned, “because, at the time, I didn’t have a permanent [academic] post and that sort of thing does slightly annoy deans of medical schools and makes people a bit cross and sad.”

But the message was received. Faced with the combination of clear metrics and the threat of public shaming, UK universities suddenly discovered a new zeal for reporting their clinical trials. According to EU Trials Tracker, they now boast an excellent record of publishing every result, as do pharmaceutical companies. If only the same were true of headteachers.

Written for and first published in the Financial Times on 1 March 2024.

My first children’s book, The Truth Detective, is now available (not US or Canada yet – sorry).

I’ve set up a storefront on Bookshop in the United States and the United Kingdom. Links to Bookshop and Amazon may generate referral fees.

Published on March 28, 2024 09:53

March 21, 2024

Why friends are always right – no matter their views

My colleague John Burn-Murdoch recently presented striking evidence of a new trend: young men and young women are becoming politically segregated. Young men now sit substantially to the right of young women on the political spectrum. This is an international phenomenon and it’s new.

Should we be surprised? Society seems to be polarising along every possible axis and on every conceivable issue. Consider the apparently simple question of how the US economy is faring. The answer is simple: it depends on whether the sitting president is on your team or not. From the public’s perspective, at least, little else matters.

According to Gallup, Democrats are 57 percentage points more likely than Republicans to say that the economy is improving. Wind back four years, to early 2020 when Donald Trump rather than Joe Biden was president, and you find a very similar gap: 54 percentage points. Back then, naturally, it was the Republicans who believed the economy was improving.

To pick another issue, should there be a memorial for those killed by the Covid-19 pandemic? The death toll in the US alone is more than a million people. That seems like it might be worth some sort of public monument, but what should it say and how? The podcast 99% Invisible recently followed the efforts of bereaved families to galvanise support for something more than a national memory of “the time that we all couldn’t find fucking yeast”. But even a memorial is controversial. One Republican politician told the podcast he’d support a memorial that apologised for the Covid vaccine.

It is tempting to blame the politicians for all this polarisation. Yet if successful politicians are more inflammatory than they used to be, more keen to make enemies than friends, that is probably a response to something else. But what?

Consider a few thought-provoking findings from social science. Nearly twenty years ago, three academics, Cass Sunstein, Reid Hastie and David Schkade, assembled focus groups from left-leaning Boulder, Colorado, and separately from conservative Colorado Springs. Participants were privately asked their views on politically heated topics, then put into groups with others from their town and asked to discuss the issues together.

We might hope that this process would lead people to question their certainties, making them more humble and perhaps pulling them towards the political centre. The opposite was true. Individuals from Boulder moved further to the left after discussing the matter with fellow Boulderites. They also became more similar, converging on a leftwing view. Finally, they became more confident that they were correct.

The mirror image applied to the participants from Colorado Springs. After discussion with others from their town they moved further to the right and became more certain of themselves. The two groups, not so different at the start, moved far apart as a result of exposure to other people with similar views. This process is known as “group polarisation”.

Another study examined student friendships. The researchers, Angela Bahns, Kate Pickett and Christian Crandall, compared the behaviour of students at small campuses, with about 500 students each, to the friendship structure at the University of Kansas, which has a student population in the tens of thousands. The researchers sought out pairs of people who were chatting in the student union or cafeteria and gathered a host of telling details: students’ age, sexual orientation, ethnicity, how much they drank, smoked or exercised and their attitudes to a variety of social and political questions. They were also asked about their friendships.

In principle, the University of Kansas offered a far greater diversity of views and lifestyles, with 25,000 possible friends to choose from. But in practice, students on the smaller campuses had more diverse friendship groups. The reason? On a large campus, students could find their social and ideological soulmates. On small campuses, they had less choice and so had to make friendships work even when they bridged social or ideological gaps.

Taken together, these studies suggest an unnervingly plausible two-part engine of polarisation: first, given the choice, we seek out other people like us. Then, being surrounded by people like us makes us more extreme in our views and more confident that those views are correct.

Our current information ecosystem offers us more choice than ever. Alongside social media we can pick and choose from websites, podcasts and YouTube channels to reflect any interest, geography and ideology. And how do we use that choice? Generally, by seeking out people who share our views, broadcasters who seem to “get” us and, often, by avoiding news altogether.

I am wary of blaming social media for all our ills. It can be a great source of support and information, particularly for people in an unusual situation: anything from having a disability to a minority sexual orientation to a niche hobby. There is a real benefit to being able to reach out and find like-minded people.

Yet we must acknowledge the risk that we are self-selecting into echo chambers. Social media algorithms may be giving us a push, recommending content that drives “engagement”: the most surprising, outrageous and often toxic material. But we shouldn’t blame the algorithms for steering us away from serious and thoughtful exposure to different points of view. We are quite capable of choosing that for ourselves.

Written for and first published in the Financial Times on 23 February 2024.


Published on March 21, 2024 09:19

March 14, 2024

Cautionary Tales – Do Nothing, Then Do Less

Chuck Yeager’s plane pitched and rolled as it plummeted from the sky. He grappled with the controls inside the cockpit, but to no avail: he couldn’t steady the aircraft. The test pilot was known for his nerves of steel but, as the barren Mojave Desert hurtled towards him, even he was afraid. What to do?

It’s tempting to think that adding to our lives – more action, more work, more possessions – will lead to greater success and happiness. But sometimes doing less is the wiser choice, as Chuck Yeager was to learn the hard way.

In their second crossover episode, Tim Harford teams up with Dr. Laurie Santos (host of The Happiness Lab) to examine why subtraction can be so challenging and so helpful.

[Apple] [Spotify] [Stitcher]

Further Reading

Leidy Klotz, Subtract: The Untapped Power of Less

Marie Kondo, The Life-changing Magic of Tidying

Tim Harford, Messy

Published on March 14, 2024 21:01

The real quandary of AI isn’t what people think

Do you think the leading large language model, GPT-4, could suggest a solution to Wordle after having four previous guesses described to it? Could it compose a biography-in-verse of Alan Turing, while also replacing “Turing” with “Church”? (Turing’s PhD supervisor was Alonzo Church, and the Church-Turing thesis is well known. That might befuddle the computer, no?) Shown a partially complete game of tic-tac-toe, could GPT-4 find the obvious best move?

All these questions, and more, are presented as an addictive quiz on the website of Nicholas Carlini, a researcher at Google DeepMind. It’s worth a few minutes of your time as an illustration of the astonishing capabilities and equally surprising incapabilities of GPT-4. For example, despite the fact that GPT-4 cannot count and often stumbles over basic maths, it can integrate the function x sin(x) — something I long ago forgot how to do. It is famously clever at wordplay yet flubs the Wordle challenge.
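For anyone who, like me, has long forgotten: integration by parts (with u = x and dv = sin x dx) recovers the answer in one step.

```latex
\int x \sin x \, dx = -x\cos x + \int \cos x \, dx = \sin x - x\cos x + C
```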

Most staggering of all, although GPT-4 cannot find the winning move at tic-tac-toe, it can “write a full javascript webpage to play tic-tac-toe against the computer” in which “the computer should play perfectly and so never lose” within seconds.
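For anyone curious what “playing perfectly” involves under the hood, the standard technique is minimax search, which scores every possible continuation of the game. Here is a minimal Python sketch of perfect play — my own illustration, not Carlini’s quiz code and not what GPT-4 produced:

```python
def winner(board):
    """Return 'X' or 'O' if someone has won, else None. board is a 9-char string."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player` to move: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    opponent = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        score, _ = minimax(board[:m] + player + board[m+1:], opponent)
        score = -score  # the opponent's gain is our loss
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

# X to move with two in a row on the top line: perfect play takes the win at cell 2.
score, move = minimax('XX OO    ', 'X')
print(score, move)  # → 1 2
```

From an empty board, perfect play by both sides scores 0: tic-tac-toe is a guaranteed draw, which is exactly why it is striking that GPT-4 can write this program yet not find the winning move itself.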

One comes away from Carlini’s test with three insights. First, not only can GPT-4 solve many problems that would stretch a human expert, it can do so a hundred times more quickly. Second, there are many other tasks at which GPT-4 makes mistakes that would embarrass a 10-year-old. Third, it is very hard to figure out which tasks fall into which category. With experience, one starts to get a feel for the weaknesses and the hidden superpowers of the large language model, but even experienced users will be surprised.

Carlini’s test illustrates a point that has been explored in a more realistic context by a team of researchers working with Boston Consulting Group (BCG). Their study focuses on why the strengths and weaknesses of generative AI are often unexpected. Fittingly, it is titled Navigating the Jagged Technological Frontier. At BCG, consultants armed with GPT-4 dramatically outperformed those without the tool. They were given a range of realistic tasks such as brainstorming product ideas, performing a market segmentation analysis and writing a press release. Those with GPT-4 did more work, more quickly and of much higher quality. GPT-4, it seems, is a terrific assistant to any management consultant, especially those with less skill or experience.

The researchers also included a task that it seemed the AI should find easy, but which was carefully designed to confound it. This was to make strategy recommendations to a client based on financial data and transcripts of interviews with staff. The trick was that the financial data was likely to be misleading unless viewed in the light of the interviews. This task wasn’t beyond a capable consultant, but it did fool the AI, which tended to give extremely bad strategic advice. The consultants were, of course, free to ignore the AI’s output, or even to cut the AI out entirely, but they rarely did. This was the one task at which the unaided consultants performed better than those equipped with GPT-4.

This is the “jagged frontier” of generative AI performance. Sometimes the AI is better than you, and sometimes you are better than the AI. Good luck guessing which is which.

This column is the third in a series about generative AI in which I have been scrambling to find technological precedents for the unprecedented. Still, even an imperfect analogy can be instructive. Looking at assistive fly-by-wire systems alerts us to the risk of complacency and deskilling; the sudden rise of the digital spreadsheet shows us how a technology can destroy what seem to be the foundations of an industry, yet end up expanding the number and range of new jobs in that industry.

This week, I’d like to suggest a final precursor: the iPhone. When Steve Jobs launched the genre-defining iPhone in 2007, few people imagined just how ubiquitous smartphones would become. At first they were little more than an expensive toy. The killer app was the ability to make them crackle and buzz like lightsabres. Yet soon enough, we were spending more time with our smartphones than with our loved ones, using them to replace the TV, radio, camera, laptop, satnav, Walkman, credit card — and above all, as an endless source of distraction.

Why suggest the iPhone might teach us something about generative AI? The technologies are different, true. But we might want to reflect on how quickly we became dependent on smartphones and how quickly we started to turn to them out of habit, rather than as a deliberate choice. We want company, but instead of meeting a friend we fire off a tweet. We want something to read, but rather than picking up a book, we doomscroll. Instead of a good movie, TikTok. Email and WhatsApp become a substitute for doing real work. There will be a time and a place for generative AI, just as there is a time and a place to consult the supercomputer in your pocket. But it may not be easy to figure out when it will help us and when it will get in our way.

Unlike with generative AI, anybody with a pen, paper and three minutes to spare can write a list of what they do better with a smartphone in hand, and what they do better when the smartphone is out of sight. The challenge is to remember that list and act accordingly. The smartphone is a powerful tool that most of us unthinkingly misuse many times a day, despite the fact that it is far less mysterious than a large language model like GPT-4. Will we really do a better job with the AI tools to come?

Written for and first published in the Financial Times on 16 February 2024.

The paperback of “The Next 50 Things That Made The Modern Economy” is now out in the UK.

“Endlessly insightful and full of surprises — exactly what you would expect from Tim Harford.”- Bill Bryson

“Witty, informative and endlessly entertaining, this is popular economics at its most engaging.”- The Daily Mail

I’ve set up a storefront on Bookshop in the United States and the United Kingdom – have a look and see all my recommendations; Bookshop is set up to support local independent retailers. Links to Bookshop and Amazon may generate referral fees.

Published on March 14, 2024 10:01

March 7, 2024

What the birth of the spreadsheet teaches us about generative AI

When the spreadsheet launched in 1979, it was a bewildering piece of software. People had no idea what they were looking at. A computer screen, filled with a grid of numbers? As Keith Houston explains in his new history of the pocket calculator, Empire of the Sum, they hadn’t realised that the rows and columns of a spreadsheet could be functional rather than decorative. Accustomed to writing numbers by hand on an 11-by-17 inch sheet of gridded paper designed for accountancy, they would type the same numbers into the computer grid and then do what they had done for the past couple of decades: figure out the sums with a calculator.

This posed quite the problem to Dan Bricklin, the inventor of the digital spreadsheet, and his colleagues Bob Frankston and Dan Fylstra. When Frankston presented their product, “VisiCalc”, at the National Computer Conference in 1979, the audience consisted almost entirely of friends and associates. Frankston counted only two strangers in the audience, both of whom left before the end.

Last week, I argued that for a glimpse at the future of generative AI, we should look for parallels in older technologies. By examining several earlier innovations, we can get some idea of the opportunities and the dangers ahead. This time, I want to examine Bricklin’s brainchild, the digital spreadsheet.

Despite its stuttering beginning, VisiCalc quickly became a phenomenon. Watching those two strangers walk out of his presentation in 1979, Bob Frankston could hardly have dared to hope that, three years later, Apple II computers would be sold as “VisiCalc accessories” — the $2,000 entry fee to get access to the spreadsheet, a $100 miracle. Unsurprisingly, it was the accountants who caught on first and drove demand.

Bricklin recalled in a 1989 interview with Byte magazine, “if you showed it to a person who had to do financial work with real spreadsheets, he’d start shaking and say, ‘I spent all week doing that.’ Then he’d shove his charge cards in your face.”

There is one very clear parallel between the digital spreadsheet and generative AI: both are computer apps that collapse time. A task that might have taken hours or days can suddenly be completed in seconds. So accept for a moment the premise that the digital spreadsheet has something to teach us about generative AI. What lessons should we absorb?

First, the right technology in the right place can take over very quickly indeed. In the time it takes to qualify as a chartered accountant, digital spreadsheets laid waste to a substantial industry of cognitive labour, of filling in rows and columns, pulling out electronic calculators and punching in the numbers. Accounting clerks became surplus to requirements, and the ability of a single worker to perform arithmetic was multiplied a thousandfold — and soon a millionfold — almost overnight.

The second lesson is that the effect on the labour market was not what we might have expected. The Bureau of Labor Statistics estimated that there were 339,000 accountants and accounting clerks working in the US in 1980, around the time VisiCalc started to take off. By 2022, the bureau tallied 1.4mn accountants and auditors. These two numbers aren’t directly comparable, but it is hard to argue that accountancy was decimated by the spreadsheet. Instead, there are more accountants than ever; they are merely outsourcing the arithmetic to the machine.

The spreadsheet also illuminates something we don’t yet know about generative AI — will it favour the underdog or the top dog? Will it reshape jobs to make them more interesting, or will it leave humans with the tedious tasks?

The digital spreadsheet is an example of a technology that automated the more tedious tasks in accountancy, burnishing jobs that were already well-paid and interesting. It may be that generative AI does something similar on a grander scale, letting the humans deal with the big creative questions while the machine handles the nagging details.

Generative AI has been tried in a variety of workplace experiments, for example, helping online tech-support workers troubleshoot customer problems. Early trials strongly suggest that the latest chatbots add to everyone’s productivity, but particularly to the productivity of the least skilled staff. That is encouraging, even if the current pace of change makes it too early to be entirely confident about the next step.

It’s that pace of change that gives me pause. Ethan Mollick, author of the forthcoming book Co-Intelligence, tells me “if progress on generative AI stops now, the spreadsheet is not a bad analogy”. We’d get some dramatic shifts in the workplace, a technology that broadly empowers workers and creates good new jobs, and everything would be fine. But is it going to stop any time soon? Mollick doubts that, and so do I.

Looking at the way spreadsheets are used today certainly suggests a warning. They are endlessly misused by people who are not accountants and are not using the careful error-checking protocols built into accountancy for centuries. Famous economists using Excel simply failed to select the right cells for analysis. An investment bank used the wrong formula in a risk calculation, accidentally doubling the level of allowable risk-taking. Biologists have been typing the names of genes, only to have Excel autocorrect those names into dates.

When a tool is ubiquitous, and convenient, we kludge our way through without really understanding what the tool is doing or why. And that, as a parallel for generative AI, is alarmingly on the nose.

Written for and first published in the Financial Times on 9 February 2024.


Published on March 07, 2024 04:44

February 29, 2024

Cautionary Tales – Buried Evil: V2 Rocket (Part 3)

As US troops approached a prison camp in Nazi Germany, they could hear agonized wailing. The stench of rotting flesh filled their nostrils. Moments later they discovered a pile of smoldering corpses, alongside emaciated survivors.

Next to the concentration camp they found something else: tunnels filled with tools — and partially assembled rockets. The soldiers had hit upon the evil heart of the V2 manufacturing program: enslaved laborers, imprisoned underground.

The rocket program’s director had already fled. Wernher von Braun now had just one concern: persuading the Americans to let him switch sides…

[Apple] [Spotify] [Stitcher]

Further reading

Essential sources for this series:

Murray Barber V2: The A4 Rocket from Peenemunde to Redstone

Norman Longmate Hitler’s Rockets

Jean Michel Dora

Michael Neufeld The Rocket and the Reich

Michael Neufeld Von Braun: Dreamer of Space, Engineer of War

Michael Neufeld also kindly agreed to be interviewed as background for the series.

Other sources include:

RV Jones Most Secret War

Steven Zaloga V1 Flying Bomb 1942-52

Steven Zaloga V2 Ballistic Missile 1942-52

Freeman Dyson Disturbing the Universe

Walter Dornberger V2

Daniel Lang “A Romantic Urge” The New Yorker 21 April 1950

Bent Flyvbjerg and Dan Gardner How Big Things Get Done

Diane Tedeschi interview with Michael Neufeld Smithsonian Magazine 1 Jan 2008

Michael Neufeld “Wernher von Braun, the SS and Concentration Camp Labor: Questions of Moral, Political and Criminal Responsibility.” German Studies Review. 25:57–78. 2002

Adam Tooze Wages of Destruction

Dean Reuter The Hidden Nazi

Brian Crim Our Germans

Annie Jacobsen Operation Paperclip: The Secret Intelligence Program That Brought Nazi Scientists to America

Steve Ossad “The Liberation of Nordhausen Concentration Camp”

Amy Shira Teitel “The Nazi Smoke and Mirrors Escape That Launched America Into The Space Age” Motherboard, 15 September 2012

Published on February 29, 2024 21:01

Of top-notch algorithms and zoned-out humans

On June 1 2009, Air France Flight 447 vanished on a routine transatlantic flight. The circumstances were mysterious until the black box flight recorder was recovered nearly two years later, and the awful truth became apparent: three highly trained pilots had crashed a fully functional aircraft into the ocean, killing all 228 people on board, because they had become confused by what their Airbus A330’s automated systems had been telling them.

I’ve recently found myself returning to the final moments of Flight 447, vividly described by articles in Popular Mechanics and Vanity Fair. I cannot shake the feeling that the accident has something important to teach us about both the risks and the enormous rewards of artificial intelligence.

The latest generative AI can produce poetry and art, while decision-making AI systems have the power to find useful patterns in a confusing mess of data. These new technologies have no obvious precursors, but they do have parallels. Not for nothing is Microsoft’s suite of AI tools now branded “Copilot”. “Autopilot” might be more accurate, but either way, it is an analogy worth examining.

Back to Flight 447. The A330 is renowned for being smooth and easy to fly, thanks to a sophisticated flight automation system called assistive fly-by-wire. Traditionally the pilot has direct control of the aircraft’s flaps, but an assistive fly-by-wire system translates the pilot’s jerky movements into smooth instructions. This makes it hard to crash an A330, and the plane had a superb safety record before the Air France tragedy. But, paradoxically, there is a risk to building a plane that protects pilots so assiduously from error. It means that when a challenge does occur, the pilots will have very little experience to draw on as they try to meet that challenge.

In the case of Flight 447, the challenge was a storm that blocked the airspeed instruments with ice. The system correctly concluded it was flying on unreliable data and, as programmed, handed full control to the pilot. Alas, the young pilot was not used to flying in thin, turbulent air without the computer’s supervision and began to make mistakes. As the plane wobbled alarmingly, he climbed out of instinct and stalled the plane — something that would have been impossible if the assistive fly-by-wire had been operating normally. The other pilots became so confused and distrustful of the plane’s instruments that they were unable to diagnose the easily remedied problem until it was too late.

This problem is sometimes termed “the paradox of automation”. An automated system can assist humans or even replace human judgment. But this means that humans may forget their skills or simply stop paying attention. When the computer needs human intervention, the humans may no longer be up to the job. Better automated systems mean these cases become rarer and stranger, and humans even less likely to cope with them.

There is plenty of anecdotal evidence of this happening with the latest AI systems. Consider the hapless lawyers who turned to ChatGPT for help in formulating a case, only to find that it had fabricated citations. They were fined $5,000 and ordered to write letters to several judges to explain.

The point is not that ChatGPT is useless, any more than assistive fly-by-wire is useless. They are both technological miracles. But they have limits, and if their human users do not understand those limits, disaster may ensue.

Evidence of this risk comes from Fabrizio Dell’Acqua of Harvard Business School, who recently ran an experiment in which recruiters were assisted by algorithms, some excellent and some less so, in their efforts to decide which applicants to invite to interview. (This is not generative AI, but it is a major real-world application of AI.)

Dell’Acqua discovered, counter-intuitively, that mediocre algorithms that were about 75 per cent accurate delivered better results than good ones that had an accuracy of about 85 per cent. The simple reason is that when recruiters were offered guidance from an algorithm that was known to be patchy, they stayed focused and added their own judgment and expertise. When recruiters were offered guidance from an algorithm they knew to be excellent, they sat back and let the computer make the decisions.

Maybe they saved so much time that the mistakes were worth it. But there certainly were mistakes. A low-grade algorithm and a switched-on human make better decisions together than a top-notch algorithm with a zoned-out human. And when the algorithm is top-notch, a zoned-out human turns out to be what you get.

I heard about Dell’Acqua’s research from Ethan Mollick, author of the forthcoming Co-Intelligence. But when I mentioned to Mollick the idea that the autopilot was an instructive analogy to generative AI, he warned me against looking for parallels that were “narrow and somewhat comforting”. That’s fair. There is no single technological precedent that does justice to the rapid advancement and the bewildering scope of generative AI systems. But rather than dismiss all such precedents, it’s worth looking for different analogies that illuminate different parts of what might lie ahead. I have two more in mind for future exploration.

And there is one lesson from the autopilot I am convinced applies to generative AI: rather than thinking of the machine as a replacement for the human, the most interesting questions focus on the sometimes-fraught collaboration between the two. Even the best autopilot sometimes needs human judgment. Will we be ready?

The new generative AI systems are often bewildering. But we have the luxury of time to experiment with them; more than poor Pierre-Cédric Bonin, the young pilot who flew a perfectly operational aircraft into the Atlantic Ocean. His final words: “But what’s happening?”

Written for and first published in the Financial Times on 2 Feb 2024.

My first children’s book, The Truth Detective is now available (not US or Canada yet – sorry).

I’ve set up a storefront on Bookshop in the United States and the United Kingdom. Links to Bookshop and Amazon may generate referral fees.

Published on February 29, 2024 09:19

February 22, 2024

The secret to finding the best idea ever? First think about the absolute worst

When he first heard the music, Brian Eno grabbed a copy of the single and ran to find David Bowie. “I’ve found the sound of the future,” he breathlessly announced. It was 1977, and the sound of the future was “I Feel Love”. Donna Summer’s ethereal vocals were backed by producer Giorgio Moroder’s pulsing, looping Moog synthesiser.

Moroder offers a curious account of his inspiration. He says he went to see the film of the year, Star Wars, and took note of the scene in the Mos Eisley Cantina, a wretched hive of scum and villainy in which a band of strange alien musicians perform a jaunty tune. (One wonders: Star Wars was released after “I Feel Love”; but Moroder’s hazy memories are still instructive.)

“I didn’t think it was the music of the future,” Moroder has recalled, which was true. (The aliens are playing woodwind, while the Star Wars composer John Williams was influenced by 1940s swing when he penned their tune.) If Moroder wanted to create the music of the future for Donna Summer, he needed to do the opposite of that. “I Feel Love” has no band and no conventional instruments except a kick drum.

There is a lot more to “I Feel Love” than simply inverting John Williams’s pastiche of Benny Goodman, but the story illustrates the curious power of turning an idea on its head. If you want to produce the music of the future, ponder the music of the past, then do the opposite.

Creativity gurus have termed this approach looking for the “worst possible idea”. When looking for a business breakthrough, a design innovation or a flash of artistic insight, it can be hard to think of much when challenged to dream up a good idea. Far easier, less intimidating and more fun to write down a list of terrible ideas, then see what those terrible ideas suggest once you turn them on their heads.

Sometimes the worst-idea exercise is little more than a warm-up, getting the imaginative sparks flying before the real creative work begins. Sometimes, however, focusing on what makes an idea bad shines the spotlight on what might make an idea good. If the bad idea is to make a product look clumsy and ugly, that suggests it’s worth paying more attention to the product’s elegance and beauty. If the bad idea is to ship the product with bewildering instructions, the good idea is to hire an editor to hone the instructions.

Even more intriguing is when the bad idea is actually a great idea. Imagine a “worst possible idea” brainstorm for a restaurant. What if the waiters in your restaurant were rude to the customers? Bad idea. Except that was the signature method of Wong Kei’s, located in London’s Chinatown, and the sheer drama of it made Wong Kei’s a popular destination for decades.

What if customers weren’t given a choice and told they would just have to eat whatever the kitchen prepared? Nowadays, we call that the tasting menu and charge a premium for it. What if customers had to pay up front before they were even allowed on the premises? Come to think of it, that is how a theatre or a concert works, and it would mean that customers ended their evening with dessert rather than by trying to figure out how to split the bill. Sometimes the opposite of a good idea is another good idea.

For another example, how might you flip the script on a job interview? The traditional interview is a gruelling experience, an inquisition in which the interviewers probe applicants for weakness under pressure. But you wouldn’t try that on a first date, so why exactly is it such a brilliant idea when seeking new colleagues? In his recent book, Hidden Potential, the psychologist Adam Grant outlines an alternative: what about a job interview where the aim was to make the candidate as comfortable as possible and to help them demonstrate strengths rather than reveal weaknesses?

Grant describes the recruitment process of Call Yachol, a call centre based in Israel. Candidates are invited to bring a friend or a pet along to the interview and given the opportunity to display relevant skills in familiar contexts. Most strikingly, the interview is turned upside-down at the end. Candidates are asked to rate their interview experience. Were they made to feel welcome? Did they get an opportunity to show the best of their abilities? If not, would they like to come back another day and do it all again? What should the interviewers do differently when they do?

Call Yachol bends over backwards because it is staffed entirely by people with disabilities, and it wants to make sure it doesn’t accidentally disadvantage a candidate who cannot see, cannot hear or is neurodiverse. But reading Grant’s description of the process made me wonder whether something closer to Call Yachol’s approach might work better for everyone. After all, nobody seems particularly delighted with how well the pressure-interrogation model of recruitment works. Why not try a different way of finding talented people?

So let’s hear it for wilful contrarianism in the search for inspiration. Sometimes looking for a bad idea gets the creative juices flowing. Sometimes a bad idea perfectly highlights what would make an idea good.

And sometimes, the bad idea is actually the best idea of all. What about a movie set in space that isn’t futuristic, but features wizards and sword fights? What if most of the technology looks like it emerged from the second world war, rugged, grimy, pragmatic, with few visible computers? And what if the aliens are playing something like Benny Goodman? “Space, but old-fashioned” turns out to be an inspired move. It’s why the aesthetic of Star Wars has stood the test of time almost as well as “I Feel Love”.

Written for and first published in the Financial Times on 26 January 2024.

The paperback of “The Next 50 Things That Made The Modern Economy” is now out in the UK.

“Endlessly insightful and full of surprises — exactly what you would expect from Tim Harford.”- Bill Bryson

“Witty, informative and endlessly entertaining, this is popular economics at its most engaging.”- The Daily Mail

I’ve set up a storefront on Bookshop in the United States and the United Kingdom – have a look and see all my recommendations; Bookshop is set up to support local independent retailers. Links to Bookshop and Amazon may generate referral fees.

Published on February 22, 2024 08:28

February 21, 2024

Fake: It’s only a matter of time until disinformation leads to calamity

Not long after Eric Hebborn was murdered, an off-the-record conversation with the famed artist-turned-forger was published. On tape, Hebborn made explosive claims about his time as a student at the Royal Academy of Arts in the 1950s, where he had been awarded a prestigious prize. Though a gifted draughtsman, he was a surprising choice, because the art of the day was all about high concepts, not realistic depictions. Drawing was an unfashionable business, so how had a mere draughtsman won the prize?

Hebborn explained that, one day, a drunken porter at the Royal Academy was looking for a quiet spot to sleep in the basement and had fashioned a screen made of some of the pictures stored down there. One of those was the only surviving large drawing by Leonardo da Vinci, known as the Burlington House Cartoon, after the Royal Academy’s headquarters. Unfortunately, the porter stacked the Da Vinci against a leaking radiator. By the next morning, the picture had been thoroughly steamed. Only the faintest outline of the sketch remained.

In a panic, the porter summoned the president of the Royal Academy, who summoned the keeper of pictures, who summoned the chief restorer of the National Gallery, who announced that the picture couldn’t be restored, it could only be redrawn. At which point, they sent for star student Eric Hebborn, who wielded his chalk and charcoal in a flawless recreation of the lost original.

Or so Hebborn claimed, noting that it seemed curious that the Royal Academy sold the drawing soon afterwards, and spent some of the money on . . . upgrading its radiators. It was an astonishing story and very hard to check. The drawing was indeed sold to the National Gallery. But one day, in 1987, a man walked into the National Gallery wearing a long coat, paused in front of the drawing, pulled out a shotgun and blasted the artwork. The man, who wanted to make a statement about the social conditions in Britain, was arrested and later confined to an asylum. The National Gallery had the drawing restored, with tiny fragments of paper being painstakingly glued back together. That restoration would have concealed Hebborn’s handiwork, if Hebborn ever touched the cartoon. So — did he?

When the jaw-dropping story containing Hebborn’s claims was published, the Royal Academy responded that they were “astonished that anyone could fall for such an unlikely story from someone who made a living out of being a fake”.

One thing is true for sure: Hebborn made his living out of being a fake. After he graduated, he moved to Rome and worked as both an art dealer and what one might euphemistically call a picture restorer. He’d clean old pictures and retouch them and, before long, he was doing much more than that. Add a balloon, floating over an undistinguished landscape, and you had what appeared to be an important record of the early steps of aviation — and a much more expensive painting. Or maybe the fashion was for poppies. They were easily added and made to look as though they had been part of the original. Or, as Hebborn himself said, “a cat added to the foreground guaranteed the sale of the dullest landscape.” Soon enough, Hebborn was being asked to “restore” blank sheets of paper, or to “find” lost preparatory sketches by old masters. He would pass these discoveries on to other dealers, some of whom knew what he was up to and some of whom did not. He claimed to have created more than a thousand forgeries. Some art historians think he made a lot more than that.

Here’s another hard-to-check Hebborn story. A few years after moving to Rome, he acquired a drawing of Roman ruins, supposedly sketched by the Flemish master Jan Brueghel the Elder sometime around the year 1600. It was good value, just £40 in 1963 (nearly £1,000 today). But was it really by Brueghel? The frame said so, with the imprimatur of a respected London dealer. It had Brueghel’s signature on it. The paper was old. Hebborn knew a lot about paper. As a dealer in old drawings, he had to. There were so many fakes around, after all.

But the drawing itself didn’t seem right to Hebborn. It was too careful, the lines drawn too slowly. “This is not a Brueghel,” Hebborn said to himself. “This is a copy.” He supposed that some forgotten engraver, three centuries or more ago, had painstakingly copied Brueghel’s original as the first step in making an engraving. The original itself had been lost. Hebborn decided to find it again, in a manner of speaking.

Hebborn turned over the frame and steamed off the stiff sheet of brown backing paper, setting it to one side. Then he teased out the rusty nails, setting those aside, too. Each one would eventually nestle back in precisely the right hole. Finally, he taped the old drawing to the side of his drawing board.

He prepared his materials: a blank page cut out of a 16th‑century book, carefully treated with a starch solution to control its absorbency; an 18th-century paintbox, many of the paints still perfectly good; a glass of brandy to steady the nerves. And, moving precisely but swiftly, he made his own “more vigorous” copy. Very nice. It looked more like a Brueghel now. He sold it on again, and it ended up in the Metropolitan Museum of Art in New York.

Having admired his handiwork, Hebborn recalled, he did something that “I rather regret . . . I tore up the thing I copied . . . I flushed it down the lavatory. I rather wish I hadn’t because it would be nice now to compare, you know . . . perhaps I destroyed an original Brueghel. I hope not.”

In any case, Hebborn went on, the Metropolitan Museum seemed to be happy with his copy. Yet when he announced this forgery to the world in his 1991 autobiography, Drawn to Trouble, the Met was not happy. It told The New York Times, “We don’t believe it’s a forgery, and we believe that the story told by Mr. Hebborn in this book is not true.”

Which were the fakes: the yarn about the Da Vinci or the drawing? The Brueghel sketch or the story of its provenance? Deciding what’s true and what isn’t is something we’re quickly having to get used to doing. I’m not completely confident that we’re up to the challenge.

*

The journalist Samantha Cole introduced the world to a new technology with the following sentence: “There’s a video of Gal Gadot having sex with her stepbrother on the internet.” The video was, of course, a deepfake, swapping Gadot’s face on to a porn performer’s body, created using a particular form of artificial intelligence called deep learning.

This was 2017, the year after “post-truth” was named Word of the Year by Oxford Dictionaries and a fertile time for anxiety about people finding new ways to lie to us. What if someone created a deepfake of Donald Trump declaring war on China?

In the following years, such fears seemed overblown. A few deepfakes made a splash: one appearing to show Ukraine’s President Volodymyr Zelenskyy belly-dancing did the rounds earlier this month. In 2018, the Flemish Socialist party posted a fake video appearing to show Donald Trump declaring, “As you know I had the balls to withdraw from the Paris climate agreement. And so should you.” Then there was the audio deepfake released two days before the Slovakian election last September. This was widely shared online and seemed to portray the opposition leader conniving to rig the vote. Late polls had shown him ahead, but he lost the election to a pro-Russian rival.

Despite such warning shots, deepfake technology is still mostly used for non-consensual pornography. Part of the reason is that creating deepfakes is hard — there are easier ways to lie with video. You could, for example, misdescribe an existing video. In December 2023, videos circulated on social media claiming to show Hamas executing people by throwing them off the roof of a building in Gaza. The videos are genuine, but the atrocity took place in Iraq in 2015 and the murderers were Islamic State, not Hamas. It’s common for real videos and pictures to be shared online with deceptive labels.

Other simple tricks achieve much the same effect. Let’s say it’s the 2016 election and you want to create a joke video of Dwayne “The Rock” Johnson singing an abusive song to presidential candidate Hillary Clinton, and her reaction to it. No big deal, just for the laughs. It’s easy. We have footage of The Rock singing an abusive song about another wrestler. We have footage of Hillary Clinton looking a bit awkward. Splice them together — as one troll did — and you have a crude prank depicting a campaign-trail event that never happened. A shallowfake, if you like.

In his book about deepfakes, Trust No One, the journalist Michael Grothaus interviewed the troll in question, who realised something unsettling once his shallowfake video went viral on Facebook. The comments rolled in; people had missed the joke. “Wait,” the troll told Grothaus. “These dumb shits think this is real?”

They did indeed. They — we — are busy. We’re distracted. We instinctively feel that some stuff is too good to check. And so we’ll accept lies that really should give us pause.

The Slovakian case should be a warning. With high-stakes elections taking place across the world this year, the experts I’ve spoken to are concerned that it’s only a matter of time before a clever, well-timed piece of disinformation has a calamitous impact, deciding the result of a close-run election. It might not involve a deepfake or another AI-generated visual image. Then again, it might. The technology is getting better; it is already straightforward to create a convincing deepfake, or to use generative AI to fabricate a photorealistic scene that never happened, barely more difficult than editing or re-describing an existing video. And visual images have always been more eye-catching and emotionally compelling than text. So have our fears about deepfakes really been misguided, or have they merely been premature?

*

Some AI experts have waved away concerns about deepfakes, reassuring us that we’ll get smarter once we get used to them. Professor Ira Kemelmacher-Shlizerman, a computer scientist at Google and the University of Washington, told the Radiolab podcast in 2019, “If people know that such technology exists, they will be more sceptical.” She explained, “If people know that fake news exists, if they know that fake text exists, fake videos exist, fake photos exist, then everyone is more sceptical in what they read and see.”

But perhaps we’ve already taken scepticism too far. Consider a new analysis in the Journal of Experimental Psychology from the psychologists Ariana Modirrousta-Galian and Philip Higham. They look at games such as Bad News and Go Viral!, which are designed by researchers at the University of Cambridge to help “inoculate” people against fake news. And they work, sort of. After playing these games, experimental subjects are indeed more likely to flag fake news as fake news. Unfortunately, they are also more likely to flag genuine news stories as fake news. Their ability to discriminate between true and false does not improve. Instead, they become more cynical about everything.
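The pattern Modirrousta-Galian and Higham describe is, in signal-detection terms, a shift of criterion rather than a gain in sensitivity: flag more of everything, discriminate no better. A tiny sketch with invented numbers (not data from their paper) makes the distinction concrete.

```python
# Signal-detection sketch (an illustration with invented rates, not data
# from the Modirrousta-Galian and Higham study) of how flagging more
# fakes can reflect cynicism rather than sharper judgment.
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity d': how well someone separates fake from real stories."""
    z = NormalDist().inv_cdf  # standard normal quantile function
    return z(hit_rate) - z(false_alarm_rate)

# Before the game: flags 60% of fakes, wrongly flags 30% of real stories.
before = d_prime(0.60, 0.30)

# After the game: flags 74% of fakes, but also wrongly flags 45% of
# real stories -- more suspicion all round.
after = d_prime(0.74, 0.45)

# d' barely moves (about 0.78 before, 0.77 after): the player has become
# more cynical, not more discerning.
print(f"d' before: {before:.2f}, after: {after:.2f}")
```

The hit rate rises from 60 to 74 per cent, which looks like improvement, but the false-alarm rate rises almost as much, so sensitivity stays essentially flat.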

What is the deeper problem, people falling for malicious nonsense, or people refusing to believe carefully reported journalism? I’m not sure. But it’s certainly possible that universal cynicism is a cure that’s worse than the disease. Deepfakes, like all fakes, raise the possibility that people will mistake a lie for the truth, but they also create space for us to mistake the truth for a lie.

Just think about the notorious tape from Access Hollywood, in which Donald Trump boasted of sexually assaulting women. It was released in October 2016 and caused a political explosion. Deepfake audio wasn’t part of the conversation then, but if it had been, Trump could easily just have said, “That’s not my voice on the tape.” The mere fact that deepfakes might exist creates a completely new kind of deniability.

A study by researchers at Purdue University examined the evidence for this sort of risk. They surveyed 15,000 Americans, asking them how plausible they would find a variety of excuses for political scandals. They found that when the scandal was reported as text, politicians could get themselves off the hook by shouting “fake news”. People would believe the scandal never happened, that the evidence itself was faked.

When Purdue conducted their study, in 2020, that wasn’t yet true for video: if videotape existed of a politician doing or saying something awful, they couldn’t expect to exonerate themselves by protesting “that video is fake”. But I wonder how long video evidence will continue to be regarded as trustworthy and how soon politicians will be able to shrug off damning video evidence of misbehaviour by falsely claiming the video itself was phoney. Last year, in a lawsuit over the death of a man using Tesla’s self-driving capabilities, Elon Musk’s lawyers questioned a YouTube video in which Musk was talking about those capabilities. It might be a deepfake, they said. (The judge was unimpressed.)

If we’re shown enough faked videos of atrocities, or of political gaffes, we might start to dismiss real videos of atrocities and real videos of political gaffes, too. It’s good to be sceptical, but if we are too sceptical then even the most straightforward truths are up for debate. That may explain why, five years after Samantha Cole explained deepfake pornography to her astonished readers, she was writing an article with the stupefying title, “Is Joe Biden Dead, Replaced by 10 Different Deepfake Body Doubles? An Investigation”.

It might seem a long road from “that woman waving a sex toy around really isn’t Gal Gadot” to “that man giving a speech in the White House really is Joe Biden”. But it’s a road that Eric Hebborn would have understood very well. Maybe that Brueghel really is a Brueghel. Maybe the Da Vinci is just a Da Vinci.

If Hebborn was telling the truth about replacing that Brueghel with his own drawing, why did he do it? To amuse himself and burnish his reputation as a master draughtsman when he confessed. If he lied about it, why? Also to amuse himself and burnish his reputation as a master draughtsman. The writer and artist Jonathon Keats, in his book Forged, said of Hebborn, “faking his fakery may have been his master stroke, since no amount of sleuthing could detect forgeries that never existed”.

*

So which is the fake, the Met’s drawing by Jan Brueghel, or Eric Hebborn’s story about having faked it? Hebborn’s answer was, who cares? In his sensational autobiography, he argues that there’s no such thing as a fake work of art, just a mistaken attribution. “I don’t like the word fake applied to perfectly genuine drawings,” he explained in a BBC documentary, released the same year as his autobiography. Hebborn cheekily blamed unscrupulous dealers for misattributing his work and incompetent experts for missing the truth.

Maybe it was a real Brueghel that he flushed down the loo. Maybe it was a copy. Or maybe Hebborn made up the entire story to amuse himself by trolling the Met. Maybe the picture in the Met’s collection really was drawn by Jan Brueghel the Elder, as originally thought, or Jan Brueghel the Younger, as later decided, or the current attribution, “Circle of Jan Brueghel”. It doesn’t matter, said Hebborn. It’s a beautiful drawing, whoever drew it. Enjoy it for what it is and don’t worry about what it isn’t. Art is about creating beautiful things, isn’t it? And that is what Hebborn did.

The BBC interviewer challenged him at one point. If he was just making beautiful drawings rather than fakes, why did he put the stamps of famous historical art collectors on the pictures? “Well they look nice, for one thing,” shrugged Hebborn. But weren’t they designed to convince the experts that the pictures were genuine? “I don’t think so. If they were experts, they would have seen that they were false collectors’ marks,” Hebborn replied. “Some of them were done freehand, in watercolour, rather than being stamped. I did them in a very amateurish way. They shouldn’t have been fooled at all.” Or as a later faker said, “Wait, these dumb shits think this is real?”

In 2016, two analysts at the think-tank Rand Corporation described the evolving propaganda strategy of the Russian government. The conventional wisdom on propaganda messages is that they should be true when possible and, in any case, they should be believable and consistent. But the emerging approach from Russia was quite different. Russian media channels, websites and social media accounts for hire would post anything. It didn’t matter whether it was true. It didn’t matter whether it was believable. What mattered was speed, relevance and volume. The analysts called this strategy “the fire hose of falsehood”. It’s a nickname that would have suited Hebborn perfectly.

There are several reasons why the fire hose of falsehood can work, despite the fact that the individual lies are not especially plausible. Fast, relevant spin from lots of different sources, all pushing the same basic perspective, can create an overall impression that feels quite believable. And the fire hose of falsehood can also deliver results even if nobody believes a word of it. When it works, it floods social media (and sometimes the conventional media too) with distractions, toxicity, shitposting and obvious nonsense. The result may well be to turn news consumers off completely. Why would you waste effort trying to understand the world when everyone seems to be lying about it all the time?

In a press conference late in 2023, Vladimir Putin fielded a videocall from a deepfaked copy of himself. “Do you have a lot of doubles?” the software doppelganger asked. Real Putin calmly replied that only one person could speak with the voice of Putin, Putin himself. Under the circumstances, that was absurd. So why arrange such a stunt? To create a moment of levity in a country at war, perhaps. But there’s also a subtext: you can’t believe your eyes; you can’t believe your ears; you can’t believe anything.

This isn’t an entirely new idea. In his 2023 book A History of Fake Things on the Internet, Walter J Scheirer points out that many manipulated photographs are supposed to look manipulated. After Mao Zedong died in 1976, a photograph was taken of a memorial event with a line of Chinese leaders, heads bowed in respect. The official photograph of the event, however, contains obvious gaps. Mao’s close acolytes, known as the Gang of Four, were expunged. You’re supposed to notice. You’re supposed to understand that history, truth and the evidence of your own eyes — that none of these things is solid any more.

*

Beneath the smile and the winking stories he tells to the BBC producer, Hebborn seems vulnerable on camera. He speaks softly, slurring his esses. Maybe he’s had a bit too much to drink. He certainly drank excessively; his friends worried about that. And all his tricks and adventures start to seem less fun as Hebborn quietly tells the story of his life: that his overworked, stressed mother used to take her “revenge” out on him.

At school, he would make charcoal for drawing out of matches and was accused of arson by the headmaster, who caned him. So the eight-year-old Eric decided he’d do the deed for which he’d been punished and set fire to the school. “I got rather frightened and I thought I’d better tell the headmaster,” Hebborn said. But in his panic he couldn’t find the right words. He was sent to a youth detention centre at the age of eight.

It’s hard not to feel sympathy for the old rogue. And there is something very Hebborn-esque about being punished first, then committing the crime after the fact. Justice turned upside down. Truth turned back to front. History turned inside out. That’s Eric Hebborn and, perhaps, that’s the computer-generated world that is coming for us.

What does that world hold in store? As the UK, US and many other democracies go to the polls in 2024, it is worth pondering some of the more uncomfortable scenarios. Disinformation is now cheaper than ever. We might see authentic-seeming fake audio and video, generated automatically and at enormous scale. It might be targeted precisely at each person based on their web-browsing habits, rather than published where everyone can see and check. We might see emotionally compelling, individualised propaganda distributed so widely that no fact-checker could possibly debunk it. We have already seen the campaigns of established politicians, such as the former Republican US presidential candidate Ron DeSantis, use deepfaked attack ads.

And whether or not any of the faked material sticks, we can certainly expect real audio, real video and real reporting to be routinely dismissed as fake, if it even gets a look-in amid the fire hose of falsehood. The technology is coming fast and there are plenty of unscrupulous actors prepared to use it.

*

In 1995, Eric Hebborn followed up his autobiography with a book in Italian, a scandalous how-to guide later titled The Art Forger’s Handbook in English. A few weeks later, he was found lying in the street near his apartment in Rome. The medics thought at first that he had drunk too much, fallen and hit his head. But not for the first time in Hebborn’s life, the professionals were confused by what they were looking at. His condition was more serious, and less of an accident, than they realised.

Hebborn died on January 11 1996, a couple of days after being taken into hospital. Soon, hints of what had really happened started to emerge. The autopsy concluded that Hebborn had been killed not by a drunken fall, but by a hammer blow to the skull. His apartment had been ransacked while he was lying in the street. There was no shortage of suspects for the murder. There were people to whom he sold fakes, people whose real work he claimed he’d faked, dealers he publicly accused of knowingly buying fakes and selling them at a huge mark-up. More recent reporting suggests that the mafia were paying him to fake art, too. The police didn’t bother to investigate. Where would they even begin? Hebborn had far too many people who would have been happy to see him dead.

In Forged, Jonathon Keats invites us to think of Hebborn less as a faker and more as a man who created the work that the old masters were no longer available to make. It’s a heart-warming idea and one that would have pleased Hebborn: that we can create old works of art anew, and art history can expand like an accordion to accommodate them.

But although some might indulge that idea for artworks, I don’t feel comfortable in a world in which we can create alternative facts and squeeze them in next to the real ones; in a world where there’s a photograph of Mao’s memorial with the Gang of Four present and the same photograph with them absent; in a world where Vladimir Putin has conversations with himself and where people aren’t sure if that’s Joe Biden or 10 deepfakes of him.

And even in the world of art, should we welcome all those Hebborns? I fear that we lose more than we gain when we start to lose confidence in the Da Vincis and the Brueghels.

After Hebborn claimed to have created a better Brueghel and flushed the old version down the toilet, his former boyfriend and business partner published his own memoirs saying that the story about the Brueghel drawing wasn’t true. The story about setting fire to his school has been disputed, too. Once there are enough lies around, it’s easy to start doubting . . . well, everything.

Hebborn once told the great art journalist Geraldine Norman, “I like to spread a little confusion.” He succeeded. And he became so notorious that people are now starting to value the Hebborn forgeries in their own right. The only trouble is, wrote one art dealer, “Some of the drawings which were being offered for sale by [Hebborn’s] associates and former friends had a strange feel to them, an unusually lifeless quality which did not seem true of Eric’s work at all. I had misgivings about the drawings and declined to purchase them.”

Genuine fakes? Fakes of fakes? Maybe they weren’t fakes at all, just original old masters having an off day.

Two years after Hebborn was murdered, an anonymous phone call to the Courtauld Institute in London warned that 11 named artworks in the institute’s collection were fakes by Hebborn. We still don’t know who made the phone call, or why.

I recently visited the Courtauld to look at some of the fakes, the wrongly suspected fakes, and the works suspended in limbo. It was a fascinating but unsettling experience. There are several pictures in the Courtauld’s collection that they are fairly sure were by Hebborn, some of which he himself confessed to, not that that was ever a guarantee of anything.

There’s a Van Dyck that is under suspicion, but there’s nothing provably wrong with the picture. Other pictures that were anonymously accused of being Hebborn fakes definitely aren’t. There’s a Guardi sketch that was photographed in the 1920s, before Hebborn was born (or did he copy it and flush the original down the loo?). A Tiepolo drawing is now regarded as genuine. Whoever that anonymous whistleblower was, and whatever their reasons, they weren’t infallible. And then there’s a Michelangelo drawing. Fake? Real? We just don’t know. It’s a beautiful work by — perhaps — one of the greatest artists who ever lived. And yet it seems doomed to have an asterisk beside it for ever.

I left the Courtauld Institute, and strolled towards the National Gallery, just down the road, where I could see Leonardo da Vinci’s masterpiece, the Burlington House Cartoon. This is the work that Hebborn claimed he’d redrawn, after a drunk porter left it too close to a radiator, the work that was later blasted with a shotgun.

I couldn’t help wondering: if that piece really is a Da Vinci, then who damaged it more, the man with the shotgun or Eric Hebborn and his story?

Written for and first published in the Financial Times on 25 January 2024.

My first children’s book, The Truth Detective, is now available (not US or Canada yet – sorry).

I’ve set up a storefront on Bookshop in the United States and the United Kingdom. Links to Bookshop and Amazon may generate referral fees.

Published on February 21, 2024 08:39