Tim Harford's Blog
June 27, 2024
Shakespeare’s forgotten legacy: hyperbolic numbers
There is a theory that Shakespeare was an accountant. How else to explain the detailed use of bookkeeping metaphors in his writing? “We shall not spend a large expense of time/ Before we reckon with your several loves,” declares Malcolm in Macbeth, “And make us even with you.”
The jailer in Cymbeline compares the hangman’s noose with an accountant reckoning the credits and debits of the condemned man’s life. And The Comedy of Errors refers to a debt as a “thousand marks”, a unit only used by bookkeepers in Elizabethan England.
Yet Shakespeare seems to have been rather loose with his economics. Rob Eastaway’s new Shakespearean mathematical miscellany, Much Ado About Numbers, tells us that Shakespeare put Dutch guilders in Anatolia in The Comedy of Errors, situated Italian chequins in Phoenicia in Pericles, described Portuguese crusadoes in Venice in Othello and had Julius Caesar’s will bequeathing Greek drachmas to every Roman. There is something to be learnt from Shakespeare’s attitude to numbers (besides that he’s a poor guide to foreign exchange markets).
As Eastaway explains, Shakespeare’s works are richly adorned with numbers. Hamlet’s “thousand natural shocks/ That flesh is heir to” is just one of more than 300 instances of the word “thousand” in Shakespeare’s work. We are not meant to hear Hamlet’s words as a precise count, of course. By “thousand” he refers to the myriad of misfortunes a person can experience in a lifetime. And by “myriad” I mean “a lot”, rather than its original meaning in classical Greek, “ten thousand”. Large numbers have a way of blurring like that, especially as Shakespeare was writing for an audience who would rarely have any literal use for a thousand. Few people would earn a thousand pounds or travel a thousand miles, although the Globe Theatre might have held three thousand paying customers.
In Timon of Athens, Timon tries to borrow “fifty-five hundred talents” from his friend Lucilius. That’s 120 tonnes of silver, Eastaway tells us. No Elizabethan audience would have grasped what fifty-five hundred talents really meant. Nor, without Eastaway doing our homework for us, do we. (It’s more than $100mn.) But we all get the point: it’s a ludicrous request.
We still share Shakespeare’s love for hyperbolic numbers, but we also need to use big numbers accurately. I’m old enough to remember confusion as to the definition of the word “billion”. These days, it means a thousand million but, in various times and places, it has meant a million million. City of London traders used to use “yard”, short for the French milliard, to refer to a thousand million. That was useful. Both yard and milliard sound quite different from million on a crackly phone line. Billion does not.
Crackly phone line or not, it is common for millions and billions to be confused. Too often we lump them into the same mental category: big numbers. But there’s big, and there’s big. A million seconds is less than 12 days, while a billion seconds is nearly 32 years.
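That comparison is easy to check with a couple of lines of arithmetic (a quick sketch; the only assumptions are the standard 86,400-second day and a 365.25-day average year):

```python
# Back-of-envelope check: a million seconds vs a billion seconds.
SECONDS_PER_DAY = 60 * 60 * 24            # 86,400
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25

million_in_days = 1_000_000 / SECONDS_PER_DAY         # about 11.6 days
billion_in_years = 1_000_000_000 / SECONDS_PER_YEAR   # about 31.7 years

print(f"A million seconds is about {million_in_days:.1f} days")
print(f"A billion seconds is about {billion_in_years:.1f} years")
```

The gap between the two answers — days versus decades — is the whole point: the numbers differ by a factor of a thousand, even though the words sound interchangeable.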
The confusion is regularly exploited by politicians. No UK Budget speech is complete without a loud boast that the government is spending a few million pounds on some worthy scheme, while the grinding progress of inflation will silently squash budgets by billions in real terms. The quiet billions are real money, while the noisy millions are a rounding error. To the unsuspecting voter, they sound much the same.
We can help ourselves to navigate the maze of numbers and language by making helpful comparisons. The most straightforward is to figure out what that multibillion-pound tax increase actually means per person. It’s always useful to compare spending this year with spending last year, or a decade ago, or with spending in a neighbouring country. Comparing the unfamiliar with the familiar draws meaning out of a bewildering landscape of billions and trillions.
When we merely wish to convey a poetic sense of scale, as Shakespeare often did, we have access to a linguistic technology the Bard did not possess: terms such as “squillion” or “jillion” or “zillion”. These, my friends, are the indefinite hyperbolic numerals. According to Helen Zaltzman’s The Allusionist podcast, such terms emerged in the US in the late 19th and early 20th centuries. Jillion was common in Texas. Zillion was a staple of Harlem’s African-American literary magazines. In the late 1930s, the writer Damon Runyon brought both words to a wider audience. The joy of the jillion — or, if you really want to punch it up, the bajillion — is that while it may be imprecise, it is clear. The word means “a huge number, but let’s not fuss about exactly how huge”.
Whenever Shakespeare used large numbers, it was clear enough that he was speaking figuratively. Eastaway documents “twenty thousand kisses” in Henry VI, Part 2. Hamlet’s love for Ophelia is more than that of “forty thousand brothers”. In A Midsummer Night’s Dream, Cupid’s arrow pierces “a hundred thousand hearts”. And Shakespeare’s biggest number of all? Friar Laurence assures Romeo that if he escapes to Mantua, when he returns he will be greeted with “twenty hundred thousand times more joy”. That’s two million.
Alas, that is not how the story ends – a reminder that numbers, no matter how hyperbolic or how precise, need not necessarily tell us the truth.
Written for and first published in the Financial Times on 31 May 2024.
Loyal readers might enjoy the book that started it all, The Undercover Economist.
I’ve set up a storefront on Bookshop in the United States and the United Kingdom. Links to Bookshop and Amazon may generate referral fees.
June 20, 2024
Cautionary Tales – Adidas v Puma: A Battle of Boots and Brothers
Adi and Rudi Dassler made sports shoes together – until a feud erupted between them. They set up competing companies, Adidas and Puma, and their bitter rivalry divided the sporting world, their family and even the inhabitants of their home town.
The Dassler clan turned bickering into an art form – even drawing the likes of soccer legend Pele into their dispute. But did the brilliant fires of hatred produce two world-class companies, or was it a needless distraction from the Dasslers’ love for their craft?
Further reading
We learned about the history of Puma and Adidas from the books Sneaker Wars, by Barbara Smit, and The Puma Story by Rolf-Herbert Peters. Additional details on the feud between the brothers and their companies, and on life in Herzogenaurach, came from articles in outlets such as Business Insider, DW, The Guardian, CNBC, Worldcrunch and The Wall Street Journal.
For a review of the evidence linking purpose and productivity at work, see Igniting individual purpose in times of crisis, published in McKinsey Quarterly. The survey of workers including the stonemason is detailed in What Makes Work Meaningful — Or Meaningless, published in the MIT Sloan Management Review.
The detours on memory lane
Do you remember where you were when you heard that planes had struck the World Trade Center? That the Challenger shuttle had exploded? Or that Nelson Mandela had been released?
Your memories may be different from mine, but not as different as Fiona Broome’s. I remember watching the live TV footage of Nelson Mandela walking to freedom after 27 years in captivity, while Broome, an author and paranormal researcher, remembers Nelson Mandela dying in prison in the 1980s.
When Broome discovered that she was not the only person to remember an alternative version of events, she started a website about what she dubbed “the Mandela Effect”. On it, she collected shared memories that seemed to contradict the historical record. (The site is no longer online but, never fear, Broome has published a 15-volume anthology of these curious recollections.)
Mandela, of course, did not die in prison. On a recent trip to South Africa, I visited Robben Island, where he and many others were incarcerated in harsh conditions, to speak to former prisoners and former prison guards, and to wander around a city emblazoned with images of the smiling, genial, elderly statesman. How could it be that anyone remembers differently?
The truth is that our memories are less reliable than we tend to think. The cognitive psychologist Ulric Neisser vividly remembered where he was when he heard that the Japanese had launched a surprise attack on Pearl Harbor on December 7 1941. He was listening to a baseball game on the radio when the broadcast was interrupted by the breaking news, and he rushed upstairs to tell his mother. Only later did Neisser realise that his memory, no matter how vivid, must be wrong. There are no radio broadcasts of baseball in December.
On January 28 1986, the Challenger space shuttle exploded shortly after launch; a spectacular and highly memorable tragedy. The morning after, Neisser and his colleague Nicole Harsch asked a group of students to write down an account of how they learnt the news. A few years later, Neisser and Harsch went back to the same people and made the same requests. The memories were complete, vivid and, for a substantial minority of people, completely different from what they had written down a few hours after the event.
What’s stunning about these results is not that we forget. It’s that we remember, clearly, in detail and with great confidence, things that simply did not happen.
Other researchers have gone further. In the 1990s, the psychologist Elizabeth Loftus conducted a study that has become famous as the “Lost in the Mall” experiment. She recruited subjects and persuaded older members of each subject’s family to write a paragraph about each of four incidents in the subject’s childhood. The subjects were asked to read these short memory-prompts and then to elaborate or, if they didn’t remember the episode, to say so.
The trick in Loftus’s experiment was that one of the four incidents described was fictional. Remember that time you were lost in the mall? Sure, said some (but not all) of the subjects, serving up a string of compelling details, all of which they thought they remembered.
Loftus’s work has often been used in criminal trials, and this is a sensitive topic. For some critics, it is just one more excuse to dismiss the testimony of people who have suffered abuse. So it’s worth being clear that just because some memories are false doesn’t mean they all are. Seventy-five per cent of the subjects in Loftus’s experiments simply said that they didn’t remember being lost in a mall. The point is not that our memories always let us down, but that when they do, neither their vividness nor our own confidence is a good guide to what really happened.
We should hardly be surprised that some people have memories of things that never happened. It is easy to see how some people might have formed the vague impression that Mandela died in prison: the activist Steve Biko was killed in the custody of South African police and there were worldwide protests throughout the 1980s against the evils of apartheid. Given what we know about how memory works, a vague impression can be enough to prompt clear and specific memories of non-existent events.
Broome, for her part, insists that people should not rush to the “simplistic” explanation that our memories play tricks on us, and should instead explore the “wealth of evidence . . . that may point to parallel realities and Many Interacting Worlds”.
Fine. We are all entitled to our own beliefs. Some people believe that our memories can deceive us. Some people believe that there is an alternate timeline in which Mandela died in prison, and that people, or memories, are slipping from one timeline to another.
But there is more at stake here than a theory of multiverses or a grasp of the history of South Africa. We all have subjective feelings about our beliefs, and there is no reliable connection between feeling confident about a belief, and that belief being true. Mandela multiverse believers have an unusual view of the world, but there is nothing unusual about feeling certain yet being wrong. We’ve all done that.
As Kathryn Schulz, author of Being Wrong, reminds us, we are all familiar with the lurching realisation that we were wrong. But until the moment of revelation there is no distinctive mental state that feels like being wrong. Being wrong feels exactly like being right.
Written for and first published in the Financial Times on 24 May 2024.
June 15, 2024
It’s Independent Bookshop Week!
I suggest that this is a particularly good week to visit your local independent bookshop and buy some lovely books.
If, however, you are not within easy reach of an independent bookshop, Bookshop is a website that donates 30% of sales revenue to an independent bookshop you nominate, or 10% of revenue to a pool that supports independent bookshops in general. (They also pay referral fees to affiliates – including me.) In celebration of Independent Bookshop Week, they’re also offering free shipping on orders this coming weekend, 22/23 June.
Anyway: I just wanted to alert you to my list of favourite books that will help you think more clearly about numbers, from Caroline Criado Perez’s barnstorming Invisible Women (a book that really changed the way I think) to David Spiegelhalter’s magisterial The Art of Statistics. And more – they’re all here. Enjoy!
June 13, 2024
There is no need to lose our minds over the Jevons paradox
A few years ago, two San Francisco doctors, Mary Mercer and Christopher Peabody, persuaded the busy hospital where they worked to conduct an experiment. They replaced their clunky and inflexible old pagers with a cheaper, more flexible and more powerful system. It’s called WhatsApp.
As the podcast Planet Money reported last year, the pilot was not a success. The chief reason? Messaging became too easy. To interrupt a busy consultant by paging them to demand a return phone call was a serious step, taken with care. But with WhatsApp, why not snap a photograph or record a video message and zip it over just to get a spot of advice? Doctors were soon swamped.
To students of energy economics, this story sounds awfully familiar. It’s the Jevons paradox. William Stanley Jevons was born in 1835 in Liverpool, in a country made rich by a coal-fuelled industrial revolution. He was about to turn 30 when he published the book that made his name as an economist, The Coal Question. Jevons warned that Britain’s coal would soon run out (an eye-catching warning that turned out to be wrong) but, more intriguingly, he warned that energy efficiency was no solution.
“It is wholly a confusion of ideas to suppose that the economical use of fuel is equivalent to a diminished consumption,” he explained. “The very contrary is the truth.”
Imagine developing a more efficient blast furnace, one that would produce more iron for less coal. These more economical furnaces would proliferate. Jevons argued that more iron would be produced, which was a good thing, but the consumption of coal itself would not decline.
Is this right? In a mild form, Jevons’ analysis is certainly correct. When an energy-consuming technology becomes more efficient, we’ll use more of it. Consider light. In the late 1700s, President George Washington calculated that burning a single candle for five hours a night all year would cost him £8. Relative to incomes of the time, that is about $1,000 in today’s money. These fine spermaceti candles were pricey enough to leave even a rich man such as Washington carefully conserving them.
Modern lighting is far more economical and therefore used with abandon. LEDs are many times brighter than candles, and we use much more light and save much less energy than we otherwise could have done.
The stronger form of Jevons’ warning is the full Jevons paradox, when we use so much more of the more efficient technology that we don’t reduce energy consumption at all; in fact, we increase it. David Owen, in a piece for The New Yorker, observed that the refrigeration technology that was once used to cool a cupboard’s worth of food is now used to cool entire buildings.
Ed Conway, author of 2023’s Material World, points to the Sphere in Las Vegas, which has 1.2mn LEDs on its surface. I’m not sure what the lighting bill is for that, but I suspect it would pay for a candle or two.
The stronger Jevons paradox tugs the rug from under the one certainty we have in energy policy, which is that nobody — from Extinction Rebellion to the “Drill, baby, drill!” wing of the Republican party — could possibly be stupid enough to object to cars, houses and appliances that get the same result for less energy and less money.
Has Jevons really ruined all this? No. Owen, normally a wise writer, seems to view the Jevons paradox as something utterly inescapable like the second law of thermodynamics. For example, if an efficient car saves a driver thousands of dollars in fuel costs, Owen explains, “the environment is unlikely to come out ahead, as those dollars will inevitably be spent on goods or activities that involve fuel consumption”.
Yet the environment is all but certain to come out ahead, as there are few more environmentally damaging ways to spend a thousand dollars than to burn a thousand dollars of gasoline. The money could be spent on a thousand dollars’ worth of coal, I suppose, but it could also be spent on a thousand dollars’ worth of tree saplings or yoga lessons.
Thankfully we can refute the strong paradox. In my lifetime, energy consumption per person in the UK has fallen by one-third, while carbon dioxide emissions per person have fallen by nearly 60 per cent. As Hannah Ritchie explains in her book Not the End of the World, while some of this fall reflects the offshoring of manufacturing to other countries, most of it does not. Energy efficiency really has reduced energy consumption.
Jevons is worth taking seriously. When we regulate to require energy efficiency, consumption will fall less than pure arithmetic suggests. So energy policy should always be considering other instruments — including the old favourite of economists, a carbon tax, which is a Jevons-proof way to discourage the burning of fossil fuels.
But let’s not let Jevons drag us into despair. We really are moving towards a cleaner world, and energy efficiency has a big part to play in that move.
One place where the Jevons paradox seems inescapable? My inbox. It is so much more efficient to reply to a digital message than to a handwritten letter, yet somehow I am drowning in emails.
Written for and first published in the Financial Times on 17 May 2024.
June 6, 2024
Cautionary Tales – The Revenge of the Whales
In the middle of the Pacific Ocean, in 1819, Owen Chase is standing on a slowly sinking ship. It’s just been headbutted by an 85-foot whale. It’s taking in water. And now the creature is coming back for another go. This is a whaling ship, and Chase is convinced that he observes “fury and vengeance” in the animal.
In 2010, an orca is performing for a crowd at SeaWorld – but he misses his mark and so he doesn’t get his reward. That’s when he grabs hold of his trainer, Dawn Brancheau, and pulls her under water. By the time he’s finished, her savaged body has multiple fractures and dislocations. And her scalp has been ripped off.
To some observers, these whales were surely out for revenge. But how much is what we think we understand about the natural world shaped by human guilt?
Further reading
The documentary Blackfish is available on Netflix. The critique on the website awesomeocean.com can be found here. We read about the aftermath of Blackfish in an article by Laura Thomas-Walters and Diogo Veríssimo on The Conversation.
Owen Chase’s 1821 Narrative of the Most Extraordinary and Distressing Shipwreck of the Whale-Ship Essex can be read on Project Gutenberg. For context about how it inspired Moby Dick, see this article in the Smithsonian magazine. We learned more about the history of whaling on the US National Parks Service website.
Reportage on orcas attacking yachts came from sources including The Guardian, The Atlantic and Yacht.de.
Frans de Waal’s article, Anthropomorphism and Anthropodenial: Consistency in Our Thinking about Humans and Other Animals, was published in the journal Philosophical Topics in 1999.
When your smartphone tries to be too smart
Back in the 1980s, the design expert Donald Norman was chatting to a colleague when his office phone rang. He finished his sentence before reaching for the phone, but that delay was a mistake. The phone stopped ringing and, instead, his secretary’s phone started ringing on a desk nearby. The call had been automatically re-routed. Alas, it was 6pm, and the secretary had gone home. Norman hurried over to pick up the second phone, only to find that it, too, had stopped ringing.
“Ah, it’s being transferred to another phone,” he thought. Indeed, a third phone in the office across the hall started to sound. As he stepped over, the phone went silent. A fourth phone down the hall started ringing. Was the call doomed to stagger between phones like a drunkard between lampposts? Or had a completely different call coincidentally come in?
Norman tells the story in The Design of Everyday Things, the opening chapter of which is a collection of psychopathic objects, from bewildering telephone systems to rows of glass doors in building lobbies that simply offer no clue whether to push or pull or even where the hinges are.
“Pretty doors,” jokes Norman. “Elegant. Probably won a design award.”
Reading Norman’s book more than three decades after its publication in 1988, it is striking how much the surface of things has changed. We no longer have to deal with incomprehensible telephone systems or VHS recorders. Good design is not a niche luxury now, but viewed as an essential part of business. The world has scrambled to imitate the success of Apple, one of the world’s most valuable and admired companies, which is built on good design: beautiful, easy-to-use products.
And yet I wonder. The aviation safety expert Earl Wiener is famous for “Wiener’s Laws”, which include “whenever you solve a problem you usually create one”. The truth is that modern devices may seem simple and easy to use, but they are in fact fantastically complicated. Those complications are elegantly obscured until something goes wrong.
I thought of Wiener and Norman recently as I arrived in Amsterdam, equipped with a Eurostar ticket barcode on my phone. Problem: the Eurostar exit barrier in Amsterdam is also the ticket gate for a variety of metropolitan rail services. As I tried to scan the barcode, the ticket barrier perceived my phone as a wannabe contactless credit card, and charged me for a local rail journey instead.
This is the logical result when two paths of technological improvement collide. Path one: replace a fussy magnetic strip on a paper train ticket with a more flexible barcode. Then display the barcode on a phone. Path two: replace a paper travelcard with a contactless travelcard. Then replace the travelcard with a more flexible contactless credit card. Then add the contactless credit card function to a phone. Problem solved, and, as Wiener declared, when you solve a problem you create another one.
Not that I’m complaining. Paying for stuff with a phone is convenient and, I am told, very secure. Travel is vastly easier when I can wave my phone to buy anything from tram journeys in Amsterdam to smørrebrød in Copenhagen, all at fair exchange rates.
And yet the point remains: technological improvements can have unanticipated consequences. One example, from Guru Madhavan’s new book Wicked Problems, is the theft-prevention system installed in Seattle rental cars by a car-sharing company. The system was designed to prevent cars being towed away by thieves. It disabled the cars remotely if they were detected to be moving with the engine off.
But beautiful Seattle is served by numerous ferry services and, in 2017, renters taking the boat found themselves unable to restart their cars when the ferry docked. An anti-theft system in a car caused major delays to a regional ferry system in a way that was obvious in hindsight but hard to foresee.
As the systems engineer Nancy Leveson argues, safety is a property that emerges from how an entire system fits together. The same is true for everyday usability and reliability. Both the ferry fiasco and my problems at the Amsterdam ticket barrier resulted from an unexpected interaction of two systems.
There is a way to switch off the contactless card function, but it is buried deep in the settings menu lest the phone seem too complicated.
Donald Norman argues that a well-designed product should make functions visible and intuitive: users should be able to grasp how it works, what their options are and get feedback about the results of their actions. That is all very wise, but our modern devices have managed to become so intuitive and versatile by concealing from us how they really operate. Laying bare the true complexity of the supercomputers in our pockets would boggle the mind. We cannot be exposed to how these things really work, lest we lose our grasp on reality. (See also: ChatGPT.)
And so we carry around these pocket miracles, and very useful they are too — until something unfortunate happens. It’s brilliant to have your tickets, keys, phone, address book and cash all in the same little box of delights, as long as you don’t drop the box of delights down the toilet. As Earl Wiener put it, “Digital devices tune out small errors while creating opportunities for large errors.”
Next time, I’ll print the ticket.
Written for and first published in the Financial Times on 10 May 2024.
May 30, 2024
The lesson of Loki? Trade less
The pages of the Financial Times are not usually a place for legends about ancient gods, but perhaps I can be indulged in sharing one with a lesson to teach us all.
More than a century ago, Odin, All-father, greatest of the Norse gods, went to his wayward fellow god Loki, and put him in charge of the stock market. Odin told Loki that he could do whatever he wanted, on condition that across each and every 30-year period, he ensured that the market would offer average annual returns between 7 and 11 per cent. If he flouted this rule, Odin would tie Loki under a serpent whose fangs would drip poison into Loki’s eyes from now until Ragnarök.
Loki is notoriously malevolent, and no doubt would love to take the wealth of retail investors and set it on fire, if he could. But when faced with such a — shall we say binding? — constraint, what damage could he really do? He could do plenty, says Andrew Hallam, author of Balance and other books about personal finance. Hallam uses the image of Loki as the malicious master of the market to warn us all against squandering the bounties of equity markets.
All Loki would have to do is ensure the market zigged and zagged around unpredictably. Sometimes it would deliver apparently endless bull runs. At other times it would plunge without mercy. It might alternate mini-booms and mini-crashes; it might trade sideways; it might repeat old patterns, or it might do something that seemed quite new. At every moment, the aim would be to trick investors into doing something rash.
None of that would deliver Loki’s goals if we humans weren’t so easy to fool. But we are. You can see the damage in numbers published by the investment research company Morningstar; last year it found a shortfall in annual returns of 1.7 percentage points between what investors make and the performance delivered by the funds in which they invested.
There is nothing strange about investors making a different return from the funds in which they invest. Fund returns are calculated on the basis of a lump-sum buy-and-hold investment. But even the most sober and sensible retail investor is likely to make regular payments, month by month or year by year. As a result, their returns will be different, maybe better and maybe worse.
Somehow, it’s always worse. The gap of 1.7 percentage points a year is huge over the course of a 30-year investment horizon. A 7.2 per cent annual return will multiply your money eightfold over 30 years, but subtract the performance shortfall and you get 5.5 per cent a year, or less than a fivefold return in 30 years.
Why does this happen? The primary reason is that Loki’s mischievous gyrations tempt us to buy when the market is booming and to sell when it’s in a slump. Ilia Dichev, an economist at Emory University, found in a 2007 study that retail investors tended to pile into markets when stocks were doing well, and to sell up when they were languishing. (Without wishing to burden the long-suffering reader with technical details, it turns out that buying high and selling low is a bad investment strategy.)
One possible explanation for this behaviour is that investors are deeply influenced by what they’ve seen the stock market doing across their lives so far. The economists Ulrike Malmendier and Stefan Nagel have found that the lower the returns investors have personally witnessed, the less money they are likely to put into the stock market. This means that bear markets scare investors away from their biggest buying opportunities.
Another study, by Brad Barber and Terrance Odean, looked at retail investors in the early 1990s, and found that they traded far too often. Active traders underperformed by more than 6 percentage points annually. Slumbering investors saw a much better performance. The sticker price of making a trade has plummeted since then, of course. Alas, the cost of making a badly timed trade is as high as ever.
Morningstar found that the gap between investment and investor returns is largest for more specialist investments such as sector equity funds or non-traditional equity funds. The gap is smaller for plain vanilla equity and smaller still for allocation funds, which hold a blend of stocks and bonds and automate away investor choices. That suggests that the investors who are trying to be clever are the most likely to fall short, while those who make the fewest possible decisions will lose out by the smallest amount.
I am always hearing that people should be more engaged with investing, and up to a point that is true. People who feel ignorant about how equity investing works, and who therefore stick their money in a bank account or under a mattress, are avoiding only modest risks and giving up huge potential returns.
But you can have too much of a good thing. Twitchily checking and rearranging your portfolio is a great way to get sucked into poorly timed trades. The irony is that the new generation of investment apps work the same way as almost any other app on your phone: they need your attention and have plenty of ways to get it.
Recent research by the Behavioural Insights Team, commissioned by regulators in Ontario, found that gamified apps — offering unpredictable rewards, leader boards and badges for activity — simply encouraged investors to trade more often. Perhaps Loki was involved in the app development process?
I’ve called this the Investor’s Tragedy. The more attention we pay to our investments, the more we trade, and the cleverer we try to be, the less we will have at the end of it all.
Written for and first published in the Financial Times on 26 January 2024.
My first children’s book, The Truth Detective is now available (not US or Canada yet – sorry).
I’ve set up a storefront on Bookshop in the United States and the United Kingdom. Links to Bookshop and Amazon may generate referral fees.
May 23, 2024
Cautionary Tales – When the robots take over… Cautionary Questions
Tim Harford is joined by Jacob Goldstein to answer your questions. Does winning the lottery make you unhappy? Is Bitcoin bad for the economy? When does correlation imply causation? And what will Tim and Jacob do when the robot overlords come for their jobs?
Fossil fuels could have been left in the dust 25 years ago
Gordon Moore’s famous prediction about computing power must count as one of the most astonishingly accurate forecasts in history. But it may also have been badly misunderstood — in a way that now looks like a near-catastrophic missed opportunity. If we had grasped the details behind Moore’s Law in the 1980s, we could be living with an abundance of clean energy by now. We fumbled it.
A refresher on Moore’s Law: in 1965, electronics engineer Gordon Moore published an article noting that the number of components that could efficiently be put on an integrated circuit was roughly doubling every year. “Over the short term this rate can be expected to continue, if not increase,” he wrote. “There is no reason to believe it will not remain nearly constant for at least 10 years. That means, by 1975, the number of components per integrated circuit for minimum cost will be 65,000.”
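Moore's 1965 arithmetic can be checked directly. A minimal sketch, taking a starting figure of 64 components as an illustrative assumption (Moore's paper put the then-optimum at roughly that level):

```python
# Moore's 1965 projection, sketched: components per chip doubling
# every year for a decade. The starting count of 64 is an
# illustrative assumption, not a figure from the column above.
components = 64
for year in range(1965, 1975):
    components *= 2  # one doubling per year

# Ten doublings multiply the count by 2**10 = 1,024:
# 64 * 1,024 = 65,536 -- close to Moore's round "65,000".
print(components)  # 65536
```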
That component number is now well into the billions. Moore adjusted his prediction in 1975 to doubling every two years, and the revised law has remained broadly true ever since, not only for the density of computer components but for the cost, speed and power consumption of computation itself. The question is, why?
The way Moore formulated the law, it was just something that happened: the sun rises and sets, the leaves that are green turn to brown, and computers get faster and cheaper.
But there’s another way to describe technological progress, and it might be better if we talked less about Moore’s Law, and more about Wright’s Law. Theodore Wright was an aeronautical engineer who, in the 1930s, published a Moore-like observation about aeroplanes: they were getting cheaper in a predictable way. Wright found that the second of any particular model of aeroplane would be 20 per cent cheaper to make than the first, the fourth would be 20 per cent cheaper than the second, and every time cumulative production doubled, the cost of making an additional unit would drop by a further 20 per cent.
A key difference is that Moore’s Law is a function of time, but Wright’s Law is a function of activity: the more you make, the cheaper it gets. What’s more, Wright’s Law applies to a huge range of technologies: what varies is the 20 per cent figure. Some technologies resist cost improvements. Others, such as solar photovoltaic modules, become much cheaper as production ramps up.
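Wright's observation compounds into a simple formula: if each doubling of cumulative production cuts unit cost by 20 per cent, the cost of the n-th unit is the cost of the first multiplied by n raised to the power log2(0.8). A minimal sketch (the function name and starting cost are invented for illustration):

```python
import math

def wright_unit_cost(first_unit_cost, n, learning_rate=0.20):
    """Cost of the n-th unit under Wright's Law: each doubling of
    cumulative production cuts unit cost by `learning_rate`."""
    exponent = math.log2(1 - learning_rate)  # log2(0.8) for a 20% rate
    return first_unit_cost * n ** exponent

# With a first unit costing 100:
# the 2nd unit costs 80 (20% cheaper than the 1st),
# the 4th unit costs 64 (20% cheaper than the 2nd),
# and by the 1,024th unit -- ten doublings -- cost has fallen to
# 100 * 0.8**10, roughly 10.7: nearly a tenth of the original.
```

Note that time appears nowhere in the formula: only cumulative output does, which is exactly the difference from Moore's Law that the column draws out.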
In a new book, Making Sense of Chaos, the complexity scientist Doyne Farmer points out that both Moore's Law and Wright's Law provide a good basis for forecasting the costs of different technologies. Both nicely describe the patterns that we see in the data. But which one is closer to identifying the underlying causes of these patterns? Moore's Law suggests that products get cheaper over time, and because they are cheaper, they are then demanded and produced in larger quantities. Wright's Law suggests that rather than falling costs spurring production, it's mass production that causes costs to fall.
And therein lies the missed opportunity. We acted as though Moore’s Law governed the cost of photovoltaics. While there were of course subsidies for solar PV in countries such as Germany, the default view was that it was too expensive to be much use as a large-scale power source, so we should wait and hope that it would eventually become cheap. If instead we had looked through the lens of Wright’s Law, governments should have been falling over themselves to buy or otherwise subsidise expensive solar PV, because the more we bought, the faster the price would fall.
PV is now so cheap that the question is moot. Yet if we had acted more boldly 40 years ago, solar PV might have been cheap enough to put fossil fuels out of business at the turn of the millennium.
That, of course, presupposes that Wright’s Law really does apply. It might not. Perhaps technological progress depends more on a stream of results from university research labs, and cannot be rushed — in which case, patience is the relevant virtue and a huge splurge on new technologies would be a waste of money.
So — Moore’s Law, or Wright’s Law? Farmer and his colleagues Diana Greenwald and François Lafond turned to the second world war for data. After 1939, the US vastly expanded production of military hardware, from radar to blankets. We can be confident that this was because of the wartime needs of the US and its allies, not because President Roosevelt noticed that tank manufacturers were offering some great discounts. Across a large range of products, Farmer, Greenwald and Lafond found that Wright’s Law explained about half of the fall in production costs during the war.
As Farmer writes, “we can say with some confidence that increasing cumulative production can drive prices down, even if this is not the full story”. Buy more, and they get cheaper.
Wright’s Law isn’t magic, and although it seems to apply to many products, it is rare for the price declines on offer to be as spectacular as those for aeroplanes, solar PV and computer chips. Still, where the data suggest that Wright’s Law holds strongly, governments can drive down prices by subsidising production or demand, one way or another. The individual incentive, after all, is to be a late adopter.
Moore himself saw his own prediction as a challenge, and co-founded the chip manufacturer Intel. Ironically, Moore seems to have been more of a follower of Wright’s Law. Moore’s Law suggests that good things come to those who wait. Wright’s Law says that good things come to those who act.
Written for and first published in the Financial Times on 26 April 2024.