But in most real-life scenarios, we’re interacting with 8 billion other people; their choices affect ours, and ours affect theirs. And that’s much harder. We’re trying to live our best life, but they are too. What’s the equilibrium that emerges when everyone pursues their best strategy? That’s what game theory is all about.
Or maybe he was lying about having bluffed. The point is, an “understanding” isn’t worth very much when there’s no trust established, no enforcement mechanism, and each player would benefit from screwing the other guy over.
Although solvers might seem like a natural fit for Selbst, whose mother drilled her on logic puzzles when she was growing up in Brooklyn, she’s instead a deep believer in exploitative play: figure out what your opponents are doing wrong and take advantage of it.
In one experiment, for example, Coates studied the testosterone levels of a group of traders at a high-frequency London trading shop. He found that their testosterone was significantly higher following days when they’d made an above-average profit. But the converse was also true. Coates had also been checking their testosterone in the morning and discovered that they had substantially better trading days when they woke up with higher T levels. Higher testosterone predicted more trading success, in other words. More testosterone, more profit. What could possibly go wrong?
The bodily transformation that Coates’s traders experienced made for a positive feedback loop. They’d have a winning day, build up more testosterone, and take on more risk. Because most traders start out being too risk-averse, at first this helped; they were getting closer to the optimal, profit-maximizing level of risk. So they’d have more winning days, get yet more T, and take on yet more risk. You can probably guess what happened next. Before long, they were the equivalent of steroid-infused meatheads, barreling right on past optimal to dangerous, potentially catastrophic, Sam Bankman-Fried
...more
Coates believes that these biological factors account for much of the “irrational exuberance” that makes for financial bubbles. Traders can experience euphoria. A bull market “releases cortisol, and in combination with dopamine, one of the most addictive drugs known to the human brain, it delivers a narcotic hit, a rush, a flow that convinces traders there is no other job in the world.”
In fact, Coates’s studies found, the most successful traders had more changes in their body chemistry in response to risk. “We were finding this in the very best traders. Their endocrine response was opposite of what I expected when I went in,” he said. “You would think that someone who’s really under control would have a very muted physiological reaction to taking risk. But in fact, it’s the other way around.”
That “in the zone” risk response I described earlier feels like you’ve entered one of those dreamlike soap operas with a higher frame rate. You’re fully immersed in the task at hand, with a heightened attention to detail and mastery that comes from a deeply intuitive place—you know what to do without “thinking” about it. It’s not a sense of calm so much as a sense of clarity. When I think about the times I’ve been in a zone or flow state, it’s often been in response to stress. It’s happened at big moments in poker tournaments, occasionally during public speaking appearances, and even a couple
...more
In poker, you have to be comfortable turning your subjective feelings into probabilities and acting on them. And in other enterprises involving risk, you have to be willing to make at least a good first-pass estimate. “It’s not like you get a spreadsheet out and actually try to mathematically determine that, because I think there’s a hubris of precision,” said Vescovo of his process for estimating the dangers when flying fighter jets or climbing mountains. “But there certainly are tendencies of this is really dangerous or that is not.”
Successful risk-takers are conscientiously contrarian. They have theories about why and when the conventional wisdom is wrong.
“The thing about hedge funds, it’s the only industry in the world where not only do you have to be right, but everybody else also has to be wrong,” said Galen Hall, who earned more than $4 million from poker tournaments between 2011 and 2015 but nevertheless left to work for Bridgewater Associates (he was the one who recruited his friend Selbst there). “Where[as] if you’re a doctor, if you go by the book, you do the accepted thing, you do the convention every single time, you’re a pretty fucking good doctor, right?”
“Every single thing we do, I can point to, like, ‘here is the person who was doing a thing wrong,’ ” said Hall. “We build a map of all of the players in the world…who are trading for reasons other than they want to generate some alpha.[*15] You know, somebody got added to the S&P 500. So now all the S&P 500 ETFs out there have to buy this company. Or it’s a CEO of a startup, he’s now a billionaire. He has ninety-nine percent of his net worth in the company. He’s allowed to sell his shares on this day [and] he’s probably going to sell a bunch.”
So be a conscientious contrarian—look for flaws in people’s incentives rather than their intelligence—and then seek out a place where your own incentives are well-aligned with your goals.
But on the “personality cluster”—competitiveness, risk tolerance, independent-mindedness often to the point of contrarianism—Silicon Valley is off the charts, even compared to Wall Street. And it is quite proud of this. “We believe in embracing variance, in increasing interestingness,” wrote Marc Andreessen, the Netscape cofounder turned VC whose egg-shaped head is synonymous in the Valley with hard-boiled, stubborn resolve, in his October 2023 “Techno-Optimist Manifesto.” “We believe in risk,” he wrote, italicizing the word “risk,” “in leaps into the unknown.”
So here’s my theory of the secret to Silicon Valley’s success. It marries risk-tolerant VCs like Moritz with risk-ignorant founders like Musk: a perfect pairing of foxes and hedgehogs. The founders may take risks that are in some sense irrational, not because the payoff isn’t there but because of diminishing marginal returns. (If you had a net worth of $1 million, would you gamble it all on a 1-in-50 chance of winning $200 million—and a 98 percent chance of having to start over from scratch? The EV of the bet is +$3 million, but I probably wouldn’t.) But if the VCs can herd enough hedgehogs
...more
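The arithmetic in that parenthetical is easy to check, and a short sketch also shows why diminishing marginal returns change the answer. Here log utility stands in for diminishing returns (a standard economist's assumption, not something the text specifies), and the $10,000 “starting over” floor is a made-up figure so the logarithm is defined:

```python
import math

p_win = 1 / 50           # 1-in-50 chance the long shot hits
prize = 200_000_000      # payoff if it does
stake = 1_000_000        # the founder's entire net worth
floor = 10_000           # hypothetical residual wealth when "starting over"

# Expected value of taking the gamble, relative to keeping the $1 million:
ev = p_win * prize + (1 - p_win) * 0 - stake
# ev works out to the +$3 million from the text (up to float rounding).

# Log utility models diminishing marginal returns: each extra dollar
# matters less the richer you already are.
u_keep = math.log(stake)
u_gamble = p_win * math.log(prize) + (1 - p_win) * math.log(floor)
# u_gamble < u_keep: the bet is positive-EV in dollars, but a
# log-utility founder still rationally declines it.
```

This is the fox-and-hedgehog pairing in miniature: a diversified VC can afford to price the bet at its dollar EV, while the all-in founder cannot.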
The Village is about group allegiance, while Silicon Valley is individualistic.
Don’t get me wrong: I think expertise is badly needed in society and that we’d drive ourselves crazy if we didn’t defer to the expert consensus most of the time. But increasingly “trust the experts” or “trust the science” is used as a political cudgel, such as during many controversies about the COVID-19 pandemic. Meanwhile, as David Shor found out, citing the experts isn’t always welcome if it doesn’t match the Village’s political objectives. Something has changed when skepticism—something I’d always thought of as the province of liberals—is instead being championed by conservatives like
...more
If this sounds like a self-fulfilling prophecy, that’s exactly the point. Grand Central is what Schelling calls a “focal point.” Although a focal point is an important idea in game theory, it’s more intuitive than something like the prisoner’s dilemma—so don’t worry, we’re not going to need any more of those 2 × 2 matrices. In the “game” that we’re playing—Schelling won the Nobel Prize partly for extending game theory from zero-sum games to those where players may benefit from cooperation, such as in avoiding a nuclear war—we both win if we find somewhere to meet and we both lose if we don’t.
...more
“[SBF] was the kind of person that Silicon Valley wanted to believe would take over the market,” said Haseeb Qureshi, a managing partner at the crypto fund Dragonfly. “He was an MIT graduate guy with crazy hair, who kind of said all the right things. He was totally nonconformist and countercultural and they’re like, oh, that’s who should be running the crypto market.”
The first time this really sank in with me was during COVID. Emily Oster, a Brown economist who writes the newsletter ParentData, drew tremendous criticism for suggesting that people had to work out their COVID routines through cost-benefit analysis rather than treating the coronavirus as a death sentence that they needed to avoid at all costs.
Is this way of thinking about the world—quantifying hard-to-quantify things, engaging in cost-benefit analysis in situations where people might not think to apply it—unique to effective altruism? No. It’s common everywhere in the River, a hallmark of what I described in the introduction as decoupling, meaning the propensity to analyze an issue divorced from its larger context.
It is tragic that people gave more than $500 million to the Harvard endowment in fiscal year 2022 when it was already worth more than $50 billion—instead of giving to real charities. (Seriously, don’t give a cent to the endowment of Harvard or another elite private college.) GiveWell—founded by Holden Karnofsky and Elie Hassenfeld, alumni of the hedge fund Bridgewater Associates who were inspired by Singer and shocked to discover how little information there was about how effective charities were at meeting their goals—is a good place to start when looking for alternatives.
This is the principle of impartiality, the idea that we ought to regard people further removed from us (an unknown child in India) as being just as morally worthy as a drowning child in our hometown—or even as worthy as our own children.
In MacAskill’s book What We Owe the Future, it is also extended to future people: someone born in the year 3024 is just as valuable as an infant born today. (This idea, called “longtermism,” is controversial even among EAs.) Singer has also suggested that impartiality should be extended to artificial intelligences that achieve sentience.
Having built quite a few statistical models myself, I know that it’s hard to do, in part because there are two sorts of errors one can make. This is slightly technical—there’s a longer discussion in The Signal and the Noise if you want to go deeper—but one problem is called “overfitting.” Basically, that means trying to accommodate every nook and cranny in the dataset, sometimes through highly contrived strategies.
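Overfitting is easy to demonstrate in a few lines. The setup below is a toy of my own construction, not an example from the text: ten noisy points around a straight line, fit once with a line and once with a degree-9 polynomial that can thread every nook and cranny:

```python
import warnings

import numpy as np

warnings.simplefilter("ignore")  # the degree-9 fit triggers a conditioning warning
rng = np.random.default_rng(0)

# Ten noisy observations of a simple linear trend, a stand-in dataset.
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2.0 * x_train + rng.normal(0.0, 0.3, size=10)

# Fresh holdout points drawn from the same underlying process.
x_test = np.linspace(0.05, 0.95, 10)
y_test = 2.0 * x_test + rng.normal(0.0, 0.3, size=10)

def train_and_test_mse(degree):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

train_line, test_line = train_and_test_mse(1)  # straight line: honest fit
train_over, test_over = train_and_test_mse(9)  # chases every wiggle in the noise

# The degree-9 fit scores near-zero error on the data it was built from,
# but does far worse than the straight line on the holdout points: it has
# memorized the noise, not learned the trend.
```

The contrived high-degree strategy looks brilliant in-sample and falls apart out-of-sample, which is exactly the failure mode to guard against when building a model.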
First, there’s instrumental rationality. Basically this means: Do you adopt means suitable to your ends?
The second type is epistemic rationality. This means: Do you see the world for what it is? Do your beliefs line up with reality?
Why? Nobody seems to know exactly. In English, adjectives denoting physical attributes like speed come before color words; those are just the rules. Native speakers learn them instinctively. They’re programmed into our System 1—and after enough training, ChatGPT learns them too. “What’s really incredible about unsupervised learning is you don’t need to do any human feature engineering, the features are already there,” said Ryder. This surprised a lot of machine learning researchers—and it surprised me.
Put another way, it’s not that machines can do things more powerfully than humans that’s disconcerting. That’s been true since we began inventing machines; humanity is not threatened because a Chevy Bolt is faster than Usain Bolt. Rather, it’s that we’ve never invented a machine that worked so well while we understood so little about how it worked.
To some people, this might be okay. “The stuff in the Old Testament is weird and harsh, man. You know, it’s hard to vibe with. But as a Christian, I gotta take it,” said Jon Stokes, an AI scholar with accelerationist sympathies who is one of relatively few religious people in the field. “In some ways, actually, the deity is the original unaligned superintelligence. We read this and we’re like, man, why did he kill all those people? You know, it doesn’t make a lot of sense. And then your grandmother’s like, the Lord works in mysterious ways. The AGI will work in mysterious ways [too].”
The bare facts are these: (1) ever since Google’s landmark transformer paper, AI has been progressing at a much faster rate than nearly anybody save for Yudkowsky expected; (2) Silicon Valley is flooring the accelerator—Altman reportedly wants to raise $7 trillion for new facilities to manufacture semiconductor chips; (3) and yet, the world’s leading AI researchers don’t even understand very much about how any of this works. It is not only rational to have some fear about this; it would be irresponsible not to have some.
Meanwhile, during COVID, the most recent acute global crisis, the world performed miserably. I’m not one of those people who thinks you could have tweaked one or two things and prevented the pandemic. But even with every incentive to get it right,[*42] we got it nearly all wrong, winding up with a worst-of-all-possible-worlds outcome of both a massive death toll and unprecedented constraints on liberty, well-being, and economic activity—and we can barely lift a finger to prevent the next pandemic.
“People don’t take guillotines seriously. But historically, when a tiny group gains a huge amount of power and makes life-altering decisions for a vast number of people, the minority gets actually, for real, killed.”
Perhaps we should view Las Vegas as Erving Goffman saw it, as a last resort for unfulfilled demand for risk that might once have been channeled elsewhere. The poker room might not be the most productive outlet for the River’s energies, but at least your downside there is limited to the table stakes. Silicon Valley’s inventions are more of a gamble for all of us, however. My concern, as I observed at the beginning of our tour, is that our risk preferences have become bifurcated. Instead of a bell curve of risk-taking where most people are somewhere toward the middle, you have Musk at one
...more
The words in my motto are less familiar, but I’ve chosen them for their precision: agency, plurality, and reciprocity. Agency is a term I just defined in the last chapter, so I’ll repeat that definition here: it refers not merely to having options but to having good options where the costs and benefits are transparent, don’t require overcoming an undue amount of friction, and don’t risk entrapping you in an addictive spiral.
Plurality means not letting any one person, group, or ideology gain a dominant share of power. Gamblers know this concept; the most successful sports bettors, like Billy Walters, seek advice from a variety of human experts and computer models before placing their bets. Looking for consensus is nearly always more robust than assuming that any one model is good enough to beat the spread.
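The statistical intuition behind that claim can be sketched in a few lines: averaging several independent, equally noisy estimates shrinks the error variance by roughly the number of estimates. The numbers here (five models, unit noise, a “true” spread of 3) are illustrative inventions, not anything from Walters’s actual operation:

```python
import numpy as np

rng = np.random.default_rng(42)
true_spread = 3.0         # hypothetical "true" point spread for a game
n_models, n_games = 5, 10_000

# Each model's pick = the truth plus its own independent error (std 1.0).
picks = true_spread + rng.normal(0.0, 1.0, size=(n_games, n_models))

solo_mse = np.mean((picks[:, 0] - true_spread) ** 2)              # trust one model
consensus_mse = np.mean((picks.mean(axis=1) - true_spread) ** 2)  # average all five

# Averaging k independent estimates cuts the error variance by about 1/k,
# so the consensus lands roughly five times closer in mean-squared terms.
```

In practice the models and experts are correlated, so the gain is smaller than 1/k, which is exactly why serious bettors seek out genuinely diverse sources rather than five copies of the same view.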
Finally, there is reciprocity. This is the most Riverian principle of all, since it flows directly from game theory. Treat other people as intelligent and capable of reasonable strategic behavior. The world is dynamic, and although people may not be strictly rational, they’re usually smart about adapting to their situation and achieving the things that matter most to them. Play the long game.

