Kindle Notes & Highlights
Read between January 11 and March 5, 2023
The same challenge also appears in an even more fraught setting: dating. Optimal stopping is the science of serial monogamy.
As Carl Sagan put it, “Science is a way of thinking much more than it is a body of knowledge.”
When balancing favorite experiences and new ones, nothing matters as much as the interval over which we plan to enjoy them.
A sobering property of trying new things is that the value of exploration, of finding a new favorite, can only go down over time, as the remaining opportunities to savor it dwindle.
The next time you open your browser, you can be sure that the colors, images, text, perhaps even the prices you see—and certainly the ads—have come from an explore/exploit algorithm, tuning itself to your clicks. In this particular multi-armed bandit problem, you’re not the gambler; you’re the jackpot.
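Trying to make this concrete for myself: a tiny epsilon-greedy sketch of the explore/exploit loop being described. The ad names and click-through rates are invented, and real ad systems use fancier bandit algorithms, but the tension is the same: mostly show what has worked, occasionally gamble on something new.

```python
import random

# Made-up click-through rates; a real ad server learns these from live traffic.
true_rates = {"ad_A": 0.05, "ad_B": 0.11, "ad_C": 0.08}
shows = {ad: 0 for ad in true_rates}
clicks = {ad: 0 for ad in true_rates}
epsilon = 0.1  # fraction of the time we explore at random

for _ in range(10_000):
    if random.random() < epsilon or not any(shows.values()):
        ad = random.choice(list(true_rates))                          # explore: try something at random
    else:
        ad = max(shows, key=lambda a: clicks[a] / max(shows[a], 1))   # exploit: best observed rate so far
    shows[ad] += 1
    clicks[ad] += random.random() < true_rates[ad]                    # simulate whether "I" click

print({ad: round(clicks[ad] / max(shows[ad], 1), 3) for ad in true_rates})
```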
turn the library inside out. Put acquisitions in the back, for those who want to find them. And put the most recently returned items in the lobby, where they are ripe for the browsing.
I disagree. If the sole purpose of a library were to give customers books as fast as possible, then maybe yes. I think the library's purpose is nobler, though: to aid in discovery, to help readers find new ideas and new authors. Placing the most recently used books back on their shelves forces me, the patron, to walk through the library and search for that title, increasing my chance of discovering something new along the way.
“Some things that might seem frustrating as we grow older (like remembering names!) are a function of the amount of stuff we have to sift through … and are not necessarily a sign of a failing mind.” As he puts it, “A lot of what is currently called decline is simply learning.”
give a system an overwhelming number of trivial things to do, and the important things get lost in the chaos.
The comedian Mitch Hedberg recounts a time when “I was at a casino, I was minding my own business, and this guy came up and said, ‘You’re gonna have to move, you’re blocking the fire exit.’ As though if there was a fire, I wasn’t gonna run.” The bouncer’s argument was priority inversion.
“It really is true that the company will build whatever the CEO decides to measure.”
The perfect is the enemy of the good. —VOLTAIRE
Temporarily removing constraints, as in the minimum spanning tree and the “what if you won the lottery?” examples, is the most straightforward form of algorithmic relaxation.
“Technically, you don’t have to do anything,” his mother replied. “You don’t have to do what your teachers tell you. You don’t have to do what I tell you. You don’t even have to obey the law. There are consequences to everything, and you get to decide whether you want to face those consequences.” Brian’s kid-mind was blown. It was a powerful message, an awakening of a sense of agency, responsibility, moral judgment. It was something else, too: a powerful computational technique called Lagrangian Relaxation.
When an optimization problem’s constraints say “Do it, or else!,” Lagrangian Relaxation replies, “Or else what?” Once we can color outside the lines—even just a little bit, and even at a steep cost—problems become tractable that weren’t tractable before.
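To see how "or else what?" plays out, I sketched a toy task-picking problem of my own (nothing from the book): the hard hours budget stops being a wall and becomes a price per hour of overrun, set by the multiplier lam.

```python
from itertools import combinations

# Invented task list: name -> (hours, value). Hard constraint: at most `budget` hours.
tasks = {"report": (5, 8), "emails": (1, 2), "talk": (4, 9), "errand": (2, 3)}
budget = 6
lam = 2.0  # the multiplier: the price, in value, of each hour over budget

def relaxed_score(subset):
    hours = sum(tasks[t][0] for t in subset)
    value = sum(tasks[t][1] for t in subset)
    # Instead of forbidding overruns, charge for them: "or else what?" now has an answer.
    return value - lam * max(0, hours - budget)

subsets = (s for r in range(len(tasks) + 1) for s in combinations(tasks, r))
best = max(subsets, key=relaxed_score)
print(best, relaxed_score(best))
```

With these made-up numbers the best relaxed plan actually runs one hour over budget, because the extra value outweighs the penalty. That is the point: once violating the constraint has a price rather than being forbidden, the search becomes easy.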
Walpole coined the term “serendipity,” based on the fairy tale adventures of The Three Princes of Serendip (Serendip being the archaic name of Sri Lanka), who “were always making discoveries, by accidents and sagacity, of things they were not in quest of.”
At the heart of TCP congestion control is an algorithm called Additive Increase, Multiplicative Decrease, or AIMD.
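AIMD in miniature, as I understand it (my own toy simulation, with an invented link capacity and loss model): add one packet to the window on every successful round, cut the window in half on any loss. The printout shows the characteristic sawtooth.

```python
import random

window = 1.0       # congestion window, in packets
capacity = 20      # invented link capacity, in packets per round

for round_num in range(40):
    dropped = window > capacity or random.random() < 0.02  # drop if we overshoot (plus a little random loss)
    if dropped:
        window = max(1.0, window / 2)   # multiplicative decrease: back off hard
    else:
        window += 1.0                   # additive increase: probe gently for more room
    print(f"round {round_num:2d}: window = {window:4.1f}")
```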
Companies speak of “flat” hierarchies and “tall” hierarchies, but they might consider speaking of dynamic ones.
In fact, it’s now clear that the cause and effect are often the reverse: a poor listener destroys the tale.
We use the idiom of “dropped balls” almost exclusively in a derogatory sense, implying that the person in question was lazy, complacent, or forgetful. But the tactical dropping of balls is a critical part of getting things done under overload.
The most prevalent critique of modern communications is that we are “always connected.” But the problem isn’t that we’re always connected; we’re not. The problem is that we’re always buffered. The difference is enormous.
You go on vacation and come home to a mountain of correspondence. It used to be that people knocked on your door, got no response, and went away. Now they’re effectively waiting in line when you come home.
The much-lamented “lack of idleness” one reads about is, perversely, the primary feature of buffers: to bring average throughput up to peak throughput. Preventing idleness is what they do. You check email from the road, from vacation, on the toilet, in the middle of the night. You are never, ever bored. This is the mixed blessing of buffers, operating as advertised.
All employees want, in theory, to take as much vacation as possible. But they also all want to take just slightly less vacation than each other, to be perceived as more loyal, more committed, and more dedicated (hence more promotion-worthy). Everyone looks to the others for a baseline, and will take just slightly less than that. The Nash equilibrium of this game is zero.
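A quick best-response simulation of this vacation race, with made-up starting plans, just to watch the race to the bottom play out:

```python
days = [20.0, 18.0, 25.0]  # made-up starting vacation plans, in days

for _ in range(100):
    lowest = min(days)
    # Everyone's best response: plan one day less than the most "dedicated" colleague.
    days = [max(0.0, lowest - 1.0) for _ in days]

print(days)  # settles at [0.0, 0.0, 0.0]: the Nash equilibrium
```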
“The worry that people will leave relationships because it may later become rational for them to do so is largely erased if it is not rational assessment that binds them in the first place.”
What’s more, being able to fall involuntarily in love makes you, in turn, a more attractive partner to have. Your capacity for heartbreak, for sleeping with the emotional fishes, is the very quality that makes you such a trusty accomplice.
Marriage and love -- as changing the stakes of the prisoner's dilemma -- is such a bold take. Ideas like this are WHY I read. Delicious.
An enormously influential paper by the economists Sushil Bikhchandani, David Hirshleifer, and Ivo Welch has demonstrated that under the right circumstances, a group of agents who are all behaving perfectly rationally and perfectly appropriately can nonetheless fall prey to what is effectively infinite misinformation. This has come to be known as an “information cascade.”
Information cascades offer a rational theory not only of bubbles, but also of fads and herd behavior more generally. They offer an account of how it’s easily possible for any market to spike and collapse, even in the absence of irrationality, malevolence, or malfeasance.
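A toy simulation of the mechanism, to convince myself it works. The 70% signal accuracy and the simple vote-counting rule are my simplifications, not the Bikhchandani–Hirshleifer–Welch setup, but the failure mode is the same: once the public tally outweighs any single private signal, everyone copies the crowd, right or wrong.

```python
import random

def run_cascade(n_agents=20, signal_accuracy=0.7, truth=1):
    choices = []
    for _ in range(n_agents):
        # Private information: a signal that is right 70% of the time.
        signal = truth if random.random() < signal_accuracy else 1 - truth
        # Public information: a running tally of what everyone before me chose.
        public = sum(+1 if c == 1 else -1 for c in choices)
        private = +1 if signal == 1 else -1
        score = public + private
        choices.append(1 if score > 0 else 0 if score < 0 else signal)  # tie: trust my own signal
    return choices

random.seed(0)
for _ in range(3):
    print(run_cascade())  # sometimes the whole line locks onto the wrong answer early on
```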
The takeaways are several. For one, be wary of cases where public information seems to exceed private information, where you know more about what people are doing than why they’re doing it, where you’re more concerned with your judgments fitting the consensus than fitting the facts. When you’re mostly looking to others to set a course, they may well be looking right back at you to do the same.
Last, we should remember from the prisoner’s dilemma that sometimes a game can have irredeemably lousy rules. There may be nothing we can do once we’re in it, but the theory of information cascades may help us to avoid such a game in the first place. And if you’re the kind of person who always does what you think is right, no matter how crazy others think it is, take heart. The bad news is that you will be wrong more often than the herd followers. The good news is that sticking to your convictions creates a positive externality, letting people make accurate inferences from your behavior.
Even the best strategy sometimes yields bad results—which is why computer scientists take care to distinguish between “process” and “outcome.” If you followed the best possible process, then you’ve done all you can, and you shouldn’t blame yourself if things didn’t go your way.
Likewise, seemingly innocuous language like “Oh, I’m flexible” or “What do you want to do tonight?” has a dark computational underbelly that should make you think twice. It has the veneer of kindness about it, but it does two deeply alarming things. First, it passes the cognitive buck: “Here’s a problem, you handle it.” Second, by not stating your preferences, it invites the others to simulate or imagine them. And as we have seen, the simulation of the minds of others is one of the biggest computational challenges a mind (or machine) can ever face.
The first player writes down as many unique positive numbers on different slips of paper as he wishes. Then he shuffles them and turns them over one at a time. If the second player tells him to stop at a certain slip and the number on that slip is the largest number in the collection then the second player wins. If not, the first player wins.
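I simulated the second player's side of this game with the look-then-leap rule from the optimal stopping chapter: pass on roughly the first n/e slips, then stop at the first number that beats everything seen so far. The slip count and number range below are arbitrary choices of mine.

```python
import math
import random

def play(n=50):
    slips = random.sample(range(1, 10 * n), n)   # first player's unique numbers, in shuffled order
    cutoff = int(n / math.e)                     # look at ~37% of the slips without committing
    best_seen = max(slips[:cutoff])
    for value in slips[cutoff:]:
        if value > best_seen:
            return value == max(slips)           # leap: stop here; win only if it's the true maximum
    return False                                 # never leapt, so the maximum was in the look phase

trials = 20_000
wins = sum(play() for _ in range(trials))
print(f"second player wins {wins / trials:.1%} of the time")  # hovers around 37%
```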