Algorithms to Live By: The Computer Science of Human Decisions
3%
Assuming that his search would run from ages eighteen to forty, the 37% Rule gave age 26.1 years as the point at which to switch from looking to leaping.
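The arithmetic behind that figure is easy to check; a quick sketch (the 18-to-40 window comes from the passage, everything else is just the 37% cutoff, i.e. 1/e ≈ 0.368 rounded):

```python
# Apply the 37% Rule to a search window running from age 18 to 40:
# spend the first 37% of the window looking, then leap for the first
# option better than everything seen so far.
start, end = 18, 40
switch_point = start + 0.37 * (end - start)
print(round(switch_point, 1))  # 26.1
```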
3%
straightforward mathematical solution: propose early and often.
3%
the 37% Rule says you should start making offers after just a quarter of your search.
4%
Full information means that we don’t need to look before we leap. We can instead use the Threshold Rule,
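A minimal sketch of how such a threshold rule can be computed, under a simplifying assumption of mine: the objective here is the expected percentile of whoever you end up with (slightly simpler than the book's goal of maximizing the chance of landing the single best applicant), with percentiles modeled as independent Uniform(0, 1) draws:

```python
# Full-information Threshold Rule sketch (simplified objective:
# maximize the EXPECTED percentile of the applicant you accept).
def thresholds(n):
    """threshold[m] = lowest percentile worth accepting when
    m applicants are still to come after the current one."""
    t = [0.0]  # last applicant: you must take whoever it is
    for _ in range(n - 1):
        # the value of walking away equals the expected percentile
        # from playing optimally over the remaining applicants
        t.append((1 + t[-1] ** 2) / 2)
    return t

print([round(x, 3) for x in thresholds(5)])
# [0.0, 0.5, 0.625, 0.695, 0.742] -- the bar rises when more
# applicants are still left to see
```

The point carries over to the book's version: with full information, each decision is a simple comparison against a precomputed bar, with no look-before-you-leap phase required.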
4%
But the lessons to be learned from optimal stopping aren’t limited to dating or hiring. In fact, trying to make the best choice when options only present themselves one by one is also the basic structure of selling a house, parking a car, and quitting when you’re ahead.
7%
When balancing favorite experiences and new ones, nothing matters as much as the interval over which we plan to enjoy them.
7%
the value of exploration, of finding a new favorite, can only go down over time,
7%
the value of exploitation can only go up over time.
9%
“To try and fail is at least to learn; to fail to try is to suffer the inestimable loss of what might have been.”
9%
Regret is the result of comparing what we actually did with what would have been best in hindsight.
9%
an Upper Confidence Bound algorithm doesn’t care which arm has performed best so far; instead, it chooses the arm that could reasonably perform best in the future.
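A minimal sketch of one common variant, UCB1 (the passage doesn't commit to a particular formula, so the exact bonus term below is an assumption): each arm's score is its observed average plus an uncertainty bonus that shrinks as that arm gets pulled more, so the algorithm favors whichever arm could still reasonably be best:

```python
import math
import random

def ucb1(true_payoffs, pulls=10_000, seed=0):
    """Pull a set of slot-machine arms with unknown payoff rates."""
    rng = random.Random(seed)
    n_arms = len(true_payoffs)
    counts = [0] * n_arms      # times each arm has been tried
    totals = [0.0] * n_arms    # total reward from each arm
    for t in range(1, pulls + 1):
        if t <= n_arms:
            arm = t - 1        # try every arm once to start
        else:
            # choose the arm that could reasonably perform best:
            # observed mean + exploration bonus
            arm = max(
                range(n_arms),
                key=lambda a: totals[a] / counts[a]
                + math.sqrt(2 * math.log(t) / counts[a]),
            )
        reward = 1.0 if rng.random() < true_payoffs[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward
    return counts

print(ucb1([0.3, 0.5, 0.6]))  # the vast majority of pulls end up on the 0.6 arm
```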
10%
Following the advice of these algorithms, you should be excited to meet new people and try new things—to assume the best about them, in the absence of evidence to the contrary. In the long run, optimism is the best prevention for regret.
12%
If the probabilities of a payoff on the different arms change over time—what has been termed a “restless bandit”—the problem becomes much harder.
12%
when the world can change, continuing to explore can be the right choice.
12%
if you treat every decision as if it were your last, then indeed only exploitation makes sense. But over a lifetime, you’re going to make a lot of decisions. And it’s actually rational to emphasize exploration—the new rather than the best, the exciting rather than the safe, the random rather than the considered—for many of those choices, particularly earlier in life.
12%
This process seems to be a deliberate choice: as people approach the end of their lives, they want to focus more on the connections that are the most meaningful.
12%
The deliberate honing of a social network down to the most meaningful relationships is the rational response to having less time to enjoy them.
13%
The Gittins index and the Upper Confidence Bound, as we’ve seen, inflate the appeal of lesser-known options beyond what we actually expect, since pleasant surprises can pay off many times over. But at the same time, this means that exploration necessarily leads to being let down on most occasions.
16%
Sorting something that you will never search is a complete waste; searching something you never sorted is merely inefficient.
18%
This move from “ordinal” numbers (which only express rank) to “cardinal” ones (which directly assign a measure to something’s caliber) naturally orders a set without requiring pairwise comparisons.
18%
Having a benchmark—any benchmark—solves the computational problem of scaling up a sort.
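A toy sketch of the benchmark idea (the names and times below are invented): once each item carries its own measured score, ranking is a pass over the scores, and results measured separately can be merged without any head-to-head comparison:

```python
# Two races timed independently against the same benchmark (a clock).
race_a = {"Ann": 9850, "Bo": 10212}    # finish times in seconds
race_b = {"Cy": 9901, "Di": 10007}

combined = {**race_a, **race_b}
ranking = sorted(combined, key=combined.get)  # order by the cardinal score
print(ranking)  # ['Ann', 'Cy', 'Di', 'Bo'] -- no runner ever raced another directly
```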
19%
In the 1990s this began to be known as the “memory wall.” Computer science’s best defense against hitting that wall has been an ever more elaborate hierarchy: caches for caches for caches, all the way down.
19%
this making of room is called “cache replacement” or “cache eviction.”
19%
The hypothetical all-knowing, prescient algorithm that would look ahead and execute the optimal policy is known today in tribute as Bélády’s Algorithm. Bélády’s Algorithm is an instance of what computer scientists call a “clairvoyant” algorithm: one informed by data from the future.
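A minimal sketch of that clairvoyant rule: with the whole future request sequence in hand, evict whichever cached item will be needed again furthest in the future (or never again):

```python
def belady_misses(requests, cache_size):
    """Count cache misses under Belady's clairvoyant eviction rule."""
    cache, misses = set(), 0
    for i, item in enumerate(requests):
        if item in cache:
            continue
        misses += 1
        if len(cache) >= cache_size:
            future = requests[i + 1:]
            # evict the item whose next use lies furthest ahead (or never comes)
            evict = max(
                cache,
                key=lambda c: future.index(c) if c in future else float("inf"),
            )
            cache.remove(evict)
        cache.add(item)
    return misses

print(belady_misses(list("ABCABDABCD"), cache_size=3))  # 5
```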
20%
LRU consistently performed the closest to clairvoyance. The LRU principle is effective because of something computer scientists call “temporal locality”:
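A tiny LRU sketch: every access moves an item to the "most recently used" end, and when space runs out the item untouched the longest is evicted:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)         # freshly used -> most recent
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")            # touching "a" leaves "b" as the stale one
cache.put("c", 3)         # evicts "b"
print(list(cache.items))  # ['a', 'c']
```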
21%
Recently, Amazon was granted a patent for an innovation that pushes this principle one step further. The patent talks about “anticipatory package shipping,” which the press seized upon as though Amazon could somehow mail you something before you bought it.
21%
Their patent is actually for shipping items that have been recently popular in a given region to a staging warehouse in that region—like having their own CDN for physical goods.
21%
Anticipating the purchases of individuals is challenging, but when predicting the purchases of a few thousand people, the law of large numbers kicks in.
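A quick simulation of that point (the 5% purchase probability and the group sizes are made-up figures): any one customer is nearly a coin flip, but the group total is tightly predictable, with typical relative error shrinking roughly like 1/sqrt(n):

```python
import random

def relative_error(n_customers, p=0.05, trials=200, seed=0):
    """Average relative gap between predicted and simulated total demand."""
    rng = random.Random(seed)
    expected = n_customers * p
    errs = []
    for _ in range(trials):
        actual = sum(rng.random() < p for _ in range(n_customers))
        errs.append(abs(actual - expected) / expected)
    return sum(errs) / trials

for n in (10, 100, 10_000):
    print(n, round(relative_error(n), 3))
# the aggregate forecast gets sharper as the group grows
```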
22%
His results mapped out a graph of how memory fades over time, known today by psychologists as “the forgetting curve.”
22%
According to his theory, the mind has essentially infinite capacity for memories, but we have only a finite amount of time in which to search for them.
22%
If the pattern by which things fade from our minds is the very pattern by which things fade from use around us, then there may be a very good explanation indeed for the Ebbinghaus forgetting curve—namely, that it’s a perfect tuning of the brain to the world, making available precisely the things most likely to be needed.
23%
suggested that what we call “cognitive decline”—lags and retrieval errors—may not be about the search process slowing or deteriorating, but (at least partly) an unavoidable consequence of the amount of information we have to navigate getting bigger and bigger.
23%
The effort of retrieval is a testament to how much you know. And the rarity of those lags is a testament to how well you’ve arranged it: keeping the most important things
25%
This offers a radical way to rethink procrastination, the classic pathology of time management. We typically think of it as a faulty algorithm. What if it’s exactly the opposite? What if it’s an optimal solution to the wrong problem?
25%
Putting off work on a major project by attending instead to various trivial matters can likewise be seen as “the hastening of subgoal completion”—
25%
The culprit was a classic scheduling hazard called priority inversion.
26%
If you’re working by Earliest Due Date and the new task is due even sooner than the current one, switch gears; otherwise stay the course. Likewise, if you’re working by Shortest Processing Time, and the new task can be finished faster than the current one, pause to take care of it first;
26%
It turns out, though, that even if you don’t know when tasks will begin, Earliest Due Date and Shortest Processing Time are still optimal strategies, able to guarantee you (on average) the best possible performance in the face of uncertainty. If assignments get tossed on your desk at unpredictable moments, the optimal strategy for minimizing maximum lateness is still the preemptive version of Earliest Due Date—switching to the job that just came up if it’s due sooner than the one you’re currently doing,
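A minimal sketch of that preemptive Earliest Due Date strategy (the job numbers are invented): whenever a new task lands, work shifts to whichever unfinished task is due soonest, and the metric reported is maximum lateness:

```python
import heapq

def preemptive_edd(tasks):
    """tasks: list of (arrival_time, due_time, work_required)."""
    tasks = sorted(tasks)                   # by arrival time
    ready, t, i, lateness = [], 0.0, 0, []
    while i < len(tasks) or ready:
        if not ready:                       # idle until the next arrival
            t = max(t, tasks[i][0])
        while i < len(tasks) and tasks[i][0] <= t:
            _, due, work = tasks[i]
            heapq.heappush(ready, [due, work])   # most urgent due date on top
            i += 1
        due, work = heapq.heappop(ready)
        # run the most urgent task until it finishes or a new task arrives
        next_arrival = tasks[i][0] if i < len(tasks) else float("inf")
        run = min(work, next_arrival - t)
        t += run
        if run < work:
            heapq.heappush(ready, [due, work - run])  # preempted; resume later
        else:
            lateness.append(max(0.0, t - due))
    return max(lateness)

jobs = [(0, 4, 3), (1, 3, 2), (5, 9, 6)]   # (arrives, due, needs)
print(preemptive_edd(jobs))                # 2.0 -- the worst lateness
```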
26%
Intriguingly, optimizing all these other metrics is intractable if we know the start times and durations of jobs ahead of time. So considering the impact of uncertainty in scheduling reveals something counterintuitive: there are cases where clairvoyance is a burden.
27%
Humans clearly have context-switching costs too.
27%
if a task requires keeping track of so many things that they won’t all fit into memory, then you might well end up spending more time swapping information in and out of memory than doing the actual work.
27%
This is thrashing: a system running full-tilt and accomplishing nothing at all.
27%
Another way to avert thrashing before it starts is to learn the art of saying no. Denning advocated, for instance, that a system should simply refuse to add a program to its workload if it didn’t have enough free memory to hold its working set.
28%
Part of what makes real-time scheduling so complex and interesting is that it is fundamentally a negotiation between two principles that aren’t fully compatible. These two principles are called responsiveness and throughput:
28%
The moral is that you should try to stay on a single task as long as possible without decreasing your responsiveness below the minimum acceptable limit.
28%
“interrupt coalescing.”
28%
In academia, holding office hours is a way of coalescing interruptions from students.
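A toy sketch of the same move in code (the arrival times and two-hour interval are invented): instead of handling each interruption the moment it arrives, requests are held and dealt with in batches at fixed check-ins:

```python
def coalesce(arrival_times, interval):
    """Group arrivals into batches handled at the next multiple of `interval`."""
    batches = {}
    for t in sorted(arrival_times):
        handled_at = ((t // interval) + 1) * interval   # next scheduled sitting
        batches.setdefault(handled_at, []).append(t)
    return batches

requests = [0.5, 1.2, 1.4, 3.7, 4.1, 4.2]   # hours at which interruptions arrive
print(coalesce(requests, interval=2))
# {2.0: [0.5, 1.2, 1.4], 4.0: [3.7], 6.0: [4.1, 4.2]}
# six interruptions collapse into three sittings
```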
28%
Donald Knuth. “I do one thing at a time,” he says. “This is what computer scientists call batch processing
30%
The mathematical formula that describes this relationship, tying together our previously held ideas and the evidence before our eyes, has come to be known—ironically, as the real heavy lifting was done by Laplace—as Bayes’s Rule.
30%
the chances for each hypothesis to have been true before you saw any data—is known as the prior probabilities, or “prior” for short. And Bayes’s Rule always needs some prior from you, even if it’s only a guess.
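A minimal Bayes's Rule sketch with an invented setup: a coin is either fair or biased 70% toward heads, we start from a 50/50 prior, and each flip reweights the two hypotheses by how well they predicted it:

```python
def bayes_update(prior, likelihoods):
    """Multiply prior by likelihood for each hypothesis, then renormalize."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

beliefs = {"fair": 0.5, "biased": 0.5}       # the prior: an initial guess
for flip in ["H", "H", "T", "H"]:
    likelihoods = {
        "fair": 0.5,
        "biased": 0.7 if flip == "H" else 0.3,
    }
    beliefs = bayes_update(beliefs, likelihoods)  # posterior becomes the next prior

print({h: round(p, 3) for h, p in beliefs.items()})
# {'fair': 0.378, 'biased': 0.622}
```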