Algorithms to Live By: The Computer Science of Human Decisions
Kindle Notes & Highlights
2%
The nature of serial monogamy, writ large, is that its practitioners are confronted with a fundamental, unavoidable problem. When have you met enough people to know who your best match is? And what if acquiring the data costs you that very match? It seems the ultimate Catch-22 of the heart.
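The book's eventual answer to this question is the "look then leap" rule: spend the first 37% of the search just looking, then commit to the first option better than everything seen so far. Below is a minimal simulation sketch of that rule, not code from the book; the candidate pool size and trial count are arbitrary.

```python
import random

def look_then_leap(n, look_fraction=0.37):
    """One round of the secretary problem with n candidates in random order.

    Reject the first look_fraction of candidates outright, then commit to
    the first one who beats everybody seen so far. Returns True if that
    turns out to be the single best candidate overall.
    """
    candidates = list(range(n))            # 0 = worst, n - 1 = best
    random.shuffle(candidates)
    cutoff = int(n * look_fraction)
    benchmark = max(candidates[:cutoff], default=-1)
    for c in candidates[cutoff:]:
        if c > benchmark:                  # first candidate to beat the benchmark
            return c == n - 1
    return candidates[-1] == n - 1         # nobody beat it: stuck with the last one

trials = 20_000
wins = sum(look_then_leap(100) for _ in range(trials))
print(f"best candidate chosen in {wins / trials:.1%} of trials")   # roughly 37%
```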
5%
We asked Shoup if his research allows him to optimize his own commute, through the Los Angeles traffic to his office at UCLA. Does arguably the world’s top expert on parking have some kind of secret weapon? He does: “I ride my bike.”
Rob
LOL
6%
We intuitively understand that life is a balance between novelty and tradition, between the latest and the greatest, between taking risks and savoring what we know and love. But just as with the look-or-leap dilemma of the apartment hunt, the unanswered question is: what balance?
8%
For every slot machine we know little or nothing about, there is some guaranteed payout rate which, if offered to us in lieu of that machine, will make us quite content never to pull its handle again. This number—which Gittins called the “dynamic allocation index,” and which the world now knows as the Gittins index—suggests an obvious strategy on the casino floor: always play the arm with the highest index.
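Computing the Gittins index itself takes a dynamic program over the arm's posterior, which is more than a short sketch can do honestly. The snippet below only illustrates the strategy the passage describes, always play the arm with the highest index, using an upper confidence bound score (a related approach the book also covers) as a stand-in for the true index; the payout rates and horizon are invented.

```python
import math
import random

# Three slot machines with payout rates the player cannot see.
true_rates = [0.3, 0.5, 0.7]
pulls = [0, 0, 0]          # how many times each arm has been played
wins = [0, 0, 0]           # payouts observed on each arm

def index_score(arm, t):
    """Optimistic score for an arm: observed rate plus an uncertainty bonus.

    This is the UCB1 formula, standing in for the Gittins index; like the
    index, it values a little-explored arm above its raw average.
    """
    if pulls[arm] == 0:
        return float("inf")                # never-played arms go first
    mean = wins[arm] / pulls[arm]
    bonus = math.sqrt(2 * math.log(t) / pulls[arm])
    return mean + bonus

total = 0
for t in range(1, 5001):
    arm = max(range(3), key=lambda a: index_score(a, t))   # highest index wins
    payout = 1 if random.random() < true_rates[arm] else 0
    pulls[arm] += 1
    wins[arm] += payout
    total += payout

print("pulls per arm:", pulls)     # pulls should concentrate on the 0.7 arm
print("total payout:", total)
```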
9%
The Gittins index, then, provides a formal, rigorous justification for preferring the unknown, provided we have some opportunity to exploit the results of what we learn from exploring.
9%
The old adage tells us that “the grass is always greener on the other side of the fence,” but the math tells us why: the unknown has a chance of being better, even if we actually expect it to be no different, or if it’s just as likely to be worse.
12%
If the probabilities of a payoff on the different arms change over time—what has been termed a “restless bandit”—the problem becomes much harder. (So much harder, in fact, that there’s no tractable algorithm for completely solving it, and it’s believed there never will be.)
12%
if you treat every decision as if it were your last, then indeed only exploitation makes sense.
13%
Whether it’s finding the largest or the smallest, the most common or the rarest, tallying, indexing, flagging duplicates, or just plain looking for the thing you want, they all generally begin under the hood with a sort.
13%
This is the economy of scale familiar to any business student. But with sorting, size is a recipe for disaster: perversely, as a sort grows larger, “the unit cost of sorting, instead of falling, rises.”
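One way to see the diseconomy of scale is to count comparisons directly. The sketch below is illustrative, not from the book: it wraps values in a counting class and sorts lists of increasing size, showing the per-item cost creeping upward.

```python
import random

class Counted:
    """Wraps a value and counts every comparison the sort asks for."""
    comparisons = 0

    def __init__(self, value):
        self.value = value

    def __lt__(self, other):
        Counted.comparisons += 1
        return self.value < other.value

for n in (1_000, 10_000, 100_000):
    data = [Counted(random.random()) for _ in range(n)]
    Counted.comparisons = 0
    data.sort()
    print(f"n = {n:>7,}: {Counted.comparisons / n:.1f} comparisons per item")
# The per-item cost climbs roughly with log n: sorting ten times as many
# items costs more than ten times as much work.
```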
14%
Big-O notation has a particular quirk, which is that it’s inexact by design. That is, rather than expressing an algorithm’s performance in minutes and seconds, Big-O notation provides a way to talk about the kind of relationship that holds between the size of the problem and the program’s running time.
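As an invented illustration of what Big-O keeps and what it throws away: constants and small terms are ignored because only the shape of the growth decides which algorithm wins once the input gets large.

```python
def linear(n):       # a hypothetical algorithm costing 2n + 500 steps: O(n)
    return 2 * n + 500

def quadratic(n):    # a hypothetical algorithm costing n^2 / 100 steps: O(n^2)
    return n * n / 100

for n in (100, 1_000, 10_000, 100_000):
    print(f"n = {n:>7,}: linear = {linear(n):>12,.0f}   quadratic = {quadratic(n):>14,.0f}")
# At n = 100 the quadratic algorithm looks cheaper; by n = 100,000 it is
# roughly 500 times more expensive. Big-O names that eventual relationship
# between problem size and running time, not the minutes and seconds.
```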
16%
Sorting something that you will never search is a complete waste; searching something you never sorted is merely inefficient.
Rob
RE: "Should you even sort at all?"
18%
Displacement happens when an animal uses its knowledge of the hierarchy to determine that a particular confrontation simply isn’t worth it.
18%
decentralized sorting
Rob
i.e., pecking orders
24%
This is something of a theme in computer science: before you can have a plan, you must first choose a metric.
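The scheduling discussion makes this concrete: the same to-do list has a different best order depending on the metric. The sketch below uses invented jobs to compare two classic single-machine strategies, Earliest Due Date (optimal for minimizing maximum lateness) and Shortest Processing Time (optimal for minimizing the sum of completion times).

```python
# Jobs as (name, processing_time, due_date); the values are invented.
jobs = [("taxes", 5, 14), ("email", 1, 3), ("report", 8, 10), ("dishes", 2, 6)]

def evaluate(order):
    """Return (maximum lateness, sum of completion times) for a given order."""
    now, max_lateness, total_completion = 0, 0, 0
    for name, length, due in order:
        now += length
        total_completion += now
        max_lateness = max(max_lateness, now - due)
    return max_lateness, total_completion

edd = sorted(jobs, key=lambda job: job[2])   # Earliest Due Date
spt = sorted(jobs, key=lambda job: job[1])   # Shortest Processing Time

for label, order in (("EDD", edd), ("SPT", spt)):
    lateness, completion = evaluate(order)
    sequence = " -> ".join(job[0] for job in order)
    print(f"{label}: {sequence} | max lateness = {lateness}, sum of completions = {completion}")
# EDD wins on maximum lateness (2 vs 6); SPT wins on total completion time
# (28 vs 31). The best plan depends entirely on which metric you chose first.
```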
25%
What if it’s an optimal solution to the wrong problem?
25%
pre-crastinate, a term we introduce to refer to the hastening of subgoal completion, even at the expense of extra physical effort.”
25%
Live by the metric, die by the metric.
25%
As Hedberg explains, “If you’re flammable and have legs, you are never blocking a fire exit.”
Rob
[Mitch] Hedberg
26%
most scheduling problems admit no ready solution.
27%
This is thrashing: a system running full-tilt and accomplishing nothing at all. Denning first diagnosed this phenomenon in a memory-management context, but computer scientists now use the term “thrashing” to refer to pretty much any situation where the system grinds to a halt because it’s entirely preoccupied with metawork.
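A toy model of that metawork, not from the book: if every slice of real work drags a fixed switching cost behind it, shrinking the slices drives the useful fraction of time toward zero. The 5 ms switching cost below is an arbitrary assumption.

```python
def useful_fraction(slice_ms, switch_ms):
    """Fraction of wall-clock time spent on real work when every slice of
    work is followed by a fixed cost for swapping tasks in and out."""
    return slice_ms / (slice_ms + switch_ms)

switch_ms = 5                    # assumed fixed cost of a context switch
for slice_ms in (100, 50, 20, 10, 5, 1):
    share = useful_fraction(slice_ms, switch_ms)
    print(f"slice = {slice_ms:>3} ms -> {share:.0%} of time is real work")
# As the slices shrink toward the switching cost itself, nearly all the
# machine's effort goes to metawork: full tilt, accomplishing nothing.
```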
28%
Methods such as “timeboxing” or “pomodoros,” where you literally set a kitchen timer and commit to doing a single task until it runs out, are one embodiment of this idea.
31%
Predicting that a 90-year-old man will live to 180 years seems unreasonable precisely because we go into the problem already knowing a lot about human life spans—and so we can do better. The richer the prior information we bring to Bayes’s Rule, the more useful the predictions we can get out of it.
32%
The reason we can often make good predictions from a small number of observations—or just a single one—is that our priors are so rich.
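A worked example of the 90-year-old prediction from a few highlights back; the prior's mean, spread, and cutoff below are assumed for illustration, not the book's figures. With no prior, the Copernican-style guess is "about as long again"; with even a rough normal prior over life spans, Bayes's Rule predicts only a few more years.

```python
import math

# Assumed prior over total human life span: roughly normal, mean 80, sd 15.
mean, sd = 80, 15

def prior(age_at_death):
    return math.exp(-((age_at_death - mean) ** 2) / (2 * sd ** 2))

current_age = 90

# With no prior knowledge, the Copernican-style guess is that you are
# probably about halfway through: predict twice the current age.
print("prediction with no prior:", 2 * current_age)            # 180

# With the rich prior, condition on having reached 90 and take the
# posterior expectation of the total life span.
ages = range(current_age, 131)
weights = [prior(a) for a in ages]
posterior_mean = sum(a * w for a, w in zip(ages, weights)) / sum(weights)
print(f"prediction with a life-span prior: {posterior_mean:.0f}")
# Well under 100: the prior says a few more years, not another ninety.
```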
34%
Fundamentally, overfitting is a kind of idolatry of data, a consequence of focusing on what we’ve been able to measure rather than what matters.
35%
the ruthless and clever optimization of the wrong thing.
36%
If we introduce a complexity penalty, then more complex models need to do not merely a better job but a significantly better job of explaining the data to justify their greater complexity. Computer scientists refer to this principle—using constraints that penalize models for their complexity—as Regularization.
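A minimal sketch of a complexity penalty, using L2 (ridge) regularization as one common form; the data and penalty strength are invented. Ordinary least squares is free to use every coefficient to chase noise, while the penalized fit has to justify each one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Thirty noisy observations of eight features; only the first feature matters.
n, p = 30, 8
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=n)

def fit(X, y, penalty=0.0):
    """Least squares with an L2 complexity penalty (ridge regression).

    penalty = 0 is ordinary least squares; larger penalties make every
    nonzero coefficient cost something.
    """
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + penalty * np.eye(k), X.T @ y)

print("no penalty  :", np.round(fit(X, y, 0.0), 2))
print("with penalty:", np.round(fit(X, y, 5.0), 2))
# The penalty pulls all the coefficients toward zero, so a more complex
# explanation now has to pay for itself; the real signal on the first
# feature survives. The L1 "Lasso" variant penalizes absolute values and
# can push unneeded coefficients exactly to zero.
```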
36%
faced with the complexity of real life, he abandoned the rational model and followed a simple heuristic.
39%
When an optimization problem’s constraints say “Do it, or else!,” Lagrangian Relaxation replies, “Or else what?”
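A sketch of that move on a toy knapsack problem (all numbers invented): the hard capacity constraint is replaced by a price, lambda, per unit of weight, and the relaxed problem falls apart into independent per-item decisions.

```python
# Toy knapsack: (value, weight) items and a capacity limit, all invented.
items = [(10, 5), (6, 4), (6, 3), (3, 3)]
capacity = 8

def solve_relaxed(lam):
    """Lagrangian relaxation of the knapsack: drop the hard capacity limit
    and instead charge a price lam per unit of weight. Every item then
    becomes an independent take-it-or-leave-it decision."""
    chosen = [(v, w) for v, w in items if v - lam * w > 0]
    return sum(v for v, _ in chosen), sum(w for _, w in chosen)

for lam in (0.0, 1.0, 1.6, 2.0):
    value, weight = solve_relaxed(lam)
    status = "fits" if weight <= capacity else "over capacity"
    print(f"lambda = {lam}: value = {value}, weight = {weight} ({status})")
# lambda = 0 answers "or else nothing" and grabs everything; a very high
# lambda over-punishes weight and takes nothing. In between (lambda = 1.6
# here) the relaxed answer respects the original limit, and in this small
# instance it is also the true optimum.
```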
46%
postcards moving at the speed of light.
52%
In a game-theory context, knowing that an equilibrium exists doesn’t actually tell us what it is—or how to get there.