Algorithms to Live By: The Computer Science of Human Decisions
When applied to debts rather than incomes, the same principle yields a strategy for getting in the black that’s come to be called the “debt avalanche.” This debt-reduction strategy says to ignore the number and size of your debts entirely, and simply funnel your money toward the debt with the single highest interest rate. This corresponds rather neatly to working through jobs in order of importance per unit time. And it’s the strategy that will reduce the total burden of your debt as quickly as possible. If, on the other hand, you’re more concerned with reducing the number of debts than the …
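The avalanche rule in this passage amounts to a greedy sort by interest rate. A minimal sketch, with made-up debts and field names:

```python
# Debt avalanche sketch: pay minimums on everything, then funnel all
# spare cash toward the debt with the highest interest rate.
# The debts below are invented for illustration.

def avalanche_order(debts):
    """Return debts sorted by interest rate, highest first."""
    return sorted(debts, key=lambda d: d["rate"], reverse=True)

debts = [
    {"name": "card A", "balance": 2000, "rate": 0.24},
    {"name": "car loan", "balance": 9000, "rate": 0.06},
    {"name": "card B", "balance": 500, "rate": 0.18},
]

for d in avalanche_order(debts):
    print(d["name"], d["rate"])  # card A first, car loan last
```

Note the sort ignores balances entirely, which is exactly the passage's point: only the rate matters for minimizing total interest paid.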
Working on a computer brings with it an additional hazard when it comes to being conscious and deliberate about our scheduling metrics: the user interface may subtly (or not so subtly) force its own metric upon us.
What happens in a priority inversion is that a low-priority task takes possession of a system resource (access to a database, let’s say) to do some work, but is then interrupted partway through that work by a timer, which pauses it and invokes the system scheduler. The scheduler tees up a high-priority task, but it can’t run because the database is occupied. And so the scheduler moves down the priority list, running various unblocked medium-priority tasks instead—rather than the high-priority one (which is blocked), or the low-priority one that’s blocking it (which is stuck in line behind all …
Priority inheritance. If a low-priority task is found to be blocking a high-priority resource, well, then all of a sudden that low-priority task should momentarily become the highest-priority thing on the system, “inheriting” the priority of the thing it’s blocking.
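Priority inheritance can be sketched as a toy lock that boosts its holder's priority when a more important task blocks on it. This is a simplified illustration with invented task names, not a real scheduler:

```python
# Minimal sketch of priority inheritance: when a high-priority task
# blocks on a lock, the lock's low-priority holder temporarily
# "inherits" the waiter's higher priority so medium-priority work
# can't starve it.

class Task:
    def __init__(self, name, priority):
        self.name = name
        self.base_priority = priority
        self.priority = priority

class InheritanceLock:
    def __init__(self):
        self.holder = None

    def acquire(self, task):
        if self.holder is None:
            self.holder = task
        elif task.priority > self.holder.priority:
            # The waiter is blocked; boost the blocking task.
            self.holder.priority = task.priority

    def release(self):
        # Restore the holder's original priority on the way out.
        self.holder.priority = self.holder.base_priority
        self.holder = None

low = Task("logger", priority=1)
high = Task("ui", priority=10)
lock = InheritanceLock()
lock.acquire(low)
lock.acquire(high)   # high blocks; low inherits priority 10
print(low.priority)  # 10
```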
The bouncer’s argument was priority inversion; Hedberg’s rebuttal was priority inheritance. Hedberg lounging casually in front of a fleeing mob puts his low-priority loitering ahead of their high-priority running for their lives—but not if he inherits their priority.
“Things which matter most must never be at the mercy of things which matter least,” Goethe allegedly proclaimed; but while that has the ring of wisdom about it, sometimes it’s just not true.
When a certain task can’t be started until another one is finished, scheduling theorists call that a “precedence constraint.”
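Precedence constraints form a directed acyclic graph, and any schedule that respects them is a topological order of that graph. A sketch using Kahn's algorithm, with invented task names:

```python
# Precedence constraints as a DAG: a pair (a, b) means a must finish
# before b can start. Kahn's algorithm yields an order that respects
# every constraint (assuming the constraints contain no cycle).
from collections import deque

def topo_order(tasks, before):
    indeg = {t: 0 for t in tasks}
    succ = {t: [] for t in tasks}
    for a, b in before:
        succ[a].append(b)
        indeg[b] += 1
    ready = deque(t for t in tasks if indeg[t] == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for nxt in succ[t]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                ready.append(nxt)
    return order

print(topo_order(["outline", "draft", "edit"],
                 [("outline", "draft"), ("draft", "edit")]))
```

Finding *some* valid order is easy; as the next passages note, finding the *optimal* one under most objective functions is where intractability sets in.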
Lawler himself would soon discover that this problem belongs to a class that most computer scientists believe has no efficient solution—it’s what the field calls “intractable.”
In scheduling, it’s clear by definition that every set of tasks and constraints has some schedule that’s the best, so scheduling problems aren’t unanswerable, per se—but it may simply be the case that there’s no straightforward algorithm that can find you the optimal schedule in a reasonable amount of time.
What the researchers found was that even the subtlest change to a scheduling problem often tips it over the fine and irregular line between tractable and intractable.
Nonetheless, the algorithms we have discussed are often the starting point for tackling those hard problems—if not perfectly, then at least as well as can be expected.
The best time to plant a tree is twenty years ago. The second best time is now. —PROVERB
If assignments get tossed on your desk at unpredictable moments, the optimal strategy for minimizing maximum lateness is still the preemptive version of Earliest Due Date—switching to the job that just came up if it’s due sooner than the one you’re currently doing, and otherwise ignoring it. Similarly, the preemptive version of Shortest Processing Time—compare the time left to finish the current task to the time it would take to complete the new one—is still optimal for minimizing the sum of completion times. In fact, the weighted version of Shortest Processing Time is a pretty good candidate …
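The preemptive Earliest Due Date rule described here reduces to a single comparison whenever a new job arrives. A sketch with hypothetical job records:

```python
# Preemptive Earliest Due Date as a one-line decision rule: when a new
# job lands, switch to it only if its due date beats the current job's;
# otherwise keep going. Job fields are illustrative.

def job_to_run(current, new_job):
    if new_job["due"] < current["due"]:
        return new_job    # preempt: the arrival is due sooner
    return current        # otherwise ignore the arrival for now

current = {"name": "report", "due": 17}
arrival = {"name": "invoice", "due": 9}
print(job_to_run(current, arrival)["name"])  # invoice
```

The preemptive Shortest Processing Time variant is the same shape, except the comparison is between the current job's *remaining* time and the new job's total time.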
Programmers don’t talk because they must not be interrupted.… To synchronize with other people (or their representation in telephones, buzzers and doorbells) can only mean interrupting the thought train. Interruptions mean certain bugs. You must not get off the train. —ELLEN ULLMAN
Which makes straightening out your to-do list an item on your to-do list—needing, itself, to get prioritized and scheduled.
Every time you switch tasks, you pay a price, known in computer science as a context switch.
None of this switching back and forth is “real work”—that is, none of it actually advances the state of any of the various programs the computer is switching between. It’s metawork.
Psychologists have shown that for us, the effects of switching tasks can include both delays and errors—at the scale of minutes rather than microseconds. To put that figure in perspective, anyone you interrupt more than a few times an hour is in danger of doing no work at all. Personally, we have found that both programming and writing require keeping in mind the state of the entire system, and thus carry inordinately large context-switching costs.
Brian, for his part, thinks of writing as a kind of blacksmithing, where it takes a while just to heat up the metal before it’s malleable. He finds it somewhat useless to block out anything less than ninety minutes for writing, as nothing much happens in the first half hour except loading a giant block of “Now, where was I?” into his head. Scheduling expert Kirk Pruhs, of the University of Pittsburgh, has had the same experience. “If it’s less than an hour I’ll just do errands instead, because it’ll take me the first thirty-five minutes to really figure out what I want to do and then I might …
Gage: Mr. Zuckerberg, do I have your full attention?… Zuckerberg: You have part of my attention—you have the minimum amount. —THE SOCIAL NETWORK
In the 1960s, computer scientists began thinking about how to automate the process of sharing computer resources between different tasks and users. It was an exciting time, recounts Peter Denning, now one of the top experts on computer multitasking, who was then working on his doctorate at MIT.
With one ball in the air, there’s enough spare time while that ball is aloft for the juggler to toss some others upward as well. But what if the juggler takes on one more ball than he can handle? He doesn’t drop that ball; he drops everything.
At the extreme, a program may run just long enough to swap its needed items into memory, before giving way to another program that runs just long enough to overwrite them in turn. This is thrashing: a system running full-tilt and accomplishing nothing at all.
The easiest thing to do is simply to get more memory: enough RAM, for instance, to fit the working sets of all the running programs into memory at once and reduce the time taken by a context switch.
Denning advocated, for instance, that a system should simply refuse to add a program to its workload if it didn’t have enough free memory to hold its working set.
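Denning's admission-control idea can be sketched as a simple check against free memory. Page counts and working-set sizes below are invented for illustration:

```python
# Sketch of working-set admission control: refuse to admit a program
# unless its working set fits in the memory left over after the
# currently running programs' working sets.

def admit(free_pages, running, candidate):
    used = sum(p["working_set"] for p in running)
    if candidate["working_set"] <= free_pages - used:
        running.append(candidate)
        return True
    return False  # better to make it wait than to thrash everyone

running = []
admit(100, running, {"name": "editor", "working_set": 40})   # admitted
admit(100, running, {"name": "browser", "working_set": 50})  # admitted
print(admit(100, running, {"name": "game", "working_set": 30}))  # False
```

Rejecting the third program looks harsh, but admitting it would leave every program too little memory to hold its working set, and the whole system would grind.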
In a thrashing state, you’re making essentially no progress, so even doing tasks in the wrong order is better than doing nothing at all.
Thinking along the same lines, the Linux core team, several years ago, replaced their scheduler with one that was less “smart” about calculating process priorities but more than made up for it by taking less time to calculate them.
These two principles are called responsiveness and throughput: how quickly you can respond to things, and how much you can get done overall.
It’s part of the reason there are people whose job it is to answer the phone: they are responsive so that others may have throughput.
Operating system schedulers typically define a “period” in which every program is guaranteed to run at least a little bit, with the system giving a “slice” of that period to each program. The more programs are running, the smaller those slices become, and the more context switches are happening every period, maintaining responsiveness at the cost of throughput.
With enough programs running, a task’s slice would shrink to the point that the system was spending the entire slice on context switching, before immediately context-switching again to the next task.
So modern operating systems in fact set a minimum length for their slices and will refuse to subdivide the period any more finely.
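The slice-length floor described here can be sketched in one function. The numbers are illustrative, not taken from any real kernel:

```python
# Slice sizing sketch: divide the period evenly among programs, but
# never go below a minimum slice, so that context switching can't
# swallow the entire period.

def slice_length(period_ms, n_programs, min_slice_ms):
    return max(period_ms / n_programs, min_slice_ms)

print(slice_length(100, 4, 5))    # 25.0 ms each
print(slice_length(100, 50, 5))   # floor kicks in: 5 ms each
```

Once the floor kicks in, the guarantee changes: rather than shrinking slices further, the effective period stretches, trading responsiveness for throughput exactly as the passage describes.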
Methods such as “timeboxing” or “pomodoros,” where you literally set a kitchen timer and commit to doing a single task until it runs out, are one embodiment of this idea.
For your computer, the annoying interruption that it has to check on regularly isn’t email—it’s you.
So the rule that computer operating systems follow when deciding how long they can afford to dedicate themselves to some task is simple: as long as possible without seeming jittery or slow to the user.
You continue to be able to move your mouse around the screen fluidly even when your processor is hauling full-tilt.
The moral is that you should try to stay on a single task as long as possible without decreasing your responsiveness below the minimum acceptable limit. Decide how responsive you need to be—and then, if you want to get things done, be no more responsive than that. If you find yourself doing a lot of context switching because you’re tackling a heterogeneous collection of short tasks, you can also employ another idea from computer science: “interrupt coalescing.” If you have five credit card bills, for instance, don’t pay them as they arrive; take care of them all in one go when the fifth bill …
Computers themselves do something like this: they wait until some fixed interval and check everything, instead of context-switching to handle separate, uncoordinated interrupts from their various subcomponents.
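This batching can be sketched as a buffer that accumulates interruptions and hands them over in one flush. A toy model, not a device driver:

```python
# Interrupt coalescing sketch: note each interruption as it arrives,
# but only handle them in one batch at the chosen interval, instead of
# context-switching for every one.

class Coalescer:
    def __init__(self, interval_s):
        self.interval_s = interval_s  # how long we're willing to wait
        self.pending = []

    def interrupt(self, event):
        self.pending.append(event)  # record it; don't switch context

    def flush(self):
        batch, self.pending = self.pending, []
        return batch  # handle everything in one go

c = Coalescer(interval_s=600)  # e.g. check every ten minutes
for bill in ["card A", "card B", "electric"]:
    c.interrupt(bill)
print(c.flush())  # ['card A', 'card B', 'electric']
```

The interval parameter is the whole trade: a longer one means fewer context switches but a slower worst-case response to any single event.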
At human scale, we get interrupt coalescing for free from the postal system, just as a consequence of their delivery cycle.
What’s more, the twenty-four-hour postal rhythm demands minimal responsiveness from you: it doesn’t make any difference whether you mail your reply five minutes or five hours after receiving a letter. In academia, holding office hours is a way of coalescing interruptions from students.
Whatever their drawbacks, regularly scheduled meetings are one of our best defenses against the spontaneous interruption and the unplanned context switch.
On January 1, 2014, he embarked on “The TeX Tuneup of 2014,” in which he fixed all of the bugs that had been reported in his TeX typesetting software over the previous six years.
Likewise, Knuth has not had an email address since 1990.
He reviews all his postal mail every three months, and all his faxes every six.
Our beeping and buzzing devices have “Do Not Disturb” modes, which we could manually toggle on and off throughout the day, but that is too blunt an instrument. Instead, we might agitate for settings that would provide an explicit option for interrupt coalescing—the same thing at a human timescale that the devices are doing internally. Alert me only once every ten minutes, say; then tell me everything.
If we be, therefore, engaged by arguments to put trust in past experience, and make it the standard of our future judgement, these arguments must be probable only. —DAVID HUME
For somebody who has had such an impact on the history of reasoning under uncertainty, Bayes’s own history remains ironically uncertain.
He had mathematical as well as theological interests, and in 1736 he wrote an impassioned defense of Newton’s newfangled “calculus” in response to an attack by Bishop George Berkeley. This work resulted in his election in 1742 as a Fellow of the Royal Society, to whom he was recommended as “a Gentleman … well skilled in Geometry and all parts of Mathematical and Philosophical Learning.”
After Bayes died in 1761, his friend Richard Price was asked to review his mathematical papers to see if they contained any publishable material.
In other words, we need to first determine how probable it is that we would have drawn the tickets we did if various scenarios were true. This probability—known to modern statisticians as the “likelihood”—gives us the information we need to solve the problem.
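The likelihood computation described here is the middle step of Bayes's rule: multiply each scenario's prior by the probability it assigns to the observed draw, then normalize. A sketch with invented scenarios and numbers:

```python
# Posterior ∝ prior × likelihood, then normalize so the scenario
# probabilities sum to one. All figures below are made up.

def posterior(priors, likelihoods):
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Two raffle scenarios, equally likely a priori; the likelihood is the
# chance of drawing the ticket we actually saw under each scenario.
priors = {"ten tickets": 0.5, "thousand tickets": 0.5}
likelihoods = {"ten tickets": 0.1, "thousand tickets": 0.001}
post = posterior(priors, likelihoods)
print(post["ten tickets"])  # ≈ 0.990
```

The priors cancel here because they are equal, so the answer is driven entirely by the likelihoods, which is exactly the passage's point about the information they carry.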