Kindle Notes & Highlights
Read between April 23 and September 15, 2018
The idea of keeping around pieces of information that you refer to frequently is so powerful that it is used in every aspect of computation.
Processors have caches. Hard drives have caches. Operating systems have caches.
Web browsers have...
Depend upon it there comes a time when for every addition of knowledge you forget something that you knew before. It is of the highest importance, therefore, not to have useless facts elbowing out the useful ones.
As it explains, the goal of cache management is to minimize the number of times you can’t find what you’re looking for in the cache and must go to the slower main memory to find it; these
Windows and Mac OS task switching interfaces: when you press Alt + Tab (Windows) or Command + Tab (Mac OS), you see your applications listed in order from the most recently to the least recently used.
The nearest thing to clairvoyance is to assume that history repeats itself—backward.
all use a version of LRU.
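A minimal sketch of the LRU policy in Python, assuming a simple key-value interface; the class name and capacity parameter are illustrative, not from the book:

```python
from collections import OrderedDict

class LRUCache:
    """Evicts the least recently used item when capacity is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None  # cache miss: caller must fetch from slower storage
        self.items.move_to_end(key)  # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # discard the least recently used
```

Both operations run in effectively constant time, which is part of why the same policy scales from a desk pile to a processor cache.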
Meanwhile, the lobby of the Moffitt Undergraduate Library—the location of the most prominent and accessible shelves—showcases the library’s most recently acquired books. This is instantiating a kind of FIFO cache, privileging the items that were last added to the library, not last read.
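For contrast, here is the library-lobby policy as code: a hypothetical FIFO cache under the same assumptions, differing from LRU only in that reads do not refresh an item's position.

```python
from collections import OrderedDict

class FIFOCache:
    """Evicts the item that was added first, regardless of how often it is read."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        return self.items.get(key)  # reads do not change eviction order

    def put(self, key, value):
        if key not in self.items and len(self.items) >= self.capacity:
            self.items.popitem(last=False)  # discard the oldest insertion
        self.items[key] = value
```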
The largest of these CDNs is managed by Akamai: content providers pay for their websites to be “Akamaized”
“It’s our belief—and we build the company around the fact—that distance matters.” In our earlier discussion, we noted that
Caching is such an obvious thing because we do it all the time. I mean, the amount of information I get … certain things I have to keep track of right now, a bunch of things I have on my desk, and then other things are filed away, and then eventually filed away into the university archives system where it takes a whole day to get stuff out of it if I wanted. But we use that technique all the time to try to organize our lives.
valet stand.
The most recently accessed files are thus the fastest to find.
LRU tells us that when we add something to our cache we should discard the item that has gone unused the longest—but it doesn’t tell us where we should put the new item. The answer to that question comes from a line of research carried out by computer scientists in the 1970s and ’80s.
The definitive paper on self-organizing lists, published by Daniel Sleator and Robert Tarjan in 1985,
Recognizing the Noguchi Filing System as an instance of the LRU principle in action tells us that it is not merely efficient. It’s actually optimal.
the big pile of papers on your desk, far from being a guilt-inducing fester of chaos, is actually one of the most well-designed and efficient structures available.
What might appear to others to be an unorganized mess is, in fact, a self-organizing mess.
leaving something unsorted was more efficient than taking the time to sort everything; here, however, there’s a very different reason why you don’t need to organize it. You already have.
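The Noguchi system corresponds to the move-to-front rule that Sleator and Tarjan analyzed. A minimal sketch, assuming a plain Python list stands in for the filing box; the function name is illustrative:

```python
def fetch_move_to_front(pile, target):
    """Linear search that promotes the found item to the front.

    Items that keep getting used drift toward the top of the pile, so the
    pile orders itself by recency of use with no separate sorting step.
    """
    for i, item in enumerate(pile):
        if item == target:
            pile.insert(0, pile.pop(i))  # promote to the front
            return item
    return None  # a miss: fetch from deeper storage (the archive)
```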
“They point to the many frustrating failures of memory. However, these criticisms fail to appreciate the task before human memory, which is to try to manage a huge stockpile of memories. In any system responsible for managing a vast data base there must be failures of retrieval. It is just too expensive to maintain access to an unbounded number of items.”
When you make something bigger, it’s inherently slower, right? If you make a city bigger, it takes longer to get from point A to point B. If you make a library bigger, it takes longer to find a book in the library. If you have a stack of papers on your desk that’s bigger, it takes longer to find the paper you’re looking for, right? Caches are actually a solution to that problem.… For example, right now, if you go to buy a processor, what
Brian and Tom, in their thirties, already find themselves more frequently stalling a conversation as, for instance, they wait for the name of someone “on the tip of the tongue” to come to mind.
a typical two-year-old knows two hundred words; a typical adult knows thirty thousand. And when it comes to episodic memory, well, every year adds a third of a million waking minutes to one’s total lived experience.
What’s surprising is not memory’s slowdown, but the fact that the mind can possibly stay afloat and responsive as so much data accumulates.
If the fundamental challenge of memory really is one of organization rather than storage, perhaps it should change how we think about the impact of aging on our mental abilities.
“cognitive decline”—lags and retrieval errors—may not be about the search process slowing or deteriorating, but (at least partly) an unavoidable consequence of the amount of information we have to navigate getting bigger and bigger.
The old can mock the young for their speed: “It’s because you don’t know anything yet!”
It’s not that we’re forgetting; it’s that we’re remembering. We’re becoming archives. An understanding of the unavoidable computational
“A lot of what is currently called decline is simply learning.”
The disproportionate occasional lags in information retrieval are a reminder of just how much we benefit the rest of the time by having what we need at the front of our minds.
keeping the most important things closest to hand.
Getting Things Done advocates a policy of immediately doing any task of two minutes or less as soon as it comes to mind. Rival bestseller Eat That Frog! advises beginning with the most difficult task and moving toward easier and easier things. The Now Habit suggests first scheduling one’s social engagements and leisure time and then filling the gaps with work—
William James, the “father of American psychology,” asserts that “nothing is so fatiguing as the eternal hanging on of an uncompleted task,” but Frank Partnoy, in Wait, makes the case for deliberately not doing things right away.
This practice would be built upon by Taylor’s colleague Henry Gantt, who in the 1910s developed the Gantt charts
A century later, Gantt charts still adorn the walls and screens of project managers at firms like Amazon, IKEA, and SpaceX.
If you have only a single machine, and you’re going to do all of your tasks, then any ordering of the tasks will take you the same amount of time.
If you’re concerned with minimizing maximum lateness, then the best strategy is to start with the task due soonest and work your way toward the task due last.
More precisely, it is optimal assuming that you’re only interested in one metric in particular: reducing your maximum lateness. If that’s not your goal, however, then another strategy might be more applicable.
Earliest Due Date is optimal for reducing maximum lateness, which means it will minimize the rottenness of the single most rotten thing you’ll have to eat; that may not be the most appetizing metric to eat by.
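A sketch of Earliest Due Date, assuming each task is a (name, duration, due) tuple with times in arbitrary units; the function name is illustrative:

```python
def earliest_due_date(tasks):
    """Minimize maximum lateness: do tasks in order of due date.

    tasks: list of (name, duration, due) tuples.
    Returns the schedule and the lateness of the latest task.
    """
    schedule = sorted(tasks, key=lambda t: t[2])  # soonest due date first
    elapsed, max_lateness = 0, 0
    for name, duration, due in schedule:
        elapsed += duration
        max_lateness = max(max_lateness, elapsed - due)  # 0 if nothing is late
    return schedule, max_lateness
```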
Maybe instead we want to minimize the number of foods that spoil. Here a strategy called Moore’s Algorithm gives us our best plan.
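A sketch of Moore’s Algorithm (often called Moore–Hodgson): walk through the tasks in due-date order, and whenever the current task would finish late, evict the longest task scheduled so far. Same (name, duration, due) convention as above; the implementation details are assumptions, not from the book.

```python
import heapq

def moores_algorithm(tasks):
    """Minimize the number of late tasks (the Moore-Hodgson rule)."""
    kept, dropped, elapsed = [], [], 0
    for name, duration, due in sorted(tasks, key=lambda t: t[2]):
        heapq.heappush(kept, (-duration, due, name))  # max-heap on duration
        elapsed += duration
        if elapsed > due:  # the current task would finish past its due date
            longest, _, victim = heapq.heappop(kept)
            elapsed += longest  # longest is negative, so this shrinks elapsed
            dropped.append(victim)
    # The kept tasks are all on time when run in due-date order.
    on_time = [name for _, _, name in sorted(kept, key=lambda t: t[1])]
    return on_time, dropped
```

The evicted tasks (the dropped list) are the ones you write off entirely; everything you keep finishes on time.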
We’ve noted that in single-machine scheduling, nothing we do can change how long it will take us to finish all of our tasks—but if each task, for instance, represents a waiting client, then there is a way to take up as little of their collective time as possible.
“sum of completion times.”
Minimizing the sum of completion times leads to a very simple optimal algorithm called Shortest Processing Time: always do the quickest task you can.
Its sum-of-completion-times metric can be expressed another way: it’s like focusing above all on reducing the length of your to-do list.
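A sketch of Shortest Processing Time under the same conventions, here with (name, duration) tasks:

```python
def shortest_processing_time(tasks):
    """Minimize the sum of completion times: always do the quickest task first.

    tasks: list of (name, duration) tuples.
    """
    schedule = sorted(tasks, key=lambda t: t[1])  # quickest first
    elapsed, total_completion = 0, 0
    for name, duration in schedule:
        elapsed += duration
        total_completion += elapsed  # every later client waits through this task
    return schedule, total_completion
```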
importance-per-unit-time (call it “density” if you like, to continue the weight metaphor)
Animals, seeking to maximize the rate at which they accumulate energy from food, should pursue foods in order of the ratio of their caloric energy to the time required to get and eat them—and indeed appear to do so.
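The same ratio rule as code: a hedged sketch of weighted Shortest Processing Time, assuming each task carries a (name, duration, importance) triple with positive durations.

```python
def weighted_spt(tasks):
    """Order tasks by importance per unit time, highest "density" first.

    tasks: list of (name, duration, importance) tuples with duration > 0.
    Minimizes the importance-weighted sum of completion times.
    """
    return sorted(tasks, key=lambda t: t[2] / t[1], reverse=True)
```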
In debt-reduction circles, this approach is known as the “debt snowball.”
“a man with one watch knows what time it is; a man with two watches is never sure.”