Kindle Notes & Highlights
And once a transmission is in progress, if it starts to falter again, that’s likely because some new connection is competing for the network. The most conservative assessment of that situation—namely, assuming you were the only person using the network and now there’s a second person taking half the resources—also leads to cutting back by half. Conservatism here is essential: a network can stabilize only if its users pull back at least as fast as the rate at which it is being overloaded.
It turns out the ants’ solution is similar, too: a feedback cycle where successful foragers prompt more to leave the nest, while unsuccessful returnees dampen foraging activity.
Squirrels and pigeons going after human food scraps will creep forward a step at a time, occasionally leap back, then steadily creep forward again. And it may be that human communications themselves mirror the very protocols that transmit them: every text message or email reply encourages yet another, while every unreturned message stanches the flow.
The satirical “Peter Principle,” articulated in the 1960s by education professor Laurence J. Peter, states that “every employee tends to rise to his level of incompetence.”
Some fifty years before Peter’s formulation, in 1910, the Spanish philosopher José Ortega y Gasset voiced the same sentiment. “Every public servant should be demoted to the immediately lower rank,” he wrote, “because they were advanced until they became incompetent.”
Under AIMD, every connection that isn’t dropping the ball is accelerated until it is—and then it’s cut in half, and immediately begins accelerating again.
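To make that sawtooth concrete, here is a minimal Python sketch of additive increase/multiplicative decrease; the starting window, the one-unit increment, and the every-eighth-round drop schedule are illustrative assumptions, not real TCP parameters.

```python
def aimd(rounds, drop_detected, window=1.0, increase=1.0, decrease=0.5):
    """Yield the sender's congestion window after each round."""
    for r in range(rounds):
        if drop_detected(r):
            window *= decrease   # multiplicative decrease: cut in half
        else:
            window += increase   # additive increase: one more packet per round
        yield window

# Example: a drop every 8th round produces the characteristic sawtooth.
history = list(aimd(24, lambda r: r % 8 == 7))
print(" ".join(f"{w:.1f}" for w in history))
```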
As Laurence J. Peter saw it, the insidious Peter Principle arises in corporations because of “the first commandment of hierarchical life: the hierarchy must be preserved.”
Under an AIMD system, no one is long anxious about being overtaxed, nor long resentful about a missed promotion; both are temporary and frequent correctives, and the system hovers near its equilibrium despite everything changing all the time.
As linguist Victor Yngve would write in 1970, “In fact, both the person who has the turn and his partner are simultaneously engaged in both speaking and listening. This is because of the existence of what I call the back channel, over which the person who has the turn receives short messages such as ‘yes’ and ‘uh-huh’ without relinquishing the turn.”
As the saying goes, “the most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ but ‘That’s funny.’”
A buffer is essentially a queue whose function is to smooth out bursts.
Putting the customers in a queue instead ensures that the average throughput of the store approaches its maximum throughput.
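A toy simulation of that claim, under assumed numbers: a server that can handle one customer per tick faces bursty arrivals averaging one per tick, and the queue keeps it busy through the bursts.

```python
import random

random.seed(0)

# Server capacity: 1 customer per tick. Arrivals are bursty but average
# 1 per tick; the queue absorbs the bursts so the server rarely idles.
queue = served = 0
for tick in range(1000):
    queue += random.choice([0, 0, 1, 3])  # assumed burst pattern
    if queue > 0:
        queue -= 1
        served += 1

print(f"throughput: {served / 1000:.2f} per tick (capacity 1.00)")
print(f"customers still waiting at the end: {queue}")
```

The price of that near-maximal throughput shows up in the final line: the customers still waiting are the delay the next excerpts describe.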
Because he was uploading a file, his computer was sending the modem as many upstream packets as it could handle. And the modem was pretending to handle a lot more than it actually could, turning none away while building up a massive queue. So when Gettys tried to download something at the same time—to visit a webpage or check email—his ACK packets would get stuck behind the upload, having to wait in line at the modem to leave the house. Because his ACKs then took forever to return to the web and email servers, the servers would in turn throttle their own downstream connection speeds to a crawl.
When a networking buffer fills up, what typically happens is called Tail Drop: an unceremonious way of saying that every packet arriving after that point is simply rejected, and effectively deleted.
Yet it’s precisely such “packet drops” that lead a computer to notice that one of its packets hasn’t been acknowledged, prompting AIMD to start halving the bandwidth. Dropped packets are the Internet’s primary feedback mechanism. A buffer that’s too large—a restaurant taking every order no matter how short-staffed the kitchen, a modem taking every packet that comes in regardless of how long it’ll take to send them on—prevents this moderation from happening as it should. Fundamentally, buffers use delay—known in networking as “latency”—in order to maximize throughput. That is, they keep the link fully used by making packets wait in line rather than turning them away.
And the bigger the buffer is, the further behind you’ll get before you start signaling for help. One of the fundamental principles of buffers, be they for packets or patrons, is that they only work correctly when they are routinely zeroed out.
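Here is a small sketch of Tail Drop itself, with assumed numbers: a buffer that accepts packets until it holds four, then rejects every later arrival.

```python
from collections import deque

BUFFER_SIZE = 4                       # assumed capacity
buffer, dropped = deque(), []

def enqueue(packet):
    if len(buffer) < BUFFER_SIZE:
        buffer.append(packet)         # wait in line
    else:
        dropped.append(packet)        # tail drop: the sender notices the
                                      # missing ACK and halves its rate

for packet in range(10):
    enqueue(packet)

print("queued:", list(buffer), "dropped:", dropped)
```

The larger BUFFER_SIZE is, the longer a sender transmits before any drop reaches it as feedback, which is exactly the delayed signaling described above.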
But at some point, as economies of scale in the computer industry radically lowered the cost of memory, modem manufacturers started giving their machines gigabytes of RAM, because that was effectively the smallest amount they could get.
But the problem isn’t that we’re always connected; we’re not. The problem is that we’re always buffered.
Indeed, over the past fifteen years, the move from circuit switching to packet switching has played itself out across society. We used to request dedicated circuits with others; now we send them packets and wait expectantly for ACKs. We used to reject; now we defer. The much-lamented “lack of idleness” one reads about is, perversely, the primary feature of buffers: to bring average throughput up to peak throughput.
Which seems an opportune moment to say our good-byes and release our bandwidth to the commons, to the myriad flows making their additive increase.
Schoolchildren are taught to conceive of literary plots as belonging to one of several categories: man vs. nature, man vs. self, man vs. man, man vs. society. Thus far in this book we have considered primarily cases in the first two categories—that is to say, computer science has thus far been our guide to problems created by the fundamental structure of the world, and by our limited capacities for processing information. Optimal stopping problems spring from the irreversibility and irrevocability of time; the explore/exploit dilemma, from time’s limited supply. Relaxation and randomization, in turn, offer ways of coping with problems whose exact solutions are computationally out of reach. The remaining categories, man vs. man and man vs. society, concern the problems we pose to one another, and those are the province of game theory.
In the past couple of decades, cross-pollination between game theory and computer science has produced the field of algorithmic game theory—which has already begun to have an impact on the twenty-first century.
Arguably the most influential economist of the twentieth century, John Maynard Keynes, once said that “successful investing is anticipating the anticipations of others.”
In this way, the value of a stock isn’t what people think it’s worth but what people think people think it’s worth.
As Alan Turing proved in 1936, a computer program can never tell you for sure whether another program might end up calculating forever—except by simulating the operation of that program and thus potentially going off the deep end itself.
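The heart of Turing’s argument can be sketched in a few lines of Python; `halts` below is the hypothetical oracle the proof rules out, not a function anyone can actually write.

```python
def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) eventually halts."""
    raise NotImplementedError("Turing proved no such general test exists")

def contrary(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:      # loop forever if the oracle says we would halt...
            pass
    # ...and halt immediately if the oracle says we would loop.

# contrary(contrary) now contradicts whatever halts(contrary, contrary)
# returns, so no correct implementation of halts can exist.
```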
Game-theoretic analyses seek to identify a so-called equilibrium: that is, a set of strategies that both players can follow such that neither player would want to change their own play, given the play of their opponent.
In one of the seminal results in game theory, the mathematician John Nash proved in 1951 that every two-player game has at least one equilibrium.
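Nash’s theorem allows for randomized (mixed) strategies, but for small games a brute-force check for pure-strategy equilibria is easy to write. A sketch using the classic Prisoner’s Dilemma payoffs, which are a standard textbook example rather than anything from the text:

```python
from itertools import product

# Payoffs are (row player, column player); higher is better.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
strategies = ("cooperate", "defect")

def is_equilibrium(row, col):
    # Neither player can gain by unilaterally switching strategy.
    row_ok = all(payoffs[(row, col)][0] >= payoffs[(r, col)][0] for r in strategies)
    col_ok = all(payoffs[(row, col)][1] >= payoffs[(row, c)][1] for c in strategies)
    return row_ok and col_ok

print([pair for pair in product(strategies, strategies) if is_equilibrium(*pair)])
# -> [('defect', 'defect')]
```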
Put broadly, the object of study in mathematics is truth; the object of study in computer science is complexity.
And so, the original field of game theory begat algorithmic game theory—that is, the study of theoretically ideal strategies for games became the study of how machines (and people) come up with strategies for games.
By the end of the twentieth century, determining whether a game has more than one equilibrium, or an equilibrium that gives a player a certain payoff, or an equilibrium that involves taking a particular action, had all been proved to be intractable problems. Then, from 2005 to 2008, Christos Papadimitriou and his colleagues proved that simply finding Nash equilibria is intractable as well.
A dominant strategy avoids recursion altogether, by being the best response to all of your opponent’s possible strategies—so you don’t even need to trouble yourself getting inside their head at all.
This has emerged as one of the major insights of traditional game theory: the equilibrium for a set of players, all acting rationally in their own interest, may not be the outcome that is actually best for those players.
The price of anarchy measures the gap between cooperation (a centrally designed or coordinated solution) and competition (where each participant is independently trying to maximize the outcome for themselves).
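Using the Prisoner’s Dilemma numbers from the sketch above, that gap can be put as a single ratio; defining the price of anarchy as optimal total welfare over worst-equilibrium welfare is a standard convention, not something spelled out here.

```python
optimal_welfare = 3 + 3        # both cooperate: the coordinated outcome
equilibrium_welfare = 1 + 1    # both defect: the unique Nash equilibrium
print("price of anarchy:", optimal_welfare / equilibrium_welfare)  # 3.0
```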
It’s a bit like the famous line by James Branch Cabell: “The optimist proclaims that we live in the best of all possible worlds; and the pessimist fears this is true.”
All employees want, in theory, to take as much vacation as possible. But they also all want to take just slightly less vacation than each other, to be perceived as more loyal, more committed, and more dedicated (hence more promotion-worthy). Everyone looks to the others for a baseline, and will take just slightly less than that. The Nash equilibrium of this game is zero.
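A toy run of that dynamic, under assumed starting plans: in each round, every employee looks at the least vacation anyone else is taking and takes one day less.

```python
days = [20, 18, 15]    # assumed initial vacation plans

for _ in range(25):
    # Each employee undercuts the most "loyal" of their colleagues by a day.
    days = [max(0, min(d for i, d in enumerate(days) if i != j) - 1)
            for j in range(len(days))]
    if all(d == 0 for d in days):
        break

print("equilibrium:", days)    # -> [0, 0, 0]: no one takes any vacation
```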
While game theory asks what behavior will emerge given a set of rules, mechanism design (sometimes called “reverse game theory”) works in the other direction, asking: what rules will give us the behavior we want to see?
On the other hand, even Wall Street, ruthless cutthroat capitalists trading by the microsecond in the “city that never sleeps,” comes to a cease-fire every day at 4:00 p.m. sharp, so that brokers can sleep at predictable hours every night without getting too badly ambushed by competitors pushing toward a sleepless equilibrium. In this sense, the stock market is more a sport than a war. Scaling up this logic results in a potent argument for the role of government.
Indeed, religion itself provides a very direct way of modifying the structure of games of this kind.
And adding divine force to injunctions against other kinds of antisocial behavior, such as murder, adultery, and theft, is likewise a way to solve some of the game-theoretic problems of living in a social group.
But by reducing the number of options that people have, behavioral constraints of the kind imposed by religion don’t just make certain kinds of decisions less computationally challenging—they can also yield better outcomes.
In this sense, both protagonists are acting irrationally. On the other hand, their actions are good for their society: we all want to live in a world in which pickpocketing doesn’t pay and in which businesses that sell poor-quality products get a bad reputation.
But all of us are better off living in a society in which such defiant stands are common.
Emotion, for the bitter, retaliatory consumer and for the convenience-store hero alike, is our own species taking over the controls for a minute.
Precisely because feelings are involuntary, they enable contracts that need no outside enforcement.
(Lest you think that civilized modern humans have legal contracts and rule of law instead of retribution, recall that it’s often more work and suffering to sue or prosecute someone than the victim could ever hope to recover in material terms. Lawsuits are the means for self-destructive retaliation in a developed society, not the substitute.)
Part of the reason why it’s a good idea to pay attention to the behavior of others is that in doing so, you get to add their information about the world to your own.
An interesting aspect of the 2007–2009 mortgage crisis is that everybody involved seemed to feel like they were unfairly punished for simply doing what they were supposed to.
Here game theory offers a sobering perspective: catastrophes like this can happen even when no one’s at fault.
One of the simplest auction formats has each participant write down their bid in secret, and the one whose bid is highest wins the item for whatever price they wrote down. This is known as a “sealed-bid first-price auction,” and from an algorithmic game theory perspective there’s a big problem with it—actually, several.
This problem, in turn, leads to another: in order to bid properly—that is, in order not to overpay—you need to predict the true valuations of the other players in the auction and “shade” your bid accordingly.
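How much to shade is a textbook result rather than something stated here: with n bidders whose valuations are independently uniform on [0, 1], the symmetric equilibrium bid is (n − 1)/n of your true valuation. A quick simulation under those assumptions:

```python
import random

random.seed(1)

def equilibrium_bid(valuation, n_bidders):
    # Standard symmetric equilibrium for uniform [0, 1] valuations.
    return valuation * (n_bidders - 1) / n_bidders

n, trials, surplus = 5, 100_000, 0.0
for _ in range(trials):
    values = [random.random() for _ in range(n)]
    bids = [equilibrium_bid(v, n) for v in values]
    winner = max(range(n), key=bids.__getitem__)
    surplus += values[winner] - bids[winner]

print(f"average winner surplus with shading: {surplus / trials:.3f}")
# Bidding your true valuation instead would make every win worth exactly zero.
```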