Algorithms to Live By: The Computer Science of Human Decisions
44%
Taking the ten-city vacation problem from above, we could start at a “high temperature” by picking our starting itinerary entirely at random, plucking one out of the whole space of possible solutions regardless of price. Then we can start to slowly “cool down” our search by rolling a die whenever we are considering a tweak to the city sequence. Taking a superior variation always makes sense, but we would only take inferior ones when the die shows, say, a 2 or more. After a while, we’d cool it further by only taking a higher-price change if the die shows a 3 or greater—then 4, then 5…
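The die-rolling acceptance rule described in this passage can be sketched in a few lines of Python. This is an illustrative sketch, not the book's code: the function name, the six-sided die, and the cost arguments are my own assumptions.

```python
import random

def accept_tweak(current_cost, candidate_cost, die_threshold):
    """Decide whether to accept a tweak to the itinerary.
    A cheaper (superior) variation is always taken; a costlier one
    is taken only if a six-sided die shows die_threshold or more."""
    if candidate_cost <= current_cost:
        return True                                  # always take improvements
    return random.randint(1, 6) >= die_threshold     # worse moves need a lucky roll

# "Cooling" just raises the threshold over time: 2, then 3, then 4, then 5,
# so inferior itineraries are accepted less and less often.
```

Raising `die_threshold` past 6 turns this into pure hill climbing: no worse move is ever accepted.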
44%
“I couldn’t convince math people that this messy stuff with temperatures, all this analogy-based stuff, was real,” says Kirkpatrick, “because mathematicians are trained to really distrust intuition.”
44%
Rather than keep mum about their secret weapon and become cryptic guru figures themselves, they published their method in a paper in Science, opening it up to others. Over the next few decades, that paper would be cited a whopping thirty-two thousand times. To this day, simulated annealing remains one of the most promising approaches to optimization problems known to the field.
44%
Luria left the dance as soon as he could and set the experiment in motion. After several days of tense, restless waiting, Luria returned to the lab to check on his colonies. Jackpot.
44%
But it was also, at least in part, due to the power of chance. He was in the right place at the right time, where seeing the slot machine triggered a new idea. Tales of discovery often feature a similar moment: Newton’s (possibly apocryphal) apple, Archimedes’ bathtub “Eureka!,” the neglected petri dish that grew Penicillium mold. Indeed, it’s a common enough phenomenon that a word was invented to capture it: in 1754, Horace Walpole coined the term “serendipity,” based on the fairy tale adventures of The Three Princes of Serendip (Serendip being the archaic name of Sri Lanka), who “were always…
45%
When it comes to stimulating creativity, a common technique is introducing a random element, such as a word that people have to form associations with.
45%
Being randomly jittered, thrown out of the frame and focused on a larger scale, provides a way to leave what might be locally good and get back to the pursuit of what might be globally optimal.
45%
While this hasn’t yet resulted in any striking discoveries, he now knows a lot about some obscure topics (such as the kind of knife used by the Chilean armed forces) and he feels that some of these have enriched his life. (For example, he’s learned that there is a word in Portuguese for a “vague and constant desire for something that does not and probably cannot exist,” a problem we still can’t solve with a search engine.)
45%
First, from Hill Climbing: even if you’re in the habit of sometimes acting on bad ideas, you should always act on good ones. Second, from the Metropolis Algorithm: your likelihood of following a bad idea should be inversely proportional to how bad an idea it is. Third, from Simulated Annealing: you should front-load randomness, rapidly cooling out of a totally random state, using ever less and less randomness as time goes on, lingering longest as you approach freezing.
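The three rules in this highlight compose into a short loop. Below is a minimal sketch under my own assumptions: the function names and the toy cost landscape are invented, and the acceptance probability `exp(-delta / t)` is the standard Metropolis formulation rather than a quote from the book.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.95, steps=500):
    """Front-loaded randomness: start hot (accept almost anything),
    always take improvements, take worse moves with probability
    exp(-delta / t) (Metropolis), and cool the temperature each step."""
    x, t = x0, t0
    for _ in range(steps):
        y = neighbor(x)
        delta = cost(y) - cost(x)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = y          # hill climbing when delta <= 0; a gamble otherwise
        t *= cooling       # ever less and less randomness as time goes on
    return x

# Toy example: a bumpy one-dimensional "cost landscape."
random.seed(0)
bumpy = lambda x: (x - 3) ** 2 + math.sin(5 * x)
best = simulated_annealing(bumpy, lambda x: x + random.uniform(-0.5, 0.5), x0=0.0)
```

By the final steps the temperature is tiny, so the loop is effectively pure hill climbing, "lingering longest as you approach freezing."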
45%
Cockcroft himself apparently turned, not unlike his protagonist, to “dicing” for a time in his life, living nomadically with his family on a Mediterranean sailboat, in a kind of Brownian slow motion. At some point, however, his annealing schedule cooled off: he settled down comfortably into a local maximum, on a lake in upstate New York. Now in his eighties, he’s still contentedly there. “Once you got somewhere you were happy,” he told the Guardian, “you’d be stupid to shake it up any further.”
45%
The cell phone began with a boast—Motorola’s Martin Cooper walking down Sixth Avenue on April 3, 1973, as Manhattan pedestrians gawked, calling his rival Joel Engel at AT&T: “Joel, I’m calling you from a cellular phone. A real cellular phone: a handheld, portable, real cellular phone.”
46%
It was October 29, 1969, and Charley Kline at UCLA sent to Bill Duvall at the Stanford Research Institute the first message ever transmitted from one computer to another via the ARPANET. The message was “login”—or would have been, had the receiving machine not crashed after “lo.”
46%
Protocol is how we get on the same page; in fact, the word is rooted in the Greek protokollon, “first glue,” which referred to the outer page attached to a book or manuscript.
46%
But with the Internet, computers became not only the conduit but also the endpoints: the ones doing the talking.
46%
What we now think of as “the Internet” is actually a collection of many protocols, but the chief among them (so much so that it’s often referred to more or less synonymously with the Internet) is what’s known as Transmission Control Protocol, or TCP. It was born from a 1973 talk and a 1974 paper by Vinton “Vint” Cerf and Robert “Bob” Kahn, who laid out a proposal for the language of—as they imagined calling it—an “internetwork.”
46%
Phone calls use what’s called “circuit switching”: the system opens a channel between the sender and the receiver, which supplies constant bandwidth between the parties in both directions as long as the call lasts.
46%
And you can’t afford to dedicate a communications connection to something which is almost never talking, but when it wants to talk it wants immediate access.
46%
And their answer was, “Little boy, go away.” So little boy went away and, with others, developed this technology which ate their lunch.
46%
In a packet-switched network, rather than using a dedicated channel for each connection, senders and receivers atomize their messages into tiny shards known as “packets,” and merge them into the communal flow of data—a bit like postcards moving at the speed of light.
46%
Inspired by algorithms developed in the 1950s for navigating mazes, Baran imagined a design in which every piece of information could independently make its own way to its destination, even as the network was changing dynamically—or being torn to tatters.
46%
In circuit-switched networks, a call fails if any one of its links gets disrupted—which means that reliability goes down exponentially as a network grows larger. In packet switching, on the other hand, the proliferation of paths in a growing network becomes a virtue: there are now that many more ways for data to flow, so the reliability of the network increases exponentially with its size.
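The contrast in this highlight can be made concrete with a toy reliability calculation. This sketch assumes independent failures; the function names and the specific probabilities in the comment are illustrative, not numbers from the book.

```python
def circuit_reliability(p_link, n_links):
    """Circuit switching: the call works only if every link
    along its single dedicated path works."""
    return p_link ** n_links

def packet_reliability(p_path, n_paths):
    """Packet switching: a packet gets through if at least one
    of many independent paths works."""
    return 1 - (1 - p_path) ** n_paths

# A 50-link circuit of 99%-reliable links completes only ~60% of the time,
# while 10 independent 60%-reliable paths deliver ~99.99% of packets.
```

Adding links shrinks `p ** n` exponentially; adding paths shrinks the failure term `(1 - p) ** n` exponentially instead, which is the book's point in miniature.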
46%
The ability to operate agnostically over any number of diverse media would be packet switching’s great virtue. After early networks in the late ’60s and early ’70s, such as the ARPANET, proved the viability of the concept, networks of all types began sprouting across the country, doing packet switching not only over copper phone wires, but over satellites and over radio.
46%
No transmission can be 100 percent reliable. —VINT CERF AND BOB KAHN
46%
Computer scientists know this concept as the “two generals problem.”
46%
Following this chain of logic requires an infinite series of messages, and obviously that won’t do. Communication is one of those delightful things that work only in practice; in theory it’s impossible.
46%
In TCP, a failure generally leads to retransmission rather than death, so it’s considered enough for a session to begin with what’s called a “triple handshake.”
46%
In the postal mail, package delivery can be confirmed via return receipts; online, packet delivery is confirmed by what are called acknowledgment packets, or ACKs.
47%
Behind the scenes of the triple handshake, each machine provides the other with a kind of serial number—and it’s understood that every packet sent after that will increment those serial numbers by one each time, like checks in a checkbook.
47%
(Note that these two numbering schemes are totally independent, and the number that begins each sequence is typically chosen at random.)
47%
Three such redundant ACKs in a row would signal to your machine that 101 isn’t just delayed but hopelessly gone, so it will resend that packet.
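The checkbook-style numbering and the triple-duplicate-ACK signal can be sketched as a simplified model of the receiver side. The function name and the starting sequence number 100 are my own illustrative choices (real TCP ACKs byte offsets chosen at random, as the book notes).

```python
def cumulative_acks(received_seqs, start=100):
    """Simulate a receiver's cumulative ACKs: each ACK names the next
    sequence number expected. A gap makes the ACK repeat, and three
    duplicates in a row tell the sender that packet is lost, not late."""
    expected = start
    buffered = set()        # out-of-order packets held for later
    acks = []
    for seq in received_seqs:
        if seq == expected:
            expected += 1
            while expected in buffered:   # gap filled: jump ahead
                expected += 1
        elif seq > expected:
            buffered.add(seq)             # arrived early; keep waiting
        acks.append(expected)             # "I have everything up to here"
    return acks

# Packet 101 is lost; 100, 102, 103, 104 arrive.
print(cumulative_acks([100, 102, 103, 104]))  # → [101, 101, 101, 101]
```

The three repeated `101`s after the first are exactly the "redundant ACKs" that trigger a resend of packet 101.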
47%
All those acknowledgments can actually add up to a considerable amount of traffic. We think of, say, a large file transfer as a one-way operation, but in fact the recipient is sending hundreds of “control messages” back to the sender. A report from the second half of 2014 showed that almost 10% of upstream Internet traffic during peak hours was due to Netflix—which we tend to think of as sending data almost exclusively downstream, to users.
47%
In the human sphere, the anxiety that the message is indeed being received similarly pervades conversation.
47%
No wonder that the most successful twenty-first-century marketing campaign for a wireless carrier featured a network engineer’s quality-control catchphrase, repeated again and again: “Can you hear me now?”
47%
On the other hand, “the Internet was based on the assumption that no network was necessarily reliable, and you had to do end-to-end retransmissions to recover.”
47%
As researchers discovered in the early days of networking, using reliable, robust protocols—with all their ACKs and retransmission of lost packets—to transmit the human voice is overkill. The humans provide the robustness themselves. As Cerf explains, “In the case of voice, if you lose a packet, you just say, ‘Say that again, I missed something.’”
47%
Partly this depends on the nature of the network: we start to worry in a matter of seconds over the phone, days over email, and weeks over postal mail.
47%
In networking, having the parties properly tune their expectations for the timeliness of acknowledgments is crucial to the system functioning correctly.
47%
If both stations simply retransmitted right away to try to get their message across, they’d run the risk of getting stuck in perpetual interference.
47%
The breakthrough turned out to be increasing the average delay after every successive failure—specifically, doubling the potential delay before trying to transmit again.
47%
Since the maximum delay length (2, 4, 8, 16…) forms an exponential progression, it’s become known as Exponential Backoff.
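The schedule can be sketched in a few lines. The function names are my own; drawing the actual wait uniformly below the doubling ceiling is the standard way real implementations keep two colliding senders from retrying in lockstep, and the cap is an assumed practical detail.

```python
import random

def backoff_delays(failures, base=1.0, cap=64.0):
    """Exponential Backoff: after each successive failure the *maximum*
    wait doubles; the actual wait is a random draw below that ceiling,
    so colliding senders desynchronize instead of re-colliding."""
    delays = []
    for n in range(failures):
        ceiling = min(cap, base * 2 ** (n + 1))   # 2, 4, 8, 16, ... up to cap
        delays.append(random.uniform(0, ceiling))
    return delays
```

The average delay doubles with each failure, so the retry rate falls toward zero without ever reaching it — the "finite patience, infinite mercy" the book describes a few highlights later.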
47%
Beyond just collision avoidance, Exponential Backoff has become the default way of handling almost all cases of networking failure or unreliability.
48%
This is good for everyone: it prevents a host server that’s down from getting slammed with requests as soon as it comes back online, and it prevents your own machine from wasting too much effort trying to get blood from a stone. But interestingly, it also does not force (or allow) your machine to ever completely give up. Exponential Backoff is also a critical part of networking security, when successive password failures in logging into an account are punished by an exponentially increasing lockout period.
48%
At the same time it also solves another problem: the account’s real owner, no matter how forgetful, is never permanently locked out after some arbitrary cutoff. In human society, we tend to adopt a policy of giving people some finite number of chances in a row, then giving up entirely.
48%
The rate of “retransmission” goes toward zero—yet you never have to completely give up.
48%
Perhaps most importantly, the host would never have to tell her relative that she’d given up on him for good or that he was beyond redemption. It offers a way to have finite patience and infinite mercy. Maybe we don’t have to choose.
48%
In place of violation hearings scheduled a long time into the future, requiring uncertain judgment calls, and occasionally producing huge penalties, HOPE is based on immediate, predefined punishments that begin with just one day in jail and increase after each incident.
48%
The phone system gets full; the mail system gets slow.
48%
Therefore, the sender and receiver must not only communicate but metacommunicate: they need to figure out how fast the data should be sent. Somehow, assorted packet flows—without explicit management or coordination—must both get out of each other’s way and quickly take advantage of any newly available space.
48%
At the heart of TCP congestion control is an algorithm called Additive Increase, Multiplicative Decrease, or AIMD.
48%
Essentially, AIMD takes the form of someone saying, “A little more, a little more, a little more, whoa, too much, cut way back, okay a little more, a little more…” Thus it leads to a characteristic bandwidth shape known as the “TCP sawtooth”—steady upward climbs punctuated by steep drops.
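The sawtooth falls out of a two-line update rule. This is a toy model under my own naming: one step per round trip, `True` meaning a packet was lost that round, with the conventional add-one / halve parameters as defaults.

```python
def aimd(loss_events, start=1.0, increase=1.0, decrease=0.5):
    """Additive Increase, Multiplicative Decrease: grow the congestion
    window by a fixed amount per successful round trip ("a little more"),
    and cut it by a multiplicative factor on each loss ("cut way back")."""
    window = start
    trace = [window]
    for dropped in loss_events:                  # True = packet loss this round
        window = window * decrease if dropped else window + increase
        trace.append(window)
    return trace

# Climb from 1 to 4, lose a packet, halve, climb again:
print(aimd([False, False, False, True, False]))
# → [1.0, 2.0, 3.0, 4.0, 2.0, 3.0]
```

Plotting a longer trace produces exactly the steady climbs and steep drops of the "TCP sawtooth."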