Taking the ten-city vacation problem from above, we could start at a “high temperature” by picking our starting itinerary entirely at random, plucking one out of the whole space of possible solutions regardless of price. Then we can start to slowly “cool down” our search by rolling a die whenever we are considering a tweak to the city sequence. Taking a superior variation always makes sense, but we would only take inferior ones when the die shows, say, a 2 or more. After a while, we’d cool it further by only taking a higher-price change if the die shows a 3 or greater—then 4, then 5.
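A rough sketch of that cooling schedule in code (the cost function for pricing an itinerary is a hypothetical stand-in for real fare lookups, and the swap move and step count are arbitrary choices, not anything prescribed by the text):

```python
import random

def anneal_itinerary(cities, cost, steps=10_000):
    """Search for a cheap itinerary using the dice-based cooling
    schedule described above. `cost` is assumed to price a full
    itinerary (a hypothetical stand-in for real fare lookups)."""
    # "High temperature": start from a completely random itinerary.
    current = cities[:]
    random.shuffle(current)

    for step in range(steps):
        # Propose a tweak: swap two cities in the sequence.
        candidate = current[:]
        i, j = random.sample(range(len(candidate)), 2)
        candidate[i], candidate[j] = candidate[j], candidate[i]

        # "Cooling": the die threshold rises from 2 toward 5 over time,
        # so inferior itineraries are accepted less and less often.
        threshold = 2 + int(4 * step / steps)
        die = random.randint(1, 6)

        if cost(candidate) < cost(current) or die >= threshold:
            current = candidate  # always keep improvements; sometimes keep setbacks

    return current
```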
“I couldn’t convince math people that this messy stuff with temperatures, all this analogy-based stuff, was real,” says Kirkpatrick, “because mathematicians are trained to really distrust intuition.”
Rather than keep mum about their secret weapon and become cryptic guru figures themselves, they published their method in a paper in Science, opening it up to others. Over the next few decades, that paper would be cited a whopping thirty-two thousand times. To this day, simulated annealing remains one of the most promising approaches to optimization problems known to the field.
Luria left the dance as soon as he could and set the experiment in motion. After several days of tense, restless waiting, Luria returned to the lab to check on his colonies. Jackpot.
But it was also, at least in part, due to the power of chance. He was in the right place at the right time, where seeing the slot machine triggered a new idea. Tales of discovery often feature a similar moment: Newton’s (possibly apocryphal) apple, Archimedes’ bathtub “Eureka!,” the neglected petri dish that grew Penicillium mold. Indeed, it’s a common enough phenomenon that a word was invented to capture it: in 1754, Horace Walpole coined the term “serendipity,” based on the fairy tale adventures of The Three Princes of Serendip (Serendip being the archaic name of Sri Lanka), who “were always making discoveries, by accidents and sagacity, of things which they were not in quest of.”
When it comes to stimulating creativity, a common technique is introducing a random element, such as a word that people have to form associations with.
Being randomly jittered, thrown out of the frame and focused on a larger scale, provides a way to leave what might be locally good and get back to the pursuit of what might be globally optimal.
While this hasn’t yet resulted in any striking discoveries, he now knows a lot about some obscure topics (such as the kind of knife used by the Chilean armed forces) and he feels that some of these have enriched his life. (For example, he’s learned that there is a word in Portuguese for a “vague and constant desire for something that does not and probably cannot exist,” a problem we still can’t solve with a search engine.)
First, from Hill Climbing: even if you’re in the habit of sometimes acting on bad ideas, you should always act on good ones. Second, from the Metropolis Algorithm: your likelihood of following a bad idea should be inversely proportional to how bad an idea it is. Third, from Simulated Annealing: you should front-load randomness, rapidly cooling out of a totally random state, using ever less and less randomness as time goes on, lingering longest as you approach freezing.
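The text doesn't give a formula for the second and third rules, but one standard way to capture their spirit is the Metropolis algorithm's usual acceptance probability, exp(-badness / temperature), with the temperature lowered over time; the function below is a sketch under that assumption, not the book's own prescription:

```python
import math
import random

def accept(delta_badness, temperature):
    """Metropolis-style rule: always accept improvements; accept a
    setback with a probability that shrinks both as the setback gets
    worse and as the temperature cools toward freezing."""
    if delta_badness <= 0:    # a good (or neutral) idea: always take it
        return True
    if temperature <= 0:      # fully "frozen": never take bad ideas
        return False
    return random.random() < math.exp(-delta_badness / temperature)
```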
Cockcroft himself apparently turned, not unlike his protagonist, to “dicing” for a time in his life, living nomadically with his family on a Mediterranean sailboat, in a kind of Brownian slow motion. At some point, however, his annealing schedule cooled off: he settled down comfortably into a local maximum, on a lake in upstate New York. Now in his eighties, he’s still contentedly there. “Once you got somewhere you were happy,” he told the Guardian, “you’d be stupid to shake it up any further.”
The cell phone began with a boast—Motorola’s Martin Cooper walking down Sixth Avenue on April 3, 1973, as Manhattan pedestrians gawked, calling his rival Joel Engel at AT&T: “Joel, I’m calling you from a cellular phone. A real cellular phone: a handheld, portable, real cellular phone.”
It was October 29, 1969, and Charley Kline at UCLA sent to Bill Duvall at the Stanford Research Institute the first message ever transmitted from one computer to another via the ARPANET. The message was “login”—or would have been, had the receiving machine not crashed after “lo.”
Protocol is how we get on the same page; in fact, the word is rooted in the Greek protokollon, “first glue,” which referred to the outer page attached to a book or manuscript.
But with the Internet, computers became not only the conduit but also the endpoints: the ones doing the talking.
What we now think of as “the Internet” is actually a collection of many protocols, but the chief among them (so much so that it’s often referred to more or less synonymously with the Internet) is what’s known as Transmission Control Protocol, or TCP. It was born from a 1973 talk and a 1974 paper by Vinton “Vint” Cerf and Robert “Bob” Kahn, who laid out a proposal for the language of—as they imagined calling it—an “internetwork.”
Phone calls use what’s called “circuit switching”: the system opens a channel between the sender and the receiver, which supplies constant bandwidth between the parties in both directions as long as the call lasts.
And you can’t afford to dedicate a communications connection to something which is almost never talking, but when it wants to talk it wants immediate access.
And their answer was, “Little boy, go away.” So little boy went away and, with others, developed this technology which ate their lunch.
In a packet-switched network, rather than using a dedicated channel for each connection, senders and receivers atomize their messages into tiny shards known as “packets,” and merge them into the communal flow of data—a bit like postcards moving at the speed of light.
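A toy sketch of that atomizing step (the eight-byte packet size and the reassemble-by-sequence-number scheme are simplifications for illustration, not a real protocol):

```python
import random

def packetize(message: bytes, size: int = 8):
    """Chop a message into small, individually numbered packets
    ("postcards") that can travel independently and be reassembled
    at the destination. The size here is a toy value."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Put packets back in order by sequence number, however they arrived."""
    return b"".join(chunk for _, chunk in sorted(packets))

# Packets may arrive out of order, but the message survives.
pkts = packetize(b"a bit like postcards moving at the speed of light")
random.shuffle(pkts)
assert reassemble(pkts) == b"a bit like postcards moving at the speed of light"
```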
Inspired by algorithms developed in the 1950s for navigating mazes, Baran imagined a design in which every piece of information could independently make its own way to its destination, even as the network was changing dynamically—or being torn to tatters.
In circuit-switched networks, a call fails if any one of its links gets disrupted—which means that reliability goes down exponentially as a network grows larger. In packet switching, on the other hand, the proliferation of paths in a growing network becomes a virtue: there are now that many more ways for data to flow, so the reliability of the network increases exponentially with its size.
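That claim can be made concrete with a back-of-the-envelope toy model; the link and path probabilities below are assumptions chosen purely for illustration:

```python
p = 0.99   # assumed probability that any single link is working

# Circuit switching: a call traversing n links needs every one of them at once.
for n in (10, 50, 100):
    print(f"circuit over {n:3} links: {p**n:.2f} chance of success")

# Packet switching: with k independent candidate paths to the destination,
# the message gets through unless every path is down at once.
path_ok = 0.90  # assumed chance that any one whole path is usable
for k in (1, 3, 10):
    print(f"{k:2} alternative paths:  {1 - (1 - path_ok)**k:.4f} chance of success")
```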
The ability to operate agnostically over any number of diverse media would be packet switching’s great virtue. After early networks in the late ’60s and early ’70s, such as the ARPANET, proved the viability of the concept, networks of all types began sprouting across the country, doing packet switching not only over copper phone wires, but over satellites and over radio.
No transmission can be 100 percent reliable. —VINT CERF AND BOB KAHN
Computer scientists know this concept as the “two generals problem.”
Following this chain of logic requires an infinite series of messages, and obviously that won’t do. Communication is one of those delightful things that work only in practice; in theory it’s impossible.
In TCP, a failure generally leads to retransmission rather than death, so it’s considered enough for a session to begin with what’s called a “triple handshake.”
In the postal mail, package delivery can be confirmed via return receipts; online, packet delivery is confirmed by what are called acknowledgment packets, or ACKs.
Behind the scenes of the triple handshake, each machine provides the other with a kind of serial number—and it’s understood that every packet sent after that will increment those serial numbers by one each time, like checks in a checkbook.
(Note that these two numbering schemes are totally independent, and the number that begins each sequence is typically chosen at random.)
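A schematic of that exchange, heavily simplified (real TCP numbers individual bytes rather than whole packets and carries flags, windows, and options; the dictionaries here just stand in for packet headers):

```python
import random

def triple_handshake():
    """Sketch of TCP's connection setup. Each side picks its own starting
    serial number at random, independently of the other."""
    client_seq = random.randrange(2**32)
    server_seq = random.randrange(2**32)

    # 1. SYN:      client -> server   "here's my starting number"
    # 2. SYN-ACK:  server -> client   "got it; here's mine"
    # 3. ACK:      client -> server   "got yours too" -- the connection is open
    syn     = {"seq": client_seq}
    syn_ack = {"seq": server_seq, "ack": syn["seq"] + 1}
    ack     = {"seq": syn["seq"] + 1, "ack": syn_ack["seq"] + 1}

    # From here on, every packet each side sends bumps its own number by one,
    # "like checks in a checkbook."
    return syn, syn_ack, ack
```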
When a packet goes missing, the acknowledgments for the packets that arrive after it keep repeating the number of the missing one. Three such redundant ACKs in a row would signal to your machine that packet 101, say, isn't just delayed but hopelessly gone, so it will resend that packet.
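A toy sender-side version of that rule (in real TCP this mechanism is known as fast retransmit; the list-of-ACKs interface below is an invention for illustration):

```python
def lost_packet(ack_stream):
    """If the receiver repeats the same acknowledgment three extra times
    in a row, conclude that the packet it keeps asking for is gone, not
    merely delayed, and report it for retransmission."""
    last_ack, duplicates = None, 0
    for ack in ack_stream:
        if ack == last_ack:
            duplicates += 1
            if duplicates == 3:   # three redundant ACKs in a row
                return ack        # ...so resend this packet
        else:
            last_ack, duplicates = ack, 0
    return None

# The receiver keeps asking for packet 101 as later packets arrive.
assert lost_packet([101, 101, 101, 101]) == 101
```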
All those acknowledgments can actually add up to a considerable amount of traffic. We think of, say, a large file transfer as a one-way operation, but in fact the recipient is sending hundreds of “control messages” back to the sender. A report from the second half of 2014 showed that almost 10% of upstream Internet traffic during peak hours was due to Netflix—which we tend to think of as sending data almost exclusively downstream, to users.
In the human sphere, the anxiety that the message is indeed being received similarly pervades conversation.
No wonder that the most successful twenty-first-century marketing campaign for a wireless carrier featured a network engineer’s quality-control catchphrase, repeated again and again: “Can you hear me now?”
On the other hand, “the Internet was based on the assumption that no network was necessarily reliable, and you had to do end-to-end retransmissions to recover.”
As researchers discovered in the early days of networking, using reliable, robust protocols—with all their ACKs and retransmission of lost packets—to transmit the human voice is overkill. The humans provide the robustness themselves. As Cerf explains, “In the case of voice, if you lose a packet, you just say, ‘Say that again, I missed something.’”
Partly this depends on the nature of the network: we start to worry in a matter of seconds over the phone, days over email, and weeks over postal mail.
In networking, having the parties properly tune their expectations for the timeliness of acknowledgments is crucial to the system functioning correctly.
If both stations simply retransmitted right away to try to get their message across, they'd run the risk of getting stuck in perpetual interference.
The breakthrough turned out to be increasing the average delay after every successive failure—specifically, doubling the potential delay before trying to transmit again.
Since the maximum delay length (2, 4, 8, 16…) forms an exponential progression, it’s become known as Exponential Backoff.
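A minimal sketch of that schedule (the send callback, the attempt cap, and the seconds-based units are assumptions added for illustration; note that the delay actually waited is drawn at random from an ever-doubling window, which is why the text speaks of the potential delay doubling):

```python
import random
import time

def send_with_backoff(send, max_attempts=10):
    """Retry an unreliable operation, doubling the maximum possible delay
    after each failure. `send` stands in for any attempt that returns
    True on success; the cap and units are arbitrary choices."""
    window = 2  # maximum wait, in seconds: 2, 4, 8, 16, ...
    for attempt in range(max_attempts):
        if send():
            return True
        # Wait a random amount up to the current window, then double the window.
        time.sleep(random.uniform(0, window))
        window *= 2
    return False
```

A version that never gives up entirely, in the spirit of the passage below, would drop the attempt cap and instead cap the window at some maximum, so retries simply become rarer and rarer.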
Beyond just collision avoidance, Exponential Backoff has become the default way of handling almost all cases of networking failure or unreliability.
This is good for everyone: it prevents a host server that’s down from getting slammed with requests as soon as it comes back online, and it prevents your own machine from wasting too much effort trying to get blood from a stone. But interestingly, it also does not force (or allow) your machine to ever completely give up. Exponential Backoff is also a critical part of networking security, where successive failed password attempts on an account are punished by an exponentially increasing lockout period.
At the same time, it solves another problem: the account’s real owner, no matter how forgetful, is never permanently locked out after some arbitrary cutoff. In human society, we tend to adopt a policy of giving people some finite number of chances in a row, then giving up entirely.
The rate of “retransmission” goes toward zero—yet you never have to completely give up.
Perhaps most importantly, the host would never have to tell her relative that she’d given up on him for good or that he was beyond redemption. It offers a way to have finite patience and infinite mercy. Maybe we don’t have to choose.
In place of violation hearings scheduled a long time into the future, requiring uncertain judgment calls, and occasionally producing huge penalties, HOPE is based on immediate, predefined punishments that begin with just one day in jail and increase after each incident.
The phone system gets full; the mail system gets slow.
Therefore, the sender and receiver must not only communicate but metacommunicate: they need to figure out how fast the data should be sent. Somehow, assorted packet flows—without explicit management or coordination—must both get out of each other’s way and quickly take advantage of any newly available space.
At the heart of TCP congestion control is an algorithm called Additive Increase, Multiplicative Decrease, or AIMD.
Essentially, AIMD takes the form of someone saying, “A little more, a little more, a little more, whoa, too much, cut way back, okay a little more, a little more…” Thus it leads to a characteristic bandwidth shape known as the “TCP sawtooth”—steady upward climbs punctuated by steep drops.
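A minimal sketch of the rule behind that sawtooth (the one-packet increase and the halving factor are the conventional choices; the trace-of-congestion-events interface is invented purely for illustration):

```python
def aimd(congestion_events, increase=1, decrease_factor=0.5, start=1):
    """Trace AIMD's "sawtooth": add a little capacity each round trip,
    cut it sharply whenever congestion (a dropped packet) is signalled.
    The unit is an abstract "window" of packets allowed in flight."""
    window = start
    trace = []
    for congested in congestion_events:
        if congested:
            window = max(1, window * decrease_factor)  # multiplicative decrease
        else:
            window += increase                         # additive increase
        trace.append(window)
    return trace

# Steady climbs punctuated by steep drops -- the TCP sawtooth.
print(aimd([False] * 8 + [True] + [False] * 8 + [True]))
```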