Kindle Notes & Highlights
by Katie Hafner
Read between April 11 and April 16, 2018
Taylor had been the young director of the office within the Defense Department’s Advanced Research Projects Agency overseeing computer research, and he was the one who had started the ARPANET. The project had embodied the most peaceful intentions—to link computers at scientific laboratories across the country so that researchers might share computer resources.
The equipment was state of the art, but having a room cluttered with assorted computer terminals was like having a den cluttered with several television sets, each dedicated to a different channel. “It became obvious,” Taylor said many years later, “that we ought to find a way to connect all these different machines.”
Eisenhower hadn’t wanted a seasoned military expert heading the Pentagon; he was one himself. The president distrusted the military-industrial complex and the fiefdoms of the armed services. His attitude toward them sometimes bordered on contempt.
Eisenhower was the first president to host a White House dinner specifically to single out the scientific and engineering communities as guests of honor, …
Then in May 1961 a computer, and a very large one at that, demanded his attention. A program in military command and control issues had been started at ARPA using emergency DOD funds. For the work, the Air Force had purchased the huge, expensive Q-32, a behemoth of a machine that was to act as the backup for the nation’s air defense early-warning system. The machine had been installed at a facility in Santa Monica, California, at one of the Air Force’s major contractors, System Development Corporation (SDC), where it was supposed to be used for operator training, and as a software development …
Time-sharing was, as the term suggests, a new method of giving many users interactive access to computers from individual terminals, letting each person work directly with the mainframe. The revolutionary aspect of time-sharing was that it eliminated much of the tedious waiting that characterized batch-process computing: users interacted with the computer and got their results immediately. “We really believed that it was a better way to operate,” …
What time-sharing could not do was eliminate the necessity of coordinating competing demands on the machine by different users. By its nature, time-sharing encouraged users to work as if they had the entire machine at their command, when in fact they had only a fraction of the total computing power. Distribution of costs among a number of users meant that the more users the better. Of course, too many users bogged down the machine, since a high percentage of the machine’s resources were allocated to coordinating the commands of multiple users.
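The tradeoff described here is easy to make concrete. Below is a toy round-robin model (slice times and coordination costs are invented for illustration, not figures from the book): each user gets a fixed slice of processor time in turn, and every switch between users carries a bookkeeping cost that grows with the number of active users, so adding users eventually bogs the machine down.

```python
# Toy model of time-sharing: processor slices are handed out round-robin,
# and each context switch costs coordination time that grows with the
# number of active users. All numbers are illustrative assumptions.

def useful_fraction(num_users, slice_ms=50.0, coord_ms_per_user=1.0):
    """Fraction of a scheduling round spent doing real work."""
    switch_cost_ms = coord_ms_per_user * num_users  # per-switch bookkeeping
    return slice_ms / (slice_ms + switch_cost_ms)

for n in (1, 8, 32, 128):
    # How long a user waits for their slice to come around again.
    wait_ms = (n - 1) * (50.0 + 1.0 * n)
    print(f"{n:4d} users: {useful_fraction(n):5.0%} useful work, "
          f"~{wait_ms:,.0f} ms between one user's slices")
```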
Licklider was far more than just a computer enthusiast, however. For several years, he had been touting a radical and visionary notion: that computers weren’t just adding machines. Computers had the potential to act as extensions of the whole human being, as tools that could amplify the range of human intelligence and expand the reach of our analytical powers.
…pound the table and say it’s not fair, and he’d say, ‘It doesn’t matter who gets the credit; it matters that it gets done.’”
In fact, SAGE was one of the first fully operational, real-time interactive computer systems. Operators communicated with the computer through displays, keyboards, switches, and light guns. Users could request information from the computer and receive an answer within a few seconds. New information continuously flowed directly into the computer’s memory through telephone lines to the users, making it immediately available to the operators.
The idea on which Lick’s worldview pivoted was that technological progress would save humanity. The political process was a favorite example of his. In a McLuhanesque view of the power of electronic media, Lick saw a future in which, thanks in large part to the reach of computers, most citizens would be “informed about, and interested in, and involved in, the process of government.” He imagined what he called “home computer consoles” and television sets linked together in a massive network. “The political process,” he wrote, “would essentially be a giant teleconference, and a campaign would be …
Building a network as an end in itself wasn’t Taylor’s principal objective. He was trying to solve a problem he had seen grow worse with each round of funding. Researchers were duplicating, and isolating, costly computing resources. Not only were the scientists at each site engaging in more, and more diverse, computer research, but their demands for computer resources were growing faster than Taylor’s budget. Every new project required setting up a new and costly computing operation.
And none of the resources or results was easily shared. If the scientists doing graphics in Salt Lake City wanted to use the programs developed by the people at Lincoln Lab, they had to fly to Boston.
In those days, software programs were one-of-a-kind, like original works of art, and not easily transferred from one machine to another.
There was almost no way to bring radical new technology into the Bell System to coexist with the old. It wasn’t until 1968, when the FCC permitted the use of the Carterfone—a device for connecting private two-way radios with the telephone system—that AT&T’s unrelenting grip on the nation’s telecommunications system loosened. Not …
In the early 1960s, before Larry Roberts had even set to work creating a new computer network, two other researchers, Paul Baran and Donald Davies—completely unknown to each other and working continents apart toward different goals—arrived at virtually the same revolutionary idea for a new kind of communications network. The realization of their concepts came to be known as packet-switching.
Soon after Baran had arrived at RAND, he developed an interest in the survivability of communications systems under nuclear attack. He was motivated primarily by the hovering tensions of the cold war, not the engineering challenges involved.
They discussed the brain, its neural net structures, and what happens when some portion is diseased, particularly how brain functions can sometimes recover by sidestepping a dysfunctional region. “Well, gee, you know,” Baran remembered thinking, “the brain seems to have some of the properties that one would need for real stability.” It struck him as significant that brain functions didn’t rely on a single, unique, dedicated set of cells. This is why damaged cells can be bypassed as neural nets re-create themselves over new pathways in the brain.
Baran’s idea constituted a third approach to network design. He called his a distributed network. Avoid having a central communications switch, he said, and build a network composed of many nodes, each redundantly connected to its neighbor. His original diagram showed a network of interconnected nodes resembling a distorted lattice, or fish net.
He concluded that a redundancy level as low as 3 or 4—each node connecting to three or four other nodes—would provide an exceptionally high level of ruggedness and reliability.
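That claim lends itself to a quick numerical check. The sketch below is a minimal simulation under assumed parameters (a 400-node ring with random chords standing in for Baran's distorted lattice, average degree about 4): it destroys a random fraction of nodes and measures how much of the surviving network still holds together.

```python
import random
from collections import deque

def build_network(n, avg_degree=4, seed=1):
    """A ring of n nodes plus random chords: a crude stand-in for the
    'distorted lattice' with the redundancy level Baran describes."""
    rng = random.Random(seed)
    links = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    while sum(len(v) for v in links.values()) < avg_degree * n:
        a, b = rng.sample(range(n), 2)
        links[a].add(b)
        links[b].add(a)
    return links

def largest_component(links, alive):
    """Size of the biggest connected group of surviving nodes (plain BFS)."""
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            node = queue.popleft()
            size += 1
            for nxt in links[node]:
                if nxt in alive and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        best = max(best, size)
    return best

n = 400
net = build_network(n)
rng = random.Random(42)
for loss in (0.1, 0.3, 0.5):
    alive = {i for i in range(n) if rng.random() > loss}
    frac = largest_component(net, alive) / max(len(alive), 1)
    print(f"{loss:.0%} of nodes destroyed: {frac:.0%} of survivors still connected")
```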
Baran’s second big idea was still more revolutionary: Fracture the messages too. By dividing each message into parts, you could flood the network with what he called “message blocks,” all racing over different paths to their destination. Upon their arrival, a receiving computer would reassemble the message bits into readable form.
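A sketch of the fracture-and-reassemble idea (the block size and the three-field header are arbitrary choices for the sketch, not Baran's actual message-block format):

```python
import random

BLOCK_SIZE = 16  # bytes per block; an arbitrary size for this sketch

def split_message(message: bytes, msg_id: int):
    """Cut a message into numbered blocks so each can travel on its own."""
    return [(msg_id, seq, message[i:i + BLOCK_SIZE])
            for seq, i in enumerate(range(0, len(message), BLOCK_SIZE))]

def reassemble(blocks):
    """Restore order at the destination, however the blocks arrived."""
    return b"".join(part for _, _, part in sorted(blocks, key=lambda b: b[1]))

msg = b"Fracture the messages too, and flood the network with the parts."
blocks = split_message(msg, msg_id=7)
random.shuffle(blocks)  # the blocks race over different paths
assert reassemble(blocks) == msg
print(f"{len(blocks)} message blocks reassembled intact")
```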
What Baran envisioned was a network of unmanned switches, or nodes—stand-alone computers, essentially—that routed messages by employing what he called a “self-learning policy at each node, without need for a central, and possibly vulnerable, control point.” He came up with a scheme for sending information back and forth that he called “hot potato routing,” which was essentially a rapid store-and-forward system working almost instantaneously, in contrast with the old post-it-and-forward teletype procedure.
As the term “hot potato” suggests, no sooner did a message block enter a node than it was tossed out the door again as quickly as possible. If the best outgoing path was busy—or blown to bits—the message block was automatically sent out over the next best route. If that link was busy or gone, it followed the next best route, and so forth. And if the choices ran out, the data could even be sent back to the node from which it originated.
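The forwarding rule itself fits in a few lines. This is a minimal sketch that assumes each node holds a precomputed ranking of its neighbors toward the destination, something the passage implies but does not spell out:

```python
def hot_potato(node, came_from, routes, busy_links, destroyed):
    """Baran's rule as described above: take the best outgoing path; if it
    is busy or gone, the next best; if every choice runs out, send the
    block back where it came from. `routes[node]` is an assumed ranked
    list of neighbors toward the destination."""
    for neighbor in routes[node]:
        if neighbor in destroyed:           # node or link blown to bits
            continue
        if (node, neighbor) in busy_links:  # line occupied this instant
            continue
        if neighbor == came_from:           # saved as the very last resort
            continue
        return neighbor
    return came_from                        # out of choices: back it goes

# Node B ranks its neighbors toward the destination as C, then E, then A.
routes = {"B": ["C", "E", "A"]}
print(hot_potato("B", "A", routes, busy_links={("B", "C")}, destroyed={"E"}))
# -> 'A': the best link is busy and the second-best node is gone,
#    so the block goes back the way it came.
```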
Vulnerability notwithstanding, the idea of slicing data into message blocks and sending each block out to find its own way through a matrix of phone lines struck AT&T staff members as totally preposterous. Their world was a place where communications were sent as a stream of signals down a pipe. Sending data in small parcels seemed just about as logical as sending oil down a pipeline one cupful at a time.
“It took ninety-four separate speakers to describe the entire system, since no single individual seemed to know more than a part of the system,” Baran said. “Probably their greatest disappointment was that after all this, they said, ‘Now do you see why it can’t work?’ And I said, ‘No.’”
In London in the autumn of 1965, just after Baran halted work on his project, Donald Watts Davies, a forty-one-year-old physicist at the British National Physical Laboratory (NPL), wrote the first of several personal notes expounding on some ideas he was playing with for a new computer network much like Baran’s.
…he gave a public lecture in London describing the notion of sending short blocks of data—which he called “packets”—through a digital store-and-forward network.
There was just one major difference in their approaches. The motivation that led Davies to conceive of a packet-switching network had nothing to do with the military concerns that had driven Baran. Davies simply wanted to create a new public communications network.
…to exploit the technical strengths he saw in digital computers and switches, to bring about highly responsive, highly interactive computing over long distances.
Davies was concerned that circuit-switched networks were poorly matched to the requirements of interacting computers. The irregular, bursty characteristics of computer-generated data traffic did not fit well with the uniform channel capacity of the telephone system. Matching the n...
Time-sharing systems had already solved the nagging problem of slow turnaround time in batch processing by giving each user a slice of computer processing time. Several people could be running jobs at once without noticing any significant delay in their work. Analogously, in a digital communications network, a computer could slice messages into small pieces, or packets, pour those into the electronic pipeline, and allow users to share the network’s total capacity.
The way Clark explained it, the solution was obvious: a subnetwork with small, identical nodes, all interconnected. The idea solved several problems. It placed far fewer demands on all the host computers and correspondingly fewer demands on the people in charge of them. The smaller computers composing this inner network would all speak the same language, of course, and they, not the host computers, would be in charge of all the routing. Furthermore, the host computers would have to adjust their language just once—to speak to the subnet. Not only did Clark’s idea make good sense technically, it …
Perhaps it was Clark’s antipathy toward time-sharing that enabled him to think of this. By assigning the task of routing to the host computers, Roberts and others were essentially adding another time-sharing function. Clark’s idea was to spare the hosts that extra burden and build a network of identical, nonshared computers dedicated to routing.
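Clark's division of labor can be caricatured in a few lines. Everything below, including collapsing the subnet's routing to a single dictionary lookup, is an illustrative assumption; the point is only that a host implements one interface, to its local node, and the identical machines of the inner network handle the rest.

```python
class IMP:
    """One node of the inner network: identical everywhere, and the only
    kind of machine that routes. (Real IMPs would forward hop by hop;
    the dictionary flattens that to one step for the sketch.)"""
    def __init__(self, name, subnet):
        self.name = name
        self.subnet = subnet
        subnet[name] = self

    def send(self, dest, payload):
        self.subnet[dest].deliver(payload)  # routing is the subnet's job

    def deliver(self, payload):
        print(f"host at {self.name} received: {payload}")

class Host:
    """A host adjusts its language exactly once: it speaks to its IMP."""
    def __init__(self, imp):
        self.imp = imp

    def send(self, dest, payload):
        self.imp.send(dest, payload)

subnet = {}
ucla, utah = IMP("UCLA", subnet), IMP("Utah", subnet)
Host(ucla).send("Utah", "LOGIN")  # appears direct; the subnet stays invisible
```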
The ARPA network wasn’t intended as a real-time system, not in the same sense of the word that true real-timers understand it. (Anything that takes more than 10 to 20 milliseconds, the point at which delays become humanly perceptible, is not considered real-time.) Strictly speaking, the ARPA network was to be a store-and-forward system. But data would zip in and out of the nodes so quickly, and the response time from a human perspective would be so rapid, that it qualified as a real-time problem.
When he returned to Washington, Roberts wrote a memorandum describing Clark’s idea and distributed it to Kleinrock and others. He called the intermediate computers that would control the network “interface message processors,” or IMPs, which he pronounced “imps.” They were to perform the functions of interconnecting the network, sending and receiving data, checking for errors, retransmitting in the event of errors, routing data, and verifying that messages arrived at their intended destinations.
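A minimal sketch of the send, check, acknowledge, retransmit loop those functions describe (the checksum, the retry limit, and the loss rates are invented for illustration; Roberts's memo is not being quoted here):

```python
import random

def send_with_ack(payload: bytes, line, max_tries=5):
    """Send one block, verify what comes back, retransmit on failure."""
    checksum = sum(payload) % 256      # crude per-block error check
    for attempt in range(1, max_tries + 1):
        echoed = line(payload)         # what the far end reports receiving
        if echoed is not None and sum(echoed) % 256 == checksum:
            print(f"acknowledged on attempt {attempt}")
            return True
        print(f"attempt {attempt}: no valid acknowledgment, retransmitting")
    return False

def flaky_line(payload, _rng=random.Random(7)):
    """Stand-in for a noisy phone line: drops or corrupts 40% of blocks."""
    roll = _rng.random()
    if roll < 0.2:
        return None                                        # block lost
    if roll < 0.4:
        return payload[:-1] + bytes([payload[-1] ^ 0x01])  # one bit flipped
    return payload

send_with_ack(b"message block from UCLA", flaky_line)
```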
Another paper, presented by Roger Scantlebury, came from Donald Davies’ team at the National Physical Laboratory and discussed the work going on in England: a detailed design study for a packet-switched network. It was the first Roberts had heard of it.
Roberts was designing this experimental network not with survivable communications as his main—or even secondary—concern. Nuclear war scenarios, and command and control issues, weren’t high on Roberts’s agenda. But Baran’s insights into data communications intrigued him nonetheless, and in early 1968 he met with Baran. After that, Baran became something of an informal consultant to the group Roberts assembled to design the network.
Roberts thought the network should start out with four sites—UCLA, SRI, the University of Utah, and the University of California at Santa Barbara—and eventually grow to around nineteen.
Stanford Research Institute (later it severed its ties to Stanford and became just SRI) had been chosen as one of the first sites because Doug Engelbart, a scientist of extraordinary vision, worked there.
…downtime. IMPs could not afford to be dependent on a local host computer or host-site personnel; they should be able to continue operating and routing network traffic whether or not a host was running. The subnetwork also had to continue functioning when individual IMPs were down for service. This idea that maintaining reliability should be incumbent on the subnetwork, not the hosts, was a key principle. Roberts and others believed the IMPs should also attend to such tasks as route selection and acknowledgment of receipt.
Not long after the computer arrived, Ken Olsen stopped by to see the Royal-McBee machine and to tell BBN about the computer he was building at his new company, Digital Equipment. Olsen wanted to lend Beranek a prototype of the machine, which he called the PDP-1, …
The time-sharing demonstration was a success, and BBN decided to start a time-sharing service in the Boston area by placing terminals throughout the city. Soon, however, General Electric mounted a similar effort and quickly stole the bulk of BBN’s time-sharing business.
The idea was adopted as a research theme by BBN’s education group, which Feurzeig ran, and the language came to be called LOGO.
Heart liked working with small, tightly knit groups composed of very bright people. He believed that individual productivity and talent varied not by factors of two or three, but by factors of ten or a hundred. Because Heart had a knack for spotting engineers who could make things happen, the groups he had supervised at Lincoln tended to be unusually productive.
The 516 also helped to settle Heart’s fear that inquisitive graduate students might bring down the network with their tinkering. He could rest much easier knowing the IMPs would be housed in a box built to withstand a war.
Central to the design of the network was the idea that the subnet of IMPs should operate invisibly. So, for instance, if someone seated at a host computer at UCLA wanted to log on to a computer at the University of Utah, the connection should appear to be direct. The user should not have to be bothered with the fact that a subnet even existed.
To be effective, a data network would have to send packets reliably, despite errors unavoidably introduced during transmission over ordinary phone lines. Human ears tolerate telephone line noise, which is often barely audible, but computers receiving data are nit-pickers about noise, and the smallest hiss or pop can destroy small bits of data or instruction. The IMPs would have to be able to compensate.
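Compensating means attaching a check value to every block so that flipped bits are caught and the block can be re-sent. The highlight does not say what scheme the IMPs actually used; the sketch below substitutes an off-the-shelf CRC-32 purely to show the shape of the mechanism.

```python
import binascii

def frame(payload: bytes) -> bytes:
    """Append a 32-bit CRC so the receiver can detect line noise."""
    return payload + binascii.crc32(payload).to_bytes(4, "big")

def check(framed: bytes):
    """Return the payload if the CRC matches, None if noise crept in."""
    payload, trailer = framed[:-4], framed[-4:]
    ok = binascii.crc32(payload) == int.from_bytes(trailer, "big")
    return payload if ok else None

good = frame(b"packet for Utah")
noisy = bytes([good[0] ^ 0x04]) + good[1:]  # a single 'pop' flips one bit
print(check(good))   # b'packet for Utah'
print(check(noisy))  # None -> ask the sender to retransmit
```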
The team Heart had assembled knew how to make things that worked, if not perfectly, then well enough. They were engineers and pragmatists. All of their lives they had built things, connected wires, and made concepts real. Their ethos was utilitarian. At its core, all engineering comes down to making tradeoffs between the perfect and the workable.
The amount of memory in a desktop computer circa the mid-1990s, if it consisted of ferrite cores, would take up an area roughly the size of a football field.
“If the program went wild,” Heart explained, a small timer in the machine would run down to zero (but a healthy program would continually reset it). If the timer reached zero and went off, the IMP was assumed to have a trashed program. It would then throw a relay to turn on the paper tape reader, give it a while to warm up, and then reload the program.
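What Heart describes is a watchdog timer. Here is a small software model of the same logic, with tick rates and timeout values made up for the sketch:

```python
import time

class Watchdog:
    """A countdown that a healthy program keeps resetting; if it ever
    runs out, the machine assumes its program is trashed and reloads."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.reset()

    def reset(self):
        self.deadline = time.monotonic() + self.timeout_s

    def expired(self) -> bool:
        return time.monotonic() >= self.deadline

dog = Watchdog(timeout_s=0.05)
program_healthy = True
for tick in range(20):
    time.sleep(0.02)
    if tick == 5:
        program_healthy = False  # the program 'goes wild' here
    if program_healthy:
        dog.reset()              # normal operation keeps the timer alive
    elif dog.expired():
        print(f"tick {tick}: timer ran down, reloading program from tape")
        dog.reset()              # a fresh program copy behaves again
        program_healthy = True
```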

