Kindle Notes & Highlights
by Katie Hafner
Read between April 11 and April 16, 2018
That had led them to the determination that it would take only one hundred fifty lines of code to process a packet through one of the IMPs.
On his deathbed in 1969, Dwight Eisenhower asked a friend about “my scientists” and said they were “one of the few groups that I encountered in Washington who seemed to be there to help the country and not help themselves.”
Each site was responsible for building a custom interface between host and IMP. Since computers of various makes were involved, no single interface would do for all the sites. This entailed a separate hardware- and software-development project at each site and was not something the host teams could throw together overnight.
Above cost, performance, or anything else, reliability was their top priority—design for it, build for it, prepare for the worst, and above all, don’t put your machine in a position to fail. Heart’s mantra built reliability into the IMP in a thousand ways right from the start.
Heart’s design reviews were meant “to help thrash through the hard parts,”
The IMP would be built as a messenger, a sophisticated store-and-forward device, nothing more. Its job would be to carry bits, packets, and messages: To disassemble messages, store packets, check for errors, route the packets, and send acknowledgments for packets arriving error-free; and then to reassemble incoming packets into messages and send them up to the host machines—all in a common language.
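The store-and-forward cycle described here can be sketched in miniature. This is a hypothetical illustration, not BBN's actual code: the packet size, checksum scheme, and field names are all invented.

```python
# Toy sketch of an IMP-style store-and-forward cycle (illustrative only;
# packet size, checksum, and field names are invented for this example).

PACKET_SIZE = 8  # bytes per packet in this toy model

def disassemble(message: bytes):
    """Split a message into numbered packets, each carrying a checksum."""
    packets = []
    for seq, start in enumerate(range(0, len(message), PACKET_SIZE)):
        chunk = message[start:start + PACKET_SIZE]
        packets.append({"seq": seq, "data": chunk, "checksum": sum(chunk) % 256})
    return packets

def receive(packets):
    """Ack error-free packets, then reassemble them into the message."""
    acked = []
    for p in packets:
        if sum(p["data"]) % 256 == p["checksum"]:  # error check
            acked.append(p)                        # acknowledge this packet
    acked.sort(key=lambda p: p["seq"])             # restore original order
    return b"".join(p["data"] for p in acked)

msg = b"LO AND BEHOLD"
assert receive(disassemble(msg)) == msg
```

A packet that fails its checksum is simply never acknowledged and is dropped from reassembly, which is the property the acknowledgment scheme depends on.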
system. But instead of working out the essential details in the blueprints, Honeywell had built BBN’s machine without verifying that the BBN-designed interfaces, as drawn, would work with the 516 base model.
BBN’s full implementation of remote diagnostic tools and debugging capabilities would later become a huge asset. When the network matured, remote control would enable BBN to monitor and maintain the whole system from one operations center, collecting data and diagnosing problems as things changed. Periodically, each IMP would send back to Cambridge a “snapshot of its status,” a set of data about its operating conditions. Changes in the network, minor and major, could be detected. Heart’s group envisioned someday being able to look across the network to know whether any machine was
They were already becoming distracted by something else he disliked—building software tools. Heart feared delay. Over the years he had seen too many programmers captivated by tool building, and he had spent a career holding young engineers back from doing things that might waste money or time. The people in Heart’s division knew that if they asked him for the okay to clock hours writing editors, assemblers, and debuggers, they would meet with stiff resistance, perhaps even a shouting match. So no one ever asked; they just did it, building tools when they thought it was the right thing to do,
But the IMP Guys were driving the machine hard. The flow of packets into and out of the IMP happened faster than the Honeywell designers had anticipated. The
To avoid sounding too declarative, he labeled the note “Request for Comments” and sent it out on April 7, 1969. Titled “Host Software,” the note was distributed to the other sites the way all the first Requests for Comments (RFCs) were distributed: in an envelope with the lick of a stamp. RFC Number 1 described in technical terms the basic “handshake” between two computers—how the most elemental connections would be handled. “Request for Comments,” it turned out, was a perfect choice of titles. It sounded at once solicitous and serious. And it stuck.
“The other definition of protocol is that it’s a handwritten agreement between parties, typically worked out on the back of a lunch bag,” Cerf remarked, “which describes pretty accurately how most of the protocol designs were done.”
Everyone had a vision of the potential for intercomputer communication, but no one had ever sat down to construct protocols that could actually be used. It wasn’t BBN’s job to worry about that problem. The only promise anyone from BBN had made about the planned-for subnetwork of IMPs was that it would move packets back and forth, and make sure they got to their destination. It was entirely up to the host computer to figure out how to communicate with another host computer or what to do with the messages once it received them. This was called the “host-to-host” protocol.
A month after the first IMP was installed at UCLA, IMP Number Two arrived at SRI, right on schedule on October 1, 1969.
Bill Duvall, an SRI researcher, spent about a month writing a clever program for the 940 that essentially fooled it into thinking it was communicating not with another computer but with a “dumb” terminal. A dumb terminal can neither compute nor store information; it serves only to display the most recent set of information sent to it by the computer to which it’s linked. Duvall’s program was a very specific interim solution to the host-to-host communication problem.
Later that day they tried again. This time it worked flawlessly. Crocker, Cerf, and Postel went to Kleinrock’s office to tell him about it so he could come see for himself. Back in the UCLA lab Kline logged on to the SRI machine and was able to execute commands on the 940’s time-sharing system. The SRI computer in Menlo Park responded as if the Sigma-7 in L.A. was a bona fide dumb terminal.
There is no small irony in the fact that the first program used over the network was one that made the distant computer masquerade as a terminal. All that work to get two computers talking to each other and they ended up in the very same master-slave situation the network was supposed to eliminate. Then again, technological advances often begin with attempts to do something familiar.
A network now existed. The first ARPA network map looked like this:
For reasons of economy, Roberts decided that no direct link was needed between UCLA and Utah, or between Santa Barbara and Utah, so that all traffic destined for Utah had to go through the IMP at SRI. That was fine as long as it was up and running. If it crashed, the network would divide and Utah would be cut off until SRI was brought back on-line. As it would turn out, the four-node network that Roberts designed was not a robust web of redundant connections.
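The fragility is easy to verify with a toy connectivity check over the four-node map. The node and link names come from the passage; the code itself is only an illustration.

```python
# The four-node topology described in the passage: no direct UCLA-Utah or
# UCSB-Utah link, so all Utah traffic passes through the IMP at SRI.
LINKS = {("UCLA", "SRI"), ("UCLA", "UCSB"), ("UCSB", "SRI"), ("SRI", "UTAH")}

def reachable(start, down=None):
    """Return the nodes reachable from `start` when node `down` is off-line."""
    edges = {(a, b) for a, b in LINKS if down not in (a, b)}
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for a, b in edges:
            for nxt in ((b,) if a == node else (a,) if b == node else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen

assert "UTAH" in reachable("UCLA")                  # normally Utah is reachable
assert "UTAH" not in reachable("UCLA", down="SRI")  # SRI crash cuts Utah off
```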
By the end of 1969, the Network Working Group still hadn’t come up with a host-to-host protocol. Under duress to show something to ARPA at a meeting with Roberts in December, the group presented a patched-together protocol—Telnet—that allowed for remote log-ins.
When the “glitch-cleaning committee” finished its work a year later, the NWG at last produced a complete protocol. It was called the Network Control Protocol, or NCP.
too. Above all, the esoteric concept on which the entire enterprise turned—packet-switching—worked. The predictions of utter failure were dead wrong.
His insistence on building robust computers—and on maintaining control over the computers he put in the field—had inspired the BBN team to invent a technology: remote maintenance and diagnostics. The BBN team had designed into the IMPs and the network the ability to control these machines from afar.
Loop tests were extremely important; they provided a way of isolating sources of trouble. By a process of elimination, looping one component or another, BBN could determine whether a problem lay with the phone lines, the modems, or an IMP itself.
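The process of elimination behind loop testing can be sketched like this. The component names and the pass/fail model are invented for illustration; the idea is only that looping progressively farther out pinpoints the first faulty element.

```python
# Toy loop-test isolation: loop back each component in turn, outward from
# the IMP; the first loop that fails identifies the faulty element.
# Component names and the fault model are invented for this sketch.

def isolate_fault(path, loop_ok):
    """path: components ordered outward; loop_ok(prefix) -> bool.

    Returns the first component whose loop test fails, or None if
    every loop passes.
    """
    for i in range(1, len(path) + 1):
        if not loop_ok(path[:i]):
            return path[i - 1]
    return None

path = ["IMP", "modem", "phone line", "remote modem", "remote IMP"]
faulty = "phone line"
assert isolate_fault(path, lambda prefix: faulty not in prefix) == "phone line"
```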
Among the basic assumptions made by the IMP Guys was that the most effective way of detecting failures was through an active reporting mechanism. They designed their system so that each IMP periodically compiled a report on the status of its local environment—number of packets handled by the IMP, error rates in the links, and the like—and forwarded the report through the network to a centralized network operations center that BBN had set up in Cambridge. The center would then integrate the reports from all the IMPs and build a global picture of the most likely current state of the network.
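A minimal sketch of that active-reporting idea, with report fields invented for illustration: each IMP periodically emits a status snapshot, and a central collector keeps the latest report per IMP to build the global picture.

```python
# Toy network operations center: merge per-IMP status reports into one
# global picture, keeping only the most recent snapshot from each IMP.
# The report fields ('packets', 'line_errors') are invented.

def build_picture(reports):
    """reports: list of dicts with 'imp', 'time', 'packets', 'line_errors'."""
    latest = {}
    for r in reports:
        if r["imp"] not in latest or r["time"] > latest[r["imp"]]["time"]:
            latest[r["imp"]] = r
    return latest

reports = [
    {"imp": "UCLA", "time": 1, "packets": 500, "line_errors": 0},
    {"imp": "SRI",  "time": 1, "packets": 320, "line_errors": 2},
    {"imp": "UCLA", "time": 2, "packets": 540, "line_errors": 1},  # newer snapshot
]
picture = build_picture(reports)
assert picture["UCLA"]["packets"] == 540  # the newest report wins
```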
Dominating one large wall of the control center was a logical map of the network. Constructed of movable magnetic pieces on a metal background, the map was a wiring diagram showing all of the IMPs, host computers, and links—indicated by square, round, and triangular markers. Color codes and pointers indicated the status of each IMP, node, and link. At a glance, operators could tell which lines or IMPs were down and where traffic was being routed around the trouble spots. Besides critical information, an occasional joke or cartoon might be tacked up with a magnet as if the control map were one
There were periods in which the downtime of the IMPs averaged as much as 3 or 4 percent on a monthly basis.
The new scheme would eliminate the need for a host computer between every user and the IMP subnet. All you’d need to make it work would be a dumb terminal connected to an IMP. This would open up a lot of new access points.
BBN launched an accelerated effort to design a terminal controller that could manage the traffic generated by a large number of terminal devices connected directly or through dial-up lines. The new device was called simply a Terminal IMP, or TIP.
Congestion control, one of the troublesome problems demonstrated by Kahn’s experiments, had been attacked and improved. BBN had redesigned the scheme to reserve enough space in the IMP memory buffers for reassembly of incoming packets. A specific amount of reassembly space for each message would be reserved at a destination IMP before the message would be allowed to enter the network. The sending IMP would check, and if told that there was insufficient space available in the destination IMP’s buffers, the RFNM (Request For Next Message) was delayed.
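The reservation scheme can be sketched as follows. Buffer sizes and names are hypothetical; the passage describes the idea, not this code. A destination IMP must grant reassembly space before a message may enter the network, and the go-ahead (the RFNM) is withheld when its buffers are full.

```python
# Toy model of the redesigned flow control: reserve reassembly buffer
# space at the destination IMP before a message enters the network.
# The buffer capacity and packet counts are invented for illustration.

class DestinationIMP:
    def __init__(self, buffer_packets):
        self.free = buffer_packets  # reassembly space, in packets

    def request_space(self, packets_needed):
        """Grant a reservation (True) or withhold/delay the RFNM (False)."""
        if packets_needed <= self.free:
            self.free -= packets_needed
            return True
        return False

    def message_delivered(self, packets_used):
        """Reassembly finished: release the reserved space."""
        self.free += packets_used

imp = DestinationIMP(buffer_packets=8)
assert imp.request_space(6) is True    # reservation granted; message may enter
assert imp.request_space(4) is False   # insufficient space: RFNM delayed
imp.message_delivered(6)               # first message reassembled and handed up
assert imp.request_space(4) is True    # now the next message gets its space
```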
machines. Telnet was conceived in order to overcome simple differences, such as establishing a connection and determining what kind of character set to use. It was TIPs and Telnet together that paved the way for rapid expansion of the network.
File transfers were the next challenge. For a couple of years, a half-dozen researchers had been trying to arrive at an acceptable file-transfer protocol, or FTP. The file-transfer protocol specified the formatting for data files traded over the network. Transferring files was something of a life force for the network. File transfers were the digital equivalent of breathing—data inhale, data exhale, ad infinitum. FTP made it possible to share files between machines. Moving files might seem simple, but the differences between machines made it anything but. FTP was the first application to
Now, on a normal day the channels were virtually empty. In the fall of 1971, the network carried an average of just 675,000 packets per day, barely more than 2 percent of its capacity of 30 million packets a day.
Metcalfe and Cohen used Harvard’s PDP-10 to simulate an aircraft carrier landing and then displayed the image on a graphics terminal at MIT. The graphics were processed at MIT, and the results (the view of the carrier’s flight deck) were shipped back over the ARPA network to the PDP-10 at Harvard, which also displayed them. The experiment demonstrated that a program could be moved around the network at such high speed as to approximate real time. Metcalfe and others wrote up an RFC to announce the triumph and titled it “Historic Moments in Networking.”
The ARPA network was a growing web of links and nodes, and that was it—like a highway system without cars.
For some reason or other, the host he was trying to reach wasn’t functioning, or he miscued the thing. The message came back: “HOST DEAD.” “Oh, my God. I’ve killed it!” he exclaimed. He wouldn’t touch a terminal after that.
Two people had logged in to the University of Utah. One saw that somebody else he knew but had never met was logged in. They were in talk mode, and so he typed, “Where are you?” The other replied, “Well, I’m in Washington.” “Where in Washington?” “At the Hilton.” “Well, I’m at the Hilton, too.” The two turned out to be only a few feet from each other.
Cerf and others had toyed with the idea of setting up Colby’s paranoid to have a “session” with the psychiatrist. Just a few weeks before the ICCC demonstration, PARRY indeed met the Doctor for an unusual conversation over the ARPANET, in an experiment orchestrated at UCLA. It perhaps marked the origin, in the truest sense, of all computer chat. There was no human intervention in the dialogue. PARRY was running at Stanford’s artificial-intelligence lab, the Doctor was running on a machine at BBN, and at UCLA their input and output were cross-connected through the ARPANET, while the operators
The ICCC demonstration did more to establish the viability of packet-switching than anything else before it. As a result, the ARPANET community gained a much larger sense of itself, its technology, and the resources at its disposal. For computer
Bob Kahn had just devoted a year of his life to demonstrating that resource-sharing over a network could really work. But at some point in the course of the event, he turned to a colleague and remarked, “You know, everyone really uses this thing for electronic mail.”
There was a handy bit of software on the network called the resource-sharing executive, or RSEXEC. If you typed in “where so-and-so,” RSEXEC looked for so-and-so by searching the “who” list—a roster of everyone logged on—at each site. You could locate a person on the network this way if he happened to be logged on at that moment. “I
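The "where so-and-so" lookup amounts to searching each site's "who" list for a name. A sketch of that idea, with all the site and user names invented for illustration:

```python
# Toy RSEXEC-style "where" lookup: search each site's "who" list (the
# roster of everyone currently logged on) for a named user. Site and
# user names here are invented for this illustration.

WHO_LISTS = {
    "UCLA": ["postel", "kline"],
    "SRI":  ["duvall"],
    "BBN":  ["tomlinson", "heart"],
}

def where(user):
    """Return the sites where `user` is logged on right now."""
    return [site for site, users in WHO_LISTS.items() if user in users]

assert where("tomlinson") == ["BBN"]
assert where("lukasik") == []  # not logged on anywhere at the moment
```

The lookup only finds people who happen to be logged on when it runs, which matches the limitation the passage describes.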
The ARPANET was not intended as a message system. In the minds of its inventors, the network was intended for resource-sharing, period. That very little of its capacity was actually ever used for resource-sharing was a fact soon submersed in the tide of electronic mail. Between 1972 and the early 1980s, e-mail, or network mail as it was referred to, was discovered by thousands of early users. The decade gave rise to many of the enduring features of modern digital culture: Flames, emoticons, the @ sign, debates on free speech and privacy, and a sleepless search for technical improvements and
E-mail was to the ARPANET what the Louisiana Purchase was to the young United States. Things only got better as the network grew and technology converged with the torrential human tendency to talk.
The first of these programs, called MAILBOX, was installed in the early 1960s on the Compatible Time-Sharing System at MIT.
The first electronic-mail delivery engaging two machines was done one day in 1972 by a quiet engineer, Ray Tomlinson at BBN. Sometime earlier, Tomlinson had written a mail program for Tenex, the BBN-grown operating system that, by now, was running on most of the ARPANET’s PDP-10 machines. The mail program was written in two parts: To send messages, you’d use a program called SNDMSG; to receive mail, you’d use the other part called READMAIL. He hadn’t actually intended for the program to be used on the ARPANET.
He looked down at the keyboard he was using, a Model 33 Teletype, which almost everyone else on the Net used, too. In addition to the letters and numerals there were about a dozen punctuation marks. “I got there first, so I got to choose any punctuation I wanted,” Tomlinson said. “I chose the @ sign.” The character also had the advantage of meaning “at” the designated institution. He had no idea he was creating an icon for the wired world.
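Tomlinson's choice still defines how a network mail address splits into its two parts: the user "at" the designated host. A sketch of that parsing (the example address is made up):

```python
# Split a network mail address at Tomlinson's @ sign: the user "at"
# the designated host. The example address below is invented.

def parse_address(address: str):
    user, _, host = address.partition("@")  # split at the first @
    if not user or not host:
        raise ValueError(f"not a user@host address: {address!r}")
    return user, host

assert parse_address("tomlinson@bbn-tenexb") == ("tomlinson", "bbn-tenexb")
```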
A frequent flier, Lukasik seldom boarded a plane without lugging aboard his thirty-pound “portable” Texas Instruments terminal with an acoustic coupler, so he could dial in and check his messages from the road. “I really used it to manage ARPA,” Lukasik recalled. “I would be at a meeting, and every hour I would dial up my mail. I encouraged everybody in sight to use it.” He pushed it on all his office directors and they pushed it on others. ARPA managers noticed that e-mail was the easiest way to communicate with the boss, and the fastest way to get his quick approval on things.
Roberts called his program RD, for “read.” Everyone on the ARPANET loved it, and almost everyone came up with variations to RD—a tweak here and a pinch there. A cascade of new mail-handling programs based on the Tenex operating system flowed into the network: NRD, WRD, BANANARD (“banana” was programmer’s slang for “cool” or “hip”), HG, MAILSYS, XMAIL . . . and they kept coming. Pretty soon, the network’s main operators were beginning to sweat. They were like jugglers who had thrown too much up in the air. They needed more uniformity in these programs. Wasn’t anyone paying attention to the
Trouble in one machine could trip a systemwide domino effect. Case in point: the Christmas Day, 1973, lockup. The Harvard IMP developed a hardware fault that had the bizarre effect of reading out all zeros into the routing tables, thereby informing other IMPs across the country that Harvard had just become the shortest route—zero hops—to any destination on the ARPANET. The rush of packets toward Harvard was breathtaking.
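The failure mode is a classic distance-vector pathology, easy to reproduce in a toy routing table (the hop counts, node names, and update rule here are invented for illustration): once one node advertises zero hops to everything, every neighbor concludes that node is the shortest path to every destination.

```python
# Toy distance-vector step reproducing the Christmas Day 1973 lockup:
# a faulty IMP advertises zero hops to every destination, so neighbors
# route everything through it. Hop counts and names are invented.

def update_routes(my_routes, neighbor, neighbor_routes):
    """One distance-vector update.

    my_routes: dest -> (hops, next_hop); neighbor_routes: dest -> hops.
    Adopt the neighbor's route whenever 1 + its advertised cost is shorter.
    """
    routes = dict(my_routes)
    for dest, hops in neighbor_routes.items():
        if 1 + hops < routes.get(dest, (float("inf"), None))[0]:
            routes[dest] = (1 + hops, neighbor)
    return routes

# A healthy table at some IMP, then a faulty neighbor advertising
# zero hops to everything (the all-zero routing table):
healthy = {"MIT": (2, "BBN"), "UCLA": (5, "BBN")}
faulty_ad = {"MIT": 0, "UCLA": 0}
poisoned = update_routes(healthy, "HARVARD", faulty_ad)
assert all(next_hop == "HARVARD" for _, next_hop in poisoned.values())
```

Every destination now appears one hop away via the faulty node, which is why packets from across the network rushed toward Harvard.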
The acidic attacks and level of haranguing unique to on-line communication, unacceptably asocial in any other context, were oddly normative on the ARPANET. Flames could start up at any time over anything, and they could last for one message or one hundred.

