Kindle Notes & Highlights
by Katie Hafner
Read between April 11 and April 16, 2018
The FINGER controversy, a debate over privacy on the Net, occurred in early 1979 and involved some of the worst flaming in the MsgGroup’s experience. The fight was over the introduction, at Carnegie-Mellon, of an electronic widget that allowed users to peek into the on-line habits of other users on the Net. The FINGER command had been created in the early 1970s by a computer scientist named Les Earnest at Stanford’s Artificial Intelligence Lab. “People generally worked long hours there, often with unpredictable ...”
FINGER didn’t allow you to read someone else’s messages, but you could tell the date and time of the person’s last log-on and when last he or she had read mail. Some people had a problem with that.
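A toy Python sketch of the kind of report FINGER produced, as described above; the record fields, names, and timestamps are invented for illustration and have nothing to do with Earnest's actual implementation. The point is simply that the command exposed activity times, never message contents.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class UserRecord:
        username: str
        last_login: datetime        # date and time of the user's last log-on
        last_read_mail: datetime    # when the user last read mail

    def finger(username: str, records: dict) -> str:
        """Report activity times only; the mail itself is never exposed."""
        r = records[username]
        return (f"{r.username}: last login {r.last_login:%Y-%m-%d %H:%M}, "
                f"last read mail {r.last_read_mail:%Y-%m-%d %H:%M}")

    # Invented example data.
    records = {"les": UserRecord("les", datetime(1979, 2, 1, 23, 40),
                                 datetime(1979, 2, 2, 0, 5))}
    print(finger("les", records))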
At the height of the FINGER debate, one person quit the MsgGroup in disgust over the flaming. As with the Quasar debate, the FINGER controversy ended inconclusively. But both debates taught users greater lessons about the medium they were using. The speed of electronic mail promoted flaming, some said; anyone hot could shoot off a retort on the spot, and without the moderating factor of having to look the target in the eye.
Perhaps, he said, we could extend the set of punctuation in e-mail messages. In order to indicate that a particular sentence is meant to be tongue-in-cheek, he proposed inserting a hyphen and parenthesis at the end of the sentence, thus: -).
By now he and Kahn had been talking for several months about what it would take to build a network of networks, and they had both been exchanging ideas with other members of the International Network Working Group. It occurred to Cerf and Kahn that what they needed was a “gateway,” a routing computer standing between each of these various networks to hand off messages from one system to the other. But this was easier said than done.
As far as each net was concerned, the gateway had to look like an ordinary host. While waiting in the lobby, he drew a diagram of the arrangement.
“Our thought was that, clearly, each gateway had to know how to talk to each network that it was connected to,” Cerf said. “Say you’re connecting the packet-radio net with the ARPANET. The gateway machine has software in it that makes it look like a host to the ARPANET IMPs. But it also looks like a host on the packet-radio network.”
The challenge for the International Network Working Group was to devise protocols that could cope with autonomous networks operating under their own rules, while still establishing standards that would allow hosts on the different networks to talk to each other.
That September, Kahn and Cerf presented their paper along with their ideas about the new protocol to the International Network Working Group, meeting ...
By the end of 1973, Cerf and Kahn had completed their paper, “A Protocol for Packet Network Intercommunication.”
Like Roberts’s first paper outlining the proposed ARPANET seven years earlier, the Cerf-Kahn paper of May 1974 described something revolutionary. Under the framework described in the paper, messages should be encapsulated and decapsulated in “datagrams,” much as a letter is put into and taken out of an envelope, and sent as end-to-end packets. These messages would be called transmission-control protocol, or TCP, messages. The paper also introduced the notion of gateways, which would read only the envelope so that only the receiving hosts would read the contents.
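A minimal Python sketch of the envelope idea in the passage above, under the assumption that the "envelope" is simply a header carrying source and destination while the payload stays opaque; the class and function names are invented for illustration and are not taken from the Cerf-Kahn paper or any real TCP implementation.

    from dataclasses import dataclass

    @dataclass
    class Datagram:
        # The "envelope": enough addressing for gateways to forward it.
        src_host: str
        dst_host: str
        # The "letter": opaque to every gateway along the path.
        payload: bytes

    def encapsulate(src: str, dst: str, message: bytes) -> Datagram:
        """Sending host puts the message into an envelope."""
        return Datagram(src_host=src, dst_host=dst, payload=message)

    def gateway_forward(dgram: Datagram, routes: dict) -> str:
        """A gateway reads only the envelope, never the payload."""
        return routes[dgram.dst_host]      # pick the next network to hand it to

    def decapsulate(dgram: Datagram) -> bytes:
        """Only the receiving host takes the letter out of the envelope."""
        return dgram.payload

    # Illustrative use: an ARPANET host sends to a packet-radio host via a gateway.
    routes = {"radio-host-7": "packet-radio-net", "ucla-host-1": "arpanet"}
    d = encapsulate("ucla-host-1", "radio-host-7", b"hello across networks")
    print(gateway_forward(d, routes))   # 'packet-radio-net'
    print(decapsulate(d))               # b'hello across networks'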
Because of all this work done by the IMPs, the old Network Control Protocol was built around the assumption that the underlying network was completely reliable.
The new transmission-control protocol, with a bow to Cyclades, assumed that the CATENET was completely unreliable. Units of information could be lost, others might be duplicated. If a packet failed to arrive or was garbled during transmission, and the sending host received no acknowledgment, an identical twin was transmitted.
The overall idea behind the new protocol was to shift the reliability from the network to the destination hosts. “We focused on end-to-end reliability,” Cerf recalled. “Don’t rely on anything inside those nets. The only thing that we ask the net to do is to take this chunk of bits and get it across the net...
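A sketch of the end-to-end retransmission rule described in the passages above, assuming a hypothetical lossy send_packet() and a hypothetical wait_for_ack(); this is not the real TCP state machine, only the idea that the sending host keeps an identical twin of each packet and resends it until the destination acknowledges receipt.

    import random

    ACKED: set = set()   # acknowledgments seen by the sending host (illustrative only)

    def send_packet(seq: int, data: bytes) -> None:
        """Hypothetical unreliable net: some packets are silently lost or garbled."""
        if random.random() < 0.3:
            return                   # lost in transit; no one is told
        ACKED.add(seq)               # destination host received it and acknowledges

    def wait_for_ack(seq: int) -> bool:
        """Hypothetical check for an acknowledgment from the destination host."""
        return seq in ACKED

    def reliable_send(seq: int, data: bytes, max_tries: int = 10) -> bool:
        """Reliability lives in the hosts: resend the identical twin until acked."""
        for _ in range(max_tries):
            send_packet(seq, data)   # the net is only asked to carry the chunk of bits
            if wait_for_ack(seq):
                return True          # destination confirmed receipt
        return False                 # give up after repeated losses

    print(reliable_send(1, b"a chunk of bits"))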
The invention of TCP would be absolutely crucial to networking. Without TCP, communication across networks couldn’t happen. If TCP could be perfected, anyone could build a network of any size or form, and as long as that network had a gateway computer that could interpret and route packets, it could communicate with any other network. With TCP on the horizon, it was now obvious that networking had a future well beyond the experimental ARPANET. The potential power and reach of what not only Cerf and Kahn, but Louis Pouzin in France and others, were inventing was beginning to occur to people. ...
By August 1973, while TCP was still in the design phase, traffic had grown to a daily average of 3.2 million packets.
From 1973 to 1975, the Net expanded at the rate of about one new node each month. Growth was proceeding in line with Larry Roberts’s original vision, in which the network was deliberately laden with large resource providers.
Like most of the early ARPANET host sites, the Center for Advanced Computation at the University of Illinois was chosen primarily for the resources it would be able to offer other Net users.
Large databases scattered across the Net were growing in popularity. The Computer Corporation of America had a machine called the Datacomputer that was essentially an information warehouse, with weather and seismic data fed into the machine around the clock. Hundreds of people logged in every week, making it the busiest site on the network for several years.
DARPA had set out to link the core processing capabilities in America’s top computer science research centers, and as far as the agency was now concerned, it had accomplished that. Its mission was research. It wasn’t supposed to be in the business of operating a network. Now that the system was up and running, it was becoming a drain on other priorities.
In the summer of 1975, DCA took over the network management job from DARPA. Now DCA set operational policy for the network. DCA decided such things as where and when new nodes would be installed and what the configuration of data lines should be. And BBN retained the contract for network operations, which meant the company carried out the decisions made by DCA.
A milestone occurred in October 1977, when Cerf and Kahn and a dozen or so others demonstrated the first three-network system with packet radio, the ARPANET, and SATNET, all functioning in concert. Messages traveled from the San Francisco Bay area through a packet-radio net, then the ARPANET, and then a dedicated satellite link to London, back across the packet-satellite network and across the ARPANET again, and finally to the University of Southern California’s Information Sciences Institute (ISI) in Marina del Rey. The packets traveled 94,000 miles without dropping a single ...
After the split, TCP would be responsible for breaking up messages into datagrams, reassembling them at the other end, detecting errors, resending anything that got lost, and putting packets back in the right order. The Internet Protocol, or IP, would be responsible for routing individual datagrams.
‘Do the gateways need this information in order to move the packet?’ If not, then that information does not go in IP.”
With a clean separation of the protocols, it was now possible to build fast and relatively inexpensive gateways, which would in turn fuel the growth of internetworking. By 1978, TCP had officially become TCP/IP.
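A sketch of the division of labor described above, applying the quoted test: anything a gateway needs in order to move the packet goes in the IP header, and everything the end hosts need to reassemble and verify the conversation goes in TCP. The field lists below are illustrative simplifications, not the actual RFC header layouts.

    from dataclasses import dataclass

    @dataclass
    class IPHeader:
        # Only what a gateway needs in order to move the packet along.
        src_addr: str
        dst_addr: str
        ttl: int

    @dataclass
    class TCPHeader:
        # What the destination host needs to reorder datagrams, detect errors,
        # and ask for anything that got lost; gateways never look in here.
        seq_num: int
        ack_num: int
        checksum: int

    @dataclass
    class Segment:
        ip: IPHeader        # read hop by hop
        tcp: TCPHeader      # read only at the two ends
        payload: bytes

    s = Segment(IPHeader("10.0.0.1", "10.0.0.9", ttl=64),
                TCPHeader(seq_num=1, ack_num=0, checksum=0xABCD),
                b"piece of a longer message")
    print(s.ip.dst_addr)    # all a router needs in order to forward the packet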
But Metcalfe’s idea differed in several respects from the Hawaiian system. For one thing, his network would be a thousand times faster than ALOHANET. It would also include collision detection. But perhaps most important, Metcalfe’s network would be hardwired ...
Metcalfe and Lampson, along with Xerox researchers David Boggs and Chuck Thacker, built their first Alto Aloha system in Bob Taylor’s lab at Xerox PARC. To their great delight, it worked. In May 1973 Metcalfe suggested a name, recalling the hypothetical luminiferous medium invented by nineteenth-century physicists to explain how light passes through empty space. He rechristened the system Ethernet.
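The collision detection mentioned above can be illustrated with a toy listen-then-send loop, assuming hypothetical channel_busy() and collision_detected() probes; this is a cartoon of the Ethernet idea (carrier sense, collision detection, random backoff), not Metcalfe and Boggs's actual design.

    import random
    import time

    def channel_busy() -> bool:
        """Hypothetical carrier sense: is another station transmitting right now?"""
        return random.random() < 0.2

    def collision_detected() -> bool:
        """Hypothetical detection of another station transmitting at the same time."""
        return random.random() < 0.3

    def csma_cd_send(frame: bytes, max_attempts: int = 16) -> bool:
        """Listen before talking; if a collision is heard, back off and retry."""
        for attempt in range(1, max_attempts + 1):
            while channel_busy():
                time.sleep(0.001)                 # wait for the wire to go quiet
            if not collision_detected():
                return True                       # frame went out cleanly
            slots = random.randint(0, 2 ** min(attempt, 10) - 1)
            time.sleep(slots * 0.0000512)         # random (binary exponential) backoff
        return False

    print(csma_cd_send(b"an ethernet frame"))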
As a result, maintaining an ARPANET site cost more than $100,000 each year, regardless of the traffic it generated.
The experience that NSF gained in the process of starting up CSNET paved the way for more NSF ventures in computer networking. In the mid-1980s, on the heels of CSNET’s success, more networks began to emerge. One, called BITNET ...
Another, called UUCP, was built at Bell Laboratories for file transfer and remote-command execution. USENET, which began in 1980 as a means of communication between two machines (one at the University of North Carolina and one at Duke University), blossomed into a distributed news network using UUCP. NASA had its own network called the Space Physics Analysis Network, or SPAN.
Because this growing conglomeration of networks was able to communicate using the TCP/IP protocols, the collection of networks gradually came to be called the “Internet,” borrowing the first word of “Internet Protocol.”
Officially, the distinction was simple: “internet” meant any network using TCP/IP while “Internet” meant the public, federally subsidized network that was made up of many linked networks all running the TCP/IP protocols. Roughly speaking, an “internet” is private and the “Internet” is public.
The market opened up for routers. Gateways were the internetworking variation on IMPs, while routers were the mass-produced version of gateways, hooking local area networks to the ARPANET. Sometime in the early 1980s a marketing vice president at BBN was approached by Alex McKenzie and another BBN engineer who thought the company should get into the business of building routers.
In Canada there was CDNet.
And gradually the Internet came to mean the loose matrix of interconnected TCP/IP networks worldwide.
Several years earlier, the International Organization for Standardization, ISO, had begun to develop its own internetworking “reference” model, called OSI, or open-systems interconnection. Since the 1940s, ISO had specified worldwide standards for things ranging from wine-tasting glasses to credit cards to photographic film to computers. They hoped their OSI model would become as ubiquitous to computers as double-A batteries were to portable radios.
But the Internet community—people like Cerf and Kahn and Postel, who had spent years working on TCP/IP—opposed the OSI model from the start. First there were the technical differences, chief among them that OSI had a more complicated and compartmentalized design. And it was a design, never tried. As far as the Internet crowd was concerned, they had actually implemented TCP/IP several times over, whereas the OSI model had never been put to the tests of daily use, and trial and error.
Cerf and others argued that TCP/IP couldn’t have been invented anywhere but in the collaborative research world, which was precisely what made it so successful, while a camel like OSI couldn’t have been invented anywhere but in a thousand committees.
If anyone could claim credit for having worked tirelessly to promote TCP/IP, it was Cerf. The magic of the Internet was that its computers used a very simple communications protocol. And the magic of Vint Cerf, a colleague once remarked, was that he cajoled and negotiated and urged user communities into adopting it.
On January 1, 1983, the ARPANET was to make its official transition to TCP/IP. Every ARPANET user was supposed to have made the switch from the Network Control Protocol to TCP/IP. On that day, the protocol that had governed the ARPANET would be mothballed, so that only those machines running the new protocols could communicate.
As milestones go, the transition to TCP/IP was perhaps the most important event that would take place in the development of the Internet for years to come. After TCP/IP was installed, the network could branch anywhere; the protocols made the transmission of data from one network to another a trivial task. “To borrow a phrase,” Cerf said, “now it could go where no network had gone before.” An impressive array of networks now existed—from the ARPANET to TELENET to Cyclades. There were so many, in fact, that in an attempt to impose some order, Jon Postel issued an RFC assigning numbers to the ...
In 1983 the Defense Communications Agency decided that the ARPANET had grown large enough that security was now a concern. The agency split the network into two parts: the MILNET, for sites carrying nonclassified military information, and the ARPANET for the computer research community.
The old ARPANET had become a full-fledged Internet.
On the other hand, an American culture of the Internet was growing exponentially, and its foundation was TCP/IP. And while governments throughout Europe were anointing OSI, something of an underground movement sprang up at European universities to implement TCP/IP.
When Sun included network software as part of every machine it sold and didn’t charge separately for it, networking exploded. It further mushroomed because of Ethernet.
And Metcalfe started his own company, 3Com, to sell Ethernet for commercial computers, including Sun machines.
To send traffic from an Ethernet in, say, San Diego, to another Ethernet in Buffalo, you sent it through the ARPANET hub. In this way, the ARPANET was the centerpiece of what was called the ARPA Internet. And through the first half of the 1980s, the ARPA Internet resembled a star, with various networks surrounding the ARPANET at the center.
The ARPANET, and later the Internet, grew as much from the free availability of software and documentation as from anything else. (By contrast, Digital Equipment’s DECNET was a proprietary network.) The Internet also supported a wide range of network technologies. Although the satellite and packet-radio networks had finite lifetimes, they helped open developers’ eyes to the need to handle a multitude of different networks.
At the same time, the growth of the network gave rise to a new problem. “When we got to about two thousand hosts, that’s when things really started to come apart,” said Craig Partridge, a programmer at BBN. “Instead of having one big mainframe with twenty thousand people on it, suddenly we were getting inundated with individual machines.” Every host machine had a given name, “and everyone wanted to be named Frodo,” Partridge recalled.
For years, sorting this out was among the most troublesome, messiest issues for the Internet, until at last a group chiseled out a workable scheme, called the domain name system, or DNS.
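A toy illustration of why a hierarchy fixed the naming squeeze described above: in a flat table only one machine on the whole network can be named frodo, while with domains the same short name can live in different administrative zones. The zone data and addresses below are invented, and real DNS resolution involves delegations among name servers rather than a single in-memory dictionary.

    # Flat namespace: one global table, so only one machine anywhere can be "frodo".
    flat_hosts = {"frodo": "10.0.0.5"}   # a second "frodo" would simply collide

    # Hierarchical namespace: the same short name can live under different domains,
    # and a lookup walks the name right to left, one administrative zone at a time.
    zones = {
        "edu": {"berkeley": {"frodo": "10.1.0.5"}, "mit": {"frodo": "10.2.0.7"}},
    }

    def resolve(name: str) -> str:
        """Toy resolver: 'frodo.berkeley.edu' -> walk edu, then berkeley, then frodo."""
        labels = name.split(".")[::-1]   # ['edu', 'berkeley', 'frodo']
        node = zones
        for label in labels:
            node = node[label]
        return node

    print(resolve("frodo.berkeley.edu"))   # '10.1.0.5'
    print(resolve("frodo.mit.edu"))        # '10.2.0.7'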

