Kindle Notes & Highlights
I have drawn on lots of insights from lots of people, but those people might well not agree with all of my conclusions.
1973, the same year that the two original inventors of the Internet, Robert Kahn and Vinton Cerf, wrote their seminal paper proposing the Internet.
The Internet is made up of routers, but every router is part of some independently operated autonomous system, or AS. Each AS is operated by some organization, which might be a commercial ISP, a corporation, a university, or some other entity. There were about 59,000 ASs across the globe as of 2017.
The World Wide Web is specified by a set of protocols that allow a Web client (often called a browser) to connect to a Web server.5 A Web server (a particular kind of end node attached to the Internet) stores Web pages and makes them available for retrieval on request. The pages have names called URLs (uniform resource locators). The first part of a URL is actually a DNS name, and a browser uses the DNS system to translate that name into the address of the intended Web server. The browser then sends a message to that Web server asking for the page. URLs are disseminated in various ways so that
The hypertext transfer protocol, HTTP, provides the rules and format for messages requesting a Web page.
However, this protocol does not specify the format or meaning of the page itself.
The most common representation of a Web page is HTML, which stands for hypertext markup language.
HTTP uses TCP to move requests and replies. The TCP software takes a unit of data (a file, a request for a Web page, or whatever) and moves it across the Internet as a series of packets.
Finally, the packets that TCP formats are transmitted using IP, which specifies the destination of the packets.
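The chain described above (a URL containing a DNS name, an HTTP request carried by TCP, TCP in turn carried by IP) can be sketched in a few lines of Python. The URL here is only a placeholder, and the sketch builds the request message without actually sending it:

```python
from urllib.parse import urlsplit

def build_http_request(url: str):
    """Split a URL into the DNS name of the intended server and the
    HTTP message a browser would send over TCP to ask for the page."""
    parts = urlsplit(url)
    host = parts.hostname          # the first part of the URL: a DNS name
    path = parts.path or "/"       # which page on that server is wanted
    # HTTP defines the rules and format of the request; the page itself
    # (usually HTML) is carried in the reply, not specified by HTTP.
    request = (f"GET {path} HTTP/1.1\r\n"
               f"Host: {host}\r\n"
               "Connection: close\r\n\r\n")
    return host, request

host, request = build_http_request("http://www.example.com/index.html")
```

A real browser would next ask the DNS system to translate `host` into an address, open a TCP connection to that address, and hand `request` to TCP, which moves it across the Internet as a series of IP packets.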
Since many users are not connected full-time, if email were transferred in one step from origin to destination, the transfer could only be successful during those occasional periods when both parties just happened to be connected at the same time. To avoid this problem, almost all email recipients make use of a server to receive and hold their mail until they connect.
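The store-and-forward idea can be reduced to a toy sketch (the class and method names here are invented for illustration): the server holds each message in a mailbox until the possibly-offline recipient connects and collects it.

```python
class MailDropServer:
    """Toy store-and-forward mail drop: messages wait in a mailbox
    until the (possibly offline) recipient connects to collect them."""

    def __init__(self):
        self.mailboxes = {}   # recipient -> list of held messages

    def deliver(self, recipient: str, message: str) -> None:
        # The sender's side can hand off the message at any time,
        # whether or not the recipient is currently connected.
        self.mailboxes.setdefault(recipient, []).append(message)

    def fetch(self, recipient: str) -> list:
        # Called when the recipient finally connects; the mailbox is drained.
        return self.mailboxes.pop(recipient, [])
```

Sender and recipient never need to be online at the same time; the server bridges the gap in time between the two.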
The 1960s was the decade of invention and vision. Three different inventors, with different motivations, independently conceived of the idea of packet switching in the early 1960s. Paul Baran at RAND wanted to build a network that was robust enough to provide launch control over nuclear missiles even after a nuclear attack, so as to assure second-strike capability.
Donald Davies, at the National Physical Laboratory in England, had the goal of revitalizing the UK computer industry by inventing user-friendly commercial applications. He saw remote data processing, point-of-sale transactions, database queries, remote control of machines, and online betting as potential applications.
Leonard Kleinrock independently conceived the power of packet switching but was less concerned with specific applications than with whether the idea could work in practice.
A number of packet-switched networks were built or contemplated, including the ARPAnet (the terrestrial packet-switched network hooking together computers at different ARPA-funded universities and research labs), an international satellite network (SATnet), and a mobile spread-spectrum packet radio network (PRnet). Faced with the challenge of hooking these disparate networks together, Vinton Cerf and Robert Kahn (1974) proposed the core idea of the Internet.
Initially, machines on the Internet had simple names like “MIT-1.” Jon Postel, at the Information Sciences Institute (ISI) at the University of Southern California, maintained a file that mapped names to IP addresses, which anyone could download as needed. To register a new name, one sent an email to Jon.
While this limitation had not, in practice, prevented the use of email and other applications by a wide range of users, the transition to the commercial backbone opened up the Internet to a much wider range of uses and purposes, including those that were purely commercial and entertainment.
The World Wide Web (or just “the Web”) was invented in 1990 by Tim Berners-Lee, then at CERN (the European Organization for Nuclear Research), as a tool for collaboration and information sharing among the physics community. In the first part of the decade, the Web was competing with other schemes for information sharing, such as the wide area information service (WAIS), Gopher, and Archie, which now exist only as memories for a few of us. By 1993 or 1994, the Web had gained a dominant market share (or “mind share”), based in part on the development in 1993 of Mosaic, the first Web browser to allow Web pages to include both graphics and text.
time capsule of the thinking of the time.
The telecommunications industry would not allow the report to use the term “Internet” to describe the nation’s future network infrastructure. Their view was that a loose band of researchers was not qualified to design the network that the nation actually needed and that the telephone companies would in time design the real thing.
They required that the text of the report refer to this future network in a neutral way as a “packet bearer service.” By 1996, it was acce...
Devices such as the BlackBerry emerged early in the decade but again were not a means to get open access to the Internet. To me, the release of the first Apple iPhone in 2007 and the first phones using the Android operating system in 2008 mark the point at which there is a device available to any user of the Internet that permits a wide-ranging use of Internet applications in a mobile context.
with the task of looking across all the projects, seeing what general lessons had been learned, and providing an integrated view of the insights. One of the outcomes of that work is this book.
Today, the traffic from just Netflix and YouTube makes up over half of the total volume on the Internet in North America, and all streaming audio and video add up to over 70 percent of Internet traffic (Sandvine, 2016).
In the 1980s, the Internet was email. In the 1990s, the Internet was the Web. To a generation of users today, the Internet is Facebook, Netflix, YouTube, and the other social media and content-sharing sites.
We get the term cyberspace from a novel by William Gibson, Neuromancer, written in 1984.
This history is brief, and thus necessarily selective, and it does reflect my personal point of view. There are any number of books documenting the various stages of the Internet’s development. For the early history, two places to start are Abbate (2000) and Hafner (1998).
There are good books on the current tensions over Internet governance, such as DeNardis (2015).
Without a shared understanding between writer and reader, there is a risk of failing to communicate. So what does the word architecture mean?
the Internet, even though, as I discussed in chapter 2, the DNS was not part of the initial design. Similarly, although it is not necessary that communicating applications use TCP, so many applications depend on it that it is a mandatory part of the Internet.
The Internet was designed by people who came from a computing background, not a classical networking (telephony) background. Most computers are designed without knowing what they are for, and this mind-set defined the Internet’s design.
The early Internet interconnected three communications technologies: the original ARPAnet, SATnet (the wideband experimental multipoint Atlantic satellite network), and a spread spectrum mobile packet radio network (PRnet).
One view is that a long-lived network must be evolvable; it must have the adaptability and flexibility to deal with changing requirements while remaining architecturally coherent.
The goal of evolution over time is closely linked to the goal of operating in different ways in different regions in response to regional requirements such as security. On the other hand, a factor that can contribute to longevity is the stability of the system: the ability of the system to provide a platform that does not change in disruptive ways.
Within 10 years, the most numerous computing device will not be the PC, or even the smartphone or tablet, but most probably the small embedded processor acting as a sensor or actuator that today is called the Internet of Things (IoT). At the same time, high-end processing will continue to grow, with huge server farms, cloud computing, and the like.
In 1988, I wrote a paper titled “The Design Philosophy of the DARPA Internet Protocols” (Clark, 1988),
The following list summarizes a more detailed set of goals which were established for the Internet architecture.
1. Internet communication must continue despite loss of networks or gateways.
2. The Internet must support multiple types of communications service.
3. The Internet architecture must accommodate a variety of networks.
4. The Internet architecture must permit distributed management of its resources.
5. The Internet architecture must be cost effective.
6. The Internet architecture must permit host attachment with a low level of effort.
7. The resources used in the Internet architecture must be accountable.
It is important to understand that these goals are in order of importance, and an entirely different network architecture would result if the order were changed.
Here, for comparison with the early list from the 1988 paper, is the one I posed in 2008:
1. Security
2. Availability and resilience
3. Economic viability
4. Better management
5. Meet society’s needs
6. Longevity
7. Support for tomorrow’s computing
8. Exploit tomorrow’s networking
9. Support tomorrow’s applications
10. Fit for purpose (it works?)
The initial concept of TCP was that it could be general enough to support any needed type of service. However, as the full range of needed services became clear, it seemed too difficult to build support for all of them into one protocol.
A typical reliable transport protocol responds to a missing packet by requesting a retransmission and delaying the delivery of any subsequent packets until the lost packet has been retransmitted. It then delivers that packet and all remaining ones in sequence. The delay while this occurs can be many times the round trip delivery time of the net, and may completely disrupt the speech reassembly algorithm. In contrast, it is very easy to cope with an occasional missing packet. The missing speech can simply be replaced by a short period of silence, which in most cases does not impair the
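The contrast drawn above can be made concrete with a toy receiver (the names here are invented for illustration): instead of stalling the whole stream until a lost packet is retransmitted, it substitutes a frame of silence for the missing speech and keeps playing.

```python
def reassemble_speech(received_packets, total_frames, frame_len=160):
    """Reassemble a voice stream from numbered packets, concealing any
    missing frame with silence instead of waiting for retransmission."""
    by_seq = dict(received_packets)   # sequence number -> audio samples
    silence = [0] * frame_len         # one frame of silence
    stream = []
    for seq in range(total_frames):
        # A reliable transport would stall here on a gap, delaying all
        # later packets; we substitute silence and move on, which for
        # speech usually goes unnoticed.
        stream.extend(by_seq.get(seq, silence))
    return stream

# Packet 1 of 3 was lost in transit; playback continues anyway.
audio = reassemble_speech([(0, [5] * 160), (2, [7] * 160)], total_frames=3)
```

The retransmission delay that a reliable transport imposes can be many round trips long, while the silence substitution costs nothing but one inaudible frame.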
TCP provided one particular type of service, the reliable sequenced data stream, while IP attempted to provide a basic building block out of which a variety of types of service could be built.
It was trivial to invent specifications which constrained the performance, for example to specify that the implementation must be capable of passing 1,000 packets a second. However, this sort of constraint could not be part of the architecture, and it was therefore up to the individual performing the procurement to recognize that these performance constraints must be added to the specification.
The fundamental architectural feature of the Internet is the use of datagrams as the entity which is transported across the underlying networks.
The decision to use the datagram was an extremely successful one, which allowed the Internet to meet its most important goals very successfully.
The original vision for TCP came from Robert Kahn and Vinton Cerf, who saw very clearly, back in 1973, how a protocol with suitable features might be the glue that would pull together the various emerging network technologies. From their position at DARPA, they guided the project in its early days to the point where TCP and IP became standards for the DOD.
The term 5 nines reliability is shorthand for a system that is up 99.999 percent of the time, which would imply a downtime of 5.26 minutes per year. It is often stated, although I can find no specific citation, that the U.S. telephone system was designed to meet this objective. The Internet, for the most part, certainly does not.
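The 5.26-minute figure follows directly from the arithmetic:

```python
# 99.999 percent uptime leaves 0.001 percent of the year for downtime.
minutes_per_year = 365.25 * 24 * 60            # about 525,960 minutes
allowed_downtime = (1 - 0.99999) * minutes_per_year
print(round(allowed_downtime, 2))              # about 5.26 minutes per year
```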