Author, educator, and researcher Andrew S. Tanenbaum, winner of the ACM Karl V. Karlstrom Outstanding Educator Award, carefully explains how networks work on the inside, from the hardware technology up through the most popular network applications. The book takes a structured approach to networking, starting at the bottom (the physical layer) and gradually working up to the top (the application layer).
In each chapter, the necessary principles are described in detail, followed by extensive examples taken from the Internet, ATM networks, and wireless networks.
Completely revised and updated from the 1988 edition, this popular undergraduate text emphasizes fiber optics, wireless communication, and a five-layer hybrid model closely resembling TCP/IP (physical, data link, network, transport, and application layers). It covers new routing algorithms, including distance vector and link state routing. There is a good discussion of the expanded application layer, which includes DNS, SNMP, USENET, WWW, HTML, Java, video on demand, and multimedia.
I evaluate every computer book by how many hours it will save me. If it takes twenty hours to read a book, then I fully expect it to save me at least twenty hours of frustrated guessing. I'm a businessman first and a hacker second. Everything must have an ROI (Return On Investment).
Computer Networks won't save me one minute over the next year. It has no step-by-step procedures, no problem-solving sections, and no butt-saving tricks. The only purpose it can serve at a downed site is as a shield against thrown objects from frustrated users. Normally, theoretical books like this one receive a quick skim and are promptly sent to my for-looks-only tome tomb. However, this isn't a normal theoretical book. It's fascinating. In fact, I read it not once but three times. Tanenbaum fills over 700 pages with everything I didn't know, or better still, only thought I knew about networks.
For example, let me tell you what Tanenbaum taught me about modems. I have over ten years of trench-tech UNIX experience. I've hooked up thousands of modems. I've written more chat scripts, custom dialers, and inittab entries than I care to count. I even silently considered myself a modem expert -- before reading Tanenbaum's book.
Being the "expert" that I was, I had used the term carrier many times. I had also used that term many times before my computer days. I worked my way through college at a commercial broadcast station. A broadcast transmitter superimposes audio onto a sine wave called the carrier. However, until I read Computer Networks, I never made the connection between modems and transmitters. Tanenbaum points out that a modem works very much like a commercial transmitter and then proceeds to describe the different types of modem modulation in amazingly great detail. I no longer consider myself a modem expert.
That's why I read this book three times. I learned something new on every page and unlearned at least one misunderstanding in every chapter. I saw the gestalt between seemingly dissimilar things like modems and radio stations. The breadth and depth of the book yielded fresh fruit on every reading.
The book is well organized. Tanenbaum uses a modified ISO OSI (Open Systems Interconnection) reference model as a template for the book. While he drops the session and presentation layers, he leaves the physical, data link, network, transport, and application layers in his reference model. The explanation starts with the physical layer in chapter two and ends with the application layer in the last chapter. This bottom-up explanation is logical and easy to follow.
While the concepts could have been explained with the warmth of a legal brief, Tanenbaum has a gift for explaining them in an entertaining, conversational manner. True, everything must have an ROI. Sometimes, however, when it's entertaining enough, the sole joy of learning is enough of a return.--Dr. Dobb's Electronic Review of Computer Books.
Weighted Fair Queueing
A problem with using choke packets is that the action to be taken by the source hosts is voluntary. Suppose that a router is being swamped by packets from four sources, and it sends choke packets to all of them. One of them cuts back, as it is supposed to, but the other three just keep blasting away. The result is that the honest host gets an even smaller share of the bandwidth than it had before.
To get around this problem, and thus make compliance more attractive, Nagle (1987) proposed a fair queueing algorithm. The essence of the algorithm is that routers have multiple queues for each output line, one for each source. When a line becomes idle, the router scans the queues round robin, taking the first packet on the next queue. In this way, with n hosts competing for a given output line, each host gets to send one out of every n packets. Sending more packets will not improve this fraction. Some ATM switches use this algorithm.
Although a start, the algorithm has a problem: it gives more bandwidth to hosts that use large packets than to hosts that use small packets. Demers et al. (1990) suggested an improvement in which the round robin is done in such a way as to simulate a byte-by-byte round robin instead of a packet-by-packet round robin. In effect, it scans the queues repeatedly, byte by byte, until it finds the tick on which each packet will be finished. The packets are then sorted in order of their finishing times and sent in that order. The algorithm is illustrated in Fig. 5-29.
In Fig. 5-29(a) we see packets of length 2 to 6 bytes. At (virtual) clock tick 1, the first byte of the packet on line A is sent. Then goes the first byte of the packet on line B, and so on. The first packet to finish is C, after eight ticks. The sorted order is given in Fig. 5-29(b). In the absence of new arrivals, the packets will be sent in the order listed, from C to A.
One problem with this algorithm is that it gives all hosts the same priority. In many situations, it is desirable to give the file and other servers more bandwidth than clients, so they can be given two or more bytes per tick. This modified algorithm is called weighted fair queueing and is widely used. Sometimes the weight is equal to the number of virtual circuits or flows coming out of a machine, so each process gets equal bandwidth. An efficient implementation of the algorithm is discussed in (Shreedhar and Varghese, 1995).
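The byte-by-byte simulation described above can be sketched in a few lines. This is a minimal illustration under simplifying assumptions (all queues are backlogged from time zero, and no new packets arrive), not the efficient implementation of (Shreedhar and Varghese, 1995); the function and flow names are made up for the example.

```python
# Hypothetical sketch of (weighted) fair queueing: compute a virtual
# finishing time for each packet, as if the queues were drained
# byte by byte, then transmit packets in order of those times.

def weighted_fair_order(queues, weights):
    """queues: dict mapping flow name -> list of packet lengths (bytes).
    weights: dict mapping flow name -> bytes that flow advances per
    virtual tick (1 for plain fair queueing, more for favored servers).
    Returns (finish_tick, flow, packet_index) tuples in send order."""
    finished = []
    for flow, packets in queues.items():
        clock = 0.0
        for i, length in enumerate(packets):
            # A packet of `length` bytes needs length/weight virtual
            # ticks after the previous packet on the same queue ends.
            clock += length / weights[flow]
            finished.append((clock, flow, i))
    finished.sort()          # send in order of virtual finishing time
    return finished

# With equal weights this is plain byte-by-byte fair queueing: the
# shortest backlogged packet (flow B here) finishes and goes first.
order = weighted_fair_order(
    {"A": [8], "B": [6], "C": [10], "D": [9], "E": [8]},
    {"A": 1, "B": 1, "C": 1, "D": 1, "E": 1},
)
```

Doubling a flow's weight halves its packets' virtual finishing times, which is exactly the "two or more bytes per tick" favoritism described above.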
Hop-by-Hop Choke Packets
At high speeds and over long distances, sending a choke packet to the source hosts does not work well because the reaction is so slow. Consider, for example, a host in San Francisco (router A in Fig. 5-30) that is sending traffic to a host in New York (router D in Fig. 5-30) at 155 Mbps. If the New York host begins to run out of buffers, it will take about 30 msec for a choke packet to get back to San Francisco to tell it to slow down. The choke packet propagation is shown as the second, third, and fourth steps in Fig. 5-30(a). In those 30 msec, another 4.6 megabits (e.g., over 10,000 ATM cells) will have been sent. Even if the host in San Francisco completely shuts down immediately, the 4.6 megabits in the pipe will continue to pour in and have to be dealt with. Only in the seventh diagram in Fig. 5-30(a) will the New York router notice a slower flow.
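The 4.6-megabit figure is just the bandwidth-delay product, and it is worth checking. A back-of-envelope calculation (variable names are illustrative):

```python
# How much data is already "in the pipe" by the time a choke packet
# gets back to the source: a 155-Mbps sender keeps transmitting for
# the full ~30 msec the choke packet needs to reach San Francisco.
rate_bps = 155e6                      # link rate, 155 Mbps
delay_s = 0.030                       # choke packet travel time, ~30 msec
in_flight_bits = rate_bps * delay_s   # bits sent before the source reacts

cell_bits = 53 * 8                    # an ATM cell is 53 bytes
cells = in_flight_bits / cell_bits

print(in_flight_bits / 1e6)           # ~4.65 megabits
print(int(cells))                     # ~10,966 cells, i.e. over 10,000
```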
An alternative approach is to have the choke packet take effect at every hop it passes through, as shown in the sequence of Fig. 5-30(b). Here, as soon as the choke packet reaches F, F is required to reduce the flow to D. Doing so will require F to devote more buffers to the flow, since the source is still sending away at full blast, but it gives D immediate relief, like a headache remedy in a television commercial. In the next step, the choke packet reaches E, which tells E to reduce the flow to F. This action puts a greater demand on E's buffers but gives F immediate relief. Finally, the choke packet reaches A and the flow genuinely slows down.
The net effect of this hop-by-hop scheme is to provide quick relief at the point of congestion at the price of using up more buffers upstream. In this way congestion can be nipped in the bud without losing any packets. The idea is discussed in more detail and simulation results are given in (Mishra and Kanakia, 1992).
5.3.7. Load Shedding
When none of the above methods make the congestion disappear, routers can bring out the heavy artillery: load shedding. Load shedding is a fancy way of saying that when routers are being inundated by packets that they cannot handle, they just throw them away. The term comes from the world of electrical power generation where it refers to the practice of utilities intentionally blacking out certain areas to save the entire grid from collapsing on hot summer days when the demand for electricity greatly exceeds the supply.
A router drowning in packets can just pick packets at random to drop, but usually it can do better than that. Which packet to discard may depend on the applications running. For file transfer, an old packet is worth more than a new one because dropping packet 6 and keeping packets 7 through 10 will cause a gap at the receiver that may force packets 6 through 10 to be retransmitted (if the receiver routinely discards out-of-order packets). In a 12-packet file, dropping 6 may require 7 through 12 to be retransmitted, whereas dropping 10 may require only 10 through 12 to be retransmitted. In contrast, for multimedia, a new packet is more important than an old one. The former policy (old is better than new) is often called wine and the latter (new is better than old) is often called milk.
A step above this in intelligence requires cooperation from the senders. For many applications, some packets are more important than others. For example, certain algorithms for compressing video periodically transmit an entire frame and then send subsequent frames as differences from the last full frame. In this case, dropping a packet that is part of a difference is preferable to dropping one that is part of a full frame. As another example, consider transmitting a document containing ASCII text and pictures. Losing a line of pixels in some image is far less damaging than losing a line of readable text.
To implement an intelligent discard policy, applications must mark their packets in priority classes to indicate how important they are. If they do this, when packets have to be discarded, routers can first drop packets from the lowest class, then the next lowest class, and so on. Of course, unless there is some significant incentive to mark packets as anything other than VERY IMPORTANT - NEVER, EVER DISCARD, nobody will do it.
The incentive might be in the form of money, with the low-priority packets being cheaper to send than the high-priority ones. Alternatively, priority classes could be coupled with traffic shaping. For example, there might be a rule saying that when the token bucket algorithm is being used and a packet arrives at a moment when no token is available, it may still be sent, provided that it is marked as the lowest possible priority, and thus subject to discard the instant trouble appears. Under conditions of light load, users might be happy to operate in this way, but as the load increases and packets actually begin to be discarded, they might cut back and only send packets when tokens are available.
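The token-bucket rule described above can be sketched as follows. This is a minimal, deterministic illustration of the idea (no token means send anyway, but marked discard-eligible); the class and parameter names are made up for the example.

```python
# Hypothetical sketch: a token bucket that never blocks a packet.
# If a token is available the packet goes out at high priority;
# otherwise it is still sent, but marked as lowest priority and
# thus subject to discard the instant trouble appears.

class MarkingTokenBucket:
    def __init__(self, rate, capacity, now=0.0):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum tokens the bucket holds
        self.tokens = capacity      # start with a full bucket
        self.last = now

    def send(self, now):
        """Classify one outgoing packet at time `now` (seconds).
        Returns 'high' if a token was consumed, else 'low'."""
        # Refill tokens for the time elapsed, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return "high"
        return "low"          # no token: send anyway, discard-eligible
```

Under light load every packet finds a token and goes out as "high"; as a host exceeds its rate, the excess is exactly the traffic that gets marked "low".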
Another option is to allow hosts to exceed the limits specified in the agreement negotiated when the virtual circuit was set up (e.g., use a higher bandwidth than allowed), but subject to the condition that all excess traffic be marked as low priority. Such a strategy is actually not a bad idea, because it makes more efficient use of idle resources, allowing hosts to use them as long as nobody else is interested, but without establishing a right to them when times get tough.
Marking packets by class requires one or more header bits in which to put the priority. ATM cells have 1 bit reserved in the header for this purpose, so every ATM cell is labeled either as low priority or high priority. ATM switches indeed use this bit when making discard decisions.
In some networks, packets are grouped together into larger units that are used for retransmission purposes. For example, in ATM networks, what we have been calling "packets" are fixed-length cells. These cells are just fragments of "messages." When a cell is dropped, ultimately the entire "message" will be retransmitted, not just the missing cell. Under these conditions, a router that drops a cell might as well drop all the rest of the cells in that message, since transmitting them costs bandwidth and wins nothing - even if they get through they will still be retransmitted later.
Simulation results show that when a router senses trouble on the horizon, it is better off starting to discard packets early, rather than wait until it becomes completely clogged up (Floyd and Jacobson, 1993; Romanow and Floyd, 1994). Doing so may prevent the congestion from getting a foothold....
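The early-discard idea of Floyd and Jacobson (1993) is known as RED (Random Early Detection). A minimal sketch of its drop decision, with illustrative threshold values that are not taken from the paper:

```python
# Sketch of early discard in the spirit of RED: once the average
# queue length passes a low threshold, drop arriving packets with a
# probability that grows as the queue grows, rather than waiting
# until the router is completely clogged. Parameters are examples.
import random

def red_drop(avg_queue_len, min_th=5, max_th=15, max_p=0.1,
             rng=random.random):
    """Return True if an arriving packet should be dropped."""
    if avg_queue_len < min_th:
        return False                  # queue short: never drop
    if avg_queue_len >= max_th:
        return True                   # queue long: always drop
    # In between, drop probability rises linearly from 0 to max_p.
    p = max_p * (avg_queue_len - min_th) / (max_th - min_th)
    return rng() < p
```

Because drops begin while the queue is still short, well-behaved senders get their congestion signal early, before the router is forced into wholesale load shedding.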
| Section | Title | Page |
|---------|-------|------|
| 1.1 | Uses of Computer Networks | 3 |
| 1.8 | Outline of the Rest of the Book | 78 |
| 2 | The Physical Layer | 85 |
| 2.1 | The Theoretical Basis for Data Communication | 85 |
| 2.2 | Guided Transmission Media | 90 |
| 2.5 | The Public Switched Telephone Network | 118 |
| 2.6 | The Mobile Telephone System | 152 |
| 3 | The Data Link Layer | 183 |
| 3.1 | Data Link Layer Design Issues | 184 |
| 3.2 | Error Detection and Correction | 192 |
| 3.3 | Elementary Data Link Protocols | 200 |
| 3.4 | Sliding Window Protocols | 211 |
| 3.6 | Example Data Link Protocols | 234 |
| 4 | The Medium Access Control Sublayer | 247 |
| 4.1 | The Channel Allocation Problem | 248 |
| 4.2 | Multiple Access Protocols | 251 |
| 4.7 | Data Link Layer Switching | 318 |
| 5 | The Network Layer | 343 |
| 5.1 | Network Layer Design Issues | 343 |
| 5.3 | Congestion Control Algorithms | 384 |
| 5.4 | Quality of Service | 397 |
| 5.6 | The Network Layer in the Internet | 431 |
| 6 | The Transport Layer | 481 |
| 6.1 | The Transport Service | 481 |
| 6.2 | Elements of Transport Protocols | 492 |
| 6.3 | A Simple Transport Protocol | 513 |
| 6.4 | The Internet Transport Protocols: UDP | 524 |
| 6.5 | The Internet Transport Protocols: TCP | 532 |
| 7 | The Application Layer | 579 |
| 7.1 | DNS - The Domain Name System | 579 |
| 7.3 | The World Wide Web | 611 |
| 8.5 | Management of Public Keys | 765 |
| 9 | Reading List and Bibliography | 835 |
| 9.1 | Suggestions for Further Reading | 835 |
This book is now in its fourth edition. Each edition has corresponded to a different phase in the way computer networks were used. When the first edition appeared in 1980, networks were an academic curiosity. When the second edition appeared in 1988, networks were used by universities and large businesses. When the third edition appeared in 1996, computer networks, especially the Internet, had become a daily reality for millions of people. The new item in the fourth edition is the rapid growth of wireless networking in many forms.
The networking picture has changed radically since the third edition. In the mid-1990s, numerous kinds of LANs and WANs existed, along with multiple protocol stacks. By 2003, the only wired LAN in widespread use was Ethernet, and virtually all WANs were on the Internet. Accordingly, a large amount of material about these older networks has been removed.
However, new developments are also plentiful. The most important is the huge increase in wireless networks, including 802.11, wireless local loops, 2G and 3G cellular networks, Bluetooth, WAP, i-mode, and others. Accordingly, a large amount of material has been added on wireless networks. Another newly-important topic is security, so a whole chapter on it has been added.
Although Chap. 1 has the same introductory function as it did in the third edition, the contents have been revised and brought up to date. For example, introductions to the Internet, Ethernet, and wireless LANs are given there, along with some history and background. Home networking is also discussed briefly.
Chapter 2 has been reorganized somewhat. After a brief introduction to the principles of data communication, there are three major sections on transmission (guided media, wireless, and satellite), followed by three more on important examples (the public switched telephone system, the mobile telephone system, and cable television). Among the new topics covered in this chapter are ADSL, broadband wireless, wireless MANs, and Internet access over cable and DOCSIS.
Chapter 3 has always dealt with the fundamental principles of point-to-point protocols. These ideas are essentially timeless and have not changed for decades. Accordingly, the series of detailed example protocols presented in this chapter is largely unchanged from the third edition.
In contrast, the MAC sublayer has been an area of great activity in recent years, so many changes are present in Chap. 4. The section on Ethernet has been expanded to include gigabit Ethernet. Completely new are major sections on wireless LANs, broadband wireless, Bluetooth, and data link layer switching, including MPLS.
Chapter 5 has also been updated, with the removal of all the ATM material and the addition of new material on the Internet. Quality of service is now also a major topic, including discussions of integrated services and differentiated services. Wireless networks are also present here, with a discussion of routing in ad hoc networks. Other new topics include NAT and peer-to-peer networks.
Chap. 6 is still about the transport layer, but here, too, some changes have occurred. Among these is an example of socket programming. A one-page client and a one-page server are given in C and discussed. These programs, available on the book's Web site, can be compiled and run. Together they provide a primitive remote file or Web server available for experimentation. Other new topics include remote procedure call, RTP, and transaction/TCP.
Chap. 7, on the application layer, has been more sharply focused. After a short introduction to DNS, the rest of the chapter deals with just three topics: e-mail, the Web, and multimedia. But each topic is treated in great detail. The discussion of how the Web works is now over 60 pages, covering a vast array of topics, including static and dynamic Web pages, HTTP, CGI scripts, content delivery networks, cookies, and Web caching. Material is also present on how modern Web pages are written, including brief introductions to XML, XSL, XHTML, PHP, and more, all with examples that can be tested. The wireless Web is also discussed, focusing on i-mode and WAP. The multimedia material now includes MP3, streaming audio, Internet radio, and voice over IP.
Security has become so important that it has now been expanded to a complete chapter of over 100 pages.
It covers both the principles of security (symmetric- and public-key algorithms, digital signatures, and X.509 certificates) and the applications of these principles (authentication, e-mail security, and Web security). The chapter is both broad (ranging from quantum cryptography to government censorship) and deep (e.g., how SHA-1 works in detail).
Chapter 9 contains an all-new list of suggested readings and a comprehensive bibliography of over 350 citations to the current literature. Over 200 of these are to papers and books written in 2000 or later.
Computer books are full of acronyms. This one is no exception. By the time you are finished reading this one, the following should ring a bell: ADSL, AES, AMPS, AODV, ARP, ATM, BGP, CDMA, CDN, CGI, CIDR, DCF, DES, DHCP, DMCA, FDM, FHSS, GPRS, GSM, HDLC, HFC, HTML, HTTP, ICMP, IMAP, ISP, ITU, LAN, LMDS, MAC, MACA, MIME, MPEG, MPLS, MTU, NAP, NAT, NSA, NTSC, OFDM, OSPF, PCF, PCM, PGP, PHP, PKI, POTS, PPP, PSTN, QAM, QPSK, RED, RFC, RPC, RSA, RSVP, RTP, SSL, TCP, TDM, UDP, URL, UTP, VLAN, VPN, VSAT, WAN, WAP, WDMA, WEP, WWW, and XML. But don't worry. Each will be carefully defined before it is used.
To help instructors using this book as a text for a course, the author has prepared various teaching aids, including
The solutions manual is available directly from Prentice Hall (but only to instructors, not to students).
Posted July 25, 2003
Like all of Tanenbaum's books, this one contains excruciating details on all aspects of the title subject. More useful for hardcore introductory courses or as a reference when writing papers. I usually wear out a couple of highlighters on each of his books and fill the margins with notes, trying to keep track of all the data he's dumping on me.
Posted March 26, 2001
This is an excellent book to read if your interests lie in networking. I gave it 4 stars because it is excellent but not superior.
Posted May 14, 2001
This book was used in my 1986 Telecommunications graduate school course. An excellent technical reference with plenty of software algorithms as well as theory on the Open Systems Interconnection standards and the history of telecommunications. The book has been useful as an OSI/ISO model reference over the years and helped me understand telecom better.
Posted December 29, 2000
This book, while pricey, is an excellent book for those truly interested in learning how networks operate. For those who could care less about learning and are just looking to pass certification exams, this book (and most others) is worthless (as are the certifications these 'dumpers' obtain).
Posted December 5, 2000
This is one of the best computer networking textbooks for engineers and engineering students, with very comprehensive and detailed explanations of networking. It is definitely not for a businessman or non-techie person. I strongly recommend it for an overall understanding of networks. This is not a reading book; it is a study book for those who are interested in networking.