Windows NT Networking for Dummies

by Ed Tittel, Mary Madden, and Earl Follis
If you've been thrust into the crazy world of network management with a Windows NT server, Windows NT Networking For Dummies is definitely the book for you! The hot new version of Windows NT, Version 4.0, is explained with clear examples of even the most challenging NT features and functions.

Inside, find helpful advice on how to

  • Identify network basics needed to establish and maintain a successful network
  • Uncover the differences between NT Workstation and NT Server and figure out what's right for you
  • Find out which devices will snuggle up to NT -- and those that won't
  • Explore ways of configuring various devices to work well with NT
  • Simplify network management and maintenance by using the best software utilities
  • Set up and manage an NT network with ease
  • Master the art of remote management
  • Get the facts on naming services
  • Plus Ed, Mary, and Earl's Top Ten Lists of NT Tips

Product Details

Publisher: John Wiley & Sons, Incorporated
Series: For Dummies
Product dimensions: 7.39(w) x 9.18(h) x 1.02(d)

Read an Excerpt

Chapter 3
Message in a Bottle: Mastering the Art of Protocols

In This Chapter

  • Figuring out how protocols work
  • Learning the language that lets computers speak
  • Moving messages

Chapter 1 describes the three fundamental components of networking: connections, communications, and services. In this chapter, you leave the wires and interfaces behind and go inside the network to take a look at communications -- how senders and receivers field messages moving across the network.

Communications rely on a shared set of rules for exchanging information and for defining things at the most rudimentary levels, such as how to present digital data -- what's a one and what's a zero? The rules also determine the format and meaning of network addresses and other essential information.

In this chapter, we stick with the plumbing metaphor. You've already looked at the pipes; now it's time to look at what the pipes carry -- messages and data that computers send to each other. This should help you better understand how computers communicate on your network -- and why they occasionally don't.

How Do Computers Talk to Each Other?

Table 1-1 compared a computer conversation to a human conversation and showed that communications between humans and communications between machines have much in common. A trivial difference is that computers use ones and zeroes to communicate and humans use words. There are also some real differences, however, and understanding them will help you understand networking.

DWIM -- Do what I mean!

In human communication, what's being said is always interpreted and often misunderstood. What one person says is not always what the other person hears. Human communication, like the communication of computers over a network, relies on shared rules and meanings and a common frame of reference.

Computers, however, are literal. They can do only what they're told to do. For computers to exchange information, every piece of that information must be explicitly supplied; computers are not strong on picking up implications and subtlety. To communicate, computers have to begin in complete agreement about the following issues. (The questions are phrased from a computer's point of view.)

  • What's my address? How do I find it out? How do I find out other computers' addresses?

  • How do I signal another computer that I'm ready to send or receive a message? That I'm busy? That I can wait if it's busy?

These are the fundamentals, but they are only the tip of a large technological iceberg. Before computers can communicate with each other, everything must be completely mapped out and implemented in software that supplies all the answers to these questions in a way that all the computers know and understand. These answers form the basis of a set of rules for computer communications, rules the computers use to handle networking.
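
To make that agreement concrete, here's a minimal sketch in Python (using the standard socket library) of two programs that have settled, ahead of time, on an address, a port number, and the order of the "I'm ready" and "send this message" signals. The host name, port, and message are made-up values for illustration only; this isn't how NT's own networking software is written, just the flavor of what any protocol must pin down.

```python
import socket

HOST, PORT = "localhost", 50007   # the address both sides agreed on (example values)

def receive_one_message():
    """Receiver: claim the agreed address and signal readiness by listening."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as listener:
        listener.bind((HOST, PORT))          # "this is my address"
        listener.listen(1)                   # "I'm ready to receive"
        conn, sender_addr = listener.accept()
        with conn:
            data = conn.recv(1024)           # "give me the message"
            print("Received from", sender_addr, ":", data.decode())

def send_one_message(text):
    """Sender: connect to the agreed address and hand over the message."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((HOST, PORT))              # "I'd like to send -- are you there?"
        s.sendall(text.encode())             # "send this message"
```

Every detail in that exchange -- the address, the port, which side listens and which side connects -- had to be agreed on in advance, and that prior agreement is exactly what a protocol supplies.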

Standardized rules

Building a complete set of communications rules is time-consuming and picky and would bore most people out of their skulls. In the early days of the computer industry, individual companies or groups put a bunch of programmers to work building programs to do whatever they needed to have accomplished. As time went on, this process resulted in many different ways of doing such things as networking, none of which would work with the way talented programmers over at another company did the same things.

These incompatibilities were not a big deal (or so it seemed) in the beginning. As networks became a more common part of the business landscape, however, it seemed natural for people who bought computers from companies A and Z to wonder, "Well, gee, if my company A computers can talk to each other and my company Z computers can talk to each other, why can't the As talk to the Zs and vice versa?"

Why not, indeed?

Uncle Sam played an important role in bringing order to this potential network chaos. When the government tried to get their A and Z computers to talk to each other, they learned that they had a monster compatibility problem. A consensus emerged that a set of rules was absolutely necessary for networking. The industry also learned that networking was difficult, if not impossible, when everyone didn't share the same set of rules.

If this story had a happy ending, it would be: "Nowadays there's only one set of networking rules, and everyone uses the rules wisely and well." Unfortunately, there's no happy ending. Although the chaos has been reduced, there's still plenty to go around, and vendors trying to stay in the vanguard seem to be inventing more rules as they go.

According to Protocol

Just to keep things simple, these "sets of networking rules" are usually called networking protocols -- they're also referred to as networking standards, or even as standard networking protocols. You get the drift.

In diplomacy, protocol is the code of correct procedures and etiquette that representatives from sovereign governments follow to prevent all-out war. For example, protocol is the reason why diplomats refer to screaming matches as "frank and earnest discussions" or to insoluble disagreements as "exploratory dialogue." Political double-talk aside, the word protocol nicely captures the flavor of rules for network communications.

The concept of the networking protocol is based on the premise that any two computers must have an identical protocol in common to communicate; that is, the computers are peers in any communication. The particular protocol defines the language, structure, and rules for that communication.

Suites for the...never mind!

Although this book is about Windows NT Server and focuses its attention on the Microsoft protocols, you should be aware that these protocols are just a few among a large number. Microsoft has become surprisingly catholic in its support for protocols, including support not only for government standards (TCP/IP) but also for the Novell IPX/SPX protocols. That may be because IPX/SPX was built to enable desktop computers, including PCs, to communicate and also because there are more PCs on the nation's desktops than any other kind of computer. And, too, the government's finger is in many pies, and the Internet uses the same protocols, so it is not surprising that NT Server supports today's most popular networking protocols.

Raising the standards

An interesting -- not to say confusing -- thing about networking rules is that both vendors and standards groups call their stuff a "standard." Some vendors expend a lot of hot air talking about the difference between de facto and de jure standards. De facto means "It ain't official, but a lot of people use it, so we can call it a standard if we want." De jure means "It's a standard because XYZ (a standards-setting group) has declared it to be so and has published this foot-high stack of documentation to prove it."

Behind the sometimes heated discussion of what is or is not a standard is a control issue. Purists -- including academicians, researchers, and techno-weenies -- flatly assert that only a standards-setting group can be "objective and fair," and, therefore, only they can adequately handle the job of selecting the very best that technology has to offer by putting it in their standard -- thus making this the best of all possible worlds (and everything in it a necessary evil).

The other heat source comes from vendors' desperate race to keep up with the market and customers' demands by heroically struggling to get their products off the drawing board and out the door. "Of course we have to be in control of our technology," they say. "It's the only way we can keep up!" The objectivity, fairness, and leading-edge characteristics of most standards are not disputed, but establishing standards involves groups of individuals who must agree on them, which takes time. In the meantime, technology continues to evolve, and nothing goes stale faster than leading-edge technology.

Whether networking technologies are standards or not, or de facto or de jure, doesn't matter: The action is where the markets are. Vendors must be involved on both sides of the debate to some extent because they cannot afford to miss any of the technology boats leaving port. Some astute vendors have published their "standards" and given customers and industry people sufficient documentation and input both to keep things working and to keep up with the development of the technology.

Some standards bodies have been wise enough to realize that a standard is a good thing only when it's widely implemented and have given vendors opportunities to deal with the real-world concerns of getting products to market. The winners in both camps are the protocols that are used the most.

One last remark on protocols: They rarely, if ever, occur in the singular. Most networking protocols consist of a named collection of specific message formats and rules for interaction rather than a single set of formats and rules. For that reason, protocols are also called protocol suites, not because they like to lounge around on matched collections of furniture, but because they travel in packs.

A protocol's work is never done

So now you know that your computer cannot talk to another computer without sharing a common protocol. Where does this protocol stuff come from? Put differently, who decides which protocols are used?

Protocols cover networking capability from software to hardware. The programs that let your computer access the network must use a protocol. This protocol holds all the way down to the edge of the hardware, where the computer says "send this message" to talk to the network or "give me the message," depending on what the hardware is telling it.

Protocols come a little from here and a little from there. For example, most protocols don't care which kind of network they're talking through; in most cases, they don't even notice if it's an ARCnet, Ethernet, or token-ring network. This is because part of the software that provides network capability comes from a LAN driver and part of it comes from other sources.

The LAN driver tells your computer exactly how to talk to the network interface installed in it. If you're lucky, you use a machine that comes ready for the network, such as a network-ready PC or a Sun workstation. Otherwise, you have to locate and install a LAN driver yourself so your computer can talk to the network.

Some applications may know how to communicate directly with a network through a special kind of software interface. Applications with this kind of savvy used to be scarce, but they're becoming more common as networks become more widespread. Other applications may use standard computer system access and end up talking through the network, totally unaware that the network is being accessed.
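
As a rough illustration of that last case, here's a tiny Python sketch of a program that uses ordinary file access and neither knows nor cares whether the file lives on a local disk or on a server across the network. The file paths are hypothetical examples; on a Windows client, the requester (redirector) is what quietly turns the second kind of path into network traffic.

```python
def read_forecast(path):
    """Ordinary file access -- no networking code in sight."""
    with open(path) as f:
        return f.read()

# Hypothetical paths; the calling code is identical either way:
# read_forecast(r"C:\FORECAST\Q3.TXT")        # file on the local disk
# read_forecast(r"\\SERVER1\SALES\Q3.TXT")    # file on a network share (UNC path)
```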

The key to network access from applications or from a computer's operating system is a collection of software that implements the protocol suite being used. The operating system, such as DOS or Windows 95 on a PC, is a program that keeps the computer running and capable of doing the jobs you ask it to perform.

If an application or operating system is network-savvy, the vendor may supply all or part of the network access software, including the LAN driver.

For Windows NT Server, for example, Microsoft supplies software for most of its desktop clients: Windows (in all its many flavors), UNIX, OS/2, and Macintosh can make use of their own networking software, whereas DOS uses networking software that Microsoft supplies along with NT Server (which can also be obtained from other sources). For a visual aid to these possible software relationships, look at Figure 3-1. You'll notice that software components, called shells or requesters, are needed to communicate over the network. The figure also shows that you'll need a LAN driver to provide the link between software and hardware.

The Dance of the Seven Layers

Okay, you now know that a protocol suite lets computers share a set of common communications rules. You also know that the protocols handle the movement of information between the network interface hardware in your computer and the applications you run on your machine. The next question is, what's going on between the applications and the hardware while the protocols do their thing?

Much of the interaction between applications and hardware consists of taking messages, breaking them down, and stuffing them into envelopes as you move farther from the application and closer to the hardware. From the other direction -- hardware to software -- the protocols unpack envelopes and stick individual pieces together to build a complete message. We hope that it is a meaningful message, but remember that the one immutable law of computers is GIGO, or garbage in, garbage out.
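
Here's a minimal sketch, in Python, of that wrap-and-unwrap work: a message gets chopped into fixed-size chunks, each chunk gets a little envelope (a header with a sequence number and a total count), and the receiving side uses those headers to put the message back together. Real protocol headers carry much more (addresses, checksums, and so on); this only shows the idea.

```python
CHUNK_SIZE = 16  # pretend this is the biggest payload the network will carry

def pack(message):
    """Split a message into chunks, each wrapped in a tiny 'envelope' (header)."""
    body = message.encode()
    chunks = [body[i:i + CHUNK_SIZE] for i in range(0, len(body), CHUNK_SIZE)]
    total = len(chunks)
    return [(seq, total, payload) for seq, payload in enumerate(chunks)]

def unpack(envelopes):
    """Strip the headers and reassemble the original message, in order."""
    ordered = sorted(envelopes, key=lambda env: env[0])  # put chunks back in sequence
    return b"".join(payload for _, _, payload in ordered).decode()

packets = pack("Sales forecast for next quarter: up 12 percent.")
assert unpack(packets) == "Sales forecast for next quarter: up 12 percent."
```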

It might help to think of a post office analogy. The post office handles anything that has an address on it and sufficient postage to pay its way. How is a letter delivered? It goes something like this:

  1. You address a letter and drop it in a mailbox.

  2. The mail carrier who empties the mailbox gets the letter.

  3. The mail carrier delivers the letter to your local post office.

  4. The mail sorters check the zip code and route the letter.

  5. The letter is shipped to the post office that services the destination address.

  6. The mail sorters check the street address and route the letter to the appropriate mail carrier.

  7. The mail carrier delivers the letter to the address, and the recipient gets the letter.

The basic requirements for successful mail delivery are timely pickup, short transit time, and correct delivery. Factors that affect transit time and delivery (barring disgruntled postal workers with firearms) are correct identification of and routing to the mailing address.

The similarity between networking protocols and the postal service lies in the capability to recognize addresses, route messages from senders to receivers, and provide delivery. The major difference is that the postal service, unlike networking protocols, doesn't normally care what's in the envelopes or packages you send as long as they meet size, weight, and materials restrictions. Networking protocols spend most of their time dealing with envelopes of varying sizes and kinds.

For instance, suppose that you want to copy a file from your computer to another computer across the network. The file is sizable, about 1MB. It consists of a spreadsheet covering your sales forecast for the next quarter, so you want it to get there quickly and correctly.

To use the post office (or snail mail), you would copy the file to a floppy and mail it to the recipient, but that's not fast enough. Over the network, it will get there in less than a minute. While the file is moving from your machine to the other machine, there's a lot going on that you aren't privy to.

Size considerations -- the biggest chunk of data you're allowed to move across the network -- are only one reason why messages are put into envelopes. Handling addresses is another reason. In the post office example, the pickup post office cares only about the destination zip code, whereas the delivering mail carrier is concerned with the street address. Along the same lines, one protocol might care about only the name of the computer to which the file is to be shipped; at a lower level, however, the protocol needs to know where to direct the chunks of data moving from sender to receiver as well so that the file can be correctly reassembled upon delivery.

The protocol software spends most of its time taking things apart to deliver them accurately. When you're receiving something, it spends its time stripping off packaging information and putting things back together. The sender and receiver keep talking to each other while all this is going on to monitor the accuracy and effectiveness of their communications and to determine if and when delivery is complete. Protocols also spend a lot of time keeping track of the quality and usability of network links.
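
If it helps to picture that back-and-forth, here's a toy Python sketch of the bookkeeping involved: the receiver checks each chunk against a checksum and acknowledges the sequence numbers that arrived intact, so the sender knows what to resend and when delivery is complete. This is only an illustration of the general idea, not how any particular protocol actually does it.

```python
import zlib

def make_packet(seq, payload):
    # The envelope carries a sequence number and a checksum so damage can be detected.
    return {"seq": seq, "payload": payload, "checksum": zlib.crc32(payload)}

def receive(packets):
    """Acknowledge only the packets that arrived undamaged."""
    return [p["seq"] for p in packets if zlib.crc32(p["payload"]) == p["checksum"]]

sent = [make_packet(i, b"chunk %d" % i) for i in range(3)]
sent[1]["payload"] = b"garbled!"            # simulate damage in transit
acks = receive(sent)
missing = [p["seq"] for p in sent if p["seq"] not in acks]
print("resend:", missing)                   # the sender knows chunk 1 must go again
```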

We hope this chapter has helped to clear up some of the mysteries that surround protocols. The important thing to remember, though, is that protocols simply allow computers to communicate.
