Chapter 1: NT Server Overview

Pretty much all of the big companies and government offices were already networked by mid-1993, when the first version of NT arrived. Since network administrators (and most people in general) aren't fond of change, the product must have been good, or especially well marketed, to have arrived that late and still succeeded.
One analysis says that by the end of the decade, 60 percent of the desktop servers in the world will be running NT. A statistic from early 1996 claimed that one-half of World Wide Web servers were running NT; a Microsoft employee claimed at a 1998 presentation that the number was up to 70 percent. I personally have been amazed at how quickly some of my large Fortune 500 clients have just wiped NetWare off their servers' hard disks and replaced it with NT Server.
If you're a NetWare, VINES, or LAN Server administrator and you've looked around and said to yourself, "What's going on?" or you've decided to finally take the plunge and learn a major network operating system, then you'll want an overview of NT's strengths. In many ways, NT Server is a big departure from previous PC-based server products. Those currently managing a LAN Server, Novell NetWare, or LAN Manager network will have to understand the ways that NT Server departs from its LAN Manager past in order to get the most from NT Server.
NT Server Capabilities
Whether it is to be used as a workstation operating system or a server operating system, the NT operating system itself has some quite attractive features. Let's take a look at some of the features that make NT Server stand out from the competition.
Most operating systems are designed from first conception with a particular target processor in mind. Operating system designers get caught up in things like the following:
How many bits does the CPU work with on each operation? CPUs once handled only 8 bits at a time, and then 16-bit processors appeared, 32-bit processors appeared, and now there are 64-bit and even 128-bit processors.
What's the size of the "quantum" of memory that a processor works with? On an Intel Pentium/Pentium II/Xeon chip, it's impossible for the processor to allocate less than 4K of RAM to any given application. For example, if an application wants 2K, then it gets 4K, and if it wants 5K, then it gets 8K. This is called the page size of the processor. On a DEC Alpha CPU, the page size is 8K, so the smallest memory allocation is 8K. That means that whether an application wants 2K or 5K, it gets 8K on an Alpha. While this seems like a small thing, it's just the kind of minutiae that operating system designers get caught up in. They embed that 4K or 8K value throughout the operating system code, making the prospect of porting the operating system's code to another processor sound impossible. NT avoided that problem, as you'll see.
Big-endian or little-endian
How are bytes organized in memory? Here is another example of the kind of minutiae that can make an operating system end up extremely processor-dependent. RAM in most desktop computers is organized in 8-bit groups called bytes. (I knew you knew that, but I defined it just in case someone out there doesn't.) But most modern CPUs store data in 32-bit groups. You can write 32 bits as four bytes. Reading those four bytes left to right, let's call them byte one, two, three, and four. Now, here's the question: when the processor stores that one word, those four bytes, in what order should it store them in memory? Some processors, including the x86 chips that NT grew up on, store the rightmost (least significant) byte first, so the order in which the bytes appear in memory is four, three, two, and one; this is called a little-endian storage approach. Other processors store the leftmost (most significant) byte first, in the order one, two, three, and four; that approach is called big-endian. NT runs on all of its supported processors in little-endian mode.
Tons of other things are processor-specific, but those are three good examples. What I want you to understand is the trap that operating system designers can fall into, a trap of building their operating systems to be very specific to a particular processor. When hardware vendors come out with new chips, these are of course faster than most (or no one would pay them any mind), but they also often include some interesting oddball feature, like on-chip support for multimedia or the like. Operating-system designers asked to develop an OS for this new chip are usually intrigued by the new feature and incorporate it into the OS, figuring that as long as this cool new feature comes free with the chip, why not make the OS more powerful with the feature? Sometimes the powerful-but-gimmicky features of a processor become integral parts of operating systems, while essential features that the processor doesn't support go by the wayside. For example, look at the pervasive 16-bit nature of many Intel-based operating systems, a nature directly attributable to the 8088 and 80286 processors. The first member of the Intel processor family that PC compatibles were built around, the 8086, first appeared in 1978. Seven years went by before a 32-bit Intel x86 processor appeared (x86 refers to the family of PC-compatible processors: the 8086, 8088, 80186, 80188, 80286, 80386, 80486, and Pentium chips). Even though that 32-bit processor has been available since 1985, it took nearly 10 years for 32-bit operating systems to be generally accepted.
When Microsoft designed NT, it deliberately did not implement it first on an x86 chip. Microsoft wanted to build something that was independent of any processor's architecture. They were aware that Microsoft programmers knew the x86 architecture intimately and that the intimate knowledge would inevitably work its way into the design of NT. So, to combat that problem, Microsoft first implemented NT on a RISC chip, the MIPS R4000. Since then, NT has been ported to the x86 series (the 80486, Pentium, and Pentium Pro/II/Xeon/Celeron chips), the PowerPC CPUs, and the Alpha chips.
The parts of NT that are machine-specific are all segregated into a relatively small piece of NT (compared to the total size of the operating system). This small piece is made up of the Hardware Abstraction Layer (HAL), the kernel, and the network and device drivers. Implementing NT on a new processor type, therefore, mainly involves writing a new HAL, kernel, and set of drivers. What does this mean to a network manager? Well, many LANs have used Intel x86-based servers for years. As the needs of those LANs grew, so (fortunately) did the power of the x86 family of Intel processor chips. These chips steadily grew faster and more powerful. When the average network had about 15 users, 286-based servers were around. When people started putting a hundred users on a server, 486s could be purchased.
Unfortunately, however, since 1991 x86 processors haven't really grown in power as quickly as they did previously. RISC machines that are reasonably priced and that offer pretty high-speed processing have begun to appear. That's why the architecture-independent nature of NT Server is so attractive. The next time you need a bigger server, you needn't buy a PC-compatible machine, with all that PC-compatibility baggage weighing the machine down. Instead, you can...