
UNIX and Windows 2000 Integration Toolkit: A Complete Guide for System Administrators

by Rawn Shah, Thomas Duff (Joint Author)
 


Overview

All the step-by-step guidance and tools you'll need to integrate UNIX and Windows 2000

More and more companies with UNIX systems are turning to Windows 2000 to support specific departmental functions. Since it's not practical or efficient to run these two systems separately, network and IT managers must find new ways to integrate Windows 2000 with UNIX. Written by an expert in the field, this book provides the tools and techniques necessary to successfully combine and manage both systems.

The author clearly shows how to plan an integration strategy, select the appropriate integration products, and establish policies on how to administer and use these systems. The book offers a comprehensive overview of the UNIX and Windows 2000 operating systems-from UNIX file systems and user interfaces to Windows 2000 user-mode components and system controls. Detailed information is provided on how to integrate, install, and configure these systems to meet the needs of a growing organization. And helpful management strategies are included that will greatly enhance the performance and security of the integrated system.

Along with in-depth coverage of the latest features of Windows 2000, this book:

• Presents a step-by-step planning methodology for integration

• Provides network protocol and services configuration

• Discusses Remote Access Services and Virtual Private Networks

• Explains system performance enhancement and monitoring

• Details systems and network security management

• Includes in-depth coverage of commercial integration packages

The Web site at www.wiley.com/compbooks/shah provides:

• Descriptions and listings of cross-platform integration products

• Comparisons between products and links to product reviews

• More worksheets and scripts to help with integration

• Advice on current methods in integration

• Advice on current Windows 2000 and UNIX innovations

The CD-ROM contains scripts, worksheets, and information for various operating systems to help integration tasks.

Editorial Reviews

Booknews
"And on the 6.9th day, He created UNIX." So begins this tutorial on successfully fulfilling the trend toward integrating venerable UNIX operating systems with Windows 2000. An NT and UNIX administrator who helped create , and magazines covers the basics of each system, planning for integration, NT installation and network configuration, and management strategies. The CD-ROM , linked to an auxiliary Web site, includes mostly freeware software and Frequently Asked Questions. Lacks references. Annotation c. Book News, Inc., Portland, OR (booknews.com)

Product Details

ISBN-13: 9780471293545
Publisher: Wiley
Publication date: 05/26/2000
Edition description: BK&CD ROM
Pages: 512
Product dimensions: 7.52(w) x 9.27(h) x 1.22(d)

Read an Excerpt

Note: The Figures and/or Tables mentioned in this sample chapter do not appear on the Web.

The UNIX Operating System

Before we jump onto the trampoline of delights that is the process of integrating UNIX and Windows 2000, we first need a stable position to jump from. In this respect, we begin with a succinct introduction to UNIX. It's not that we don't think you already know all there is to know about it, but we want to lay out comparable concepts between the two operating systems, so that when we begin talking about Windows 2000, you will understand what we mean.

This isn't an utter, complete, comprehensive, all-encompassing, every-bit-about-everything review of UNIX, only most of the basic ideas on how the system works and runs. It tries not to go too much into the programming depths of the operating system (OS), but then if you were interested in that, you'd be reading a hardcore book on the internals of UNIX instead. It also cannot cover every tool and command that is available in UNIX across all the versions because of the years of variations between them. What's more, we expect that you may have a bit of experience with UNIX as it is. What this book will give you is a working understanding of how UNIX runs so that you can compare it against the Windows 2000 architecture.

UNIX Alive!

UNIX is undoubtedly one of the most resilient operating systems ever. It is also the only operating system still in wide commercial use that's almost 30 years old. It has evolved over the years to take advantage of new ideas in computer science as well as new hardware technology. Yet there are ideas and concepts in UNIX that its progenitors would still recognize from its very first incarnation. Today, UNIX can be seen running in the largest massively parallel computers in the world, as well as some of the smallest handheld devices. What we really see is not one operating system but a whole family of them, running on different processor types and system motherboards, and with different capabilities. And all it took to start the whole shebang was one idle programmer who wanted to play a game of Space Travel. . . .

UNIX Operating System Layers

The UNIX operating system is layered like most others and was one of the first to introduce a separation between the kernel and the user space (see Figure 1.1). Essentially, the model is like a cross-section of a peanut. At the very center is the kernel, which constitutes the core components of the operating system. Wrapped around this are other layers, which build up the applications and tools, and at the very outside is the shell, the environment that users interact with.

Although the original OS model was a monolithic design in which all kernel operations were combined into the same executive, today there are microkernel versions of the operating system as well, separating each component, such as memory management, file systems, input/output (I/O) processing, and so forth, into separately loadable kernel components. The System V and BSD device driver models differ quite radically. BSD devices are typically monolithic, whereas STREAMS drivers in System V allow drivers to layer themselves over each other to facilitate extensibility.

The kernel/user space separation allowed developers to cleanly implement user applications without having to worry that changes to the system components would have adverse effects on their code. This was accomplished by providing a standard set of system libraries for programmers to develop against, and allowing other applications to add other libraries on top of these. The actual contents of the libraries vary to some degree from one version of UNIX to another, rendering some applications incompatible between different UNIX platforms.

Processes and Threads

A process in UNIX is a specific instance that an application program runs in. This includes all the memory required to run the program, any devices that it owns or uses, and control or execution information for the program. The process is essentially the most basic entity that can be scheduled to run in the operating system. All processes share execution time on the central processing unit (CPU), hence the old description of UNIX as a time-sharing system. In a multiprocessor system, several processes may be running at the same time, but only one per CPU.

All processes are children of the first process, known as init, except for the swapper and pagedaemon processes for memory management. When the system is booted it creates the data structures associated with the series of terminal devices and network devices needed for users to log in. When a user logs in, the init process creates or spawns a new child process that is assigned to the user, using the fork system call. Every process has a process identifier (PID) by which the kernel and all applications can refer to it. When a new child is created, it is assigned a new PID, as well as a pointer to its parent process, known as the parent process identifier (PPID). This first process after login is normally the user's command-line interface or shell, and the launching point for other applications.
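
For example, you can see the PID and PPID relationship from the shell itself. A minimal sketch (the ps column options shown are common, but not identical on every UNIX version):

$ echo $$                      # the PID of the current shell
$ ps -o pid,ppid,comm -p $$    # this shell's PID, its parent's PID, and its command name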

A process can be put on ice temporarily, and go into the stopped or suspended state. This is usually done either by the program that is running within or by direct control of the parent process. At this point the process is sitting idle, not executing in memory or performing any tasks. The process can later be unsuspended when it receives the continue signal from another process. A process can also be put to sleep, to wait for a certain amount of time to go by. During this state, the process is actively waiting for a timer to go off, and not terminated.
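
From the shell, this stop/continue cycle can be driven directly with signals. A minimal sketch using a hypothetical long-running command:

$ sleep 600 &                  # start a background process
$ kill -STOP $!                # $! holds the PID of the last background job; stop it
$ kill -CONT $!                # send the continue signal so it resumes where it left off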

When a process terminates or is killed, it first goes into a zombie state, waiting for any devices associated with it to close properly and do their housecleaning. The process information is then destroyed and removed from the system. This does not destroy the application that was running in the process, just the specific instance that was running within the process. Once terminated, the process no longer exists on the system.

Every process contains a certain set of data structures: the address space, the control information, the security information, the hardware context, and process environment variables. The address space of a process is subdivided into several parts to hold the executable code of the program, the program stack that describes the flow of the application, and the data heap that stores the variables of the program. The control information contains important data structures that point to all the other data structures. The security information indicates who owns the process and what permissions they are allowed. The hardware context is a set of hardware-related registers and pointers that indicate at which point the application is running. Finally, the environment variables contain information inherited from the parent process.

Processes are scheduled for execution in a system known as preemptive round robin. Each process is allowed a certain amount of time, called a quantum, that it can execute in, and when it is done, the system puts the current process to sleep and moves to the next one.

Processes also have a priority value indicating which ones should be run first. This is actually a pair of values known as the base priority and the current or scheduled priority. The base priority defines the process's overall ranking on the system and the current priority indicates what it is set to at the current time. Those with the same base priority are scheduled to execute one after the other, then followed by those with a lower priority. UNIX uses lower numerical values to indicate higher priorities, so those with a value of 1 will take precedence over all others. Since this could lead to some processes running all the time while others wait forever, the current priority value is used as a tie breaker. A process that has been waiting for a long time to run is temporarily bumped up in priority so that it can execute the next chance it gets. Once it is done, it can go hide with the others under the stairs, waiting at the low base priority. This cycles all the processes through over time. Since each process typically runs for 100 milliseconds or so at a time on the processor, all the processes almost appear to run in real time to us comparatively slow humans, when, in fact, they don't.

All processes may respond to a common set of predefined signals that can be sent by the system or other processes. These signals are handled either by the parent of the process or by the process itself. Common signals include interrupt process, quit process, suspend process, send a hardware error, indicate an I/O control message, indicate a memory access error, indicate that a timer has gone off, and so forth. A listing of the common signals, their default behavior, and what they do is provided in Table 1.1.

All signals initiate some action on the behalf of the process, and almost all can be caught or trapped by a piece of code in the program known as the signal handler. This handler then executes some other piece of code in response to the signal. A default signal handler that is part of the user's session will cause an action if the application itself does not trap it first.
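
Shell scripts can install their own signal handlers with the trap built-in. A minimal sketch (the temporary file name is hypothetical):

#!/bin/sh
# Remove a temporary work file if the script is interrupted or terminated.
tmpfile=/tmp/work.$$
trap 'rm -f "$tmpfile"; exit 1' INT TERM
touch "$tmpfile"
sleep 30        # stand-in for real work; press Ctrl-C here to trigger the handler
rm -f "$tmpfile"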

Some UNIX systems also support threads within a process, to define an even smaller unit of execution that contains code but shares most of its memory with other threads that are part of the process. By designing an operating system with support for threads in the kernel, you make it ready to run in multiprocessor machines, since threads can often run separately at the same time on separate CPUs.

Kernel-mode threads are similar in concept but do not run in the context of any user process. They are normally used by the kernel to handle fast asynchronous I/O operations such as memory paging and communicating with file systems. They can also be used to develop kernel-mode applications that can run even faster than normal user-mode applications, but also pose a threat to the stability of the system if not done right.

Memory Management in UNIX

UNIX implements a virtual memory management system that creates a total memory space of 4GB per process for 32-bit machines and significantly more (it varies) for 64-bit machines. This is accomplished by clever use of physical memory, and by storing most of it on hard drives.

Virtual memory allows all the processes on the machine to run without interfering with one another in memory, allows multiple programs to be kept in memory, and, most important, relieves programmers of constantly having to work on managing the memory use of their applications. At the same time, virtual memory can support much more memory space for programs to use than is actually available in physical hardware. After all, there was a time not too long ago (around 1995) when 4GB of physical memory on a machine was worshipped, available only in the largest servers on the market. Even today, when it's not uncommon to come across small personal computer (PC) departmental servers with 4GB of physical random-access memory (RAM), virtual memory still plays an important part.

Virtual memory management came to UNIX in the late 1970s after the first public release of Version 6. It really needed to wait for computers that could actually support virtual memory, such as the PDP-11 series.

The original method of virtual memory management in UNIX was to swap entire programs in and out of memory onto a hard-drive-based swap space. This was either a file or a special file system where a copy of the program in memory was stored. Later, with the arrival of the very popular Digital VAX computer series, came the concept of paged memory. In this system, all memory is stored in fixed units called pages, which are usually a few kilobytes big. Every process is allocated a certain number of pages in virtual memory into which it loads its programs. When the process is set to run by the operating system, these pages move from virtual memory into matching page frames or physical pages within the physical RAM itself. By working with pages, it is possible to work in much smaller fixed units in a much faster fashion.

Paged virtual memory has been around ever since. Although there are complex algorithms that decide which pages to leave in physical memory and how to replace them, the concept remains the same. The older concept of a swap space still exists, but these now hold pages of memory instead of whole programs. In some UNIX systems, there can even be multiple swap spaces on different drives to speed the whole virtual memory system by having them work in parallel across the drives.

There is also a concept of segments in UNIX memory management. Each process's address space is divided into several segments, each of which consists of a contiguous area of memory that can be loaded into physical memory. The system can load the separate code and data pages for an application into different segments. Since code pages are very likely to be in consecutive blocks of memory, this works fairly well. By combining the paging and segmentation system, it becomes possible to manage memory more easily for both the operating system and the applications.

UNIX File Systems

UNIX file systems have been around for a long time without any significant change, indicating that they were mostly done right the first or second time around. The most widely used one today is the Berkeley Fast File System (FFS), now just called the UNIX file system (ufs). Other new file system types emerged in the 1990s to cover some of the shortcomings not anticipated in the earlier ones. They recover from crashes much better and implement concepts such as software-based redundant arrays of independent drives (RAIDs). Even more file systems exist in the distributed arena, intended to merge the file systems of multiple computers so that the combined storage can be used by all. This section focuses mostly on how the basic UNIX file system stores and accesses data, as well as some of the common file system constructs across the UNIX versions.

Files and Directories in UNIX

In UNIX today, filenames can usually be up to 255 characters in length, although this still varies from one UNIX version to another. Any 7-bit ASCII characters are allowed, including all characters on the keyboard, except for the forward slash (/). UNIX files can have file extensions to indicate their document type, but this is simply included as part of the name, and is not a separate entity as in MS-DOS. Thus, you can have many extensions successively following each other in the filename (e.g., theword.doc.tar.gz). The file system leaves this to the application to sort out. Filenames are case sensitive, so two names like Smith.doc and smith.doc are considered two distinct files. The names of files need to be distinct only when they are in the same directory.

There is an added distinction for files that begin with a period character. A single period character is always used to indicate the current directory. Thus the system, users, and applications can always see a listing called "." in their current directory. In addition, two periods (..) are used to indicate the parent directory. Finally, all files that begin with a period followed by any other series of characters (e.g., .cshrc, .profile, or .xinitrc) are referred to as hidden files, since they show up in a directory listing only when you give a special flag. These hidden files are usually configuration and data files for other applications that are needed per user but simply clutter up their screens, and so are put away so that they don't bother users. Other than this, there is no difference in the file system structure of these files compared to others.

The human view of the UNIX file system is that it is laid out like a tree blown over by a tornado, with the roots on top and the branches and leaves below. At the very top is the root of the file system, always designated by the single character / (forward slash). Each subsequent directory below this has its own name, which follows the same naming scheme as regular files, described earlier. Every file can be referred to by its pathname, that is, the list of directories starting at the root, down through the subdirectories, and, finally, to the filename itself. For example, /usr/local/games/mahjongg indicates that the file called mahjongg is stored in the games directory, under the subdirectory labeled local, under usr, under the root directory. To show that we are traversing directories, we use the forward slash as the separator.

Every file can also be referred to by a relative pathname based on the concept of the current working directory of the user. When users log in, they are placed in an area of the operating system that is their home directory. From this location they can move about to other directories that they are allowed into by using the cd or pushd command. Each time they move, their current working directory is updated to indicate their new location.

From any directory you can refer to files at other locations using their relative pathnames. For example, if you are in your own home directory, called /usr/home/boogeyman, and you want to access a file in another location, such as /usr/home/timmy/brains.txt, you could type in the full path name to get there, or you could abbreviate it. Since both boogeyman and timmy are subdirectories under /usr/home, from the boogeyman location you can refer to the parent directory and then the timmy subdirectory and file as ../timmy/brains.txt. You can use this combination of the parent directory name ".." and subdirectory names to refer to any other location in the file system.

Every file contained in a directory is said to have a hard link from that directory. This means that in the file system structure, there is an entry under that directory that points to the location of the file on the disk. A single file can have hard links from any number of directories. As each link is removed, the file can no longer be accessed from that directory; once all the links are gone, the file is essentially considered deleted. UNIX also allows you to create soft or symbolic links from one location to another. In a soft link, only the name of the other file is stored and not all its file information. This makes it convenient to access the file from another directory, rather than having to refer to its full or relative pathname every time. Since it is referred to by name only, the file in its actual location can be removed and replaced with another and the link will remain. If the file is removed, the link will continue to point to it, but when users try to read or write to that file, they will get an error saying that it does not exist. A file or directory can have any number of soft links, but only the hard links matter when it comes to accessibility. If all the hard links for the file are removed, the file is considered deleted, and there is no way to get it back. If all the soft links for the file are removed, it has no effect on the original file itself.
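
A quick illustration with hypothetical filenames:

$ ln notes.txt notes-link.txt       # hard link: a second directory entry for the same file
$ ln -s notes.txt notes-soft.txt    # soft link: a new file that stores only the name
$ rm notes.txt                      # the data is still reachable through notes-link.txt,
                                    # but notes-soft.txt now points to a name that is gone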

Every file has a certain set of attributes that go along with the raw data contents. These attributes include the file type, the number of hard links, the file size in bytes, the device on which it is located, the inode number, three different time stamps, the user and group identifiers, and the permissions flags. UNIX supports a number of different file types that designate how the file is accessed and used. This is independent of the actual contents of the file itself. These types include regular files, directories, symbolic links, device driver files, and queue files. Within the code, these files are accessed differently and have different properties. The inode number describes where the file is located on a hard drive according to the file system structure. The time stamps record the last time it was accessed, the last time it was modified, and the last time its attributes and time stamps were changed. The user ID, group ID, and permissions flags are explained in the upcoming section on security.

Special Files on UNIX File Systems

There are several types of files that are not used to contain directories, data file contents, or symbolic links. These special files are used as interfaces to device drivers or queues. They look and behave like files but are implemented differently internally.

Hardware I/O devices in UNIX are mapped to file identities to create a logical view of each device that programs can manipulate like files. Each device is identified by the kernel with a pair of numbers, the major device number and the minor device number. The major device number identifies one class of devices by the driver that supports it. For example, pseudoterminal devices in Linux map to the major device number 2, and real terminals to the number 3. This number is used as an index to a table of all device drivers.

The minor device number identifies a specific instance of the device as a logical entity. This means that two physical terminals that both use the same type of device driver (e.g., major devnum 3) would need two separate minor devnums, say 1 and 2. Thus, once the driver is located with the major devnum, the program can use the minor devnum to pick out exactly which instance of the device it needs.

One last distinction between these logical device driver files is that there are two main types: the character device file and the block device file. They differ based upon how you access individual data items from them. A character device file allows you to write or read one character at a time from the device. This is common in serial interfaces and terminal devices. The block device reads and writes whole blocks of data at a time, as in hard drives and network interfaces.
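
An illustrative long listing (the device names and numbers shown are typical of Linux and will differ on other UNIX versions). Note the leading c or b marking character and block devices, and the major and minor device numbers where a file size would normally appear:

crw--w----   1 root  tty    4,   1 Jul 12 12:16 /dev/tty1
brw-rw----   1 root  disk   8,   0 Jul 12 12:16 /dev/sda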

Pipes, or first-in, first-out (FIFO) queues, are special files used to communicate between processes. Essentially, the queue acts as a file-based buffer that one or more programs can write to, while others can read off of the queue and then process the data accordingly. File-system-based FIFOs are not used as much since the adoption of the sockets-based system for interprocess and intersystem communications. In fact, in BSD systems today, the FIFO file is just a file system interface on top of a sockets interface.
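
A FIFO can be created and exercised directly from the shell. A minimal sketch using a hypothetical path:

$ mkfifo /tmp/queue              # create the FIFO special file
$ cat /tmp/queue &               # a reader blocks until data arrives
$ echo "hello" > /tmp/queue      # a writer; the reader prints "hello" and exits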

Structure of UNIX File Systems

The method of supporting disk systems does vary across UNIX versions in specific areas, but the general concept is similar in all of them. The popular UNIX file systems in use today include ufs (the Berkeley Fast File System or UNIX file system), s5fs (the System V file system), vxfs (the VERITAS log file system), advfs (the Digital UNIX file system), and ext2fs (the basic Linux file system). Most UNIX systems now use the ufs file system, although the specialized ones, such as advfs and vxfs, are more commonly found in high-end systems, and the open-source ext2fs in versions of Linux. A comparison of the different features of each is listed in Table 1.2.

Each UNIX file system is mounted under a different location in the overall system file tree. Any partition can be mounted under any directory, with the exception of the root directory. These mount points that the file systems go under replace the contents of that directory with the file system's own top-level contents. In essence, what you have is a tree of file systems, each of which could theoretically be its own subroot.

Every UNIX drive is viewed as a set of partitions, or physical locations that can each be a separate file system volume. The sizes of these partitions vary, but in most cases, the smallest is the minimum disk size supported by a particular file system and the maximum is the size of the entire disk itself. Each partition is looked at as a sequence of blocks that can be 512, 1024, 2048, 4096, or 8192 bytes long. Each block is the smallest possible unit of storage for the file system, and hence also the smallest possible real file size. The difference between the real file size and the amount indicated by the file system is that each file constitutes a sequence of blocks, and when a file is smaller than the block size of the file system, then a whole block is still allocated but only part of it is actually used. The rest of that block is empty for all purposes and available for later use if the file is added to, or simply wasted if not. Thus, with a larger block size, storing lots of files that are smaller than the block size will result in lots of wasted space on the disk. On the other hand, using larger block sizes makes it easier to retrieve large files, since fewer blocks will be used to represent the whole file.

The block is simply an abstract notion. On the disk itself, this is translated to a certain disk cylinder, track, and sector, where the information is stored. The disk itself may store data in sizes other than the logical block size. Most disks in the United States and Europe have a hardware block size of 512 bytes, while those in parts of Asia have 1024. Keeping the logical block size a multiple of the hardware block size makes for the least amount of wastage.

Every partition contains at least four distinct areas. The first is the boot area, and it contains information on how to boot the system from a file stored somewhere on that disk. This, of course, is used only when that drive contains a bootable system. It is also referred to as the master boot record on DOS and Windows machines. The second area is called the superblock, and it contains information on where all the free blocks are, how many blocks there are in total, and how many files there are. There are now usually two superblocks on UNIX file systems so as to keep redundant copies of this very important information. The third area contains the index nodes or inodes, which each point to a different file. Each inode in UNIX is typically 64 bytes long and contains metadata about the file itself (its owner, permissions, and all other attributes of the file) and an array of addresses to the data blocks that contain the file itself. The last area of the partition is the set of blocks where the actual data for each file is stored.

Each UNIX file is considered a chain of blocks. They don't have to be in sequence, but the inode for that file has to know where they are located. The original inode of the System V file system has 39 bytes that point to these data blocks. Each data block is addressed by a 3-byte number, and thus this block array points to only 13 different data blocks.

If those were all the pointers to the data blocks, it would certainly limit the size of files to 13 times the block size, or 6.5KB to 104KB for 512- to 8192-byte blocks. Instead, the design uses the last three block addresses to point to other arrays of blocks. Block address 11 points to another 256 block addresses. Block address 12 points to another 256 blocks, each of which, in turn, points to 256 blocks (giving 65,536 blocks). Finally, block address 13 points to 256 blocks of 256 blocks of 256 blocks (about 16 million). This single, double, and triple indirection to more blocks allows the file to grow to huge sizes. At the same time, very small files can still be efficiently stored in a handful of blocks. Each level of indirection also makes it slower to access all the data of a file, but the performance hit is minimal compared to the amount of storage it supports.

The ufs file system supports the larger block sizes of up to 8192 bytes, but to avoid wasted space, it also allows these blocks to be broken into fragments as small as 512 bytes. This fragmented information is stored in the inode just as is the block address information. Ufs also bypasses the limitations of s5fs that limit filenames to only 14 characters in length, by allowing a variable name length for each filename, up to 255 characters.

Organization of the UNIX File System Environment

As I said, the UNIX file systems are laid out like an upside-down tree with the root at the very top. Below this are the top-level directories where important system files are stored. In almost all UNIX systems these include the directories called bin, dev, etc, lib, tmp, and usr. The usr directory also has its own subdirectories which play an important role: /usr/bin, /usr/sbin, /usr/lib, /usr/libdata, /usr/libexec, /usr/etc, /usr/include, /usr/local, /usr/share, and so forth. In many modern versions of UNIX there are also several other top-level directories such as home, opt, proc, sbin, and var. There used to be strict separations for what is contained in each of these directories, but in most cases today, they overlap to some degree (see Table 1.3).

The /bin, /sbin, /usr/bin, and /usr/sbin Directories

These directories contain most, if not all, of the system and user applications. The sbin directories are reserved for applications that pertain to the running of the system. These can be seen by the user, but many of the important programs have built-in checks to ensure that only privileged users are allowed to access them. There are simply multitudes of programs in these directories, and they vary significantly between UNIX versions, so I won't try to name them individually.

The /usr directory at one time was set for user accounts and applications, but now pretty much maps the rest of the system itself, containing application programs and scripts, application libraries, documentation, program header files, source files, and even temporary files, all stored under subdirectories. User-installed applications now normally go under the /usr/local subdirectory.

The /etc Directory

Most of the important system configuration files are stored in /etc, but with the variations across UNIX, the actual important files differ. Several common files and directories are evident in most systems, however, and are shown in Table 1.4.

The rc.d subdirectory needs particular attention. This is where the scripts executed during a transition from one system init state to another are kept. In BSD UNIX systems this is usually not evident, or the scripts are combined into one or two files stored directly in /etc, called rc, rc.local, or rc.sysinit. In System V and Linux systems, each system state has a different subdirectory under rc.d, called rc0.d, rc1.d, all the way to rc6.d, containing the various scripts. These scripts have a specific format. The first character is either S, to indicate that a subsystem is about to be started, or K, to indicate that the subsystem is to be killed. This is then followed by two numbers, the first indicating the first-level priority to execute the script, and the second, a second-level priority. Finally, this is followed by the name of a subsystem or server application process. Within the script is information on how to start or kill that subsystem or server application process. Theoretically, this gives only 100 different combinations each of start and kill scripts, but there has almost never been a case when more have been needed. All of these scripts are written in the Bourne shell to maintain the highest level of compatibility across all the init states.
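
For example, a System V-style run-level directory might contain entries like the following (a hypothetical listing; the actual subsystems vary from one installation to another):

$ ls /etc/rc.d/rc3.d
K20nfs  S10network  S55sshd  S99local

Here S10network starts the networking subsystem early (priority 10), S55sshd starts the sshd server later, and K20nfs shuts the NFS subsystem down when this state is entered.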

The /dev Directory

This is the logical device drivers directory. It contains all drivers that map to hardware devices, and may have more than one logical mapping for the same hardware device. Multiple instances of a device keep separate unit numbers (or alphanumeric sequences) for each file following their actual device abbreviation. For example, two common groups refer to physical terminals (ttys) and virtual or network terminals (ptys), and each configured terminal has an associated terminal number, such as tty01 or ptyx1. When users log in they are assigned one of these terminal devices so that they can communicate with applications trying to get input from or send output to them. Each of these devices is a special device file with its associated major and minor device numbers.

UNIX Network File Systems

Several different network file-sharing and access systems are available for UNIX machines. Of these the Network File System (NFS), the Andrew File System (AFS), and the Distributed File System (DFS) are the most common. NFS and DFS are mostly intended for local area networks (LANs), whereas AFS was specifically designed for the wide-area network (WAN) environment. A network file system is a much easier way of working with files than using a file transfer system such as FTP. The files seem to be the same as any other files on the local disk drives, when in reality they are actually stored on a server elsewhere. Thus, they can be manipulated by application programs and users like any other local disk-drive-based file.

The Network File System, the conceptual granddaddy of most file-sharing systems between machines, emerged from Sun Microsystems back in 1985. For the first time, it was possible to access a file system on another machine as if it were running directly on your own. It was slower, having to run over an Ethernet link rather than the internal disk drives of the computer, but it worked, and worked well enough to still be around today. It allowed computers to share disk space at a time when it was still very costly, and mostly for server computers. NFS is now supported in nearly all UNIX systems, as well as Windows, Macintosh, OpenVMS, and a number of other operating systems.

NFS works between a client computer that accesses the file system and a server computer that shares it out. It communicates between the two using another creation of Sun Microsystems, the Remote Procedure Call (RPC) mechanism. This programming mechanism allows applications to execute code on other machines across a network, as if they were part of the same local application. NFS is a stateless protocol, meaning that the client and the server do not maintain a live connection at all times. The NFS client does not really open a file for reading and writing. Rather, it only takes pieces at a time when it needs them. When a client needs to read a file or a portion of the file, it sends the requested filename and the number of bytes to read. The server responds by sending only the requested data, rather than the whole file. This allows the client to read the file a portion at a time, making it more interactive. Writing to files on NFS-mounted volumes can be implemented in two ways. In the first method, the clients do not care and simply send updates to the server. The last client to send the update will put its info into the file. The second, safer method requires a file-locking mechanism, so that when one client has the document open, other clients are locked out until the changes are complete.

The NFS server exports a directory or file system to a defined list of clients for read-only or read-write access. This is usually entered into a file called /etc/exports, /etc/nfsexports, or /etc/dfs/sharetab manually or by using a command such as share or share_nfs. For example:

/home/rawn -rw=client1:client2:server2
/home -ro all
/u2 -maproot=root server2

The actual formats for this file vary according to the UNIX version, so consult your own man pages for correct information.

The NFS client then mounts this exported "file system" onto a directory or drive location of its own, using the command mount, nfsmount, or mount_nfs as the system administrator account. For example:

# mount_nfs -o rw,timeout=600,retrans=7 server1:/home/rawn /home/rawn/server1

The preceding command mounts the exported file system /home/rawn from server1 onto the current machine, placed under the directory /home/rawn/server1. Once mounted, all the files under the other directory appear in directory listings on the client machine.

NFS is implemented with the help of the Virtual File System (vfs), which maintains a separate mapping on top of a real file system such as ufs. The mapping defines a uniform network view of the file system with defined sizes for read and write operations. It basically separates the specifics of the real underlying operating system from a view that all other network clients can agree on.

Networking in UNIX

Networking has been a part of UNIX since the mid-1970s, and in fact the first implementation of the Transmission Control Protocol/Internet Protocol (TCP/IP) family of protocols was on UNIX machines. Thus, the Internet has grown up around UNIX servers, and the platform still proves to be the mainstay of small and large Internet businesses alike.

In fact, TCP/IP was so ingrained into UNIX that until the arrival of the AT&T STREAMS driver model, it was fairly difficult to add other protocols. STREAMS and most device driver systems in UNIX now allow layering of all protocols. This allows other non-IP-based protocols to run alongside the system. For programmers, the creation of the Berkeley sockets model has resulted in one of the best and most well-known libraries for networking.

A more detailed discussion of the TCP/IP protocol family is available in Chapter 4.

UNIX TCP/IP Applications

There are a number of common TCP/IP user applications on UNIX, including telnet, ftp, rlogin, rsh, rcp, rexec, ping, and traceroute. The most familiar are Telnet, used to open remote terminal sessions on other TCP/IP machines, and the File Transfer Protocol (FTP), used to transfer files from one TCP/IP machine to another.

The Remote Login (rlogin) tool is similar to Telnet but uses a different protocol for connecting to systems and supports an additional authentication system. There is a family of these applications, including rlogin, rcp, rexec, and rsh. These are commonly found only on UNIX machines. With these applications it is possible to designate trusted hosts that are freely allowed to connect to the UNIX machine and execute commands on behalf of the user.

There are two ways to implement this trust relationship. The first is a single file called /etc/hosts.equiv that contains entries of the trusted hosts with a plus sign (+) before them. The second method involves users creating .rhosts files in their home directories that indicate which hosts they trust to execute commands as their local accounts. Both systems can bypass password checking, and simply verify that the account name and the hostname match those on the UNIX system.

This entire trust system is fairly dangerous because of IP hostname spoofing. Crackers set up a machine with the exact same name as that indicated in your .rhosts or /etc/hosts.equiv file and then attempt to execute remote commands on your server based upon the trust relationship. Even if you are not connected to the Internet, this isn't a very safe environment to maintain. It's convenient, but unsafe.

The commands rsh and rexec use this same system to execute single-line commands on a remote host. The only difference between the two is that rsh invokes a full shell session with the user's personal configuration, environment variables, aliases, and so forth, while rexec only executes the command line as is. The command rcp is a remote copy utility that works very much like the standard UNIX cp command, except that each filename is normally preceded by username@hostname:, where username and hostname match the host you are copying to or from.

The ping and traceroute programs are used to check the status of network hosts and routes. The ping command sends a message to a designated remote host using the Internet Control Message Protocol (ICMP) and determines if the host is reachable. Another way to use ping (ping -v hostname) sends continuous 64-byte packets to the remote host and calculates the latency (speed) of the network path to that host. Some ping packets may be lost on the way, due to the unpredictable nature of IP packet delivery. So, when you can, use the verbose ping mode; it will summarize the statistics, indicating the average ping time and the percentage packet loss to that remote host.

The traceroute command sends User Datagram Protocol (UDP) packets to each machine along the network path to the remote host. This will show all the network routers, gateways, and hosts (simply called hops) that the packets have to travel through, along with three sets of latency responses from each hop. It's a nice way to determine which networks your packets are traveling through and is especially handy for debugging, if you cannot connect to the remote end for some reason.
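
A minimal sketch of both commands, using a hypothetical hostname (option syntax varies between UNIX versions):

$ ping server2             # is server2 reachable at all?
$ traceroute server2       # list every hop between this machine and server2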

Routing in UNIX

Routing protocols began on UNIX servers, and it is no secret that some of the best routing hardware out there uses a modified version of UNIX as its central operating system. It is easy to configure UNIX machines as firewalls, gateways, or proxy servers using freely available packages.

Static routing is included with every UNIX system. To look up entries in the system routing table, you can use the netstat -r command, which lists all incoming and outgoing routes from the machine. At the very least, with just one network interface card, the table contains three or four default entries to refer to the local machine and its default outgoing route. For example:

Destination Gateway Genmask Flags irtt Iface
192.168.20.22 * 255.255.255.255 UH 0 eth0
192.168.20.0 * 255.255.255.224 U 0 eth0
127.0.0.0 * 255.0.0.0 U 0 lo
default 192.168.20.1 0.0.0.0 UG 0 eth0

This shows, first, a route for the machine to contact itself through the Ethernet interface (eth0); next, a route to other machines within its LAN; third, an internal or loopback address to itself without going over the Ethernet; and fourth, the default outgoing route to all other machines not within the same LAN. The netmask in the second item would read 255.255.255.0 if this was a Class C address block of 256 addresses. Instead, this is a /27 classless interdomain routing (CIDR) block with only 32 addresses, and thus the alternate netmask is 255.255.255.224, indicating that only the addresses from 192.168.20.0 through 192.168.20.31 are valid for this subnet.

Every UNIX system includes the route command to set up static routes outgoing from the machine. The route command usually follows a similar format across versions, but there may be additional options or variations on the syntax required on your particular UNIX version. To add a new route to another network, you need the network ID of the destination network and the host IP address of the gateway between your UNIX machine and the other network. For example:

# route add -net <destination network> <gateway host>
# route add -net 192.168.30.0 192.168.20.253

To delete a route, it is usually the same command with the keyword del instead of add, and the gateway address is optional. I used IP addresses instead of hostnames here, but it works just the same.
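
For example, to remove the route added above (some versions also expect the netmask or gateway to be repeated):

# route del -net 192.168.30.0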

Remote Access in UNIX

Remote access in UNIX systems shows a good deal of variance across the versions. This is primarily because terminal access and management systems are so different. Every remote access point is associated with a terminal or pseudoterminal device.

The standard remote connection protocol used on most UNIX systems is the Point-to-Point Protocol (PPP). Most UNIX systems can be set up as PPP clients or servers. As a client, the system uses a remote dial-up application to connect to the PPP server at the other end, and then passes the remote user account and password. The dial-up application varies from platform to platform, but common ones include Kermit, Chat, and Minicom.

The system then sets up the default route to go over the PPP interface (typically called ppp0) so that all traffic is sent out through the other end. Once the connection is established, the UNIX client is ready to send traffic for any local user.

A UNIX machine can be set up as a PPP server either by having modem ports directly connected to its serial interface or by attaching to a terminal server system. The directly attached modem system requires that each physical modem serial port be configured as a logical tty port on the UNIX server. Thus, when users connect to the modem, they are put right at the login screen for the UNIX machine.

The second approach, with the terminal server, allows the UNIX server to connect to modems over a LAN interface. The terminal server either contains all the modems within itself or is attached to them through cables. The terminal server acts as a proxy for the UNIX machine and performs login authentication by passing information back and forth between the client and login server sides. To do this, there is a standard protocol known as Remote Authentication Dial-In User Service (RADIUS) that both the terminal server and the UNIX machine use to exchange information. This protocol allows the UNIX machines to be placed anywhere else on the network, even in a separate LAN or WAN from the terminal server, to provide the authentication. The benefit of this is that you can set up remote access servers in different locations or cities and have them all communicate with a single UNIX server at the headquarters. This is easier than managing separate user accounts at each location.

For each incoming modem connection to the UNIX server, you have to define the logical device. This means defining the properties of a terminal device and/or its network access properties. Direct modem connections are simply seen as serial port interfaces on the machine. On many UNIX servers, you need to define the port speeds and serial connection attributes in the file called /etc/gettytabs or /etc/ttydefs. This can be a very cryptic file with many two-letter codes, or special abbreviations for each of the attributes. It is particular to the UNIX variant you use, so you should look up the man pages for explanations.

UNIX Security Systems

UNIX has a long history of both security breaches and strengths. The prime reason for the breaches is that UNIX is so widespread, and many UNIX system owners simply do not put any effort into maintaining security. Every few weeks or so a new patch to different versions of the operating system comes out when yet another security hole is found.

The good thing is that there are many fewer ways to actually bring down or crash the operating system. Most of the security features of UNIX were developed over its 30-year life span, and development continues in the same spirit. It is still used in some of the most secure systems in the world, which is always a good sign.

UNIX security is built on the concept of users, groups, and file permissions. Every user and group has a simple integer ID value that is compared to check on the user's right to access the data. This information is stored in the passwd and group files, along with other account information.

The only really privileged account supported in UNIX is the root or super-user account (user account ID 0). This account has total access to all areas of the system. Other accounts can be given similar privileges if they are also included in the superuser group ID 0. However, some applications specifically check that the account ID is for root rather than checking the group ID. Thus, to perform any system administrator command, the user has to have access to the root account. This can be achieved with other sysadmin tools such as sudo and su that change the user ID into the root ID. The user accounts have to be configured to be allowed into the group that can execute this application, or they have to have the root password at hand. In either case, this still goes through the root account ID.

Each user's login session runs in a different area of memory from others, and the only ways a user can access another user process's memory are either by being root or through the shared memory system. Otherwise, there is no chance that one user's memory will overrun another's, a problem in Windows and Macintosh systems that still have some memory areas that allow one process to interfere with another.
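
A minimal illustration of switching to the root account with su, as mentioned above:

$ su -          # prompts for the root password, then starts a root login shell
Password:
# id -u         # the effective user ID is now 0
0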

Since most objects are treated as files, the access is compared to a set of file permission bits. Each file has an owner and a group membership. Looking at the following directory entry, you can see these permission bits listed in the very first column.

-rwxrw-r--   1 rawn  staff  27136 Jul 12 12:16 webcache2.doc

They come in three sets of three letters, each letter designating a particular bit: r designating the ability to read, w designating write, and x designating execute permissions. The very first character in the entry, shown simply as a hyphen, indicates the file type: in this case, a normal file. Other file types include directories (d), symbolic links to other files (l), character device drivers (c), and block device drivers (b).

The first set of 3 bits after the initial bit constitutes the owner permissions. The owner of this file, in this case rawn, can read, write, and execute this file at will. The owner can also naturally change the ownership or the permissions of this file. Executing the file does not always do something; it depends on whether the file itself is a script or a program, or simply contains raw data. The second set of bits refers to the permissions for the group (in this case, staff), and the final 3 bits indicate the permissions that everyone else (the World) on the system has to that file.
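
As a quick illustration, the permission bits are changed with the chmod command, either numerically or symbolically (reusing the file from the listing above):

$ chmod 764 webcache2.doc    # owner rwx (7), group rw- (6), everyone else r-- (4)
$ chmod o-r webcache2.doc    # symbolic form: take read permission away from everyone else
$ ls -l webcache2.doc
-rwxrw----   1 rawn  staff  27136 Jul 12 12:16 webcache2.doc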

One other triplet of bits, not shown here, constitutes the mode bits. Two of these allow other users to impersonate the owner or the group that the file belongs to, called the set user ID (SetUID) and set group ID (SetGID) bits. These come in handy for executable programs mostly. Each process has its normal user and group IDs assigned to its account. In addition, it also has an effective user or group ID that indicates its current disposition. By setting an executable program file to be SetUID, and setting the World read and execute permissions, anyone else can execute the file as if they were the owner themselves. This comes in handy when an application needs to perform a particular action as the root user but is not to be given full access by being given the root account. Any SetUID application owned by root has to be handled with extreme caution by sysadmins, since it might possibly be a Trojan-horse program trying to break into the system.

The same scenario works for the SetGID bit, but it also serves a second purpose in some UNIX versions. If the SetGID bit is on but the file is not set to be executable, then that file is marked as having mandatory file locking. In other words, when one user is accessing that file, no others can write to it and change the data. This comes in handy in database systems.

The last bit from this triplet is called the sticky bit and works differently for directories and files. For directories, it indicates that a process may remove or modify a file in that directory only if its user ID matches the owner of that file. Such directories usually are set World writable so that anyone can create files in them. One such use of the sticky bit can be seen in the /tmp directory.
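
For example, a long listing of /tmp typically ends its permission field with a t, the sticky bit (an illustrative listing; the counts and dates will differ):

$ ls -ld /tmp
drwxrwxrwt  12 root  root  4096 Jul 12 12:16 /tmp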

For executable files, the sticky bit designates that the program, once loaded into swap memory, should be left there even after the user exits. This makes it faster the next time around, since the program is already sitting in memory and does not have to be loaded again.

Unfortunately, basic UNIX systems do not come with an access control list (ACL) system identifying exactly which other users are allowed access. They rely more on the limited functionality of groups instead. More advanced Secure UNIX systems are available from a handful of vendors. These systems are mostly built according to criteria defined by the U.S. Department of Defense. Such systems have different ratings, ranging from A1 (for top-security systems) to D (average systems). Most UNIX systems can be certified to be rated C2, which has higher than normal security features, such as access control lists, completely erasing and rewriting over deleted files, completely erasing memory pages no longer used by processes, and so on. A very select group of UNIX products from IBM, Compaq/Digital, Sun, and SGI have received the B2 standing, an even higher rating, that involves both system and physical security.

UNIX User Interfaces

UNIX systems come with a variety of user interfaces, both graphical and textual. Unlike the Microsoft Windows graphical environment, UNIX user interfaces are user-mode applications and do not run within the kernel environment. They do communicate with drivers for input and output devices such as the graphics adapter, keyboard, and mouse, but these are accessed through standard library calls. Each user interface also runs within the context of a user's login session and is not tied down to a single system account. Even graphical applications run within one user's environment, within the user mode.

All this allows UNIX user interfaces to operate independently of the kernel and other users. Thus, if one user's interface crashes for some reason, it does not lock out the kernel or any other user, allowing the system to run without interruption.

The Shell Environment

The traditional interface to UNIX is the command-line interface known as the shell. Once users are logged in they are presented with a command line to execute any UNIX command. All UNIX commands in this respect can be started from the command line, and many execute as text-only applications.

The original command-line interface, called the Bourne shell (after its creator), or simply sh, still exists and is the lowest common denominator for shells in all UNIX systems. It was later superseded by the C-shell (csh), which offers more command structures to create better shell scripts. The KornShell (ksh) was released by AT&T as another, commercial improvement over sh; a comparable free software shell, the GNU Bourne Again Shell (bash), extends sh with features drawn from ksh and csh. Each of these shells provides basic programming-language constructs that allow a sysadmin to quickly write a script to perform complex tasks with the help of simpler UNIX command-line tools. The syntax differs somewhat for each, but the concepts are the same.
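For example, the same simple file test reads differently in the Bourne shell and the C-shell, even though the idea is identical (a minimal sketch; the file being tested is arbitrary):

# Bourne shell (sh) syntax:
if [ -f /etc/hosts ]
then
    echo "hosts file present"
fi

# C-shell (csh) syntax for the same test:
if ( -f /etc/hosts ) echo "hosts file present"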

Shell Script Programming

Creating a scripting language as part of the shell is one of the great things that the originators did with UNIX. What's more, the scripting languages are capable of driving almost any task on the system, often without having to write any real programming-language code whatsoever. The difference between a script language and a programming language is that scripts are usually interpreted and do not require compilation for a given platform. This means that they are more portable between hardware platforms, being able to run on any of them that have the same script environment. On the downside, they normally run more slowly than compiled code.

Most UNIX shells have scripting language constructs, even the ancient and limited Bourne shell. The C-shell, KornShell, and Bash all have language constructs very similar to those in full programming languages like C. This includes control structures such as if..then..else, while, do..until, and switch to run blocks of code that perform actions. They also support procedures and functions, and data types such as arrays, lists, stacks, and even pointers. They don't go the full road to object-oriented scripting, but there are other non-shell-based scripting languages, such as Perl, Python, and Tcl, that can be called from shell scripts to do so.
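As a small illustration of these constructs, here is a sketch of a Bourne-shell script (it does nothing system-specific) that uses while and if together:

#!/bin/sh
# Count down from 5, a trivial use of while and if in Bourne shell syntax.
count=5
while [ "$count" -gt 0 ]
do
    if [ "$count" -eq 1 ]
    then
        echo "$count second left"
    else
        echo "$count seconds left"
    fi
    count=`expr $count - 1`
done
echo "done"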

As part of the shell, environment variables provide a way to store information that can be reused by other applications or shell scripts during the user's login session. These variables are normally untyped, simply strings of characters that have to be interpreted elsewhere. For example, the MAIL environment variable contains the filename location of a user's mail folder, and LOGNAME is the user account name that the user logged in as. These variables are also used as part of the language constructs to create scripts.
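For instance, a user might inspect and set such variables as follows (a sketch; the account name and mail path are made up, and the exact path varies between UNIX flavors):

% echo $LOGNAME
bob
% echo $MAIL
/var/mail/bob
% EDITOR=vi; export EDITOR        # set and export a variable in sh/ksh/bash
% setenv EDITOR vi                # the equivalent in the C-shell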

There are three standard file descriptors associated with every process: stdin, stdout, and stderr. These define the standard input, output, and error-output file streams, and allow users to combine several command-line tools together. Stdin is basically the input stream into a command-line tool, so anything that you have to type into a text application goes in through this file stream. Stdout is just the opposite; all output from the application comes out through this file stream. Stderr is similar to stdout, but is a special file stream reserved for only the error output of a program.

You can also redirect stdin, stdout, and stderr into other files so that you have a record of what occurred. These redirections are presented in slightly different forms depending upon the shell, but in most cases they use the less-than (<) and greater-than (>) signs for stdin and stdout, respectively. Redirecting stderr along with stdout is sometimes represented by the combination of characters >& in the C-shell and Bash.
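A few sketched examples of these redirections (the file names and commands are arbitrary):

% sort < names.txt > sorted.txt      # stdin from one file, stdout to another
% make > build.log 2> build.err      # sh/ksh/bash: stdout and stderr to separate files
% make >& build.log                  # csh (and bash): stdout and stderr to the same file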

It is also possible to tie the output of one command-line tool to another using what is called a pipe. Normally represented by the vertical bar character (|), it takes the stdout of one application and sends it into the stdin of another application while executing the second application as well. This makes it possible to perform complex tasks by tying together several simpler command-line tools, and simplifies tool building, as well. For example, by taking the output of the command uptime and piping it into the stream editor sed, you can print just the current load on the system. You don't even have to rewrite the code for the system uptime command to reprint the information this way.

% uptime | sed -e 's/^.*load/Load/'
Load averages: 0.24, 0.36, 0.31

It seems like an obvious addition now, but piping together several commands was a revolutionary way of building and using applications. It is part of what makes some shell environments so powerful as system scripting tools.
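Longer pipelines work the same way. As a sketch, the following chain of four small tools counts how many distinct users are currently logged in (the output shown is, of course, only an example):

% who | awk '{ print $1 }' | sort -u | wc -l
       4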

The X Windows System

In the mid-1980s, a group at the Massachusetts Institute of Technology (MIT) known as Project Athena began working on a graphical user interface (GUI) that came to be known as the X Window System. This GUI system creates a network representation of a display and its input and output devices, and allows a client application to be run from anywhere on the network. The preference is for LANs in particular, but X Windows can work over WANs as well, albeit more slowly.

It was the first system that allowed UNIX users to run cheaper workstations with less memory and processing power, while the actual computation of the application was run on a more powerful, expensive UNIX server system over the network. The workstations then cost only about $20,000, but they didn't have enough processing power to run the full application in their limited 2 to 8MB of RAM, and some didn't even have hard drives. The servers, on the other hand, reached hefty prices of $50,000 to $500,000, boasted 64MB and more of RAM, and could store almost a gigabyte on hard drives. Amusing in retrospect, huh?

Today we don't have such limitations on our desktop hardware anymore, but there is still the notion that managing 1 server is much simpler than managing 100 workstations, and thus X Windows still plays an important role in UNIX computing. The three sides of the X Window equation are the X server, the Window Manager, and the X client application. The X server handles the input and output device communications and the display of application windows. The Window Manager provides a means to manage the various application windows more easily, allowing the user to move them around the screen, iconize them, or resize them. The X client application does the actual processing of the application itself.

The funny thing is that this is all backward from the normal point of view of client/server networking (see Figure 1.2). The X server is normally run on the desktop machine, whether it is a UNIX workstation, a PC desktop, or an X terminal. It is called the X server since it accepts incoming connections from other machines and does the hard work of graphical display processing. The X client application, on the other hand, is normally stored on a network server system that has the heavy computing machinery to do the application processing. The Window Manager is even more fickle and can be run from the desktop machine, the network server, or even a completely different machine.

In Figure 1.2, we can see several desktop machines of different types accessing applications over the network from two servers. Two of the machines are UNIX workstations, one is an X terminal, and the other is a Windows PC running X Windows software. An X terminal is a specialized desktop machine that runs X Windows and nothing else. It can only run X client applications from a network server, but does so very fast. Each user logs onto a network server and then launches an X client application to be displayed on the X server desktop. The figure shows the various combinations in which applications can be run through X Windows. User Bob is running two different X client applications, one from each server. Joe, on the other hand, is running one X client application from his own workstation and another from the second network server. Mike's Windows PC is running an X client application from the second network server, as well as a local Windows application directly on his machine.

X client applications are written specifically for the graphical environment of X, just as Windows applications are written for the Win32 graphical environment. At the very basic level, X Windows provides the application libraries that draw the images, position them, move them around, and define how they interact with each other through cut-and-paste. On top of this, another set of application libraries draws the graphical objects--known as widgets--such as scrollbars, icons, menus, dialog boxes, and so forth. The Window Manager presents a defined style as to how these widgets are presented and how they communicate with one another.

Window Managers bring variety to the X Windows system. There are two dozen or more different Window Managers out there, including the Common Desktop Environment (CDE), Motif (mwm), OpenWindows (olwm), Tom's Window Manager (twm), the F Virtual Window Manager (fvwm), GNOME (gnome), and the K Desktop Environment (kde). There is a subcategory called virtual window managers that extends the standard system to include multiple virtual desktops that map onto the same physical screen. Usually a small box shows a map of the various virtual desktops and the windows within them, and you switch between desktops by either scrolling past one side of your screen or clicking within the virtual desktop box. This gives you more screen real estate to place windows side by side without cluttering your view. Users of Microsoft Windows and Macintosh can't understand how you can have different Window Managers and still be able to use them properly, but the fact is that many UNIX users do. They configure their Window Managers to their liking and most often stick to them even across machines. The variety simply gives them choice.

The X Windows code has been surprisingly stable, at least for the protocol. The most current version of the protocol is X11 Release 6.3 (X11R6.3). Almost all the versions from X11 Release 3 through 6.3 can still work together. The newer versions build on the features of the earlier ones, which allows this compatibility. Although there are a number of different X servers out there, almost all still adhere to the same X11 protocol, making them compatible with each other despite the differences in OS platforms and hardware capabilities.

Launching the X Windows environment usually involves running the program startx or xinit, which then goes on to load the environment based upon the system properties. X server products these days are optimized for specific graphics adapters, thus requiring the sysadmin to install and configure the system properly before the users can run the environment. It is also possible to always leave a UNIX workstation in X Windows by setting up the X Display Manager (XDM). This presents a login window for users to get onto the UNIX workstation and then switches over to their desktop environment. When they log out, it goes back to the login window. With XDM, users may never see a shell environment running on the desktop if they use only the graphical applications.
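When starting X by hand this way, the contents of the session usually come from a small startup script in the user's home directory. The following ~/.xinitrc is a minimal sketch; the programs, geometry, and window manager chosen here are only examples:

#!/bin/sh
# ~/.xinitrc -- read by startx/xinit to build this user's X session
xterm  -geometry 80x24+10+10 &       # a terminal window
xclock -geometry 100x100-10-10 &     # a small clock in a corner
exec twm                             # the window manager; the session ends when it exits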

There are many common applications that either come with X by default or are available from the Internet. Since these applications have to be written to use X Windows, they run differently than command-line tools even when they share similar functions. For example, every UNIX server with X installed has the xterm application, which pops up a shell environment within an X window. This does actually run a shell on the UNIX server, but it also has other properties, such as a scrollbar, fonts, colors, and so forth. Other common applications include xclock (a clock), xlock (a screen locker), xload (a system load meter), and so forth. There are a number of classic applications that have been available for free for quite some time, including xemacs (a powerful text editor), xv (a graphics file displayer/editor/converter), xmh (a mail reader), and xrn (a Usenet newsreader).

The downside of X Windows is that it has limited security. First, users have to be able to log into the network server, have access to run the X client application, and have it displayed on their X server. The X client, normally running on a UNIX server, looks for an environment variable named DISPLAY, which specifies the IP hostname of the machine and the screen--desktop X servers can support multiple monitors and screens--that the application should be displayed on. Alternatively, users can launch the client application and specify the display name as an option to the launch command. On their desktop X server, they have to specify that it will accept connections from the network server that hosts the X client application. This is done using the xhost command. For example, xhost +server1.straypackets.com allows any connections from the indicated server, whereas xhost -server3.straypackets.com disallows any connections from the indicated server. Many users also execute the remote shell command, rsh, to start up their X client applications without having to log in to the server. This relies on the server trusting the user's desktop machine rather than performing an authentication each time. This is also one of the first places crackers look when trying to break into computers.
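Put together, a sketched session from the desktop might look like the following. The host names appserver and desktop1 are made up for illustration; only the straypackets.com domain comes from the earlier example:

# On the desktop X server: allow the application host to open windows here
% xhost +appserver.straypackets.com

# Start a remote client without logging in, telling it where to display itself
% rsh appserver.straypackets.com xterm -display desktop1.straypackets.com:0.0 &

# Or, after logging in to appserver, set DISPLAY instead (sh/ksh/bash syntax)
$ DISPLAY=desktop1.straypackets.com:0.0; export DISPLAY
$ xterm &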

Normally, there is no encryption of the information on the X11 wire protocol. There is some security in the fact that the wire protocol itself contains only updates to the screen and graphical information, which makes it hard to get the full picture of the environment unless you try to capture all the network packets from the very beginning of the connection. It is possible to use network-level encryption such as Virtual Private Networking or the IP Security protocol to establish this wire-protocol-level security, however.

Summary

The UNIX operating system broke ground for many of the innovations seen in other operating systems today. In fact, what we call UNIX today is an amalgamation of many different research and development efforts. It provides a firm multiuser, kernel-based platform on which all system services and user applications can run. This concept, where everything is built above the core elements that maintain the operational status of the machine, is what makes it one of the most resilient operating systems around.

There are many variations on the UNIX platform depending on which vendor you go to, including one popular system, Linux, which looks and acts like UNIX but is not derived from any of the original code. The source of development and support for these various UNIX systems comes from their respective vendors. Although concepts and even tools look similar, there are changes that can make trouble for even seasoned system administrators.

As we look at the UNIX system, it continues to undergo change. The biggest recent change has been the switch from 32-bit to 64-bit UNIX systems from vendors such as Sun, IBM, SGI, and HP. This is primarily a technical and implementation difference, rather than a functional difference, which is why I don't dwell on it. On another front, the ufs file system has served well for most purposes, although it is now being replaced by several other journaled file systems. Similarly, NFS continues to reign supreme, although it, too, is being replaced by more advanced distributed file systems. The kernel has transformed from the once-monolithic model to microkernels and multithreaded kernels. The shell and X Windows environments continue to be the core interfaces for UNIX systems, the shell providing a text and command-line-based environment, and X Windows providing a networked graphics system.

The differences between UNIX and Windows are apparent when you start looking at each of these pieces separately. Although Microsoft has been more open to using standardized protocols for network services, its motto of Embrace and Extend causes lots of compatibility problems with existing services running on UNIX. Look to the next six chapters to see what I mean.

Meet the Author

RAWN SHAH is an NT and UNIX administrator who has written extensively on PC-to-UNIX connectivity. He created and founded NC World magazine and helped in the creation of JavaWorld and Windows TechEdge magazines. He is an Executive Board member of the IEEE Computer Society Task Force on Cluster Computing.
