What is protocol analysis? A protocol is defined as a standard procedure for
regulating data transmission between computers. Protocol analysis is the
process of examining those procedures. The way we go about this analysis is
with special tools called protocol analyzers. Protocol analyzers decode the
stream of bits flowing across a network and show you those bits in the structured
format of the protocol. Using protocol analysis techniques to understand
the procedures occurring on your network is the focus of this book. In my 10
years of analyzing and implementing networks, I have learned that in order to
understand how a vendor's hardware platform, such as a router or switch,
functions you need to understand how the protocols that the hardware implements
operate. Routers, switches, hubs, gateways, and so on are nothing without the protocols. Protocols make networks happen. Routers and other devices implement those protocols. Understand the protocol, and you can largely understand what happens inside the box.
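What a protocol analyzer does, at heart, can be sketched in a few lines: take raw bytes off the wire and present them as named protocol fields. The following sketch (my own illustration, not from any particular analyzer) decodes the 14-byte Ethernet II header; the sample frame is hand-built with a made-up source address.

```python
import struct

def decode_ethernet(frame: bytes) -> dict:
    """Decode the 14-byte Ethernet II header from a raw frame."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return {
        "dst": dst.hex(":"),          # destination MAC address
        "src": src.hex(":"),          # source MAC address
        "ethertype": hex(ethertype),  # e.g. 0x0800 = IPv4
        "payload_len": len(frame) - 14,
    }

# A hand-built frame: broadcast destination, made-up source, IPv4 ethertype,
# followed by 20 bytes standing in for an IP header.
frame = bytes.fromhex("ffffffffffff" "0200deadbeef" "0800") + b"\x45" + b"\x00" * 19
print(decode_ethernet(frame))
```

The structured output is exactly what an analyzer's decode pane shows: the same bits, but organized by the protocol's field layout.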
A Brief History of Network Communications
For years, complex processing needs have been the driving factors behind the
development of computer systems. Early on, these needs were met by the development
of supercomputers. Supercomputers were designed to service a single
application at a very high speed, thus saving valuable time in performing complex processing tasks.
Supercomputers, with their focus on servicing a single application, couldn't
fully meet the business need for a computing system supporting multiple
users. Applications designed for use by many people required multiple input/output systems for which supercomputers were not designed. Systems built to serve many users at once became known as time-sharing systems because each user was given a small slice of time from the overall processing system. The earliest of these systems
were known as mainframes. Although not as fast as supercomputers,
mainframes could service the business needs of many users running multiple
applications simultaneously. This feature made them far more effective at
servicing multiple business needs.
The advent of mainframes thus led to the birth of centralized computing.
With its debut, centralized computing could provide all aspects of a networked communications system within a tightly controlled, cohesive system.
Such systems as IBM's S/390 provided the communication paths, applications,
and storage systems within a large centralized processing system. Client workstations
were nothing more than text screens that let users interact with the
applications running on the centralized processing units.
Distributed computing followed on the heels of centralized computing.
Distributed computing is characterized by the division of business processes on
separate computer systems. In the late 1980s and early 1990s, the dumb terminal
screens used in centralized computing architectures started to be replaced by
computer workstations that had their own processing power and memory and,
more importantly, the ability to run applications separate from the mainframe.
Early distributed systems were nothing more than extensions of a single-vendor solution over modem or dedicated
leased lines. Because the vendor controlled all aspects of the system, it was easy
for that vendor to develop the communication functions that were needed to
make their centralized systems distributed. These types of systems are known as
"closed" systems because they only interoperate with other systems from the
same manufacturer. Apple Computer and Novell were among the first companies
to deliver distributed (although still proprietary) networking systems.
Distributed processing was complicated. It required addressing, error
control, and synchronized coordination between systems. Unfortunately, the
communication architectures designed to meet those requirements were not
compatible across vendors' boundaries. Many closed proprietary systems were
developed, most notably IBM's Systems Network Architecture (SNA) and Digital Equipment Corporation's DECnet. Down the road, other companies such as
Novell and Apple followed suit. In order to open up these "closed" systems, a framework was needed that would allow interoperability between various vendors' systems.
OSI to the Rescue
OSI (Open System Interconnection), developed by the International Organization
for Standardization (ISO), was the solution designed to promote interoperability
between vendors. It defines an architecture for communications that supports distributed
processing. The OSI model describes the functions that allow systems
to communicate successfully over a network. Using what is called a layered
approach, communications functions are broken down into seven distinct layers.
The seven layers, beginning with the bottom layer of the OSI model, are as follows:
* Layer 1: Physical layer
* Layer 2: Data link layer
* Layer 3: Network layer
* Layer 4: Transport layer
* Layer 5: Session layer
* Layer 6: Presentation layer
* Layer 7: Application layer
Each layer provides a service to the layers above it, but also depends on services
from the layers below it. The model also provides a layer of abstraction
because upper layers do not need to know the details of how the lower layers
operate; they simply must possess the ability to use the lower layers' services.
The model was created so that in a perfect world any network layer protocol,
such as IP (Internet Protocol), IPX (Internet Packet Exchange), or X.25, could
operate regardless of the physical media it runs over. This concept applies to all
of the layers, and in later chapters you can see how some application protocols
function identically over different network protocols (and sometimes even different vendors' systems; Server Message Block (SMB) is a perfect example, as it is used by the Microsoft, IBM, and Banyan server operating systems). Most communication
protocols map very nicely to the OSI model.
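The layered idea is easy to see in code. In this sketch each layer wraps the data handed down from above with its own header, and the receiver peels those headers off in reverse order. The header strings here are made up for illustration; real headers are binary structures, but the nesting is the same.

```python
def encapsulate(app_data: bytes) -> bytes:
    """Wrap application data in simplified, made-up headers, top down."""
    segment = b"TCP|" + app_data   # Layer 4: transport header
    packet  = b"IP|"  + segment    # Layer 3: network header
    frame   = b"ETH|" + packet     # Layer 2: data link header
    return frame                   # Layer 1 transmits these bytes as signals

def decapsulate(frame: bytes) -> bytes:
    """Strip the headers bottom up, the mirror image of encapsulation."""
    for hdr in (b"ETH|", b"IP|", b"TCP|"):
        assert frame.startswith(hdr)
        frame = frame[len(hdr):]
    return frame

wire = encapsulate(b"GET /")
print(wire)               # b'ETH|IP|TCP|GET /'
print(decapsulate(wire))  # b'GET /'
```

Each layer sees only its own header plus an opaque payload, which is precisely the abstraction the model describes.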
Defining the Layers
Because almost all protocols are based on the OSI model, it is important to understand how the model operates: to understand the protocols, you must first understand the framework. The following sections explain
the seven layers in more detail, and Figure 1-1 gives examples of protocols
that reside at each layer.
Layer 1: Physical Layer
The simplest definition of the physical layer is that it deals with how binary
data is translated into signals and transmitted across the communications
medium. (I talk more about media in the "Detailed Layer Analysis" section
later in this chapter.) The physical layer also comprises the functions and procedures
that are responsible for the transmission of bits. Examples would be
procedures such as RS-232 handshaking or zero substitution functions on
B8ZS T1 circuits. The physical layer concerns itself only with sending a stream
of bits between two devices over a network.
Layer 2: Data Link Layer
Layer 2, the data link layer, handles the functions and procedures necessary for
coordinating frames between devices. At the data link layer, zeros and ones are
logically grouped into frames with a defined beginning and end. Unlike the
physical layer, the data link layer contains a measure of intelligence. Ethernet, a
common Layer 2 protocol, contains detection algorithms for controlling collision
detection, corrupted frames, and address recognition. Higher layers depend on
the data link layer not only to provide an error-free path but also to detect errors
that may occur. Corrupted data should never be passed to upper layers.
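Error detection at Layer 2 is typically done with a checksum appended to each frame; Ethernet's frame check sequence is a 32-bit CRC. The sketch below uses Python's `zlib.crc32`, which implements the same CRC-32 polynomial as Ethernet, though the bit ordering on a real wire differs; the point is only to show how a flipped bit is caught.

```python
import zlib

def append_fcs(frame: bytes) -> bytes:
    """Append a CRC-32 checksum, as Ethernet's frame check sequence does."""
    return frame + zlib.crc32(frame).to_bytes(4, "little")

def check_fcs(frame: bytes) -> bool:
    """Verify the trailing CRC; a receiver drops frames that fail this."""
    data, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(data).to_bytes(4, "little") == fcs

good = append_fcs(b"hello, layer 2")
bad = bytes([good[0] ^ 0x01]) + good[1:]  # flip one bit "in transit"

print(check_fcs(good))  # True
print(check_fcs(bad))   # False
```

Note that the data link layer detects the corruption and discards the frame; recovering the lost data is left to higher layers.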
Layer 3: Network Layer
Layer 3 is the end-to-end communications provider. Whereas the data link
layer's responsibility ends at the next Layer 2 device, the network layer is
responsible for routing data from the source to the destination over multiple
Layer 2 paths. Applications utilizing a Layer 3 protocol do not need to know
the details of the underlying Layer 2 network. Layer 3 networks, such as those
using the Internet Protocol, will span many different Layer 2 technologies such
as Ethernet, Token Ring, Frame Relay, and Asynchronous Transfer Mode
(ATM). Some examples of Layer 3 protocols are IP, IPX, and AppleTalk Datagram
Delivery Protocol (DDP). Although the network layer is responsible for
the addressing and routing of data from source to destination, it is not responsible
for guaranteeing its delivery.
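The core Layer 3 decision is choosing where to forward a packet next, and routers commonly do this by longest-prefix match: the most specific route that contains the destination address wins. Here is a minimal sketch with a made-up routing table (the prefixes and interface names are invented for illustration).

```python
import ipaddress

# A made-up routing table: prefix -> next-hop interface
routes = {
    ipaddress.ip_network("0.0.0.0/0"):   "gateway",  # default route
    ipaddress.ip_network("10.0.0.0/8"):  "eth1",
    ipaddress.ip_network("10.1.2.0/24"): "eth2",
}

def next_hop(dst: str) -> str:
    """Pick the most specific (longest-prefix) route matching dst."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("10.1.2.99"))  # eth2 (the /24 beats the /8)
print(next_hop("10.9.9.9"))   # eth1
print(next_hop("192.0.2.1"))  # gateway (default route)
```

Every router along the path repeats this lookup independently, which is how the network layer spans many Layer 2 segments without any end-to-end setup.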
Layer 4: Transport Layer
Networks are not reliable. On Ethernet networks, collisions can occur, resulting
in data loss, switches can drop packets due to congestion, and networks themselves
can lose data due to overloaded links (the Internet itself experiences
anomalies such as these on a daily basis). Protocols that operate in the transport
layer may retransmit lost data, perform flow control between end systems, and
many times add an extra layer of error protection to application data. While the
network layer delivers data between two endpoints, the transport layer can
guarantee that it gets to its destination.
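The simplest form of that guarantee is stop-and-wait: number each segment, and retransmit it until an acknowledgment arrives. The toy sketch below simulates a lossy network (the loss rate and seed are arbitrary) to show the retransmission logic; real transport protocols such as TCP use far more sophisticated windowing, but the principle is the same.

```python
import random

def lossy_send(data, loss_rate=0.4, rng=random.Random(42)):
    """Simulate an unreliable network: sometimes the data just vanishes."""
    return None if rng.random() < loss_rate else data

def reliable_transfer(segments):
    """Stop-and-wait: resend each numbered segment until it gets through."""
    delivered, attempts = [], 0
    for seq, seg in enumerate(segments):
        while True:
            attempts += 1
            if lossy_send((seq, seg)) is not None:  # ack implied on success
                delivered.append(seg)
                break                               # move to next segment
            # no ack: the timeout fires and we retransmit
    return delivered, attempts

data = [b"part1", b"part2", b"part3"]
got, tries = reliable_transfer(data)
print(got == data, tries)  # True, with tries >= 3 whenever anything was lost
```

Despite the unreliable channel underneath, every segment eventually arrives, in order: that is the transport layer's contribution.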
Layer 5: Session Layer
The session layer provides the ability to further control communications
between end systems by providing another layer of abstraction between transport
protocols and the application. If an application layer protocol possesses
this functionality, a session layer protocol may not be needed. NetBIOS, as you
will see later in this chapter, is a perfect example of a session layer protocol.
Sometimes the session layer does not reveal itself as a protocol, but rather as a
procedure performed to allow a protocol to continue its functions. Even
though a protocol will exist at a certain layer, a procedure of that protocol can
sometimes perform functions that normally reside in another layer. I will note
instances in later chapters where this anomaly takes place.
Layer 6: Presentation Layer
The presentation layer is another layer that sometimes does not manifest itself
in obvious ways. The presentation layer handles making sure that data formats
used by application layer protocols are compatible between end systems.
Some examples of Layer 6 would be ASCII, JPG, and ASN.1. Just as I indicated
was the case with Layer 5, some protocol functions performed in other layers
fit nicely into the description of the presentation layer.
Layer 7: Application Layer
Many people confuse Layer 7 with the applications used on servers or workstations.
Application layer protocols are not user applications but instead the
protocols that allow those applications to operate over a network. A user
browsing the Internet with Internet Explorer utilizes an application layer protocol
called HTTP. Microsoft Word users saving files to a network server make
use of the Server Message Block (SMB) protocol. To a user, a network drive
simply appears as G:\, but in the background there are powerful application
layer protocols that allow G:\ to represent a location on a remote server. Other
examples of application layer protocols are FTP and Telnet.
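It is worth seeing how unglamorous an application layer protocol actually is on the wire. An HTTP/1.1 GET request is just lines of text separated by CRLF pairs, ended by a blank line; this sketch builds those raw bytes (the host and path are placeholders) without opening a network connection.

```python
def build_get(host: str, path: str = "/") -> bytes:
    """Build the raw bytes an HTTP/1.1 GET request puts on the wire."""
    lines = [
        f"GET {path} HTTP/1.1",
        f"Host: {host}",
        "Connection: close",
        "",  # the blank line terminates the header block
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

request = build_get("example.com", "/index.html")
print(request.decode())
```

A browser's HTTP stack emits essentially these bytes, and the transport layer (TCP) carries them to the server; capture a web session with an analyzer and you will see this text verbatim.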
Protocol Analysis of the Layers
The following sections comprise a protocol analysis approach to the OSI
model. They explain what each layer does and, more importantly, why. How
each layer performs its function is left up to the protocol designers. I discuss
how TCP/IP performs its functions in Chapters 3 through 6. More advanced
readers may notice some vague or overly generic packet descriptions in the following sections. I have written the descriptions this way
to provide a generic blueprint for describing the layer's functionality; the
details follow later in the book.
Layer 1: The Physical Layer
As I indicated earlier in the chapter, the physical layer concerns itself with how
communications signals are transmitted across a medium. Appropriately, a
medium is defined as a path where communication signals can be carried. A
path is anything from copper, water, or air to even barbed wire if you can get
the signals to successfully transmit over it. Media carry communication
signals. In wireless networks, signals travel over air as RF (radio frequency)
radio waves. On 10BaseT Ethernet networks, they are carried as electrical voltage.
In Fiber Distributed Data Interface (FDDI) networks, glass is used as the
medium; the signals travel as pulses of light over glass fiber-optic cables.
Many reasons exist as to why specific types of media are used in different technologies.
Theoretically, you should be able to use whatever medium you want
to carry the signals; unfortunately, the way those signals are represented
places limitations on the types of media you can use.
Communications signals are transmitted in two ways. The first method, analog,
is used to transmit signals that have values that vary over time. Sound is
a perfect example of an analog signal. Sound is measured as an analog signal
in cycles per second or hertz. The range of the human voice varies from about
100 Hz to 1,500 Hz. When early telephone networks were developed, it was
difficult to create good-quality long-distance communications using analog
signals because when these analog signals were amplified there was no way
to distinguish the noise from the voice signal. As the analog voice signal was
amplified, so was the noise. Converting analog voice signals to digital signals
was one way to solve this problem.
Unlike analog signals, digital signals have only discrete values, either a one or a
zero. Early digital telephone engineers figured out a way to modulate an analog
signal onto a digital carrier using something called pulse code modulation,
or PCM. PCM lets the instantaneous frequency of an analog signal be represented
by a binary number. Instead of an amplifier having to guess at which
signal to amplify, now it just had to repeat either a zero or a one. Using this
method greatly improved the quality of long-distance communications. When
computer data needed to be transmitted across network links, the decision to
use digital signaling was easy. Since computers already represented data using zeros and ones, those zeros and ones could very easily be transmitted across those network links.
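PCM can be sketched in a few lines: sample the analog waveform at regular intervals and quantize each sample to a binary code. The parameters below (8,000 samples per second, 8 bits per sample) match the ones used in the digital telephone network; the tone, duration, and the simple linear quantizer are my own illustration (real telephony PCM uses a logarithmic companding curve).

```python
import math

def pcm_encode(freq_hz, duration_s=0.001, sample_rate=8000, bits=8):
    """Sample an analog tone and quantize each sample to an integer code."""
    levels = 2 ** bits
    n = int(sample_rate * duration_s)
    codes = []
    for i in range(n):
        # Instantaneous amplitude of the analog signal, in the range -1..1
        amplitude = math.sin(2 * math.pi * freq_hz * i / sample_rate)
        # Map it onto one of 256 discrete levels (0..255)
        codes.append(round((amplitude + 1) / 2 * (levels - 1)))
    return codes

# One millisecond of a 1 kHz tone: each analog instant becomes a binary number
samples = pcm_encode(1000)
print(samples)  # [128, 218, 255, 218, 128, 37, 0, 37]
```

Each of those integers is what a repeater regenerates: a discrete code, not a noisy analog level, which is why amplifying noise along with the voice stopped being a problem.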
How these ones and zeros are represented is what digital signaling is all
about. On 10BaseT Ethernet networks, data is represented by electrical voltage;
a one is represented by a transition from -2.05 V to 0 V and a zero is represented
by a transition from 0 V to -2.05 V. Over fiber-optic networks, a one
might be represented by a pulse of light and a zero by the absence of light. The
process isn't quite that simple, but the concept is basically the same.
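The voltage-transition scheme described above is Manchester encoding, which 10BaseT Ethernet uses: every bit is signaled by a transition in the middle of its bit time, so the signal also carries its own clock. A minimal sketch, using abstract -1/+1 levels rather than the actual voltages, and following the text's convention that a one is a low-to-high transition:

```python
def manchester(bits):
    """Encode bits as pairs of half-bit-time signal levels.

    Convention (matching the text): a one is a low-to-high transition,
    a zero is a high-to-low transition. Levels are abstract -1 and +1.
    """
    levels = []
    for bit in bits:
        levels += [-1, +1] if bit == 1 else [+1, -1]
    return levels

print(manchester([1, 0, 1]))  # [-1, 1, 1, -1, -1, 1]
```

Because every bit cell contains a transition, a long run of zeros or ones never produces a flat line, which keeps sender and receiver clocks synchronized.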
Excerpted from TCP/IP Analysis and Troubleshooting Toolkit
by Kevin Burns
Copyright © 2003 by Kevin Burns.