However, traditional networking protocols were built largely for non-real-time data with few bursty traffic requirements. The protocol stack at a network node is fixed, and network nodes manipulate protocols only up to the network layer. Newer protocols such as RTP and HTTP enable the network to transport other types of application data, such as real-time and multimedia data. Such protocols cater to the specific demands of the application data. Transporting these new data types over a legacy network requires transforming data of a new type into data of a type the network can carry. However, transforming the data to fit legacy protocol requirements hides the structure of the original data from the network. For example, embedding an MPEG frame in MIME format prevents one from easily recognizing whether it is an I, P, or B frame, which in turn prevents the network from taking suitable action on the frame during times of congestion. If information about the frame (e.g., its type: I, P, or B) is kept visible rather than converted into MIME while the frame itself is converted, then the goals of both encoding and congestion control are satisfied.
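As a toy illustration of the point above, the sketch below shows a queue that sheds B frames first under congestion, assuming the frame type is exposed to the node rather than buried inside the MIME encoding. The function and its drop policy are hypothetical, not drawn from any real router:

```python
# Hypothetical sketch: a congestion-aware queue that uses exposed MPEG
# frame-type metadata ("I", "P", or "B") to decide which frames to drop.
# If the type were hidden inside a MIME wrapper, the node could not make
# this decision. All names and thresholds here are illustrative.

def enqueue(queue, frame, frame_type, congested):
    """Append frame to queue; shed the least important frames under load."""
    if congested and frame_type == "B":
        return False   # B frames are discardable: no other frame depends on them
    if congested and frame_type == "P" and len(queue) > 8:
        return False   # under heavy load, shed P frames as well
    queue.append((frame_type, frame))
    return True        # I frames are always kept: P and B frames depend on them
```

The key design point is that the drop decision needs only the frame-type metadata, not the frame contents, so exposing that one field to the network is enough.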
Traditional protocol frameworks use layering as a composition mechanism. Protocols in one layer of the stack cannot guarantee anything about the properties of the layers beneath it. Each protocol layer is treated as a black box, and there is no mechanism to identify functional redundancies in the stack. Sometimes, protocols in different layers of the same stack need to share information. For example, TCP calculates its checksum over the TCP segment plus a pseudo-header containing fields, such as the source and destination addresses, taken from the IP header. This violates the modularity of the layering model, because the TCP module obtains that information by directly accessing the IP header. Furthermore, layering hides functionality, which can introduce redundancy into the protocol stack.

Introducing new protocols into the current infrastructure is a difficult and time-consuming process. A committee has to agree on the definition of a new protocol, which involves agreeing on its structure, states, algorithms, and functions. All these issues require consensus on the part of the committee that standardizes the protocol, and experience has shown that standardization is slow: the time from conceptualization of a protocol to its actual deployment in the network is usually measured in years. For example, work on the design of the Internet Protocol version 6 (IPv6) started in 1995, but the protocol has still not found widespread deployment. Once the standardization process is complete, it is followed by implementations of the protocol in all devices. However, variations in implementation by different network hardware vendors cause interoperability problems. Vendor implementations of a protocol may differ if the vendors provide value-added features in their devices or tweak the implementation to exploit hardware-specific features.
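The checksum example can be made concrete. The sketch below computes the standard Internet checksum (RFC 1071) over a TCP pseudo-header built from IP-layer fields, showing exactly which IP-header information the TCP module must reach down for:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, per RFC 1071."""
    if len(data) % 2:
        data += b"\x00"                       # pad odd-length data
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                        # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def tcp_checksum(src_ip: bytes, dst_ip: bytes, segment: bytes) -> int:
    """TCP's checksum covers a pseudo-header of IP-layer fields:
    source address, destination address, protocol number (6 for TCP),
    and segment length -- the layering violation discussed above."""
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, len(segment))
    return internet_checksum(pseudo + segment)
```

A strict layering model would forbid `tcp_checksum` from seeing `src_ip` and `dst_ip` at all, since those belong to the layer below.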
Another issue that vendors have to deal with is backward compatibility. A revision of a protocol may need to change the positions of the bits in the protocol header to accommodate more information. However, network devices upgraded with the new protocol must still support data that conforms to the earlier revision. For example, the address field in the Internet Protocol (version 4) is 32 bits, as defined in the standards document, which implies that the protocol (and hence the network) supports a maximum of 2^32 addresses. The tremendous growth of the Internet and the introduction of Internet-capable devices indicate that we are likely to run out of IP numbers in a very short time. Increasing the length of the address field in the IP header is not a solution to this problem, because implementing the revised protocol is a formidable task. Increasing the length of the address field shifts the positions of all succeeding fields in the protocol header. All software related to IP-layer processing relies on the fields being at their defined positions, so all existing communication software would have to be rewritten to accommodate the change. This would require updating the tens of thousands of existing routers and switches that implement IPv4 with new protocol software.
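The fixed-offset problem can be seen in a minimal IPv4 header parser: the address fields sit at byte offsets fixed by the standard, so widening them would shift every later field and silently break code like this. The parser below is an illustrative sketch, not production code:

```python
import struct

def parse_ipv4_addresses(header: bytes):
    """Extract source and destination addresses from an IPv4 header.
    The offsets are fixed by the standard: source address at bytes
    12-15, destination at 16-19. A revision that widened the address
    field would move these offsets and break every parser written
    this way -- which is why the header cannot simply be extended."""
    version = header[0] >> 4
    assert version == 4, "not an IPv4 header"
    src, dst = struct.unpack_from("!4s4s", header, 12)
    return src, dst
```

Every router, switch, and host stack contains logic equivalent to these hard-coded offsets, which is what makes an in-place change to the address field so costly.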
Active networking provides a flexible, programmable model of networking that addresses the concerns and limitations described above. In the active networking paradigm, the nodes of the network provide execution environments that allow execution of code dynamically loaded over the network. Thus, instead of standardizing individual protocols, the nodes of the network present a standardized execution platform for code-carrying packets. This approach eliminates the need for network-wide standardization of individual protocols, so new protocols and services can be introduced rapidly. Standardizing the execution platform implies that the format of the code inside the packets is also agreed upon, but users and developers can code their own custom protocols in the packets. The code may describe a new protocol for transporting video packets, or it may implement a custom routing algorithm for packets belonging to an application. The ability to introduce custom protocols breaks down barriers to innovation and enables developers to customize network resources to meet their application's needs effectively. With a standardized execution environment, the protocol designer develops a protocol for that environment and injects it into the network nodes for immediate deployment. This eliminates the need for a standards committee to design the protocol, for hardware vendors to implement it in their network devices, and for service providers to deploy the new devices in their networks. Application designers can write custom code that performs custom computation on the packets belonging to the application, customizing network resources to meet the immediate requirements of the application. Thus the programmable interface provided by active networking enables applications to interact with the network and adapt to underlying network characteristics.
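A toy sketch may help fix the idea of an execution environment for code-carrying packets. Here the packet's "code" is simply a Python function applied to the node, standing in for the portable, sandboxed code format a real active network would standardize; all names are illustrative:

```python
# Illustrative sketch of an active-network node. The node standardizes
# only the execution interface; the per-packet behavior is supplied by
# the packet itself. A real system would carry safe, portable bytecode
# rather than raw Python functions.

class ActiveNode:
    def __init__(self, name):
        self.name = name
        self.log = []

    def receive(self, packet):
        # The node knows nothing about the protocol in advance; it
        # simply runs the packet's code against its node interface.
        packet["code"](self, packet["payload"])

def custom_protocol(node, payload):
    """User-defined per-packet behavior, e.g. custom logging or routing."""
    node.log.append(payload.upper())
```

Deploying a new protocol then amounts to sending packets that carry `custom_protocol`, with no standards committee, vendor implementation, or hardware upgrade in the loop.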
Note that active networks differ from efforts underway in programmable networks. Active networks carry executable code within packets, while programmable networks focus on a standardized programming interface for network control.
Posted September 23, 2002
Learning more about Active Networks was my main drive for purchasing this book. After picking up this book, I'm glad I made the decision. This book covers pretty much all the main points about Active Networks and Active Network Management. It dives into the theory behind network management and proactive network management. The authors are well versed and respected in their field, and this book is a good display of their depth and level of understanding about the topics covered. A great addition to my bookshelf.