Private Cloud Computing: Consolidation, Virtualization, and Service-Oriented Infrastructure [NOOK Book]



NOOK Book (eBook)
$59.95
BN.com price

Overview

Private cloud computing enables you to consolidate diverse enterprise systems into one that is cloud-based and can be accessed by end-users seamlessly, regardless of their location or changes in overall demand. Expert authors Steve Smoot and Nam K. Tan distill their years of networking experience to describe how to build enterprise networks to create a private cloud. With their techniques you'll create cost-saving designs and increase the flexibility of your enterprise, while maintaining the security and control of an internal network. Private Cloud Computing offers a complete cloud architecture for enterprise networking by synthesizing WAN optimization, next-generation data centers, and virtualization in a network-friendly way, tying them together into a complete solution that can be progressively migrated to as time and resources permit.

  • Describes next-generation data center architectures such as the virtual access-layer, the unified data center fabric and the "rack-and-roll" deployment model
  • Provides an overview of cloud security and cloud management from the server virtualization perspective
  • Presents real-world case studies, configuration and examples that allow you to easily apply practical know-how to your existing enterprise environment
  • Offers effective private cloud computing solutions to simplify the costly and problematic challenge of enterprise networking and branch server consolidation

Editorial Reviews

From the Publisher
"This book aims at well experienced IT architects and professionals who can benefit from the technical solutions that the book itself reveals."—Computers and Security

Product Details

  • ISBN-13: 9780123849205
  • Publisher: Elsevier Science
  • Publication date: 11/26/2011
  • Sold by: Barnes & Noble
  • Format: eBook
  • Pages: 424
  • File size: 8 MB

Meet the Author

Stephen R. Smoot, Ph.D., helped start up Riverbed Technology in February 2003, and currently serves as senior vice president of technical operations, running the technical support, technical publications, technical marketing, and global consulting engineering groups. He spends his time thinking about where technology is going and helping customers to solve their problems. Smoot previously worked on acceleration and video at Inktomi Corporation (now a part of Yahoo). He joined Inktomi, following its acquisition of FastForward Networks, which designed overlay network technology for streaming video with millions of viewers over the Internet. Smoot previously worked at Imedia (Motorola), Honeywell, and IBM. Smoot received his doctorate in computer science from the University of California at Berkeley, working with Lawrence Rowe. His dissertation, “Maximizing Perceived Quality at Given Bit-Rates in MPEG Encoding,” describes various aspects of creating MPEG video from its original video source. He also holds a master’s degree in computer science from the University of California, Berkeley. His undergraduate education was at MIT where he received bachelor’s degrees in computer science and mathematics.

Nam-Kee Tan, CCIE #4307, has been in the networking industry for more than 16 years. He is a dual CCIE in routing and switching and service provider, and has been an active CCIE for more than 10 years. His areas of specialization include advanced IP services, network management solutions, MPLS applications, L2/L3 VPN implementations, next-generation data center technologies, and storage networking. Nam-Kee is currently the lead network architect in the Riverbed advanced network engineering team, where he designs and deploys cloud computing service infrastructures and virtual data center solutions for Riverbed enterprise and service provider customers. Nam-Kee also advises internal Riverbed engineers in the area of next-generation service provider technologies. Nam-Kee is the author of Configuring Cisco Routers for Bridging, DLSw+, and Desktop Protocols (1999, ISBN 0071354573); Building VPNs with IPSec and MPLS (2003, ISBN 0071409319); and MPLS for Metropolitan Area Networks (2004, ISBN 084932212X); and is co-author of Building Scalable Cisco Networks (2000, ISBN 0072124776). He holds a master’s degree in data communications from the University of Essex, UK, and an MBA from the University of Adelaide, Australia.


Read an Excerpt

Private Cloud Computing

Consolidation, Virtualization, and Service-Oriented Infrastructure
By Stephen R. Smoot and Nam K. Tan

Morgan Kaufmann

Copyright © 2012 Elsevier, Inc.
All rights reserved.

ISBN: 978-0-12-384920-5


Chapter One

Next-Generation IT Trends

Architecture is the reaching out for the truth.
Louis Kahn

INFORMATION IN THIS CHAPTER:

• Layers of Function: The Service-Oriented Infrastructure Framework

• Blocks of Function: The Cloud Modules

• Cloud Computing Characteristics

• Cloud Computing Taxonomy

• Summary

INTRODUCTION

This book is about building a next-generation IT infrastructure. To understand what that means, one needs to start by looking at what constitutes current-generation IT infrastructure. But how did we arrive at the current infrastructure? To get a sensible perspective on that, it helps to back up and look at where computing started.

In the early days of computing, there was a tight connection among users, computers, and applications. A user would typically have to be in the same building as the computer, if not in the very same room. There would be little or no ambiguity about which computer the application was running on. This description holds up when one thinks about the "early days" as referring to an ENIAC, an IBM 360, or an Apple II.

Since those days, enterprise IT organizations have increasingly used networking technologies to allow various kinds of separation. One set of technologies that goes by the name of storage networking allows computers and the storage underpinning applications to be separated from each other to improve operational efficiency and flexibility. Another set of technologies called local-area networking allows users to be separated from computing resources over small (campus-size) distances. Finally, another set of technologies called wide-area networking allows users to be separated from computing resources by many miles, perhaps even on the other side of the world. Sometimes practitioners refer to these kinds of networks or technologies by the shorthand SAN, LAN, and WAN (storage-area network, local-area network, wide-area network, respectively). The most familiar example of a WAN is the Internet, although it has enough unique characteristics that many people prefer to consider it a special case, distinct from a corporate WAN or a service provider backbone that might constitute one of its elements.

It is worth considering in a little more detail why these forms of separation are valuable. Networking delivers obvious value when it enables communication that is otherwise impossible, for example, when a person in location A must use a computer in location B, and it is not practical to move either the person or the computer to the other location. However, that kind of communication over distance is clearly not the motivation for storage networking, where typically all of the communicating entities are within a single data center. Instead, storage networking involves the decomposition of server/storage systems into aggregates of servers talking to aggregates of storage.

New efficiencies are possible with separation and consolidation. Here's an example: suppose that an organization has five servers and each uses only 20% of its disk. It turns out that it's typically not economical to buy smaller disks, but it is economical to buy only two or three disks instead of five, and share those among the five servers. In fact, the shared disks can be arranged into a redundant array of independent disks (RAID) configuration that will allow the shared disks to handle a single disk failure without affecting data availability—all five servers can stay up despite a disk failure, something that was not possible with the disk-per-server configuration. Although the details vary, these kinds of cost savings and performance improvements are typical for what happens when resources can be aggregated and consolidated, which in turn typically requires some kind of separation from other kinds of resources.
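The arithmetic behind this example is worth making concrete. The sketch below uses hypothetical figures (500 GB disks, a RAID 5 style pool that spends one disk's worth of capacity on parity); it illustrates the trade-off rather than prescribing a sizing method:

```python
# Toy numbers for the consolidation example above: five servers, each
# using only 20% of a dedicated disk, versus a shared three-disk pool.
# All figures are illustrative, not taken from the book.

DISK_CAPACITY_GB = 500          # hypothetical per-disk capacity
servers = 5
utilization = 0.20              # each server fills 20% of its own disk

# Disk-per-server: five disks, mostly empty.
dedicated_disks = servers
used_gb = servers * utilization * DISK_CAPACITY_GB   # data actually stored

# Shared pool: the same data fits on two data disks, plus one disk's
# worth of parity (RAID 5 style) to survive a single disk failure.
shared_disks = 3
usable_gb = (shared_disks - 1) * DISK_CAPACITY_GB    # parity costs one disk

print(f"data stored: {used_gb:.0f} GB")
print(f"disks needed: {dedicated_disks} dedicated vs {shared_disks} shared")
print(f"pool utilization: {used_gb / usable_gb:.0%}")
```

With the shared pool, the same 500 GB of data sits at 50% utilization instead of 20%, and a single disk failure no longer takes any server down.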

Although these forms of networking (SAN, LAN, WAN) grew up more or less independently and evolved separately, all forms of networking are broadly equivalent in offering the ability to transport bit patterns from some origin point to some destination point. It is perhaps not surprising that over time they have borrowed ideas from each other and started to overlap or converge. Vendors offering "converged" or "unified" data center networking are effectively blurring the line between LAN and SAN, while vendors offering "WAN optimization" or "virtual private LAN services" are encouraging reconsideration of the separation between LAN and WAN.

Independently of the evolution of networking technologies, IT organizations have increasingly used virtualization technologies to create a different kind of separation. Virtualization creates the illusion that the entire computer is available to execute a program while the physical hardware might actually be shared by multiple such programs. Virtualization allows the logical server (executing program) to be separated cleanly from the physical server (computer hardware). Virtualization dramatically increases the flexibility of an IT organization, by allowing multiple logical servers—possibly with radically incompatible operating systems—to share a single physical server, or to migrate among different servers as their loads change.

Partly as a consequence of the rise of the Internet, and partly as a consequence of the rise of virtualization, there is yet a third kind of technology that is relevant for our analysis. Cloud services offer computing and storage accessed over Internet protocols in a style that is separated not only from the end-users but also from the enterprise data center.

A cloud service must be both elastic and automatic in its provisioning—that is, an additional instance of the service can simply be arranged online without manual intervention. Naturally, this also leads to requirements of automation with respect to both billing for and terminating service, or else those areas would become operational bottlenecks for the service. The need for elastic automatic provisioning, billing, and termination in turn demands the greatest possible agility and flexibility from the infrastructure.
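The elasticity requirement can be sketched as a toy model. The class and method names below are illustrative, not any real provider's API; the point is that scaling and billing happen as side effects of demand, with no manual step:

```python
class ElasticService:
    """Toy model of elastic, automatic provisioning: instances are added
    and removed online, and billing records accumulate without manual
    intervention. Illustrative only; not a real provider API."""

    def __init__(self, capacity_per_instance=100):
        self.capacity_per_instance = capacity_per_instance
        self.instances = 0
        self.billing_log = []   # (old_count, new_count) transitions

    def scale_to(self, demand):
        # Ceiling division: just enough instances to cover current demand.
        needed = -(-demand // self.capacity_per_instance) if demand else 0
        if needed != self.instances:
            self.billing_log.append((self.instances, needed))
            self.instances = needed
        return self.instances

svc = ElasticService()
svc.scale_to(250)   # demand rises: provisioned automatically
svc.scale_to(90)    # demand falls: scaled back, transition billed
print(svc.instances, svc.billing_log)
```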

If we want to build a cloud service—whether public or private, application focused or infrastructure focused—we have to combine the best available ideas about scaling, separation of concerns, and consolidating shared functions. Presenting those ideas and how they come together to support a working cloud is the subject of this book.

There are two styles of organization for the information presented in the rest of the book. The first is a layered organization and the other is a modular organization. The next two sections explain these two perspectives.

LAYERS OF FUNCTION: THE SERVICE-ORIENTED INFRASTRUCTURE FRAMEWORK

There are so many different problems to be solved in building a next-generation infrastructure that it's useful to organize the approach into layers. The top layer supplies various kinds of fantastically powerful, incredibly flexible services to end-users. The bottom layer is a collection of off-the-shelf hardware of various kinds—servers, storage, networking routers and switches, and long-distance telecom services. The intervening layers use the relatively crude facilities of the lower layers to build a new set of more sophisticated facilities.

This particular layered model is called a service-oriented infrastructure (SOI) framework and is illustrated in Figure 1.1. The layer immediately above the physical hardware is concerned with virtualization—reducing or eliminating the limitations associated with using particular models of computers, particular sizes of disks, and so on. The layer above that is concerned with management and provisioning—associating the idealized resources with the demands being placed on them. A layer above management and provisioning exports these automated capabilities in useful combinations through a variety of network interfaces, allowing the resources to be used equally well for high-level cloud software as a service (SaaS) and lower-level cloud infrastructure as a service (IaaS).

In the course of discussing choices and trade-offs to be made, there will be references to these layers.

BLOCKS OF FUNCTION: THE CLOUD MODULES

Throughout the book, while keeping in mind the SOI structure, the chapters are organized around a different paradigm: consider a cloud computer to be made of various modules roughly similar to the central processing unit (CPU), RAM, bus, disk, and so on that are familiar elements of a conventional computer. As illustrated in Figure 1.2, there are several modules making up this functional layout:

• Server module

• Storage module

• Fabric module

• WAN module

• End-user type I—branch office

• End-user type II—mobile

Server module

The server module is analogous to the CPU of the cloud computer. The physical servers or server farm within this module form the core processors. It is "sandwiched" between a data center network and a storage area network.

As previously noted, server virtualization supports multiple logical servers or virtual machines (VMs) on a single physical server. A VM behaves exactly like a standalone server, but it shares the hardware resources (e.g., processors, disks, network interface cards, and memory) of the physical server with the other VMs. A virtual machine monitor (VMM), often referred to as a hypervisor, makes this possible. The hypervisor provides each guest operating system (OS) with a VM and monitors the execution of these guest OSs. In this manner, different OSs, as well as multiple instances of the same OS, can share the same hardware resources on the physical server. Figure 1.3 illustrates the simplified architecture of VMs.

Server virtualization reduces and consolidates the number of physical server units required in the data center, while at the same time increasing the average utilization of these servers. For more details on server consolidation and virtualization, see Chapter 2, Next-Generation Data Center Architectures and Technologies.
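Placing logical servers onto fewer physical servers is, at heart, a packing problem. The sketch below uses a deliberately simple first-fit heuristic with illustrative integer CPU percentages; real placement also weighs memory, I/O, and failover affinity:

```python
def first_fit_consolidate(vm_loads, host_capacity=100):
    """Pack VM CPU loads (in percent) onto as few physical hosts as
    possible using first-fit. A simple sketch of the consolidation idea,
    not a production placement algorithm."""
    hosts = []  # each entry is the summed load on one physical server
    for load in vm_loads:
        for i, used in enumerate(hosts):
            if used + load <= host_capacity:
                hosts[i] += load   # VM fits on an already-running host
                break
        else:
            hosts.append(load)     # no host had room: power on another
    return hosts

# Ten lightly loaded servers (10-30% CPU each) fit on three hosts.
loads = [20, 10, 30, 25, 15, 20, 30, 10, 25, 20]
hosts = first_fit_consolidate(loads)
print(len(hosts), hosts)
```

Ten physical servers collapse to three, and the first host runs at 100% of the packing budget instead of the 10-30% each machine saw before consolidation.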

Storage module

The storage module provides data storage for the cloud computer. It comprises the SAN and the storage subsystem that connects storage devices such as just a bunch of disks (JBOD), disk arrays, and RAID to the SAN. For more details on SAN-based virtualization, see Chapter 2.

SAN extension

SAN extension is required when there are one or more storage modules (see Figure 1.2) across the "cloud" (WAN module) for remote data replication, backup, and migration purposes. SAN extension solutions include Wavelength-Division Multiplexing (WDM) networks, Time-Division Multiplexing (TDM) networks, and Fibre Channel over IP (FCIP). For more details on SAN extension solutions, see Chapter 7, SAN Extensions and IP Storage.

Fabric module

The fabric module functions somewhat like a cloud computer bus system that transfers data between the various cloud computing modules. In Figure 1.2, the server farm in the server module is sandwiched between a data center network (typically Ethernet) and a SAN, which is really a Fibre Channel (FC) fabric. The SAN is referred to as an isolated fabric topology. FC SANs are also known as SAN islands because FC uses a wholly different protocol stack from TCP/IP.

The main impetus of the fabric module is to transform this isolated fabric topology (IFT) into a unified fabric topology (UFT). How is this UFT achieved? The short answer is to extend Ethernet, or more precisely, to encapsulate Fibre Channel over Ethernet (FCoE). The previous deterrent to using Ethernet as the basis for a unified fabric was its limited bandwidth. With the advent of 10-gigabit Ethernet, the available bandwidth now makes it feasible to consolidate various traffic types over the same link. For more information on FCoE, see Chapter 2.
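The encapsulation itself can be pictured in a few lines. In the sketch below, 0x8906 is the EtherType registered for FCoE; the real FCoE header also carries a version field, reserved bytes, and SOF/EOF frame delimiters, which are omitted here for brevity:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # registered EtherType for FCoE

def encapsulate_fcoe(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap an unmodified Fibre Channel frame in an Ethernet frame.
    Simplified sketch: the FCoE version field, reserved bytes, and
    SOF/EOF delimiters of the real encapsulation are left out."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return eth_header + fc_frame   # FC frame rides inside, untouched

frame = encapsulate_fcoe(b"\xff" * 6, b"\xaa" * 6, b"FC-PAYLOAD")
print(len(frame), hex(struct.unpack("!H", frame[12:14])[0]))
```

The key property is visible even in this toy: the FC frame is carried intact, so FC-aware endpoints see ordinary Fibre Channel while the wire between them is plain (lossless) Ethernet.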

WAN module

The WAN module is the enterprise's intranet (internal access), extranet (business-to-business access), Internet (public access) over a WAN, and metropolitan-area network (MAN). From the cloud computing user's perspective, the WAN module provides access to the cloud. The main purpose of the WAN module is to extend the cloud computer access to local or remote campuses, branches or remote offices, teleworkers or home offices, and mobile users or road warriors. The actual connectivity provided by the WAN module is accomplished using a variety of network technologies, including long-haul fiber networks and mobile technologies such as 802.11 wireless Ethernet.

Network virtualization

Because end-users require some level of isolation from one another and from each other's computing resources, one of the core requirements for the cloud computing environment is the creation of independent or isolated logical traffic paths over a shared physical network infrastructure and, for that matter, across the WAN.
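The isolation requirement can be modeled in miniature: one shared transport, many logical segments, and no way for one tenant to see another's traffic (much as a VLAN or VRF partitions a shared network). All names below are illustrative:

```python
class SharedFabric:
    """Toy model of network virtualization: one physical network carries
    traffic for many tenants, but each tenant can reach only its own
    logical segment. Illustrative only."""

    def __init__(self):
        self.segments = {}  # tenant -> messages delivered on its segment

    def attach(self, tenant):
        self.segments.setdefault(tenant, [])

    def send(self, tenant, message):
        if tenant not in self.segments:
            raise PermissionError(f"{tenant} has no logical path")
        self.segments[tenant].append(message)  # invisible to other tenants

    def receive(self, tenant):
        return list(self.segments.get(tenant, []))

fabric = SharedFabric()
fabric.attach("tenant-a")
fabric.attach("tenant-b")
fabric.send("tenant-a", "db replication traffic")
print(fabric.receive("tenant-a"), fabric.receive("tenant-b"))
```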

(Continues...)



Excerpted from Private Cloud Computing by Stephen R. Smoot and Nam K. Tan. Copyright © 2012 by Elsevier, Inc. Excerpted by permission of Morgan Kaufmann. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.


Table of Contents

Chapter 1 – Next-Generation IT Trends
  • Layers of Function: The Service-Oriented Infrastructure Framework
  • Blocks of Function: The Cloud Modules
  • Cloud Computing Characteristics
  • Cloud Computing Taxonomy


Chapter 2 – Next-Generation Data Center Architectures and Technologies
  • The Data Center Consolidation and Virtualization Modus Operandi
  • Server Consolidation Drivers
  • Server Virtualization
  • Storage Virtualization
  • Layer 2 Evolutions
  • Unified Data Center Fabric


Chapter 3 – Next-Generation WAN and Service Integration
  • Service Integration in the Data Center
  • Infrastructure Segmentation
  • The Next-Generation Enterprise WAN


Chapter 4 – Branch Consolidation and WAN Optimization
  • What is the WAN performance challenge?
  • WAN Optimization Benefits
  • Requirements for WAN Optimization Deployment
  • Remote Office Virtualization Designs


Chapter 5 – Session Interception Design and Deployment
  • Selecting an Interception Mechanism
  • The WCCP Dive
  • In-path Deployment In-Brief
  • PBR Deployment Overview


Chapter 6 – WAN Optimization in the Private Cloud
  • WAN Optimization Requirements in the Cloud
  • Interception at the VM Level
  • Cloud Interception with VRF-Aware WCCP
  • Cloud Interception with Non VRF-Aware WCCP
  • Interception at the Services Aggregation Layer


Chapter 7 – SAN Extensions and IP Storage
  • SAN Extension Overview
  • Optical Networking Solutions
  • SONET/SDH Services
  • FCIP
  • iSCSI
  • Putting It All Together


Chapter 8 – Cloud Infrastructure as a Service
  • Cloud Security
  • Unified Computing System
  • Cloud Management
  • Cloud IaaS: The Big Picture


Chapter 9 – Case Studies
  • Virtual Access-Layer Design Study
  • ERSPAN Design Study
  • Unified Fabric Design Study
  • Top-of-Rack Architecture Design Study
  • Basic vPC Design Study
  • SAN Extension Design Study
  • Service-Oriented Infrastructure Design Study

Customer Reviews

  • Posted December 31, 2011

    A VERY VERY HIGHLY RECOMMENDED EXCELLENT BOOK!!

    Are you a network engineer, solution architect, internetworking professional, IT manager, CIO, or service provider? If so, then this book is for you! Authors Stephen R. Smoot and Nam K. Tan have done an outstanding job of writing a book that enables you to consolidate services from data centers and remote branch offices. Smoot and Tan begin by showing you how to build a next-generation IT infrastructure. In addition, the authors discuss DC virtualization, particularly server and storage virtualization. They then discuss the virtualization and integration of intelligent service elements such as firewalls and server load balancers in the services aggregation layer, before moving on to tackle the various next-generation WAN technologies for the WAN module of the enterprise. The authors then examine the WAN performance problem from a theoretical viewpoint to derive the required elements of a solution. Next, the authors focus on the design and deployment of the various WAN optimization interception and redirection mechanisms in the DC. In addition, the authors further elaborate on the session interception techniques that can be deployed in private cloud computing building blocks. They then look at a different private cloud computing building block: the storage module. Next, the authors cover the various security considerations from the server virtualization perspective when implementing cloud IaaS offerings. Finally, they conclude with a number of case studies that include design as well as deployment topics, basic configurations, and tutorials on some of the more complex concepts. The primary goal of this most excellent book is to allow you to leverage WAN optimization in order to keep performance high. Perhaps more importantly, this book focuses on building a routing and switching platform to provide a foundation for cloud computing services.

