Professional Application Lifecycle Management with Visual Studio 2010

Overview

Get up to speed on Application Lifecycle Management (ALM) with Visual Studio 2010 through a combination of hands-on instruction and deep-dives.

Microsoft has packed a lot of brand new testing and modeling tools into Visual Studio 2010, tools that previously were available only to Microsoft internal development teams. Developers will appreciate the focus on practical implementation techniques and best practices.

A team of Microsoft insiders provides a nuts-and-bolts approach. This Wrox guide is designed as both a step-by-step guide and a reference for modeling, designing, and coordinating software development solutions at every level using Visual Studio 2010 and Visual Studio Team Foundation Server 2010.

Visual Studio 2010 offers a complete lifecycle management system that covers modeling, testing, code analysis, collaboration, and build and deployment tools.


Product Details

  • ISBN-13: 9780470484265
  • Publisher: Wiley
  • Publication date: 4/12/2010
  • Edition number: 1
  • Pages: 696
  • Product dimensions: 7.40 (w) x 9.20 (h) x 1.40 (d) inches

Meet the Author

MICKEY GOUSSET is a Senior Technical Developer for Infront Consulting Group, a consulting company focused on the Microsoft System Center family of products. He has been a Microsoft Team System MVP five years running, is a certified professional in Team Foundation Server and SCOM 2007, and is co-author (along with Jean-Luc David and Erik Gunvaldson) of the book Professional Team Foundation Server (Indianapolis: Wiley, 2006). Gousset runs “Team System Rocks!” (http://www.teamsystemrocks.com), a community site devoted to Visual Studio Team System and Visual Studio 2010, where he also blogs about Visual Studio and Team Foundation Server. He is also a co-host of the popular Team Foundation Server podcast, “Radio TFS” (http://www.radiotfs.com). He has spoken on Visual Studio and Team Foundation Server topics at various user groups, code camps, and conferences, including Microsoft Tech Ed Developer — North America 2008 and 2009. When not writing or working with computers, Mickey enjoys a range of hobbies, from playing on Xbox Live (“Gamer Tag: HereBDragons”) to participating in local community theater. Nothing beats his favorite pastime, though: sitting on his couch with his lovely wife Amye and their two Chihuahuas, Lucy and Linus.

BRIAN KELLER is a Senior Technical Evangelist for Microsoft, specializing in Visual Studio and application lifecycle management. Keller has been with Microsoft since 2002, and has presented at conferences all over the world, including TechEd, Professional Developers Conference (PDC), and MIX. Keller is also a regular personality on MSDN’s Channel 9 Web site, and is co-host of the popular show, “This Week on Channel 9.” Outside of work, he can usually be found enjoying the great outdoors while either rock climbing, backpacking, skiing, or surfing.

AJOY KRISHNAMOORTHY is a Senior Product Manager in the Microsoft Patterns and Practices group. In this role, he focuses on planning the areas of investment and business strategy for Patterns and Practices. Prior to this role, Krishnamoorthy worked as a Senior Product Manager for Microsoft Visual Studio Team System. He has more than ten years of consulting experience, playing a variety of roles, including developer, architect, and technical project manager. Krishnamoorthy has written articles for online and print magazines, and co-authored several books on ASP.NET. You can check out his blog at http://blogs.msdn.com/ajoyk. Krishnamoorthy has an MBA from Ohio State University. Any spare time is spent with his family, playing board/card games with friends, watching sports (especially when the Ohio State Buckeyes are playing), and learning to play “Tabla.”

MARTIN WOODWARD is currently the Program Manager for the Microsoft Visual Studio Team Foundation Server Cross-Platform Tools Team. Before joining Microsoft, Woodward was voted Team System MVP of the Year, and has spoken about Team Foundation Server at events internationally. Not only does Woodward bring unique insight into the inner workings of the product, gained from more than a half-decade of real-world use at companies big and small, but he is also always happy to share it. When not working or speaking, Woodward can be found at his blog, http://www.woodwardweb.com.


Read an Excerpt

Professional Application Lifecycle Management with Visual Studio 2010


By Mickey Gousset, Brian Keller, Ajoy Krishnamoorthy, and Martin Woodward

John Wiley & Sons

Copyright © 2010 John Wiley & Sons, Ltd
All rights reserved.

ISBN: 978-0-470-48426-5


Chapter One

Introduction to Software Architecture

WHAT'S IN THIS CHAPTER?

* Why designing visually is important

* Microsoft's approach to a modeling strategy

* Modeling tools in Visual Studio 2010 Ultimate

In this introductory chapter, you'll learn about the main themes - domain-specific languages (DSLs), model-driven development (MDD), and the Unified Modeling Language (UML) - and how they apply to Visual Studio 2010 Ultimate. As part of this discussion, you'll learn what Microsoft has to say on those subjects, as well as some impartial views from the authors.

This chapter examines the evolution of distributed computing architectures - from simple object-oriented development, through component and distributed-component design, to the service-oriented architectures (SOAs) that represent the current state of the art.

This chapter wraps up with a brief glimpse at the new architecture tools in Visual Studio 2010. New modeling tools, as well as support for the most common Unified Modeling Language diagrams, have been added to Visual Studio 2010, making the architecture tools first-class citizens in the product.

Let's begin by first establishing the case for even undertaking visual modeling - or visual design - in the first place.

DESIGNING VISUALLY

Two elementary questions immediately come to mind. Why design at all, rather than just code? Why design visually?

To answer the first question, consider the common analogy of building complex physical structures, such as bridges. Crossing a small stream requires only a plank of wood - no architect, no workers, and no plans. Building a bridge across a wide river requires a lot more - a set of plans drawn up by an architect so that you can order the right materials, planning the work, communicating the details of the complex structure to the builders, and getting a safety certificate from the local authority. It's the same with software. You can write a small program by diving straight into code, but building a complex software system will require some forethought. You must plan it, communicate it, and document it to gain approval.

Therefore, the four aims of visual design are as follows:

* To help you visualize a system you want

* To enable you to specify the structure or behavior of a system

* To provide you with a template that guides you in constructing a system

* To document the decisions you have made

Traditionally, design processes like the Rational Unified Process have treated design and programming as separate disciplines, at least in terms of tools support. You use a visual modeling tool for design, and a separate integrated development environment (IDE) for coding. This makes sense if you treat software development like bridge building, and assume that the cost of fixing problems during implementation is much higher than the cost of fixing those problems during design.

For bridges, that is undoubtedly true. But in the realm of software development, is it really more costly to change a line of code than it is to change a design diagram? Moreover, just as bridge designers might want to prototype aspects of their design using real materials, so might software designers want to prototype certain aspects of their design in real code.

For these reasons, the trend has been toward tools that enable visual design and coding within the same environment, with easy switching between the two representations, thus treating design and coding as essentially two views of the same activity. The precedent was set originally in the Java space by tools such as Together-J and, more recently, in the .NET space by IBM-Rational XDE, and this approach has been embraced fully by Visual Studio 2010 Ultimate.

Now, let's tackle the second question. If the pictorial design view and the code view are alternative but equivalent representations, then why design visually at all? The answer to that question is simple: A picture paints a thousand words. To test that theory, just look at the figures in this chapter and imagine what the same information would look like in code. Then imagine trying to explain the information to someone else using nothing but a code listing.

MICROSOFT'S MODELING STRATEGY

As mentioned, Microsoft's Visual Studio 2010 modeling strategy is based on a couple of ideas:

* Domain-specific languages (DSLs)

* Model-driven development (MDD)

These topics together comprise Microsoft's new vision for how to add value to the software development process through visual modeling.

First, let's set the scene. The Object Management Group (OMG) has a licensed brand called Model-Driven Architecture (MDA). MDA is an approach to MDD based on constructing platform-independent UML models (PIMs) supplemented with one or more platform-specific models (PSMs). Microsoft also has an approach to MDD, based not on the generic UML but rather on a set of tightly focused DSLs. This approach to MDD is part of a Microsoft initiative called software factories, which, in turn, is part of a wider Dynamic Systems Initiative.

If you would like a more in-depth exploration of software factories, check out the book Software Factories: Assembling Applications with Patterns, Models, Frameworks, and Tools, written by Jack Greenfield, Keith Short, Steve Cook, and Stuart Kent (Indianapolis: Wiley, 2004).

Understanding Model-Driven Development

As a software designer, you may be familiar with the "code-generation" features provided by UML tools such as Rational Rose and IBM-Rational XDE. These tools typically do not generate "code" at all but merely "skeleton code" for the classes you devise. So, all you get is one or more source files containing classes populated with the attributes and operation signatures that you specified in the model.

The words "attribute" and "operation" are UML terminology. In the .NET world, these are often referred to as "field" and "method," respectively.
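
To make that concrete, here is a hypothetical C# sketch of the kind of skeleton such a tool might emit for a modeled Order class. The class and member names are invented for illustration; no particular tool's output is being reproduced.

    using System;

    // Hypothetical generator output: the signatures come from the model,
    // but the method bodies are left empty.
    public class Order
    {
        // UML "attributes" become C# fields.
        public int OrderId;
        public DateTime PlacedOn;

        // UML "operations" become C# methods with complete signatures
        // and no implementation.
        public void AddItem(string sku, int quantity)
        {
            throw new NotImplementedException();
        }

        public decimal CalculateTotal()
        {
            throw new NotImplementedException();
        }
    }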

As stated in Microsoft's modeling strategy, this leads to a problem:

"If the models they supported were used to generate code, they typically got out of sync once the developers added other code around the generated code. Even products that did a good job of 'round tripping' the generated code eventually overwhelmed developers with the complexity of solving this problem. Often, these problems were exacerbated, because CASE tools tried to operate at too high a level of abstraction relative to the implementation platform beneath. This forced them to generate large amounts of code, making it even harder to solve the problems caused by mixing handwritten and generated code."

The methods that are generated for each class by UML code-generation tools typically have complete signatures but empty bodies. This seems reasonable enough, because, after all, the tool is not psychic. How would it know how you intend to implement those methods? Well, actually, it could know.

UML practitioners spend hours constructing dynamic models such as statecharts and sequence diagrams that show how objects react (to method invocations) and interact (invoke methods on other objects). Yet, that information, which could be incorporated into the empty method bodies, is lost completely during code generation.
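
For instance, if a sequence diagram showed Order asking each of its line items for a subtotal, that interaction could, in principle, have been carried through into the generated body. The following sketch continues the hypothetical Order example above; the LineItem type and all names are invented for illustration.

    using System.Collections.Generic;

    // Hypothetical supporting type from the same model.
    public class LineItem
    {
        public decimal UnitPrice;
        public int Quantity;

        public decimal Subtotal()
        {
            return UnitPrice * Quantity;
        }
    }

    public class Order
    {
        private readonly List<LineItem> lineItems = new List<LineItem>();

        public void AddItem(decimal unitPrice, int quantity)
        {
            lineItems.Add(new LineItem { UnitPrice = unitPrice, Quantity = quantity });
        }

        // The body a generator could have derived from the sequence diagram:
        // Order sends a message to each LineItem and sums the replies.
        public decimal CalculateTotal()
        {
            decimal total = 0m;
            foreach (LineItem item in lineItems)
            {
                total += item.Subtotal();
            }
            return total;
        }
    }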

Note that not all tools lose this kind of information during code generation, but most of the popular ones do. In addition, in some cases, UML tools do generate code within method bodies - for example, when you apply patterns using IBM-Rational XDE - but, in general, the point is valid.

Why do UML tools generally not take account of the full set of models during code generation? In part, it's because software designers do not populate the other models with enough precision for the information to be useful in generating method bodies. The main reason for that is that the notation (UML) and the tools simply do not allow for the required level of precision.

What does this have to do with MDD? Well, MDD is all about getting maximum value out of the modeling effort, by taking as much information as possible from the various models right through to implementation. As Microsoft puts it:

"Our vision is to change the way developers perceive the value of modeling. To shift their perception that modeling is a marginally useful activity that precedes real development; to recognition that modeling is an important mainstream development task ..."

Although the example of UML dynamic modeling information finding its way into implemented method bodies was useful in setting the scene, don't assume that MDD is only (or necessarily) about dynamic modeling. If you've ever constructed a UML deployment model and then tried to do something useful with it - such as generate a deployment script or evaluate your deployment against the proposed logical infrastructure - you will have seen how much of that effort is wasted, beyond generating some documentation.

So, what's the bottom line? Because models are regarded as first-class development artifacts, developers write less conventional code, and development is, therefore, more productive and agile. In addition, it fosters a perception among all participants - developers, designers, analysts, architects, and operations staff - that modeling actually adds value to their efforts.

Understanding Domain-Specific Languages

UML fails to provide the kind of high-fidelity domain-specific modeling capabilities required by automated development. In other words, if you want to automate the mundane aspects of software development, then a one-size-fits-all generic visual modeling notation will not suffice. What you need is one or more DSLs (or notations) highly tuned for the task at hand - whether that task is the definition of Web services, the modeling of a hosting environment, or traditional object design.

A DSL is a modeling language that meets certain criteria. For example, a modeling language for developing Web services should contain concepts such as Web methods and protocols. The modeling language should also use meaningful names for concepts, such as fields and methods (for C#), rather than attributes and operations. The names should be drawn from the natural vocabulary of the domain.

The DSL idea is not new, and you may already be using a DSL for database manipulation (it's called SQL) or XML schema definition (it's called XSD).

Visual Studio 2010 Ultimate embraces this idea by providing the capability to create DSLs for specific tasks. DSLs enable visual models to be used not only for creating design documentation, but also for capturing information in a precise form that can be processed easily, raising the prospect of compiling models into code.

The only DSL that Visual Studio 2010 Ultimate provides "out of the box" is the UML support. Users have the capability to create their own DSLs using the DSL toolkit.

In that context, "your own problem domain" need not be technology-focused (such as how to model Web services or deployment infrastructures) but may instead be business-focused. You could devise a DSL highly tuned for describing banking systems or industrial processes.
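
As a rough illustration of the idea, here is a tiny internal (fluent) DSL for a banking domain, embedded in C# rather than built graphically with the DSL toolkit. All names are invented; the point is only that the vocabulary (Account, Deposit, Withdraw) comes from the problem domain rather than from implementation concepts.

    using System;

    public class Account
    {
        public string Number { get; private set; }
        public decimal Balance { get; private set; }

        private Account(string number) { Number = number; }

        public static Account Open(string number)
        {
            return new Account(number);
        }

        public Account Deposit(decimal amount)
        {
            Balance += amount;
            return this; // returning 'this' gives the fluent, sentence-like style
        }

        public Account Withdraw(decimal amount)
        {
            if (amount > Balance)
                throw new InvalidOperationException("Insufficient funds");
            Balance -= amount;
            return this;
        }
    }

    public static class Demo
    {
        public static void Main()
        {
            // Reads close to a domain expert's own vocabulary.
            var account = Account.Open("12-345-678").Deposit(500m).Withdraw(120m);
            Console.WriteLine(account.Balance); // 380
        }
    }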

FROM OBJECTS TO SERVICES

The design features provided by Visual Studio 2010 Ultimate have been influenced not only by Microsoft's vision for MDD but also by a technological evolution from object-based architectures, through (distributed) component-based architectures, to the SOAs that represent the current best practice in distributed system design.

Understanding Objects and Compile-Time Reuse

When object-oriented programming (OOP) became popular in the mid-1990s, it was perceived as a panacea. In theory, by combining state (data) and behavior (functions) in a single code unit, you would have a perfectly reusable element - a cog to be used in a variety of machines.

The benefit was clear. There would be no more searching through thousands of lines of code to find every snippet that manipulated a date - remember the Y2K problem? By encapsulating all date-manipulation functionality in a single Date class, you would be able to solve such problems at a stroke.

Object orientation turned out not to be a panacea after all, for many reasons, including (but not limited to) bad project management (too-high expectations), poor programming (writing procedural code dressed up with objects), and inherent weaknesses in the approach (such as tight coupling between objects).

For the purposes of this discussion, let's concentrate on one problem in particular, which is the style of reuse that objects encouraged - what you might call copy-and-paste reuse.

Consider the following copy-and-paste reuse scenario. You discover that your colleague has coded an object - call it Book - that supports exactly the functionality you need in your application. You copy the entire source code for that object and paste it into your application.

Yes, it has saved you some time in the short term, but now look a little farther into the future.

Suppose the Book class holds fields for Title and ISBN, but in your application, you now need to record the author. You add a new field into your copy of the Book source code, and name that field Author.

In the meantime, your colleague has established the same need in his application, so he, too, modifies the Book source code (his copy) and has the foresight to record the author's name using two fields: AuthorSurname and AuthorFirstname.

Now, the single, reusable Book object exists in two variants, both of which are available for a third colleague to reuse. To make matters worse, those two variants are actually incompatible and cannot easily be merged, thanks to the differing representations of the author name.
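
A minimal sketch of the two diverged copies, using the field names from the scenario above (everything else is illustrative):

    namespace YourApp
    {
        // Your copy of the pasted Book source: one Author field.
        public class Book
        {
            public string Title;
            public string ISBN;
            public string Author;
        }
    }

    namespace ColleaguesApp
    {
        // Your colleague's copy: the author split into two fields.
        // The two Books are now incompatible representations of the
        // same concept, and neither can simply be swapped for the other.
        public class Book
        {
            public string Title;
            public string ISBN;
            public string AuthorSurname;
            public string AuthorFirstname;
        }
    }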

Once you've compiled your application, you end up with a single executable file (.exe) from which the Book class is indivisible, so you can't change the behavior of the Book class - or substitute it for your colleague's variant - without recompiling the entire application (if you still have the source code, that is!).

As another example (which will be continued through the next sections), imagine you're writing a technical report within your company. You see one of the key topics written up in someone else's report, which has been sent to you by email. You copy that person's text into your document, change it a little, and now your company has two slightly different descriptions of the same topic in two separate reports.

Understanding Components and Deploy-Time Reuse

At this point, you might be shouting that individual classes could be compiled separately and then linked together into an application. Without the complete source code for the application, you could recode and replace an individual class without a full recompilation; just link in the new version.

Even better, how about compiling closely related (tightly coupled) classes into a single unit with only a few of those classes exposed to the outside world through well-defined interfaces? Now the entire sub-unit - let's call it a component - may be replaced with a newer version with which the application may be relinked and redeployed.

Better still, imagine that the individual components need not be linked together prior to deployment, but may be linked on-the-fly when the application is run. Then there is no need to redeploy the entire application; just apply the component updates. In technological terms, this describes DLLs (for those with a Microsoft background) or JAR files (for the Java folks). And, in .NET terms, this describes assemblies.
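
A sketch of the idea in .NET terms (the names are invented for illustration): a component exposes a small, well-defined interface while keeping its tightly coupled classes internal to the assembly, so a newer version of the DLL can be dropped in without recompiling the application.

    // Compiled into its own assembly, e.g. PricingComponent.dll (hypothetical).
    namespace PricingComponent
    {
        // The only abstraction exposed to the outside world.
        public interface IPriceCalculator
        {
            decimal PriceFor(string sku, int quantity);
        }

        // Tightly coupled implementation details stay internal;
        // they can change freely between versions of the DLL.
        internal class StandardPriceCalculator : IPriceCalculator
        {
            public decimal PriceFor(string sku, int quantity)
            {
                return LookupUnitPrice(sku) * quantity;
            }

            private decimal LookupUnitPrice(string sku)
            {
                // Placeholder; a real component would consult a catalog.
                return 9.99m;
            }
        }

        // A public factory so callers never depend on the internal class.
        public static class PriceCalculatorFactory
        {
            public static IPriceCalculator Create()
            {
                return new StandardPriceCalculator();
            }
        }
    }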

Continuing with the nonprogramming analogy, consider hyperlinking your technical report to the appropriate section of your colleague's report, and then distributing the two documents together, rather than copying your colleague's text into your document.

Understanding Distributed Components and Run-Time Reuse

Continuing with this line of thought, imagine that the components need not be redeployed on client devices at all. They are somehow just available on servers, to be invoked remotely when needed at run-time.
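
In classic .NET terms, one period-appropriate way to do this was .NET Remoting (web services were the other common option). Here is a hedged sketch of the client side only, reusing the illustrative IPriceCalculator contract from the previous example; the endpoint URL is hypothetical, and channel registration and the server-side setup are elided, so running it requires a server that has registered the remote object.

    using System;

    // The shared contract that both client and server compile against.
    public interface IPriceCalculator
    {
        decimal PriceFor(string sku, int quantity);
    }

    public static class Client
    {
        public static void Main()
        {
            // Activator.GetObject returns a transparent proxy; each call on it
            // is marshaled to the server, where the real component executes.
            // Nothing but the interface is deployed to the client.
            IPriceCalculator calculator = (IPriceCalculator)Activator.GetObject(
                typeof(IPriceCalculator),
                "tcp://appserver:8080/PriceCalculator"); // hypothetical endpoint

            Console.WriteLine(calculator.PriceFor("BOOK-42", 2));
        }
    }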

In the nonprogramming example, consider not having to distribute your colleague's report along with your own. In your own report, you would simply hyperlink to the relevant section in your colleague's document, which would be stored - and would remain - on an intranet server accessible to all recipients.

(Continues...)



Excerpted from Professional Application Lifecycle Management with Visual Studio 2010 by Mickey Gousset, Brian Keller, Ajoy Krishnamoorthy, and Martin Woodward. Copyright © 2010 by John Wiley & Sons, Ltd. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.


Table of Contents

INTRODUCTION.

PART I ARCHITECT.

CHAPTER 1 Introduction to Software Architecture.

CHAPTER 2 Top-down Design with Use Case Diagrams, Activity Diagrams, and Sequence Diagrams.

CHAPTER 3 Top-down Design with Component and Class Diagrams.

CHAPTER 4 Analyzing Applications Using Architecture Explorer.

CHAPTER 5 Using Layer Diagrams.

PART II DEVELOPER.

CHAPTER 6 Introduction to Software Development.

CHAPTER 7 Unit Testing with the Unit Test Framework.

CHAPTER 8 Managed Code Analysis and Code Metrics.

CHAPTER 9 Profiling and Performance.

CHAPTER 10 Database Development, Testing, and Deployment.

CHAPTER 11 Introduction to IntelliTrace.

PART III TESTER.

CHAPTER 12 Introduction to Software Testing.

CHAPTER 13 Web Performance and Load Testing.

CHAPTER 14 Manual Testing.

CHAPTER 15 Coded User Interface Testing.

CHAPTER 16 Lab Management.

PART IV TEAM FOUNDATION SERVER.

CHAPTER 17 Introduction to Team Foundation Server.

CHAPTER 18 Team Foundation Architecture.

CHAPTER 19 Team Foundation Version Control.

CHAPTER 20 Branching and Merging.

CHAPTER 21 Team Foundation Build.

PART V PROJECT/PROCESS MANAGEMENT.

CHAPTER 22 Introduction to Project Management.

CHAPTER 23 Process Templates.

CHAPTER 24 Using Reports, Portals, and Dashboards.

CHAPTER 25 Agile Planning Using Planning Workbooks.

CHAPTER 26 Process Template Customizations.

INDEX.
