xUnit Test Patterns: Refactoring Test Code [NOOK Book]


NOOK Book (eBook)
BN.com price: $34.49 (save 42% off the $59.99 list price)

Overview

Automated testing is a cornerstone of agile development. An effective testing strategy will deliver new functionality more aggressively, accelerate user feedback, and improve quality. However, for many developers, creating effective automated tests is a unique and unfamiliar challenge.

xUnit Test Patterns is the definitive guide to writing automated tests using xUnit, the most popular unit testing framework in use today. Agile coach and test automation expert Gerard Meszaros describes 68 proven patterns for making tests easier to write, understand, and maintain. He then shows you how to make them more robust and repeatable--and far more cost-effective.

Loaded with information, this book feels like three books in one. The first part is a detailed tutorial on test automation that covers everything from test strategy to in-depth test coding. The second part, a catalog of 18 frequently encountered "test smells," provides troubleshooting guidelines to help you determine the root cause of problems and identify the most applicable patterns. The third part contains detailed descriptions of each pattern, including refactoring instructions illustrated by extensive code samples in multiple programming languages.

Topics covered include

  • Writing better tests--and writing them faster
  • The four phases of automated tests: fixture setup, exercising the system under test, result verification, and fixture teardown (illustrated in the sketch after this list)
  • Improving test coverage by isolating software from its environment using Test Stubs and Mock Objects
  • Designing software for greater testability
  • Using test "smells" (including code smells, behavior smells, and project smells) to spot problems and know when and how to eliminate them
  • Refactoring tests for greater simplicity, robustness, and execution speed
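
As a quick illustration of the four test phases listed above, here is a minimal JUnit 4 sketch (not taken from the book); the system under test is simply a java.util.List so the example stays self-contained.

    import static org.junit.Assert.assertEquals;

    import java.util.ArrayList;
    import java.util.List;

    import org.junit.Test;

    public class FourPhaseExampleTest {

        @Test
        public void addingAnElementGrowsTheList() {
            // Phase 1: fixture setup -- create the objects the test needs.
            List<String> flights = new ArrayList<>();
            flights.add("OA123");

            // Phase 2: exercise the system under test (SUT).
            flights.add("OA456");

            // Phase 3: result verification.
            assertEquals(2, flights.size());

            // Phase 4: fixture teardown -- nothing explicit is needed here;
            // the transient fixture is simply garbage collected.
        }
    }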

Product Details

  • ISBN-13: 9780132797467
  • Publisher: Pearson Education
  • Publication date: 6/4/2007
  • Sold by: Barnes & Noble
  • Format: eBook
  • Edition number: 1
  • Pages: 944
  • Sales rank: 475,807
  • File size: 8 MB

Meet the Author

Gerard Meszaros is Chief Scientist and Senior Consultant at ClearStream Consulting, a Calgary-based consultancy specializing in agile development. He has more than a decade of experience with automated unit testing frameworks and is a leading expert in test automation patterns, refactoring of software and tests, and design for testability.

Read an Excerpt

The Value of Self-Testing Code

In Chapter 4 of Refactoring [Ref], Martin Fowler writes:

If you look at how most programmers spend their time, you'll find that writing code is actually a small fraction. Some time is spent figuring out what ought to be going on, some time is spent designing, but most time is spent debugging. I'm sure every reader can remember long hours of debugging, often long into the night. Every programmer can tell a story of a bug that took a whole day (or more) to find. Fixing the bug is usually pretty quick, but finding it is a nightmare. And then when you do fix a bug, there's always a chance that another one will appear and that you might not even notice it until much later. Then you spend ages finding that bug.

Some software is very difficult to test manually. In these cases, we are often forced into writing test programs.

I recall a project I was working on in 1996. My task was to build an event framework that would let client software register for an event and be notified when some other software raised that event (the Observer [GOF] pattern). I could not think of a way to test this framework without writing some sample client software. I had about 20 different scenarios I needed to test, so I coded up each scenario with the requisite number of observers, events, and event raisers. At first, I logged what was occurring in the console and scanned it manually. This scanning became very tedious very quickly.

Being quite lazy, I naturally looked for an easier way to perform this testing. For each test I populated a Dictionary indexed by the expected event and the expected receiver of it with the name of the receiver as the value. When a particular receiver was notified of the event, it looked in the Dictionary for the entry indexed by itself and the event it had just received. If this entry existed, the receiver removed the entry. If it didn't, the receiver added the entry with an error message saying it was an unexpected event notification.

After running all the tests, the test program merely looked in the Dictionary and printed out its contents if it was not empty. As a result, running all of my tests had a nearly zero cost. The tests either passed quietly or spewed a list of test failures. I had unwittingly discovered the concept of a Mock Object (page 544) and a Test Automation Framework (page 298) out of necessity!
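
A rough reconstruction of that bookkeeping, written in Java purely for illustration: the NotificationLedger name and its methods are hypothetical (the original predates xUnit), but the mechanics follow the description above -- expected notifications are registered up front, receivers consult the dictionary when notified, and anything left over or unexpected is printed at the end.

    import java.util.HashMap;
    import java.util.Map;

    public class NotificationLedger {

        // Key: "<event>-><receiver>"; value: the receiver's name, or an
        // error message recording an unexpected notification.
        private final Map<String, String> entries = new HashMap<>();

        // Called while setting up a scenario: register an expected notification.
        public void expect(String eventName, String receiverName) {
            entries.put(key(eventName, receiverName), receiverName);
        }

        // Called by a receiver when it is actually notified of an event.
        public void notified(String eventName, String receiverName) {
            String k = key(eventName, receiverName);
            if (entries.containsKey(k)) {
                entries.remove(k);   // the expected notification arrived
            } else {
                entries.put(k, "Unexpected event notification: " + k);
            }
        }

        // Called once after all scenarios have run; a non-empty map means failures.
        public void report() {
            if (!entries.isEmpty()) {
                entries.values().forEach(System.out::println);
            }
        }

        private String key(String eventName, String receiverName) {
            return eventName + "->" + receiverName;
        }
    }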

My First XP Project

In late 1999, I attended the OOPSLA conference, where I picked up a copy of Kent Beck's new book, eXtreme Programming Explained [XPE]. I was used to doing iterative and incremental development and already believed in the value of automated unit testing, although I had not tried to apply it universally. I had a lot of respect for Kent, whom I had known since the first PLoP [1] conference in 1994. For all these reasons, I decided that it was worth trying to apply eXtreme Programming on a ClearStream Consulting project. Shortly after OOPSLA, I was fortunate to come across a suitable project for trying out this development approach—namely, an add-on application that interacted with an existing database but had no user interface. The client was open to developing software in a different way.

We started doing eXtreme Programming "by the book" using pretty much all of the practices it recommended, including pair programming, collective ownership, and test-driven development. Of course, we encountered a few challenges in figuring out how to test some aspects of the behavior of the application, but we still managed to write tests for most of the code. Then, as the project progressed, I started to notice a disturbing trend: It was taking longer and longer to implement seemingly similar tasks.

I explained the problem to the developers and asked them to record on each task card how much time had been spent writing new tests, modifying existing tests, and writing the production code. Very quickly, a trend emerged. While the time spent writing new tests and writing the production code seemed to be staying more or less constant, the amount of time spent modifying existing tests was increasing and the developers' estimates were going up as a result. When a developer asked me to pair on a task and we spent 90% of the time modifying existing tests to accommodate a relatively minor change, I knew we had to change something, and soon!

When we analyzed the kinds of compile errors and test failures we were experiencing as we introduced the new functionality, we discovered that many of the tests were affected by changes to methods of the system under test (SUT). This came as no surprise, of course. What was surprising was that most of the impact was felt during the fixture setup part of the test and that the changes were not affecting the core logic of the tests.

This revelation was an important discovery because it showed us that we had the knowledge about how to create the objects of the SUT scattered across most of the tests. In other words, the tests knew too much about nonessential parts of the behavior of the SUT. I say "nonessential" because most of the affected tests did not care about how the objects in the fixture were created; they were interested in ensuring that those objects were in the correct state. Upon further examination, we found that many of the tests were creating identical or nearly identical objects in their test fixtures.

The obvious solution to this problem was to factor out this logic into a small set of Test Utility Methods (page 599). There were several variations:

  • When we had a bunch of tests that needed identical objects, we simply created a method that returned that kind of object ready to use. We now call these Creation Methods (page 415).
  • Some tests needed to specify different values for some attribute of the object. In these cases, we passed that attribute as a parameter to the Parameterized Creation Method (see Creation Method).
  • Some tests wanted to create a malformed object to ensure that the SUT would reject it. Writing a separate Parameterized Creation Method for each attribute cluttered the signature of our Test Helper (page 643), so we created a valid object and then replaced the value of the One Bad Attribute (see Derived Value on page 718).
We had discovered what would become [2] our first test automation patterns.
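
In Java, the three variations above might look roughly like this; the Invoice and Customer classes are assumed domain objects, defined inline only so the sketch compiles, and are not the book's actual sample code.

    import java.math.BigDecimal;

    // Hypothetical domain classes, included only to keep the sketch self-contained.
    class Customer {
        final String name;
        Customer(String name) { this.name = name; }
    }

    class Invoice {
        final Customer customer;
        BigDecimal amount;
        Invoice(Customer customer, BigDecimal amount) {
            this.customer = customer;
            this.amount = amount;
        }
        void setAmount(BigDecimal amount) { this.amount = amount; }
    }

    public class InvoiceTestHelper {

        // Creation Method: returns a ready-to-use object with typical values,
        // so individual tests no longer depend on the constructor signature.
        static Invoice createInvoice() {
            return new Invoice(createCustomer(), BigDecimal.valueOf(100));
        }

        // Parameterized Creation Method: the one attribute the test cares about
        // is passed in; everything else gets sensible defaults.
        static Invoice createInvoice(BigDecimal amount) {
            return new Invoice(createCustomer(), amount);
        }

        // One Bad Attribute: start from a valid object, then overwrite a single
        // attribute with an invalid value to provoke rejection by the SUT.
        static Invoice createInvoiceWithNegativeAmount() {
            Invoice invoice = createInvoice();
            invoice.setAmount(BigDecimal.valueOf(-1));
            return invoice;
        }

        static Customer createCustomer() {
            return new Customer("Test Customer");
        }
    }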

Later, when tests started failing because the database did not like the fact that we were trying to insert another object with the same key into a column that had a unique constraint, we added code to generate the unique key programmatically. We called this variant an Anonymous Creation Method (see Creation Method) to indicate the presence of this added behavior.
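
A minimal sketch of what such programmatic key generation might look like (the class and method names here are illustrative, not from the book):

    public class AnonymousKeys {

        // Seeded from the clock so keys from separate test runs rarely collide;
        // the increment keeps keys unique within a single run.
        private static long nextId = System.currentTimeMillis();

        // Each call yields a distinct key for, say, a customer-number column
        // that carries a unique constraint.
        static String uniqueCustomerNumber() {
            return "CUST-" + (nextId++);
        }
    }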

Identifying the problem that we now call a Fragile Test (page 239) was an important event on this project, and the subsequent definition of its solution patterns saved this project from possible failure. Without this discovery we would, at best, have abandoned the automated unit tests that we had already built. At worst, the tests would have reduced our productivity so much that we would have been unable to deliver on our commitments to the client. As it turned out, we were able to deliver what we had promised and with very good quality. Yes, the testers [3] still found bugs in our code because we were definitely missing some tests. Introducing the changes needed to fix those bugs, once we had figured out what the missing tests needed to look like, was a relatively straightforward process, however.

We were hooked. Automated unit testing and test-driven development really did work, and we have been using them consistently ever since.

As we have applied the practices and patterns on subsequent projects, we have run into new problems and challenges. In each case, we have "peeled the onion" to find the root cause and come up with ways to address it. As these techniques have matured, we have added them to our repertoire of techniques for automated unit testing.

We first described some of these patterns in a paper presented at XP2001. In discussions with other participants at that and subsequent conferences, we discovered that many of our peers were using the same or similar techniques. That elevated our methods from "practice" to "pattern" (a recurring solution to a recurring problem in a context). The first paper on test smells [RTC] was presented at the same conference, building on the concept of code smells first described in [Ref].

My Motivation

I am a great believer in the value of automated unit testing. I practiced software development without it for the better part of two decades, and I know that my professional life is much better with it than without it. I believe that the xUnit framework and the automated tests it enables are among the truly great advances in software development. I find it very frustrating when I see companies trying to adopt automated unit testing but being unsuccessful because of a lack of key information and skills.

As a software development consultant with ClearStream Consulting, I see a lot of projects. Sometimes I am called in early on a project to help clients make sure they "do things right." More often than not, however, I am called in when things are already off the rails. As a result, I see a lot of "worst practices" that result in test smells. If I am lucky and I am called early enough, I can help the client recover from the mistakes. If not, the client will likely muddle through less than satisfied with how TDD and automated unit testing worked—and the word goes out that automated unit testing is a waste of time.

In hindsight, most of these mistakes and worst practices are easily avoidable given the right knowledge at the right time. But how do you obtain that knowledge without making the mistakes for yourself? At the risk of sounding self-serving, hiring someone who has the knowledge is the most time-efficient way of learning any new practice or technology. According to Gerry Weinberg's "Law of Raspberry Jam" [SoC],[4] taking a course or reading a book is a much less effective (though less expensive) alternative. I hope that by writing down a lot of these mistakes and suggesting ways to avoid them, I can save you a lot of grief on your project, whether it is fully agile or just more agile than it has been in the past—the "Law of Raspberry Jam" notwithstanding.

Who This Book Is For

I have written this book primarily for software developers (programmers, designers, and architects) who want to write better tests and for the managers and coaches who need to understand what the developers are doing and why the developers need to be cut enough slack so they can learn to do it even better! The focus here is on developer tests and customer tests that are automated using xUnit. In addition, some of the higher-level patterns apply to tests that are automated using technologies other than xUnit. Rick Mugridge and Ward Cunningham have written an excellent book on Fit [FitB], and they advocate many of the same practices.

Developers will likely want to read the book from cover to cover, but they should focus on skimming the reference chapters rather than trying to read them word for word. The emphasis should be on getting an overall idea of which patterns exist and how they work. Developers can then return to a particular pattern when the need for it arises. The first few elements (up to and including the "When to Use It" section) of each pattern should provide this overview.

Managers and coaches might prefer to focus on reading Part I, The Narratives, and perhaps Part II, The Test Smells. They might also need to read Chapter 18, Test Strategy Patterns, because it covers decisions they need to understand so they can support the developers as they work through these patterns. At a minimum, managers should read Chapter 3, Goals of Test Automation.

[1] The Pattern Languages of Programs conference.

[2] Technically, they are not truly patterns until they have been discovered by three independent project teams.

[3] The testing function is sometimes referred to as "Quality Assurance." This usage is, strictly speaking, incorrect.

[4] The Law of Raspberry Jam: "The wider you spread it, the thinner it gets."


Table of Contents

Visual Summary of the Pattern Language xvii

Foreword xix

Preface xxi

Acknowledgments xxvi

Introduction xxix

Refactoring a Test xlv

PART I: The Narratives 1

Chapter 1 A Brief Tour 3

About This Chapter 3

The Simplest Test Automation Strategy That Could Possibly Work 3

Development Process 4

Customer Tests 5

Unit Tests 6

Design for Testability 7

Test Organization 7

What's Next? 8

Chapter 2 Test Smells 9

About This Chapter 9

An Introduction to Test Smells 9

What's a Test Smell? 10

Kinds of Test Smells 10

What to Do about Smells? 11

A Catalog of Smells 12

The Project Smells 12

The Behavior Smells 13

The Code Smells 16

What's Next? 17

Chapter 3 Goals of Test Automation 19

About This Chapter 19

Why Test? 19

Economics of Test Automation 20

Goals of Test Automation 21

Tests Should Help Us Improve Quality 22

Tests Should Help Us Understand the SUT 23

Tests Should Reduce (and Not Introduce) Risk 23

Tests Should Be Easy to Run 25

Tests Should Be Easy to Write and Maintain 27

Tests Should Require Minimal Maintenance as the System Evolves Around Them 29

What's Next? 29

Chapter 4 Philosophy of Test Automation 31

About This Chapter 31

Why Is Philosophy Important? 31

Some Philosophical Differences 32

Test First or Last? 32

Tests or Examples? 33

Test-by-Test or Test All-at-Once? 33

Outside-In or Inside-Out? 34

State or Behavior Verification? 36

Fixture Design Upfront or Test-by-Test? 36

When Philosophies Differ 37

My Philosophy 37

What's Next? 37

Chapter 5 Principles of Test Automation 39

About This Chapter 39

The Principles 39

What's Next? 48

Chapter 6 Test Automation Strategy 49

About This Chapter 49

What's Strategic? 49

Which Kinds of Tests Should We Automate? 50

Per-Functionality Tests 50

Cross-Functional Tests 52

Which Tools Do We Use to Automate Which Tests? 53

Test Automation Ways and Means 54

Introducing xUnit 56

The xUnit Sweet Spot 58

Which Test Fixture Strategy Do We Use? 58

What Is a Fixture? 59

Major Fixture Strategies 60

Transient Fresh Fixtures 61

Persistent Fresh Fixtures 62

Shared Fixture Strategies 63

How Do We Ensure Testability? 65

Test Last--At Your Peril 65

Design for Testability--Upfront 65

Test-Driven Testability 66

Control Points and Observation Points 66

Interaction Styles and Testability Patterns 67

Divide and Test 71

What's Next? 73

Chapter 7 xUnit Basics 75

About This Chapter 75

An Introduction to xUnit 75

Common Features 76

The Bare Minimum 76

Defining Tests 76

What's a Fixture? 78

Defining Suites of Tests 78

Running Tests 79

Test Results 79

Under the xUnit Covers 81

Test Commands 82

Test Suite Objects 82

xUnit in the Procedural World 82

What's Next? 83

Chapter 8 Transient Fixture Management 85

About This Chapter 85

Test Fixture Terminology 86

What Is a Fixture? 86

What Is a Fresh Fixture? 87

What Is a Transient Fresh Fixture? 87

Building Fresh Fixtures 88

In-line Fixture Setup 88

Delegated Fixture Setup 89

Implicit Fixture Setup 91

Hybrid Fixture Setup 93

Tearing Down Transient Fresh Fixtures 93

What's Next? 94

Chapter 9 Persistent Fixture Management 95

About This Chapter 95

Managing Persistent Fresh Fixtures 95

What Makes Fixtures Persistent? 95

Issues Caused by Persistent Fresh Fixtures 96

Tearing Down Persistent Fresh Fixtures 97

Avoiding the Need for Teardown 100

Dealing with Slow Tests 102

Managing Shared Fixtures 103

Accessing Shared Fixtures 103

Triggering Shared Fixture Construction 104

What's Next? 106

Chapter 10 Result Verification 107

About This Chapter 107

Making Tests Self-Checking 107

Verify State or Behavior? 108

State Verification 109

Using Built-in Assertions 110

Delta Assertions 111

External Result Verification 111

Verifying Behavior 112

Procedural Behavior Verification 113

Expected Behavior Specification 113

Reducing Test Code Duplication 114

Expected Objects 115

Custom Assertions 116

Outcome-Describing Verification Method 117

Parameterized and Data-Driven Tests 118

Avoiding Conditional Test Logic 119

Eliminating "if" Statements 120

Eliminating Loops 121

Other Techniques 121

Working Backward, Outside-In 121

Using Test-Driven Development to Write Test Utility Methods 122

Where to Put Reusable Verification Logic? 122

What's Next? 123

Chapter 11 Using Test Doubles 125

About This Chapter 125

What Are Indirect Inputs and Outputs? 125

Why Do We Care about Indirect Inputs? 126

Why Do We Care about Indirect Outputs? 126

How Do We Control Indirect Inputs? 128

How Do We Verify Indirect Outputs? 130

Testing with Doubles 133

Types of Test Doubles 133

Providing the Test Double 140

Configuring the Test Double 141

Installing the Test Double 143

Other Uses of Test Doubles 148

Endoscopic Testing 149

Need-Driven Development 149

Speeding Up Fixture Setup 149

Speeding Up Test Execution 150

Other Considerations 150

What's Next? 151

Chapter 12 Organizing Our Tests 153

About This Chapter 153

Basic xUnit Mechanisms 153

Right-Sizing Test Methods 154

Test Methods and Testcase Classes 155

Testcase Class per Class 155

Testcase Class per Feature 156

Testcase Class per Fixture 156

Choosing a Test Method Organization Strategy 158

Test Naming Conventions 158

Organizing Test Suites 160

Running Groups of Tests 160

Running a Single Test 161

Test Code Reuse 162

Test Utility Method Locations 163

TestCase Inheritance and Reuse 163

Test File Organization 164

Built-in Self-Test 164

Test Packages 164

Test Dependencies 165

What's Next? 165

Chapter 13 Testing with Databases 167

About This Chapter 167

Testing with Databases 167

Why Test with Databases? 168

Issues with Databases 168

Testing without Databases 169

Testing the Database 171

Testing Stored Procedures 172

Testing the Data Access Layer 172

Ensuring Developer Independence 173

Testing with Databases (Again!) 173

What's Next? 174

Chapter 14 A Roadmap to Effective Test Automation 175

About This Chapter 175

Test Automation Difficulty 175

Roadmap to Highly Maintainable Automated Tests 176

Exercise the Happy Path Code 177

Verify Direct Outputs of the Happy Path 178

Verify Alternative Paths 178

Verify Indirect Output Behavior 179

Optimize Test Execution and Maintenance 180

What's Next? 181

PART II: The Test Smells 183

Chapter 15 Code Smells 185

Obscure Test 186

Conditional Test Logic 200

Hard-to-Test Code 209

Test Code Duplication 213

Test Logic in Production 217

Chapter 16 Behavior Smells 223

Assertion Roulette 224

Erratic Test 228

Fragile Test 239

Frequent Debugging 248

Manual Intervention 250

Slow Tests 253

Chapter 17 Project Smells 259

Buggy Tests 260

Developers Not Writing Tests 263

High Test Maintenance Cost 265

Production Bugs 268

PART III: The Patterns 275

Chapter 18 Test Strategy Patterns 277

Recorded Test 278

Scripted Test 285

Data-Driven Test 288

Test Automation Framework 298

Minimal Fixture 302

Standard Fixture 305

Fresh Fixture 311

Shared Fixture 317

Back Door Manipulation 327

Layer Test 337

Chapter 19 xUnit Basics Patterns 347

Test Method 348

Four-Phase Test 358

Assertion Method 362

Assertion Message 370

Testcase Class 373

Test Runner 377

Testcase Object 382

Test Suite Object 387

Test Discovery 393

Test Enumeration 399

Test Selection 403

Chapter 20 Fixture Setup Patterns 407

In-line Setup 408

Delegated Setup 411

Creation Method 415

Implicit Setup 424

Prebuilt Fixture 429

Lazy Setup 435

Suite Fixture Setup 441

Setup Decorator 447

Chained Tests 454

Chapter 21 Result Verification Patterns 461

State Verification 462

Behavior Verification 468

Custom Assertion 474

Delta Assertion 485

Guard Assertion 490

Unfinished Test Assertion 494

Chapter 22 Fixture Teardown Patterns 499

Garbage-Collected Teardown 500

Automated Teardown 503

In-line Teardown 509

Implicit Teardown 516

Chapter 23 Test Double Patterns 521

Test Double 522

Test Stub 529

Test Spy 538

Mock Object 544

Fake Object 551

Configurable Test Double 558

Hard-Coded Test Double 568

Test-Specific Subclass 579

Chapter 24 Test Organization Patterns 591

Named Test Suite 592

Test Utility Method 599

Parameterized Test 607

Testcase Class per Class 617

Testcase Class per Feature 624

Testcase Class per Fixture 631

Testcase Superclass 638

Test Helper 643

Chapter 25 Database Patterns 649

Database Sandbox 650

Stored Procedure Test 654

Table Truncation Teardown 661

Transaction Rollback Teardown 668

Chapter 26 Design-for-Testability Patterns 677

Dependency Injection 678

Dependency Lookup 686

Humble Object 695

Test Hook 709

Chapter 27 Value Patterns 713

Literal Value 714

Derived Value 718

Generated Value 723

Dummy Object 728

PART IV: Appendixes 733

Appendix A Test Refactorings 735

Appendix B xUnit Terminology 741

Appendix C xUnit Family Members 747

Appendix D Tools 753

Appendix E Goals and Principles 757

Appendix F Smells, Aliases, and Causes 761

Appendix G Patterns, Aliases, and Variations 767

Glossary 785

References 819

Index 835


Customer Reviews

Showing 1 customer review
  • Posted May 14, 2009

    Great, easy to read and invaluable! Definitely a relevant TDD/Patterns reference that's a must have

    xUnit Test Patterns is a great book that fills a void I kept unconsciously running into on projects using TDD. While several great books have been written about how to design and refactor our application code, not much practical attention has been given to how to write test code. Most of it has been "do what I do" or heated discussions on why one strategy is better than another. I would often see "smells" end up in my test code, which I would attempt to refactor out. Yet, since I am still learning TDD, I would often re-refactor my test code because I kept changing my mind about why I had made a change. I would also question whether I really was improving my project code by refactoring my test code.

    xUnit Test Patterns is definitely a great addition to this space and fills a much-needed gap. It puts solid background, concrete analysis, and detail around these areas. It will easily become a staple in the modern developer's reference collection.

    This is not an intro-to-TDD book. If you do not use TDD and are wondering just what it is, I would suggest you look elsewhere. If I had been given this book when I was entering the TDD space, I would have been overwhelmed.

    If you have used TDD on a project or two, with varying success, this would be a great book. It's a very easy read if you have a basic understanding of TDD. It opened my eyes to exactly why TAD (test-after development) is much worse, beyond simply "don't do it that way." It allows me to explain TDD much better than saying, "just because you're supposed to." Gerard explains the fundamental reasons behind the various hot and controversial strategies and where each may be used with confidence. He also explains where each may be a bad fit, which is usually what produces smells in our test code.

    In the first half Gerard gives a very thorough tour through the TDD space, including the benefits of using TDD, various strategies and their pros and cons, details of how most xUnit frameworks work, and so on. Most of this is probably familiar ground for those who are intermediate in TDD, but it's a very quick read. It filled in a few holes for me, but most importantly, like other refactoring/patterns books, it puts a concrete framework and names around the various methods. It facilitates great conversations where developers using TDD can speak on common ground, an area that, at least in my circles, has been lacking.

    The second half details the actual patterns. In the academic pattern style Fowler favors, each pattern is listed with its name, what it attempts to accomplish, its pros and cons, and samples of its use. He includes a vast number of patterns, many of which I have run into without being able to put my finger on why I used them in one project and not another. He also makes a great case for why the same pattern may be very useful in one context but an anti-pattern in another. Looking at the various pros and cons, I really began to fill out my arsenal with strategies I used to dismiss; in certain difficult contexts they would actually have fit the need perfectly.

    If you have been using TDD in your projects, with whatever level of success, I would definitely recommend you add this book to your collection. As I said above, it's a quick yet detailed read and adds much-needed tools to your TDD tool belt.

    1 out of 1 people found this review helpful.
