Testing Computer Software
by Cem Kaner, Jack L. Falk, Hung Q. Nguyen
This book will teach you how to test computer software under real-world conditions. The authors have all been test managers and software development managers at well-known Silicon Valley software companies. Successful consumer software companies have learned how to produce high-quality products under tight time and budget constraints. The book explains the testing side of that success.
Who this book is for:
- Testers and test managers
- Project managers: understand the timeline, depth of investigation, and quality of communication to hold testers accountable for.
- Programmers: gain insight into the sources of errors in your code, understand what tests your work will have to pass, and learn why testers do the things they do.
- Students: train for an entry-level position in software development.
What you will learn:
- How to find important bugs quickly
- How to describe software errors clearly
- How to create a testing plan with a minimum of paperwork
- How to design and use a bug-tracking system
- Where testing fits in the product development process
- How to test products that will be translated into other languages
- How to test for compatibility with devices, such as printers
- What laws apply to software quality
JACK FALK consults on software quality management and software engineering management. Jack is certified in Software Quality Engineering by the American Society of Quality. He is Vice Chair of the Santa Clara Valley Software Quality Association and an active participant in the Los Altos Workshops on Software Testing.
HUNG Q. NGUYEN is Founder, President, and CEO of softGear technology. He has worked in the computer software and hardware industries, holding management positions in engineering, quality assurance, testing, product development, and information technology, as well as making significant contributions as a tester and programmer. He is an ASQ-Certified Quality Engineer, and a senior member and San Francisco Section Certification Chairman of the American Society for Quality.
"Deep insight and a great deal of experience is contained in this book" (Database & Network Journal, Vol 30/5 2000)
Read an Excerpt
Chapter 6: THE PROBLEM TRACKING SYSTEM
THE REASON FOR THIS CHAPTER
In Chapter 5, we described how a bug is reported. Here we describe what happens to the Problem Report after you report it. This chapter provides the basic design of a problem tracking database and puts it in perspective. It describes the system in terms of the flow of information (bug reports) through it and the needs of the people who use it. We provide sample forms and reports to illustrate one possible implementation of the system. You could build many other, different systems that would support the functional goals we lay out for the database.
NOTE
Up to now, the "you" that we've written to has been a novice tester. This chapter marks a shift in position. From this point onward, we're writing to a tester who's ready to lead her own project. We write to you here assuming that you are a project's test team leader, and that you have a significant say in the design of the tracking system. If you aren't there yet, read on anyway. This chapter will put the tracking system in perspective, whatever your experience level.
ALSO NOTE
In our analysis of the issues involved in reporting information about people, we assume that you work in a typically managed software company. In this environment, your group is the primary user of the tracking system and the primary decision maker about what types of summary and statistical reports are circulated. Under these circumstances, some types of reports that you can generate can be taken badly, as overreaching by a low-level department in the company. Others will be counterproductive for other reasons, discussed below.
But the analysis runs differently if you work for a company that follows an executive-driven quality improvement program. In these companies, senior managers play a much more active role in setting quality standards, and they make broader use of quality reporting systems, including bug tracking information. The tracking system is much more of a management tool than the primarily project-level quality control tool that we discuss in this chapter. These companies also pay attention to the problems inherent in statistical monitoring of employee behavior and to the risk of distracting a quality improvement group by forcing it to collect too much data. Deming (1982) discusses the human dynamics of information reporting in these companies and the steps executives must take to make these systems work.
OVERVIEW
The first sections analyze how an effective tracking system is used:
- We start with a general overview of benefits and organizational risks created by the system.
- Then we consider the prime objective of the system, its core underlying purpose. As we see it, the prime objective is getting those bugs that should be fixed, fixed.
- To achieve its objective, the system must be capable of certain tasks. We identify four requirements.
- Next we look at the system in practice. Once you submit the report, what happens to it? How does it get resolved? How does the tracking system itself help this process?
- Finally, we consider the system's users. Many different people in your company use this system, for different reasons. We ask here: what do they get from the system, what other information do they want, and what should you provide? There are traps here for the unwary.
- We start with a detailed description of key forms and reports that most tracking systems provide.
- Now you understand problem reporting and the overall tracking system design. We suggest some fine points: ways to structure the system to increase report effectiveness and minimize interpersonal conflicts.
- The last section in this group passes on a few very specific tips on setting up the online version of the report form.
Problem Reports are a tester's primary work product. The problem tracking system and its procedures will have more impact on the effectiveness of testers' reports than any other system or procedure.
You use a problem tracking system to report bugs, file them, retrieve files, and write summary reports about them. A good system fosters accountability and communication about the bugs. Unless the number of reports is trivial, you need an organized system. Too many software groups still use pen-and-paper tracking procedures or computer-based systems that they consider awkward and primitive. It's not so hard to build a good tracking system, and it's worth it, even for small projects.
This chapter assumes your company is big enough to have a test manager, marketing manager, project manager, technical support staff, etc. It's easier for us to identify roles and bring out some fine points this way. Be aware, though, that we've seen the same interactions in two-person research projects and development partnerships. Each person wears many hats, but as long as one tests the work of the other, they face the same issues. If you work in a small team, even a significant two-person class project in school (such as a full-year, senior-year project), we recommend that you apply as much of this system and the thinking behind it as you can.
This chapter describes a problem tracking system that we've found successful. We include the main data entry form, standard reports, and special implementation notes, enough for you to code your own system using any good database program. Beyond these technical notes, we consider the system objectives, its place in your company, and the effect of the system on the quality of your products.
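To make the idea of coding your own system concrete, here is a minimal sketch of the kind of core table such a database might contain. It uses SQLite for illustration; the table and field names (`problem_reports`, `severity`, `status`, and so on) are our assumptions for this sketch, not the book's exact form design.

```python
import sqlite3

# A minimal, illustrative Problem Report table (field names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE problem_reports (
        report_id   INTEGER PRIMARY KEY,
        date_found  TEXT NOT NULL,
        reported_by TEXT NOT NULL,
        program     TEXT NOT NULL,
        version     TEXT NOT NULL,
        severity    TEXT CHECK (severity IN ('minor', 'serious', 'fatal')),
        summary     TEXT NOT NULL,
        status      TEXT DEFAULT 'open',   -- open / resolved / closed
        assigned_to TEXT,
        resolution  TEXT
    )
""")

# File a new report (sample data); status defaults to 'open'.
conn.execute(
    "INSERT INTO problem_reports"
    " (date_found, reported_by, program, version, severity, summary)"
    " VALUES (?, ?, ?, ?, ?, ?)",
    ("1999-03-15", "pat", "WordCount", "2.1b3", "serious",
     "Crash when printing a document with zero pages"),
)

# Retrieve every open report -- the raw material for a summary report.
open_reports = conn.execute(
    "SELECT report_id, severity, summary FROM problem_reports"
    " WHERE status = 'open'"
).fetchall()
print(open_reports)
```

The point of the sketch is that filing, retrieval, and summary reporting all fall out of a single well-designed table plus a few queries; any good database program gives you the equivalent machinery.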
The key issues in a problem tracking system are political, not technical. The tracking system is an organizational intervention, every bit as much as it is a technical tool. Here are some examples of the system's political power and the organizational issues it raises:
- The system introduces project accountability. A good tracking system takes information that has traditionally been privately held by the project manager, a few programmers, and (maybe) the product manager, and makes it public (i.e., available to many people at different levels in the company). Throughout the last third of the project, the system provides an independent reality check on the project's status and schedule. It provides a list of key tasks that must be completed (bugs that must be fixed) before the product is finished. The list reflects the current quality of the product. And anyone can monitor progress against the list over a few weeks for a further check on the pace of project progress.
- As the system is used, significant personal and control issues surface. These issues are standard ones between testing, programming, and other groups in the company, but a good tracking system often highlights and focuses them. Especially on a network, a good system captures most of the communication between the testers and the programmers over individual bugs. The result is a revealing record that can highlight abusive, offensive, or time-wasting behavior by individual programmers or testers or by groups.
Here are some of the common issues:
- Who is allowed to report problems? Who decides whether a report makes it into the database? Who controls the report's wording, categorization, and severity?
- Who is allowed to query the database or to see the problem summaries or statistics?
- Who controls the final presentation of quality-related data and other progress statistics available from the database?
- Who is allowed to hurt whose feelings? Why?
- Who is allowed to waste whose time? Do programmers demand excessive documentation and support for each bug? Do testers provide so little information with Problem Reports that the programmers have to spend most of their time recreating and narrowing test cases?
- How much disagreement over quality issues is tolerable?
- Who makes the decisions about the product's quality? Is there an appeal process? Who gets to raise the appeal, arguing that a particular bug or design issue should not be set aside? Who makes the final decision?
- The system can monitor individual performance. It's easy to crank out personal statistics from the tracking system, such as the average number of bugs reported per day for each tester, the average number of bugs per programmer per week, or each programmer's average delay before fixing a bug. These numbers look meaningful. Senior managers often love them. They're often handy for highlighting personnel problems or even for building a case to fire someone. However, if the system is used this way, some very good people will find it oppressive, and some not necessarily good people will find ways to manipulate the system to appear more productive.
- The system provides ammunition for cross-group wars. Suppose that Project X is further behind schedule than its manager cares to admit. The test group manager, or managers of other projects that compete with Project X for resources, can use tracking system statistics to prove that X will consume much more time, staff, and money than anticipated. To a point, this is healthy accountability. Beyond that point, someone is trying to embarrass X's manager, to aggrandize themselves, or to get the project cancelled unfairly. A skilled corporate politician can use statistics to make a project appear much worse off than it is.
THE PRIME OBJECTIVE OF A PROBLEM TRACKING SYSTEM
A problem tracking system exists in the service of getting the bugs that should be fixed, fixed. Anything that doesn't directly support this purpose is a side issue.
Some other objectives, including some management reporting, are fully compatible with the system's prime objective. But each time a new task or objective is proposed for the system, evaluate it against this one. Anything that detracts from the system's prime objective should be excluded.
THE TASKS OF THE SYSTEM
To achieve the system's objective, the designer and her management must ensure that:
- Anyone who needs to know about a problem should learn of it soon after it's reported.
- No error will go unfixed merely because someone forgot about it.
- No error will go unfixed on the whim of a single programmer.
- A minimum of errors will go unfixed merely because of poor communication.
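The second requirement, that no error goes unfixed merely because someone forgot about it, is the easiest to support mechanically: a periodic query for open reports that nobody has touched recently. Here is one possible sketch in Python; the record fields and the 14-day idle threshold are our illustrative assumptions, not requirements from the book.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ProblemReport:
    report_id: int
    summary: str
    status: str          # e.g. 'open', 'resolved', 'closed'
    assigned_to: str
    last_updated: date

def stale_open_reports(reports, today, max_idle_days=14):
    """Return open reports idle longer than max_idle_days -- candidates
    for a reminder, so no bug goes unfixed merely because it was forgotten."""
    cutoff = today - timedelta(days=max_idle_days)
    return [r for r in reports
            if r.status == "open" and r.last_updated < cutoff]

# Sample data: one forgotten open report, one fresh one, one resolved.
reports = [
    ProblemReport(1, "Crash on print", "open", "lee", date(1999, 2, 1)),
    ProblemReport(2, "Typo in dialog", "open", "kim", date(1999, 3, 10)),
    ProblemReport(3, "Slow startup", "resolved", "lee", date(1999, 1, 5)),
]

stale = stale_open_reports(reports, today=date(1999, 3, 15))
print([r.report_id for r in stale])  # -> [1]
```

A list like this, circulated weekly to the people who own the open reports, turns "nothing gets forgotten" from a good intention into a routine.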
Meet the Author
Cem Kaner consults on technical and software development management issues, and teaches about software testing at local universities and at several software companies. He also practices law, usually representing individual developers, small development services companies, and customers. He founded and hosts the Los Altos Workshops on Software Testing. Kaner began working with computers in 1976 as a graduate student of Human Experimental Psychology. He came to Silicon Valley in 1983 and has worked as a programmer, human factors analyst, user interface designer, software salesperson, associate in an organization development consulting firm, technical writer, software testing technology team leader, manager of software testing, manager of technical publications, software development manager, and director of documentation and software testing. Kaner has also served pro bono as a Deputy District Attorney, and as an investigator/mediator for a California county's Consumer Affairs Department. He is actively involved in legislative work affecting the law of software quality and is the senior author of Bad Software: What To Do When Software Fails (Wiley, 1998). Kaner holds a B.A. (Math, Philosophy), a J.D. (law degree), and a Ph.D. (Psychology) and is Certified in Quality Engineering by the American Society for Quality. You can reach him at email@example.com and see his work on software testing at www.kaner.com and on software consumer protection at www.badsoftware.com.
Jack Falk consults on software quality management and software engineering management. He has served as Director of Quality Assurance and Engineering Services; has managed development of multimedia software products, software development operations, and engineering support services; and has managed testing of handheld OS software, software development kits (SDKs), entertainment, graphics, and financial application software. He has also managed distribution and operations (at the Group Manager or Director level) for several prominent retailers. Jack is Certified in Software Quality Engineering by the American Society of Quality. He holds an AA degree in Business from City College of San Francisco and has completed a wide range of additional business and software development courses. He is a level 2 Certified Equity Professional and a licensed Real Estate Appraiser. He is Vice Chair of the Santa Clara Valley Software Quality Association and an active participant in the Los Altos Workshops on Software Testing. You can reach Jack at firstname.lastname@example.org.
Hung Q. Nguyen is Founder, President, and CEO of softGear technology, a Silicon Valley company whose mission is to help software development organizations deliver the best possible quality products while juggling limited resources and schedule constraints. Founded in 1994, his company has successfully achieved its goals through the offering of several outsource test engineering and testing programs, software testing support products, and a series of practical software-testing training courses. Hung also develops training materials and teaches software testing to the public at universities as well as at numerous well-known domestic and international software companies. He has worked in the computer software and hardware industries, holding management positions in engineering, quality assurance, testing, product development, and information technology, as well as making significant contributions as a tester and programmer. Hung holds a Bachelor of Science in Quality Assurance from Cogswell Polytechnical College. He is an ASQ-Certified Quality Engineer and a senior member and San Francisco Section Certification Chairman of the American Society for Quality. Hung lives in Foster City, California with his wife, Heather, and his two children, Wendy and Denny. You can reach Hung at email@example.com or obtain more information about softGear technology and his work at www.softgeartech.com.