Writing Solid Code

by Steve Maguire


Overview

For intermediate to advanced professional C programmers who develop software, here is a focused and practical book about writing bug-free programs in C. It includes practical techniques for detecting mistakes before they become costly problems.

Product Details

ISBN-13: 9781556155512
Publisher: Microsoft Press
Publication date: 01/01/1993
Series: Code Series
Pages: 256
Product dimensions: 7.37(w) x 9.07(h) x 0.85(d)

Read an Excerpt


Chapter 8: The Rest Is Attitude

Throughout this book, I've talked about techniques you can use to detect and to prevent bugs. Using these techniques won't guarantee that you'll write bug-free code any more than having a team of skillful ball players will guarantee that you'll have a winning team. The other necessary ingredient is a set of good habits and attitudes.

Would you expect those ball players to have a winning season if they grumbled all day about having to practice? What if they were constantly angry because their salary was a meager $1.2 million per year or were always worried about being traded or cut? These concerns have nothing to do with playing ball, but they have everything to do with how well the players perform.

You can use all of the suggestions in this book to help eliminate bugs, but if you have "buggy" attitudes or coding habits that cause bugs, you're going to have a tough time writing bug-free code.

In this chapter, I'll talk about some of the most common barriers to writing bug-free code. All are easily correctable; often all you need to do is become aware of them.

For My Next Trick, Disappearing Bugs

How many times have you asked somebody about a bug they were fixing and heard in response, "Oh, that bug went away"? I said that once, many years ago, to my very first manager. He asked me if I'd managed to track down a bug in the Apple II database product we were wrapping up, and I said, "Oh, that bug went away." The manager paused for a moment and then asked me to follow him into his office, where we both sat down.

"Steve, what do you mean when you say 'the bug went away'?"

"Well, you know, I went through the steps in the bug report, and the bug didn't show up."

My manager leaned back in his chair. "So what do you suppose happened to that bug?"

"I don't know," I said. "I guess it already got fixed."

"But you don't know that, do you?"

"No, I guess I don't," I admitted.

"Well don't you think you had better find out what really happened? After all, you're working with a computer; bugs don't fix themselves."

That manager went on to explain the three reasons bugs disappear: The bug report was wrong, the bug has been fixed by another programmer, or the bug still exists but isn't apparent. His final words on the subject were to remind me that, as a professional programmer, it was my job to determine which of the three cases described my disappearing bug and to act accordingly. In no case was I to simply ignore the bug because it had disappeared.

That advice was valuable in the days of CP/M and Apple IIs when I first heard it, it was valuable in the decades before that, and it's still valuable today. I didn't realize how valuable the advice was until I became a project lead myself and found that it was common for programmers to happily assume that the testers were wrong or that somebody had already fixed the bug in question.

Bugs will often disappear simply because you and the tester are using different versions of the program. If a bug doesn't show up in the code you're using, dig up the version the tester was using. If the bug still doesn't show up, notify the testing team. If the bug does show up, track it down in those earlier sources, decide how to fix it, and then look at the current sources to see why the bug disappeared. Very often, the bug still exists but surrounding changes have hidden it. You need to understand why the bug disappeared so that you can take appropriate steps to correct it.
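
To see how surrounding changes can hide a bug, consider a hypothetical sketch. Suppose a record's name field was enlarged from 4 to 8 characters between the tester's build and the current sources; an unchecked strcpy that used to trash the adjacent field no longer shows any symptom, yet the bug is still there:

    /* Hypothetical example: a latent bug that "goes away" when nearby
       code changes.  NAME_LEN is assumed to have been 4 in the older
       build the tester was using. */
    #include <stdio.h>
    #include <string.h>

    #define NAME_LEN 8

    typedef struct {
        char szName[NAME_LEN];   /* enlarging this field hid the overflow below  */
        int  id;                 /* with the old layout, the overflow trashed id */
    } record;

    static void SetName(record *pr, const char *szNew)
    {
        strcpy(pr->szName, szNew);   /* BUG: no length check; still overflows for
                                        long names, it just no longer clobbers
                                        pr->id */
    }

    int main(void)
    {
        record r = { "", 42 };
        SetName(&r, "Maguire");            /* 7 chars + '\0' fits in 8, not in 4 */
        printf("%s %d\n", r.szName, r.id); /* the old build printed a garbage id */
        return 0;
    }

The bug report against the old build is accurate, the current build passes the reported steps, and the unchecked copy is still waiting for a longer name.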

Bugs don't just "go away."


Too Much Effort?

Programmers sometimes grumble when I ask them to drag out older sources to look for a reported bug; it seems like a waste of time. If it seems that way to you, consider that you're not reverting to earlier sources on a whim. You're looking at those sources because there is an excellent chance that there is a bug, and looking at those older sources is the most efficient way to track it down.

Suppose you isolate the bug in those earlier sources and find that the bug has indeed been fixed in the current sources. Have you wasted your time? Hardly. After all, which is better, closing the bug as "fixed" or labeling it as "nonreproducible" and sending it back to the testing group? What will the testers do then? They certainly can't assume that the bug has been fixed; their only two options are to spend additional time trying to reproduce the bug or to leave it marked as nonreproducible and hope that it was fixed. Both options are a lot worse than tracking down the bug in earlier sources and closing the bug as "fixed."


A Fix in Time Saves Nine

When I first joined the Microsoft Excel group, the practice was to postpone all bug-fixes to the end of the project. It's not that the group had a cast-iron scroll staked to a wall that read, "Thou shalt not fix bugs until all features have been implemented," but there was always pressure to keep to the schedule and knock out features. At the same time, there was very little pressure to fix bugs. I was once told, "Unless a bug crashes the system or holds up the testing group, don't worry about fixing it. We'll have plenty of time to fix bugs later, after we complete the scheduled features." In short, fixing bugs was not a high priority.

I'm sure that sounds backwards to current Microsoft programmers because projects aren't run that way anymore; there were too many problems with that approach, and the worst was that it was impossible to predict when you would finish the product. How do you estimate the time it takes to fix 1742 bugs? And of course, there aren't just 1742 bugs to fix; programmers will introduce new bugs as they fix old ones. And, closely related, fixing one bug can expose other latent bugs that the testing group was unable to find because the first bug was getting in the way.

And those weren't the only problems.

By finishing the features before fixing the bugs, the developers made the product look like it was much further along than it actually was. Important people in the company would use the internal releases, see that they worked except for the occasional bug, and wonder why it was taking Development six months to finish a nearly final product. They wouldn't see out-of-memory bugs or the bugs in features they never tried. They just knew that the code was "feature complete" and that it basically appeared to work.

Fixing bugs for months on end didn't do much for morale either. Programmers like to program, not to fix bugs, but at the end of every project they would spend months doing nothing but fixing bugs, often under much pressure because it was obvious to everybody outside Development that the product was nearly finished. Why couldn't it be ready in time for COMDEX, MacWorld Expo, or the local computer club meeting?

What a mess.

Then a run of buggy products, starting with Macintosh Excel 1.03 and ending with the cancellation of an unannounced Windows product because of a runaway bug list, forced Microsoft to take a hard look at the way it developed products. The findings were not too surprising:

  • You don't save time by fixing bugs late in the product cycle. In fact, you lose time because it's often harder to fix bugs in code you wrote a year ago than in code you wrote days ago.

  • Fixing bugs "as you go" provides damage control because the earlier you learn of your mistakes, the less likely you are to repeat those mistakes.

  • Bugs are a form of negative feedback that keep fast but sloppy programmers in check. If you don't allow programmers to work on new features until they have fixed all their bugs, you prevent sloppy programmers from spreading half-implemented features throughout the product; they're too busy fixing bugs. If you allow programmers to ignore their bugs, you lose that regulation.

  • By keeping the bug count near zero, you have a much easier time predicting when you'll finish the product. Instead of trying to guess how long it will take to finish 32 features and 1742 bug-fixes, you just have to guess how long it will take to finish the 32 features. Even better, you're often in a position to drop the unfinished features and ship what you have.

None of these points is unique to Microsoft development; they are general points that apply to any software development. If you are not already fixing bugs as you find them, let Microsoft's negative experience be a lesson to you. You can learn through your own hard experience, or you can learn from the costly mistakes of others.

Don't fix bugs later; fix them now.
...
