In an earlier post about the benefits of Agile and Scrum, I made the statement that bugs are, by their nature, not the same as normal features, and I want to take a moment to make my point a little clearer. Bugs and estimation have been a hot topic with us lately, and interestingly, since we are all working on different projects, we each have a slightly different take on the subject.
My definition of a bug is: a feature that was specified, and that you attempted to deliver, but that is not working according to your intentions. (i.e., “I thought it WAS saving to the database”)
Not a bug: A feature or variation that you hadn’t intended to create in the first place. (“Oh, I didn’t know it was supposed to do that when you clicked the back button”)
And with that understanding, I say: “Bugs can’t be estimated.”
The reason I make the distinction that way is because I think there are a lot of things that should be considered ‘Missed Requirements’ or ‘New Requirements’ that often get classified as bugs, when they are not. Both of those things should be as easy to estimate as any other feature work. I try to keep them separate from bugs because they are not bugs in software, but rather ‘bugs’ in communication. Perhaps the requirements criteria weren’t clear, the acceptance test cases were insufficient, or the developer just skimmed past a clearly defined and well-stated requirement. Either way, these things can and should be estimated, and the estimates and actuals should be accounted for in your normal velocity calculation.
True bugs, however, are things that aren’t working the way you expected, you aren’t sure why, and you most likely aren’t sure how long they will take to fix. When I’ve seen people give estimates for bug fixes in the past, the estimate is either a complete guess that turns out drastically over or under, or, more commonly, they spend some untracked amount of time figuring out what the issue is and then produce their ‘estimate’. I contend that it’s the ‘figuring out what the issue is’ that needs to be tracked and accounted for, and it is the un-estimatable component that can throw your iteration off.
I try to make this point in such black and white terms to defend against some common anti-patterns I see.
1. Missed requirements get called bugs, and the team doesn’t measure the time it actually takes to resolve them or adjust their velocity and relative point scale accordingly. (Imagine someone estimates it will take 2 days to complete feature X, and feature Y looks pretty similar, so they estimate 2 days for it as well. After 2 days feature X is ‘done’, but then there are several missed requirements and ‘bugs’ that take an additional 3 days to fix. It’s critical to make sure that the estimate for feature Y is updated accordingly)
2. New Requirements get called bugs. Most of the time this is just an honest misunderstanding, though it can also be a more insidious attempt to cover up a communication problem. Either way, when they are called bugs, they are often not treated in the same manner as other requirements (UI design, analysis, acceptance test definition, etc.).
One of my worst experiences with this was on a project where both the BA and QA resources had significant domain expertise, but disagreed on some implementation details. The QA person was unable to make their case during the requirements review, so they just waited until testing to say that there were several bugs, when in fact these bugs were variations on functionality that was never originally specified. On that project, only certain people could define and approve new functionality, but any bugs opened by the QA team were assumed to be the top priority and didn’t need to be reviewed or approved. (An extreme example, I know, but it points out the importance of making sure that you treat bugs as bugs, and requirements as requirements.)
3. Planning to take care of ‘all outstanding bugs’ in the ‘bug-fix iteration’ as the final iteration at the end of the release. Here’s where I’m really saying “you can’t estimate bugs”, and that it might be very dangerous to set the expectation that you can fix all of the issues in that iteration. What if it takes twice as long? What if fixing the bug causes more bugs?
You can’t be so sure of these things, which is why I say you should plan for them differently.
1. Requirements disguised as bugs should be put in the backlog and prioritized and estimated as normal features (and figure out why they were missed)
2. Allocate a ‘bucket’ of time within each iteration to tackle bugs.
3. The stakeholders prioritize and define the expected behavior for bugs
4. The burndown chart for the bug fixing reflects the amount of time remaining in the bucket, and the number of bugs left
5. When the time is all used up, the product owner has to decide how best to utilize resources: either pull someone off a feature to continue working on the bug, or hold off on the bug until the next iteration.
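The bucket mechanics above can be sketched in a few lines of code. This is a minimal illustration, not a real tracking tool; the class name, the hour allocation, and the bug IDs are all made up for the example.

```python
from dataclasses import dataclass, field

@dataclass
class BugBucket:
    """A fixed 'bucket' of time reserved for bug fixing within one iteration."""
    hours_allocated: float
    open_bugs: list = field(default_factory=list)
    hours_spent: float = 0.0

    @property
    def hours_remaining(self) -> float:
        return max(self.hours_allocated - self.hours_spent, 0.0)

    def log_work(self, bug: str, hours: float) -> None:
        # Track time against the bucket, whether or not the bug gets fixed.
        self.hours_spent += hours

    def resolve(self, bug: str) -> None:
        self.open_bugs.remove(bug)

    def burndown(self) -> dict:
        """Snapshot for the chart: time left in the bucket, and bugs left."""
        return {"hours_remaining": self.hours_remaining,
                "bugs_remaining": len(self.open_bugs)}

bucket = BugBucket(hours_allocated=16,
                   open_bugs=["BUG-101", "BUG-102", "BUG-103"])
bucket.log_work("BUG-101", 5)
bucket.resolve("BUG-101")
bucket.log_work("BUG-102", 11)   # bucket now exhausted, BUG-102 still open

# With zero hours left and bugs still open, the product owner has a
# decision to make: pull someone off a feature, or defer to next iteration.
decision_needed = bucket.hours_remaining == 0 and len(bucket.open_bugs) > 0
```

The point of tracking both numbers is that either one can run out first: you may clear the bugs with time to spare, or hit the bottom of the bucket with bugs left over, and only the second case forces a prioritization decision.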
Now it might seem odd to stop working on a bug, but the reality is that you have a fixed amount of time each iteration, and you have to use it as effectively as possible. The goal is to have fully testable, and hopefully releasable, software at the end of the iteration, so you can’t leave a feature half-done; one way or the other you have to find a way to get it done. When you don’t track the amount of time spent on bugs, you can end up shortchanging your other features, which in turn can lead to more bugs.
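The feature X / feature Y scenario from the first anti-pattern shows what tracking that time buys you. A simple way to fold actuals (including time lost to missed-requirement ‘bugs’) back into future estimates is to scale by the ratio of actual to estimated time on comparable completed work. The function and the numbers here are a hypothetical sketch, not a prescribed formula:

```python
def adjusted_estimate(new_estimate: float, history: list) -> float:
    """Scale a naive estimate by the actual/estimated ratio observed on
    comparable completed work, with bug-fix time included in the actuals.

    history: list of (estimated_days, actual_days) pairs.
    """
    estimated = sum(e for e, _ in history)
    actual = sum(a for _, a in history)
    return new_estimate * (actual / estimated)

# Feature X: estimated 2 days, but reaching 'done' plus fixing the
# missed requirements afterward took 5 days in total.
history = [(2.0, 5.0)]

# Feature Y looks similar, so the naive estimate is also 2 days:
feature_y = adjusted_estimate(2.0, history)   # 5.0 days once actuals count
```

If the bug-fix time had gone untracked, feature Y would have been planned at 2 days and the iteration would have been overcommitted from the start.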
Do you track time spent on bugs? (do you relate it back to the original estimate for that story?)
How do you estimate and plan for how long it will take to fix bugs?
(slightly unrelated) The recent client project I was on had ‘Urgent’ bugs that went back more than 90 days. Who is in charge of cleaning up bugs on your team? (are there any bugs more than 30 days old, and why?)
Bonus question: What would it take for your team to have a ‘zero bug’ policy? If at the end of every iteration there were no outstanding bugs?