This post was previously on the Pathfinder Software site. Pathfinder Software changed its name to Orthogonal in 2016.
All software has bugs. I don’t care if you’re Apple, Microsoft, IBM, or a smaller, leaner ISV. Your software has bugs in it. Once you accept this fact, that into each software product a little crap must fall, it becomes clear that what differentiates one software development organization from another is how they manage those bugs. What do you do to prevent them in the first place, find them, fix them, measure them, celebrate their squashing?
Most of the developers at Pathfinder Development are old hands. We’ve been around the block a few times. We’ve worked on quite a number of software projects over the years. We’ve all gravitated toward agile development not because it’s the latest buzzword, or because it feels so good (though it does when it’s done right), but because everything else feels so bad.
Take testing, for example. Testing girds you, the product manager, for the moment of truth: deploying or shipping a new release. In monolithic or waterfall processes, we would often wait until the end, after all of the development was done, to “QA” the application. Often you’d throw in some performance testing to see if this monstrosity you’d just built would actually handle the load you were going to throw at it.
After weeks or months of testing, fixing, hoping, praying and integrating, you’d finally deploy. Odds are you’d ship with some pretty serious bugs. The quality of the software was poor, and the expense and effort of the developers fixing those bugs was akin to a contractor applying spackle to drywall that was more hole than whole.
Let me go through some of the testing we do today, what the previous alternative was, and what the benefit is for your product.
Continuous Integration Rocks
Every time you check in some code, the entire system is built, tested, deployed to a development environment, and then tested some more. This is the heartbeat, the core of the Agile feedback loop. The alternative was building part of the application in your local environment, occasionally checking out code from other developers, and doing a painful build and integration at deployment time that could take weeks or months to work out the kinks. You would not discover basic compile-time problems for days or even weeks.
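The check-in feedback loop above can be sketched as a short script. This is only an illustration, not a real CI server: the stage names and `make` targets are hypothetical placeholders, and you would wire in whatever build, test, and deploy commands your project actually uses.

```python
import subprocess

# Hypothetical pipeline stages; the `make` targets are placeholders
# standing in for your project's real build/test/deploy commands.
STAGES = [
    ("build", ["make", "build"]),
    ("unit tests", ["make", "test"]),
    ("deploy to dev", ["make", "deploy-dev"]),
    ("integration tests", ["make", "integration-test"]),
]

def run_pipeline(runner=subprocess.run):
    """Run each stage in order, stopping at the first failure so a
    broken check-in is reported in minutes rather than weeks."""
    for name, command in STAGES:
        if runner(command).returncode != 0:
            return f"FAILED: {name}"
    return "passed"
```

The key design point is the early exit: the loop stops at the first failing stage, so the developer who just checked in gets a precise, immediate answer about what broke.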
For the product manager, the equation is simple: bugs are more expensive to fix the farther they get from the time of original development. Automate your tests and you find things out right away.
TDD and Unit Testing Rocks
We work in OO languages for the most part, so unit tests are a natural fit for the classes we develop. We practice test-driven development, writing the tests first, since we know that writing tests afterward is like making tea with a used tea bag. We strive for 100% line and branch coverage. Every time we identify a bug, we write a test for it.
Agile testing is cumulative: each iteration produces automated tests that protect your code from unintended side effects in the next iteration. Since most software has more development effort expended on it after release than before, the cumulative tests act as an early warning system against those side effects. I think of all the times a change broke a test, and therefore the build, during an iteration. In the old way of doing things, each of those was a bug that no one would have noticed and that likely would never have been detected or fixed.
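Both practices can be shown in a few lines. In this sketch, `parse_price` is an invented example function (not from the post): its first test was written before the implementation, and the second test was added when a bug surfaced, so it now guards every future build.

```python
# Hypothetical example function; in TDD the tests below would be
# written first, then this implementation added to make them pass.
def parse_price(text):
    """Parse a price string like "$1,234.50" into a float."""
    return float(text.replace("$", "").replace(",", ""))

# Written first, before any implementation existed (red, then green).
def test_simple_price():
    assert parse_price("$19.99") == 19.99

# Added later, when a bug report showed that thousands separators
# broke parsing; it now protects every future iteration from a
# regression on this exact case.
def test_thousands_separator():
    assert parse_price("$1,234.50") == 1234.50

test_simple_price()
test_thousands_separator()
```

Because the bug's test stays in the suite forever, a change that reintroduces the bug breaks the build immediately instead of slipping silently into a release.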
For the product manager, automated, cumulative tests mean you can quickly and confidently modify your product to meet changing market demands without having to worry about a brittle and buggy code base.
Releasing Working Code Every 2 Weeks Rocks
In the old days, we would wait until close to the end of the project or phase before bringing all of the code together in a working, useful system. That pretty much made testing something you could only do at the end. Now we release a system every 2 weeks that is useful in the business sense and can be tested and critiqued. Can this be done? Well, if you are designing an air traffic control system and you need all of the features at once (you want both the “take off safely” and “land safely” user stories at the same time), then Agile may not be for you. But for the other 99.9% of software applications, releasing every 2 weeks is very much doable.
For the product manager, this means you can test and measure whether your application performs to spec from the beginning and as the application grows and evolves. You can do usability testing throughout. You can do security testing throughout. Knowing whether the application so far meets the spec lets you predict, with much greater precision, when development will be done as you approach the end.
Releasing Working Code Every 2 Weeks Rocks Again
Traceability. How I love that word. In the old days, keeping track of what you had tested was a major headache. On a project with 12 developers and several months of development, you sometimes lost track of whether something had been tested, even with spreadsheets and project plans. Today we automate, and we never bite off more than we can chew in 2 weeks. That means the tester and the developer work together, or even pair, to ensure that our little collection of features has all been tested.
For the product manager, this means less headache when verifying, for compliance or audit purposes, that everything the team said it tested has in fact been tested. More peace of mind. Fewer ulcers.
Is Sliced Bread Really so Wonderful?
For those practicing Agile already, the benefits listed above may seem rather short (there are many other benefits beyond testing) and mundane. After all, this is all stuff you take for granted. But for those product managers still wallowing in big, monolithic processes or halfheartedly adopting aspects of Agile: jump in, the water’s fine. You’ll notice when the pain stops and then, hopefully like childbirth, you’ll forget all about it.
I’ve only scratched the surface here on Agile testing. To my readers: any notes or observations on testing you’d like to share?