Published on May 13, 2009
This post was previously on the Pathfinder Software site. Pathfinder Software changed its name to Orthogonal in 2016. Read more.
I recently got into a debate with a coworker about my requirement, on an internal project, that code coverage cannot fall below 100% without breaking the build. He made some very good points, but I'd like to spell out my thinking.
The project is an internal application we use for staffing, time entry, and billing. Since it is a production system used to bill our clients, I treat it as a very high priority. Unfortunately, because it's an internal project, I work on it when I can, and I get bench resources when they are free. As a result, many developers come onto and off of the project with no predictable pattern. I kept encountering this experience:
A bug is found. I test it locally, and sure enough, something's not right. I look at the code and see the problem, let's say in a helper. I determine the fix and know the three new tests that will assert the bug conditions. So I open the test for the helper, BUT THERE IS NO TEST! I swear in many languages. Then I run the test coverage. It's 100%. I swear in several other languages. I look at the functional test and see that it merely gets the page and asserts a few things unrelated to the helper code.
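To make that concrete, here's a minimal sketch of how this happens. The helper and test names are hypothetical, and I've stripped out Rails so it stands alone, but the mechanics are the same: the test exercises every line of the helper, so the coverage tool reports 100%, yet nothing checks what the helper actually returns.

```ruby
require "minitest/autorun"

# Hypothetical helper with a subtle bug: the currency argument is ignored.
module OrdersHelper
  def self.formatted_total(amount, currency = "USD")
    "$%.2f" % amount # bug: currency is never used
  end
end

class OrdersHelperTest < Minitest::Test
  # This test executes every line of the helper above, so a line-coverage
  # tool reports it as fully covered -- yet it never asserts what the
  # helper returns, and the currency bug sails straight through.
  def test_index_renders
    OrdersHelper.formatted_total(10.0, "EUR") # covered, but unverified
    assert true
  end
end
```

The coverage report and the test suite are both green here, and the bug ships anyway.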
So, as I am very upset, I add a 100% coverage requirement to the project. Why? Well, I am trying to force developers who come on and off the project to write tests first. I can't understand why any of my developers would ever touch a model or helper without writing their tests first. It's so easy, and it makes the code so much better. I will admit that when I write controller or view code, I generally write the code first and then the tests. Actually, I usually write a bunch of tests that assert the response and template, then code the view, and then come back and add a lot of assert_select calls to test the view, since it's harder to know in advance what to assert.
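For models and helpers, tests-first really is as easy as I'm claiming. A plain-Ruby sketch of the cycle, with a hypothetical Timesheet model from our billing domain: write the failing test, then write just enough code to pass it.

```ruby
require "minitest/autorun"

# Step 1: write the test first. It fails until Timesheet exists and behaves.
class TimesheetTest < Minitest::Test
  def test_billable_hours_excludes_internal_entries
    sheet = Timesheet.new([[8, :client], [2, :internal]])
    assert_equal 8, sheet.billable_hours
  end
end

# Step 2: write just enough model code to make the test pass.
class Timesheet
  def initialize(entries)
    @entries = entries # [hours, category] pairs
  end

  def billable_hours
    @entries.select { |_, category| category == :client }
            .sum { |hours, _| hours }
  end
end
```

The next developer who touches billable_hours inherits an executable statement of what it must do, which is exactly what I keep finding missing.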
So I guess I am pissed on many levels. Partly because code coverage can really give you a false sense of security: a functional test that merely gets the index of a given controller will mark most of the helper and controller code as covered, even if there are no assertions. There is no good way of measuring your code base against the question "how good are my tests?"
In any event, I think that every project should have a code coverage report, and it should be part of the build. I also think that the build should fail if the coverage falls below a certain percentage, and that number should be agreed upon by the team. For some projects, the technologies at play make it really hard to get to 100% coverage; sometimes the coverage tools themselves have bugs that report erroneous holes. Regardless, coverage should be reviewed as a team periodically (the kick-off of an iteration retrospective is a good place), and holes should be fixed as soon as they are uncovered.
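The build-breaking piece is a small amount of code. Here's a minimal sketch of the gate itself; where the covered/total line counts come from is tool-specific (rcov's report, in our case), so I'm only showing the failing check, with hypothetical names.

```ruby
# Team-agreed threshold; on my internal project this is 100.0.
REQUIRED_COVERAGE = 100.0

# Fails the build (by raising) when coverage drops below the threshold.
# covered_lines / total_lines would be read out of the coverage tool's report.
def check_coverage!(covered_lines, total_lines)
  percent = covered_lines * 100.0 / total_lines
  if percent < REQUIRED_COVERAGE
    raise "Coverage #{format('%.1f', percent)}% is below the required " \
          "#{REQUIRED_COVERAGE}% -- failing the build"
  end
  percent
end
```

Hooked into a rake task at the end of the build, an unexpected raise is all it takes to stop a low-coverage change from slipping through.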
Code coverage does not give a good indication of how good your tests are. Generally, if you have good developers, your tests will be excellent. Code coverage merely indicates which parts of your code have no automated test exercising them, and that you should address it.