
Walk-Through Test Coverage

Adam Carnagey

This post was previously on the Pathfinder Software site. Pathfinder Software changed its name to Orthogonal in 2016.

This week, I wrote up a little Rake task to improve coverage reporting by doing what I’m calling “walk-through” coverage testing. To explain what I mean, let me give an example.

I’m adding a new feature to my Rails app. In the finest tradition of Test-Driven Development, I start with a test. Something like this:

should "correctly associate a new address with the current user" do
  login_as :john_q_public
  put :update, :address => {:street => "123 Sesame", :zip => 00001}
  assert_response :success
  assert_equal ("123 Sesame", users(:john_q_public).address.street)
  assert_equal ("NY", users(:john_q_public).address.state)
end

Let’s assume, for the sake of bloggy argument, that the last line of this test indicates that the model is supposed to be able to infer the state from the zip code. And so, in fulfilling my test, I wander off into the model, write some kind of city_state_from_zip method, and eventually my test passes. And since the model code is called from this test, my test coverage is 100%. Yay?
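To make the example concrete, here's roughly the kind of model method I mean. The sketch below is my own illustration, not code from the app: the lookup table is a stand-in for a real zip database, and only the state half of city_state_from_zip is shown.

class Address < ActiveRecord::Base
  belongs_to :user

  # Stand-in for a real zip-to-state lookup.
  STATE_BY_ZIP_PREFIX = {"100" => "NY", "606" => "IL"}

  before_save :city_state_from_zip

  # Fill in the state from the zip code when it wasn't supplied explicitly.
  def city_state_from_zip
    self.state ||= STATE_BY_ZIP_PREFIX[zip.to_s[0, 3]]
  end
end
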
The problem here is that I haven’t really tested the model; I’ve only tested the controller, and the controller sort of incidentally touches the model. I have two digressions about this before I get to the main point:

  1. Part of the issue here is that my goal is not to test for the correctness of the model in the controller; I’m just validating that the model method is invoked. If you do a lot of mock object testing, you’d set a mock expectation validating that city_state_from_zip is called with the correct values, but not testing the result. I don’t normally like doing all those mocks, but I certainly see the point here.
  2. This is a specific case of a general TDD issue, namely whether you should write tests against any lower-level methods that are created along the way to passing your initial test. In general, I think that’s only necessary when a) the lower-level methods are in a different class or b) the lower-level method has a lot of complexity on its own, in which case it’ll probably need its own tests for full coverage. Both issues probably apply here; a sketch of that kind of direct test follows this list.
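
A direct test of the model method might look something like this (shoulda-style, to match the controller test above; what the method actually sets is my assumption, carried over from the Address sketch earlier):

should "infer the state from the zip code" do
  address = Address.new(:street => "123 Sesame", :zip => "10001")
  address.city_state_from_zip
  assert_equal "NY", address.state
end

Run from a unit test against the Address model, this is the test that the walk-through coverage run described below would actually count toward the model's numbers.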

The main point for today is the coverage report that incorrectly tells you that everything is just fine when, in fact, you haven’t tested the model at all. In the past, I’ve run two coverage passes: one with just the unit tests against just the models, and a second with all tests against all code. That’s an improvement, but it can still overestimate useful coverage (for example, when two models depend on each other).

So I went back to the drawing board and wrote a Rake task that you can download as a Rails plugin. The plugin does a “coverage walk-through”: it goes through the app/models, app/controllers, and app/helpers directories. For each file, it looks for the associated test file and runs a coverage test for that one test file against only that one application file. It puts each of the resulting reports in RAILS_ROOT/walk_through, and also parses them to create an overall results file in RAILS_ROOT/walk_through/walk_through.html.
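
The gist of the task is a loop like the one sketched here. This is a reconstruction for illustration, not the plugin's actual source; in particular, the rcov flags are approximate, so check rcov --help before relying on them.

# lib/tasks/walk_through.rake -- illustrative sketch only
require 'fileutils'

namespace :test do
  desc "Run each test file against only its matching application file"
  task :walk_through do
    output_root = File.join(RAILS_ROOT, "walk_through")
    FileUtils.mkdir_p(output_root)

    Dir["app/{models,controllers,helpers}/**/*.rb"].each do |app_file|
      base = File.basename(app_file, ".rb")
      test_file = Dir["test/**/#{base}_test.rb"].first
      next unless test_file  # no test file (common for helpers): skip

      out_dir = File.join(output_root, base)
      # Exclude app/ as a whole, then re-include just this one file, so
      # coverage picked up incidentally elsewhere doesn't count. The plugin's
      # exact rcov invocation may well differ.
      system("rcov -o #{out_dir} -x app --include-file #{app_file} #{test_file}")
      # If an rcov run crashes, system just returns false and the walk continues.
    end
  end
end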

This should give a more accurate reading of the actual state of your coverage. You can still get a bad reading if you write bad tests, but at least incidental coverage won’t get included in your final numbers.

This is still a little rough, but I wanted people to try it out. A few release notes:

  • The task uses Hpricot to create the aggregated result file (a parsing sketch follows this list).
  • The task doesn’t handle RSpec yet, though that would be a helpful extension.
  • For each application file, the task expects a test file named “#{filename}_test.rb”; it doesn’t care what directory that test file lives in as long as it’s under /test. If there’s no test file (common for helpers), or if Rcov crashes (depressingly common), no walk_through file is generated and the app file just doesn’t show up in the result file. That’s obviously not ideal. (Even if one Rcov invocation crashes, though, the rest of the files are still tested.)
  • The result file is absurdly minimal and ugly.
  • The task runs multiple Rcov instances, so it’s kinda slow.
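
For what it's worth, the aggregation step boils down to something like this. Hpricot's API calls are real, but the "tt.coverage_total" selector is a placeholder for wherever rcov's report keeps its total, and the output page is about as bare as the note above suggests.

require 'hpricot'

summary = {}
Dir[File.join(RAILS_ROOT, "walk_through", "*", "index.html")].each do |report|
  doc = Hpricot(File.read(report))
  # Placeholder selector: rcov's HTML may name the total-coverage cell differently.
  total = (doc / "tt.coverage_total").first
  summary[File.basename(File.dirname(report))] = total ? total.inner_text : "n/a"
end

File.open(File.join(RAILS_ROOT, "walk_through", "walk_through.html"), "w") do |f|
  f.puts "<html><body><table>"
  summary.sort.each { |name, pct| f.puts "<tr><td>#{name}</td><td>#{pct}</td></tr>" }
  f.puts "</table></body></html>"
end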

That’s my walk-through coverage test. Hope you find it useful.
