This post was previously on the Pathfinder Software site. Pathfinder Software changed its name to Orthogonal in 2016.
I’m adding a new feature to my Rails app. In the finest tradition of Test-Driven Development, I start with a test. Something like this:
should "correctly associate a new address with the current user" do
  login_as :john_q_public
  put :update, :address => {:street => "123 Sesame", :zip => "00001"}
  assert_response :success
  assert_equal("123 Sesame", users(:john_q_public).address.street)
  assert_equal("NY", users(:john_q_public).address.state)
end
Let’s assume, for the sake of bloggy argument, that the last line of this test indicates that the model is supposed to be able to infer the state from the zip code. And so, in fulfilling my test, I wander off into the model, write some kind of city_state_from_zip method, and eventually my test passes. And since the model code is called from this test, my test coverage is 100%. Yay?
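To make this concrete, here's a minimal sketch of what a city_state_from_zip method in the model might look like. The class shape and the zip-prefix lookup table are invented for illustration; the real method would presumably consult a proper zip database.

```ruby
# A hypothetical sketch of the model method the test drives out.
# The prefix table below is made up for illustration only.
class Address
  STATE_BY_ZIP_PREFIX = {
    "100" => "NY",
    "606" => "IL",
    "941" => "CA"
  }.freeze

  attr_reader :street, :zip, :state

  def initialize(street, zip)
    @street = street
    @zip    = zip
    @state  = city_state_from_zip(zip)
  end

  # Infer the state from the first three digits of the zip code.
  def city_state_from_zip(zip)
    STATE_BY_ZIP_PREFIX[zip[0, 3]]
  end
end
```

The point for what follows isn't the lookup itself, but where this logic lives: in the model, even though only the controller test exercises it.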
The problem here is that I haven’t really tested the model, I’ve only tested the controller, and the controller sort of incidentally touches the model. I have a digression about this before I get to the main point: a mock-object purist would say the controller test should only verify that city_state_from_zip is called with the correct values, without testing the result. I don’t normally like doing all those mocks, but I certainly see the point here.

The main point for today is the coverage test that incorrectly tells you that everything is just fine when, in fact, you haven’t tested the model at all. In the past, I’ve run two coverage tests: one with just the unit tests against just the models, and a second with all tests against all code. That’s an improvement, but it can still overestimate useful coverage (for example, in a case where two models depend on each other).
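For the mock-style digression above, here's a dependency-free sketch of the idea: record the call and assert it happened with the right arguments, deliberately not testing the return value. In a real Rails test of this era you'd more likely reach for a mocking library such as Mocha; the hand-rolled recorder here is just to show the shape.

```ruby
# A stand-in model that records calls instead of doing real work.
# This is the mock-style contract: "the controller called
# city_state_from_zip with the right zip" -- nothing more.
class RecordingModel
  attr_reader :calls

  def initialize
    @calls = []
  end

  def city_state_from_zip(zip)
    @calls << [:city_state_from_zip, zip]
    nil # the result is deliberately not under test here
  end
end
```

The controller test would then assert on `calls` rather than on the computed state, which keeps the model's correctness out of the controller test entirely.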
So I went back to the drawing board and wrote a rake task that you can download as a Rails plugin. The plugin does a “coverage walk through” — it goes through the app/controllers, app/helpers, and app/views directories. For each file, it looks for the associated test file and runs a coverage test for that one test file against only that one application file. It puts each of the resulting files in RAILS_ROOT/walk_through, and also parses them to create an overall results file in RAILS_ROOT/walk_through/walk_through.html.
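The pairing step the plugin performs can be sketched roughly as follows. The directory layout and test-file naming convention here are assumptions, and the actual coverage run (rcov, in that era's Rails) is left as a comment rather than guessed at:

```ruby
# A simplified sketch of the walk-through's pairing logic: for each
# application file, find its matching test file, so each pair can get
# its own isolated coverage run. Naming conventions are assumptions.
APP_DIRS = %w[app/controllers app/helpers app/views]

def test_file_for(app_file)
  base = File.basename(app_file, ".rb")
  %w[test/unit test/functional]
    .map  { |dir| File.join(dir, "#{base}_test.rb") }
    .find { |candidate| File.exist?(candidate) }
end

def walk_through_pairs
  pairs = []
  APP_DIRS.each do |dir|
    Dir.glob(File.join(dir, "**", "*.rb")).each do |app_file|
      test_file = test_file_for(app_file)
      pairs << [app_file, test_file] if test_file
    end
  end
  # Each pair would then get its own coverage run, with the results
  # written into RAILS_ROOT/walk_through as described above.
  pairs
end
```

Running one coverage pass per pair is what keeps incidental coverage out: a model line executed by a controller test simply never appears in that controller's isolated report.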
This should give a more accurate reading of the actual state of your coverage. You can still get a bad reading if you write bad tests, but at least incidental coverage won’t get included in your final numbers.
This is still a little rough, but I wanted people to try it out. A couple of release notes.
That’s my walk-through coverage test. Hope you find it useful.