Complete code coverage
Yesterday I said that code coverage percentages are overrated. But there’s something in the concept of “code coverage” that gets at something valuable.
Often when asked what percentage of code coverage I aim for, I respond with “Complete code coverage.” Which is not the same as 100%. If I were forced to define my concept of complete code coverage, it would probably go something like this:
Every meaningful input condition is tested, under every meaningful state.
This is a bit of a weaselly definition, though, because it uses the term “meaningful” (twice, no less!), which is subjective. But I guess that’s the point.
What might be more illustrative is the process by which I write my tests, which I consider to provide “complete coverage.”
In a word, I do TDD. That is, I write my tests as I’m writing my code, and I reason about my code as I go along. I also usually write my failure tests before my “happy path” tests, which really helps to ensure that every meaningful input is tested, under every meaningful state.
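To make the ordering concrete, here’s a minimal sketch of what “failure tests first, happy path last” can look like. The `parse_age` function and every name in it are invented for illustration; nothing here comes from any real codebase.

```python
def parse_age(value):
    """Parse a user-supplied age string into an int.
    Hypothetical example function, invented for this sketch."""
    if not isinstance(value, str):
        raise TypeError("expected a string")
    stripped = value.strip()
    if not stripped.isdigit():
        raise ValueError(f"not a whole number: {value!r}")
    age = int(stripped)
    if age > 150:
        raise ValueError(f"implausible age: {age}")
    return age

def expect_raises(exc_type, fn, *args):
    """Tiny helper: assert that fn(*args) raises exc_type."""
    try:
        fn(*args)
    except exc_type:
        return
    raise AssertionError(f"expected {exc_type.__name__}")

# Failure tests come first: each meaningful bad input gets its own case.
expect_raises(TypeError, parse_age, 42)       # wrong type entirely
expect_raises(ValueError, parse_age, "abc")   # not a number
expect_raises(ValueError, parse_age, "-1")    # negative, fails isdigit()
expect_raises(ValueError, parse_age, "200")   # syntactically fine, semantically not

# Happy path last.
assert parse_age(" 42 ") == 42
```

Writing the four failure cases first forces you to enumerate the meaningful input conditions before you congratulate yourself on the one input that works.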
When I discover a new bug in existing software, I add a new (failing) test, then I fix the bug. Then I spend some more time reasoning about the code: if this failure case got through the existing test scenarios, there’s likely some other meaningful input, or meaningful state, that wasn’t considered. But I don’t go crazy. And I don’t trust some automated tool to tell me whether my tests are meaningful.
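That bug-to-regression-test loop can be sketched in a few lines. The `average` function and the empty-list bug are made up for this example; the point is only the shape of the workflow: capture the failure as a test, then make the test pass.

```python
def average(numbers):
    """Mean of a sequence of numbers.
    Hypothetical: imagine an earlier version that divided by len(numbers)
    unconditionally and crashed on an empty list in production."""
    if not numbers:  # the fix: the empty input turned out to be a meaningful condition
        raise ValueError("cannot average an empty sequence")
    return sum(numbers) / len(numbers)

def test_average_of_empty_list():
    # The regression test, written (and failing) before the fix went in.
    try:
        average([])
    except ValueError:
        return
    raise AssertionError("empty list should raise ValueError")

def test_average_happy_path():
    assert average([2, 4, 6]) == 4

test_average_of_empty_list()
test_average_happy_path()
```

Once the empty list is captured as a test, the next question is the one the post asks: what other meaningful states slipped past the original tests (a list of one element? non-numeric entries?), and which of those are worth a case of their own.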