Good tests obviate debugging

Good tests make debugging unnecessary.

This should not be a controversial statement. Yet it seems it is. If you don’t believe me, just paste it on your favorite social media platform and grab some popcorn.

There are many ways to think about the quality of a test, but the characteristic most relevant to this discussion is that a good test is independent, or isolated (the I in FIRST).

This is often thought of from the point of view of the tests themselves: we don’t want test 2 to depend on the result of test 1. We see this principle violated whenever we have a string of, say, CRUD tests: 1. Create a user. 2. Fetch that user. 3. Delete that user.
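
Here’s a sketch, in Go, of what that kind of chained test can look like. The in-memory map and the hard-coded values are just stand-ins for whatever persistence layer and fixtures you’d really use:

```go
package user_test

import "testing"

// An in-memory map stands in for whatever persistent store the real code
// would use; what matters here is the chain of dependencies, not the storage.
var store = map[string]string{}

// userID is shared package state, so each test depends on the one before it.
var userID string

func TestCreateUser(t *testing.T) {
	userID = "user-1"
	store[userID] = "alice@example.com"
	if store[userID] == "" {
		t.Fatal("create failed")
	}
}

func TestFetchUser(t *testing.T) {
	// Fails if TestCreateUser didn't run first, or failed.
	if _, ok := store[userID]; !ok {
		t.Fatal("user not found")
	}
}

func TestDeleteUser(t *testing.T) {
	// Fails if either earlier test failed, was skipped, or was reordered.
	if _, ok := store[userID]; !ok {
		t.Fatal("user not found")
	}
	delete(store, userID)
}
```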

Much better are three independent tests that can be run in any order. Of course, this implies a bit of duplication: the “Create a user” test must also delete that user (assuming it’s using a persistent data store), and the “Delete a user” test needs to create one somehow before exercising the deletion.
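
Here’s a sketch of the isolated version, again with an in-memory map standing in for the real store, and a hypothetical createTestUser helper that owns both the setup and the cleanup:

```go
package user_test

import "testing"

var db = map[string]string{}

// createTestUser is a hypothetical helper: every test creates its own user,
// and t.Cleanup guarantees it is deleted again, whatever the test's outcome.
func createTestUser(t *testing.T) string {
	t.Helper()
	id := t.Name() // unique per test; good enough for a sketch
	db[id] = "alice@example.com"
	t.Cleanup(func() { delete(db, id) })
	return id
}

func TestCreateUser(t *testing.T) {
	id := createTestUser(t)
	if _, ok := db[id]; !ok {
		t.Fatal("user was not created")
	}
}

func TestFetchUser(t *testing.T) {
	id := createTestUser(t) // its own setup; no dependency on TestCreateUser
	if _, ok := db[id]; !ok {
		t.Fatal("user not found")
	}
}

func TestDeleteUser(t *testing.T) {
	id := createTestUser(t)
	delete(db, id)
	if _, ok := db[id]; ok {
		t.Fatal("user still exists after delete")
	}
}
```

Notice that most of the duplication collapses into that one helper, and the cleanup runs whether the test passes or fails.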

And this has far-reaching consequences. For one thing, it allows us to run the tests in parallel, or out of order, or to run only a subset of them.
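
To make that concrete in Go terms: a test that owns its own data can safely declare itself parallel. A sketch, reusing the hypothetical createTestUser helper above, and assuming the backing store is safe for concurrent use (a real database would be; the plain map in the sketch is not):

```go
func TestDeleteUser(t *testing.T) {
	t.Parallel() // safe, because this test owns every piece of data it touches

	id := createTestUser(t)
	delete(db, id)
	if _, ok := db[id]; ok {
		t.Fatal("user still exists after delete")
	}
}
```

And running a subset, say with `go test -run TestDeleteUser`, only makes sense when that test doesn’t depend on its siblings having run first.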

But here’s the interesting bit: If we write our tests like this, we also never need a debugger.

With the all-in-one test, a failure in “Delete a user” could be caused by the creation, the fetching, or the deletion operation. How do we know which? We debug.

With isolated tests (and an assumption of complete test coverage, of course), if the Delete test fails, but the Create and Fetch tests succeed, we know that the Delete method is the only one with a problem.

“Okay,” I can hear someone saying. “So I know the error is in the Delete method, but that doesn’t mean I don’t need a debugger for that Delete method!”

Well, okay. If your Delete method is particularly complex, maybe you need a debugger. Or maybe you can refactor your Delete method to be less complex. Or at least to have more granular tests.

Whenever I find myself reaching for the debugger (or the gratuitous printf), I take it as a sign that my code needs better tests, and possibly a better architecture. And this isn’t just some academic exercise in ridding the world of interactive debuggers. It’s a practical observation that my code, as written, is difficult to understand and test.

Code that’s easy to understand and test never needs debuggers. This should not be controversial. It’s effectively a tautology.
