How to Evaluate Code Coverage

Given a code coverage percentage, how do you say whether it's “adequate” or not?

I've heard people throw around numbers like 70% or 80% as the minimum needed to ensure test quality, and therefore product quality. Some will even insist on 90% or more.

These people are looking at code coverage all wrong.

First things first. I've used and studied code coverage for a long time. I literally got my PhD working on code coverage. So please believe me when I say that NO amount of code coverage will guarantee that your system is well-tested.

I've seen executive-sponsored programs that force teams to drop everything and push for some arbitrary code coverage number. Those initiatives are horribly misguided. The product teams know it, and they'll game the system by writing the easiest tests rather than the most valuable ones.

As a manager or executive, the best you'll get from code coverage is the knowledge that your engineers are at least thinking about testing at some level.

So how can we quantify that?

Instead of reporting the actual code coverage percentage, report on its velocity.

Say you establish some baseline code coverage number. Maybe it's 40%. That's ok. Just make sure that, going forward, any changes pushed out by engineers maintain at least that number. With this approach, your code coverage will probably rise over time. Or it may not, and that's fine too, provided you don't let it drop.

With such a strategy, you're ensuring that new tests are focused on the parts of the system that are actually changing and therefore at the highest risk of regression. Your team will be happier and your product quality will indeed improve.
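This kind of coverage ratchet is easy to wire into CI. Here's a minimal sketch in Python; the baseline file name, its JSON shape, and the `check_coverage` helper are all illustrative assumptions, not any particular tool's format:

```python
# Coverage-ratchet sketch: fail the build if coverage drops below the
# recorded baseline, and raise the baseline whenever coverage improves.
# All names here (file, JSON keys) are hypothetical.
import json
from pathlib import Path

BASELINE_FILE = Path("coverage_baseline.json")  # assumed location

def check_coverage(current: float) -> bool:
    """Return True if this build passes the coverage ratchet."""
    baseline = 0.0
    if BASELINE_FILE.exists():
        baseline = json.loads(BASELINE_FILE.read_text())["coverage"]
    if current < baseline:
        print(f"FAIL: coverage {current:.1f}% is below baseline {baseline:.1f}%")
        return False
    # Ratchet up: record the new high-water mark so future builds must keep it.
    BASELINE_FILE.write_text(json.dumps({"coverage": current}))
    print(f"OK: coverage {current:.1f}% (baseline is now {current:.1f}%)")
    return True

if __name__ == "__main__":
    # Example: a build reporting 40% coverage establishes the baseline.
    check_coverage(40.0)
```

Many coverage tools offer a built-in threshold flag (for example, coverage.py's `--fail-under`); the point of the sketch is that the threshold should come from your own history, not an arbitrary target.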

As you probably guessed, I enjoy talking code coverage and helping people use it better. Reach out if you want to continue the discussion.
