Code coverage and testing

Recently I mentioned that we were in the process of adding additional tests to our code base. We’d been using JITT to reduce the number of tests that we wrote and now it was time to fill in some of the gaps. This week I started to use some code coverage tools to see how well we were doing with our tests…

My client has both Compuware DevPartner Studio, which includes a coverage tool called True Coverage, and Rational Pure Coverage. I used both this week and found that the Pure Coverage product worked better for me.

Both of these products instrument the code under test, collect data whilst the program is running and then produce reports that show which pieces of code were executed. The filtering available in Pure Coverage made it the more useful product for me during testing: my tests were focused on a particular library and I was only really interested in the coverage numbers for that library. Pure Coverage allowed me to hide all of the dependent libraries, test code and mock code from the reports.

The coverage data was useful; it clearly highlighted the code where we’d only written the first test, and it also showed us where we had failed to add tests that exercised the various failure paths. Many of the classes with otherwise pretty complete coverage had exceptional code paths untouched; pass an incorrect key to a factory object and it throws an exception, and you need to remember to test for that. Often the untouched error situations were hard to reach; validation of input at one level in the code made it impossible for duplicated validation logic to fail lower down, but sometimes that’s hard to see…
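
As an illustration, here’s a minimal sketch of the kind of failure-path test the coverage reports kept reminding us to write. The factory and its key names are hypothetical, not code from our code base; the point is simply that the throwing path needs a test of its own.

    // A hypothetical factory that throws on an unknown key, plus the
    // failure-path test that coverage showed was missing.
    #include <iostream>
    #include <memory>
    #include <stdexcept>
    #include <string>

    struct Widget { virtual ~Widget() = default; };
    struct Button : Widget {};
    struct Slider : Widget {};

    std::unique_ptr<Widget> CreateWidget(const std::string &key)
    {
       if (key == "button") return std::make_unique<Button>();
       if (key == "slider") return std::make_unique<Slider>();

       // The exceptional path that the first tests never executed.
       throw std::invalid_argument("unknown widget key: " + key);
    }

    // The obvious "first test" - create a widget we know about.
    bool TestCreateKnownWidget()
    {
       return CreateWidget("button") != nullptr;
    }

    // The easily forgotten test - an incorrect key must throw.
    bool TestCreateUnknownWidgetThrows()
    {
       try
       {
          CreateWidget("bogus");
          return false;            // no exception thrown; the test fails
       }
       catch (const std::invalid_argument &)
       {
          return true;             // the expected exception; the test passes
       }
    }

    int main()
    {
       std::cout << "known key:   " << (TestCreateKnownWidget() ? "pass" : "FAIL") << "\n";
       std::cout << "unknown key: " << (TestCreateUnknownWidgetThrows() ? "pass" : "FAIL") << "\n";
    }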

The coverage numbers that you gather during testing don’t mean a great deal on their own; they tell you that certain code paths have been executed, nothing more. They don’t in any way represent the quality of your testing, just that the code has been run. Ideally you get 100% coverage from the tests you create whilst thinking of ways to test the code; running the coverage tool just tells you that your work is done.

Our coverage data indicated where we needed to add a few extra tests and helped us to refactor away some unreachable code. It also helped to keep us writing new tests by reminding us of places that hadn’t been tested. However, you need to be careful with coverage tools. As Brian Marick states in “How to Misuse Code Coverage”, these tools can be, and are, used incorrectly. It’s easy to be lulled into the idea that you can let the coverage tool drive your testing when, in fact, you should let your thinking drive your testing and simply use the coverage tool as a quick verification mechanism.

What we did was use the coverage reports to highlight two situations: 1) functions with no coverage, i.e. code that wasn’t being tested at all, and 2) functions with between 95% and 100% coverage, i.e. code that probably had a few missed error paths. Run the report, pick a few things to fix, work for the rest of the day and then run the report again; a sketch of that kind of filtering is shown below. Using the coverage tool in this way seemed to work well for us; it added focus and provided verification, but we didn’t become slaves to it.
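
For illustration, here’s a rough sketch of that filtering, assuming the coverage tool can export per-function figures as simple “function,percent” text; the format and the program name are invented for the example and aren’t Pure Coverage’s actual export format. It just buckets functions into the two situations described above.

    // Flags the two situations we cared about in a coverage report:
    // functions with no coverage at all, and functions in the 95-100% band
    // that probably have a few untested error paths.
    // (Assumes a hypothetical "function,percent" CSV export.)
    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>

    int main(int argc, char *argv[])
    {
       if (argc != 2)
       {
          std::cerr << "usage: flagcoverage <report.csv>\n";
          return 1;
       }

       std::ifstream report(argv[1]);
       std::string line;

       while (std::getline(report, line))
       {
          std::istringstream fields(line);
          std::string function;
          double percent = 0.0;

          if (!std::getline(fields, function, ',') || !(fields >> percent))
          {
             continue;                     // skip malformed lines
          }

          if (percent == 0.0)
          {
             std::cout << "UNTESTED " << function << "\n";
          }
          else if (percent >= 95.0 && percent < 100.0)
          {
             std::cout << "NEARLY   " << function << " (" << percent << "%)\n";
          }
       }

       return 0;
    }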