
Incidental Code Coverage: TDD as a Solution

Some of my previous work experience involved the validation and verification (testing) of aerospace software. One of the rules for this testing was that test cases needed to structurally cover a large portion of the software under test, but this coverage was not allowed to be incidental code coverage. Incidental coverage occurs when a test case causes execution of specific lines but does not validate the inputs and outputs affected by those lines. Code coverage is usually used to determine which portions of a project are currently tested, but incidental coverage is a false positive: it causes the coverage tool to tell us that lines are tested when they may not be. For example, 100% of our code may be executed by our test suite, but some of that code may live in a routine that has no tests of its own exercising a variety of inputs. If you’d like to read more on the concept of incidental coverage, start by reading over Jason Rudolph’s discussion of incidental coverage; it has lots of good examples in Ruby.
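To make the idea concrete, here is a minimal sketch in C (the function names, the clamping behavior, and the use of plain assert in place of a unit-test framework are all illustrative assumptions, not code from any real project):

```c
#include <assert.h>

/* Hypothetical helper: clamp a raw sensor reading into [0, 100]. */
int clamp_percent(int raw)
{
    if (raw < 0)
        return 0;    /* executed by the assertion below...        */
    if (raw > 100)
        return 100;  /* ...and so is this line...                 */
    return raw;      /* ...and this one -- yet none is verified.  */
}

/* Hypothetical caller; the only function the test suite targets. */
int average_percent(int a, int b, int c)
{
    return (clamp_percent(a) + clamp_percent(b) + clamp_percent(c)) / 3;
}
```

A test suite containing only `assert(average_percent(-5, 250, 50) == 50);` drives every line of clamp_percent (the inputs hit the negative branch, the overflow branch, and the pass-through), so a coverage tool reports clamp_percent as 100% covered. But no test ever checks clamp_percent's own outputs, so a bug such as clamping to the wrong bound could slip through. That is incidental coverage.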

When I started using TDD practices in embedded software development, I was pleasantly surprised to discover that a lot of the incidental coverage problems vanish when a fine-grained TDD discipline is used. Since then, I’ve tried to justify each new decision point in my software with a failing test. This means that each new if, else, for, while, or switch case gets its own failing test before it’s written. This approach clearly demonstrates that a decision point has proper, non-incidental coverage from at least one test. (If you’re new to TDD, don’t forget that you may need multiple tests to fully exercise the new decision point.)
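A small sketch of that rhythm, in C (the motor-speed module, its names, and the use of plain assert rather than Unity macros are hypothetical, shown here in source order to emphasize test-first):

```c
#include <assert.h>
#include <stdbool.h>

bool set_speed(int requested);   /* module under development */

int motor_speed = 0;             /* hypothetical module state */

/* Written FIRST. It fails until the new `if` below exists, because
   set_speed() would otherwise accept the negative request. */
void test_set_speed_rejects_negative_values(void)
{
    assert(set_speed(-10) == false);
    assert(motor_speed == 0);    /* state must be left unchanged */
}

/* Written SECOND. The `if` is added only to make the failing test
   pass, so its coverage cannot be incidental: a test exists that
   checks exactly what that branch does. */
bool set_speed(int requested)
{
    if (requested < 0)
        return false;
    motor_speed = requested;
    return true;
}
```

Note that this one test only justifies the new branch; fully exercising set_speed still calls for at least one more test on the accepting path.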

Another concept that has helped reduce the occurrence of incidental coverage in my code is the use of mocking with CMock. Mocking allows us to test different modules in isolation. This writeup on CMock from Throw The Switch explains the practice of mocking in C rather well:

For each test, one (or more rarely multiple) real module is linked with mock versions of any other modules that it interacts with. These are tested against a test file. Then, another module is tested, bringing in the mocks of files that IT interacts with, and using the real version of it. This continues until all real modules have been tested, and very likely the mock versions of many of them have been used to help.
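In CMock itself the generated mocks expose expectation calls (roughly of the form sensor_read_celsius_ExpectAndReturn(55)), but the underlying idea can be sketched with a hand-rolled mock in plain C. Everything below is hypothetical: the sensor interface, the threshold, and the module under test are invented for illustration.

```c
#include <assert.h>

/* Interface of a hypothetical dependency (normally declared in
   sensor.h and implemented by the real hardware driver). */
int sensor_read_celsius(void);

/* Hand-rolled stand-in for what CMock would generate: it returns a
   canned value and records that it was called, letting the test
   control and observe the interaction. */
int mock_sensor_value;
int mock_sensor_calls;

int sensor_read_celsius(void)
{
    mock_sensor_calls++;
    return mock_sensor_value;
}

/* Hypothetical module under test. In the test build it links against
   the mock above rather than the real driver, so its logic is
   exercised in isolation -- any coverage of this function comes from
   tests aimed directly at it, not from some distant caller. */
int fan_should_run(void)
{
    return sensor_read_celsius() > 40;   /* hypothetical threshold */
}
```

A test can then set mock_sensor_value to 55 and check that fan_should_run() returns true (and that the sensor was consulted exactly once), then set it to 20 and check for false, all without any hardware present.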

If you’ve been fortunate enough to use TDD practices in embedded systems but still find incidental coverage in your test suite, then I’d suggest adopting a more fine-grained approach and introducing tools like CMock to your tool chain. If your embedded organization is struggling to maintain software quality and is not using TDD practices and tools, perhaps introducing them could help.

At AtomicEmbedded we use Bullseye Coverage, a commercial tool, for C/C++ code coverage analysis and Unity for unit testing C. Ceedling keeps these tools and CMock company in our embedded TDD toolbox.

I’d be interested to hear what others think about this. What other methods and tools can be used to avoid incidental code coverage in a test suite?