Measure the right coverage

Arjan Molenaar

I’ve found that many people care about high unit test coverage. It tells you something about how well your code is tested. Or does it?

Unit tests typically test the smallest pieces of code. Writing them in conjunction with the production code is an excellent strategy: the tests help you shape the interfaces and explore the problem domain.

The big question is: does the business/product owner care? What do those tests tell them about the actual functionality delivered? Fairly little, really, if anything at all. Which leads to the next question: why care about unit test coverage at all?

The obvious thing to do here is to measure the coverage of your acceptance tests[1]. Acceptance tests define a requirement from a business point of view, i.e. they harness the real requirement for the product.
Agile software development is about creating business value. The acceptance tests represent the business value created.

A coverage figure for those tests does tell you something. It’s the proof of the pudding. If the figure is fairly low, there are areas of the application not covered by any acceptance test. Does this mean there are no requirements related to that code? Or is it left uncovered for a reason?

Do I advocate abandoning unit tests altogether? Most definitely not. As I stated at the beginning of this post, unit tests make an excellent vehicle for designing your code. Just do not focus on unit test coverage: use unit tests to construct and test the more difficult parts of the code. After the code is constructed and the acceptance tests turn green, figure out which unit tests have value (e.g. for future development or maintenance) and get rid of the rest.

You’ll need to maintain your unit tests, just like you do with the “production” code. Figure out which tests are important and which are not. Tests that are close to the code and structure are possibly not the best tests to keep around. They’ll need to be changed with every code modification or refactoring.

Just keep in mind we’re creating business functionality (value) here. Care about that. Measure that. Leave the details for what they are and measure what your business cares about: functionality.

Acceptance tests are your stable entry point in making sure modifications do not break the system.
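To make this concrete, here is a minimal sketch of what an acceptance-style test can look like when executed through an ordinary unit test framework. Python and a pytest-style test function are my assumptions here; the domain code and all names are hypothetical, since the post names no specific application:

```python
# Hypothetical domain code; any application would do.
def order_total(item_prices, loyalty_customer=False):
    """Total an order, applying a 10% loyalty discount."""
    total = sum(item_prices)
    if loyalty_customer:
        total *= 0.9
    return round(total, 2)


# Acceptance-style test: phrased as a business requirement, not as a
# statement about code structure, yet runnable by any test runner.
def test_loyal_customers_get_ten_percent_off_their_order():
    # Given a loyalty customer with a 100.00 order
    # When the order is totalled
    # Then they pay 90.00
    assert order_total([60.0, 40.0], loyalty_customer=True) == 90.0
```

The test name and its given/when/then comments read as a requirement, so a stakeholder can recognise the business rule without wading through technical detail.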


  1. Note that the acceptance tests might be written and executed using a unit test framework; still, their readability for the stakeholders should not be cluttered by technical details.  ↩

Comments (10)

  1. Remon Sinnema

    November 30, 2012 at 12:27 am

    I strongly disagree.

    The testing pyramid tells us there should only be enough acceptance tests to make sure our units integrate well, since they are slow. The majority of the testing should be done at the unit level. It makes no sense to measure coverage of something that is *meant to be* incomplete.

    I elaborate on measuring code coverage on my blog:
    http://securesoftwaredev.com/2012/10/15/on-measuring-code-coverage/

    • Arjan Molenaar

      November 30, 2012 at 1:40 pm

      Hi Remon,

      I've read your post (nice post, btw), but it presents a different vision from what I have encountered.

      I think you're mixing up two things here: fast feedback and accurate coverage. Unit tests are meant for fast feedback, and you should definitely use them. Coverage comes next. I like to validate whether the code matches the requirements. Dead code should be removed, since it causes maintenance issues.

      Coverage, security scans and quality metrics are in the same group of metrics and should be handled as such.
      Acceptance tests are slower than unit tests. That's not a problem for coverage. I'm not interested in my code coverage number when I'm coding. I want to know if I'm doing the right thing given the feature that needs to be implemented.

      There is a nice paradox here, since acceptance tests are (or at least should be) in line with business requirements, whereas unit tests are written from a technical (code) standpoint. What do you want to achieve with your coverage number? Does a high coverage number on unit tests guarantee me a flawless product?
      No. The acceptance tests do. Having an incomplete acceptance test suite sounds like a bad idea to me. Where are you getting your requirements from in the first place?

      I'm not saying you should forget about unit tests. They are a great aid to validate a unit, taking into account the assumptions the programmer made (e.g. in his mock objects). You should definitely have a bunch of them around. Probably measure coverage to verify you're testing the right stuff. However, it's not as important as many people think: you want to deliver functionality, not just code. Right?

      In my opinion you want to tie the code as closely as possible to the features it implements. You'll need to verify that.

  2. Remon Sinnema

    November 30, 2012 at 4:51 pm

    "Dead code should be removed, since it causes maintenance issues."

    I'm not going to argue with the maintenance issues. But why would you first write dead code and then remove it? Why not write only the code you need? That's actually possible with BDD:
    http://securesoftwaredev.com/2012/07/02/behavior-driven-development-bdd-with-jbehave-gradle-and-jenkins/

    However, you should not test everything with slow acceptance tests. Yes, every story needs at least a happy path BDD test, so that the customer/PO can see progress and remain confident that we're on the right track. But there is no point in testing all the edge, error, and abuse cases in BDD tests as well. These things are both easier tested in unit tests, and quicker to run. So again, if the acceptance tests are incomplete on purpose, why would you measure their coverage?

    "What do you want to achieve with your coverage number?"

    The goal is to show where we're missing tests. Note that in a perfect BDD/TDD world, we wouldn't even need to measure code coverage at all, since it would always be indistinguishable from 100% (the exceptions being programming language hurdles like private constructors and catches for checked exceptions in Java).

    • Arjan Molenaar

      December 1, 2012 at 10:02 am

      BDD tests are not unit tests, although they are often executed through unit test frameworks.

      BDD tests live more on the level of component tests. BDD tests describe behaviour on a functional level, like acceptance tests do. Using BDD over acceptance tests might actually be preferred, depending on the context.

  3. Wouter Scherphof

    December 1, 2012 at 12:51 am

    Agree. Unit tests have no business value, while they do come at a business cost. In Lean terms they are waste and should not be created. Thing is, developers feel they need them: when applying textbook OO, the business code is scattered across so many tiny methods that they can't see the forest for the trees. And when doing textbook polymorphism, they can't even tell which class's code will actually execute at run time. Because of this mess, developers feel the need for unit tests. But those tests only add to the pile of code that's created, even more so when also testing mock objects, which is considered best practice, while that just fights a symptom instead of fixing the problem.
    Acceptance tests should not just cover the happy day scenario. The happy day scenario should be one product backlog item. Each deviation from the happy day scenario should also be one product backlog item. This is how you clearly specify requirements and how you clearly prioritise functionality. Every product backlog item should have an acceptance test. Of course.

    • Remon Sinnema

      December 1, 2012 at 6:04 pm

      "Unit tests have no business value, while they do come at a business cost. In Lean terms they are waste and should not be created."

      I agree that the customer doesn't pay for unit tests, and therefore they can be considered waste. That is true for all types of tests, however, not just unit tests.

      But the customer *does* pay for *working* software. So how do we know that the software we're about to ship to the customer works as intended? There are basically two approaches: formal proofs and tests. Formal proofs are not practical for most purposes, so we're stuck with tests.

      So, yes, tests are waste, but they are a *necessary* waste: part of the best way we currently know to do the job, even though they have no customer value in themselves. You should not get rid of it, but always be on the lookout for a better way to do things.

      So the question then becomes, what kinds of tests are most useful and least wasteful to give us confidence that the software we write actually works? Martin Fowler has a great writeup on that subject:
      http://martinfowler.com/bliki/TestPyramid.html

      • Arjan Molenaar

        December 1, 2012 at 8:17 pm

        Unit tests do have value: in the software creation process. You should re-evaluate the tests once the code has been written: which are useful to keep around and which aren't? The big question remains: what does a coverage figure mean in this context?

  4. Nico

    December 1, 2012 at 2:32 pm

    Good post. I often have to maintain codebases with an extraordinary number of unit tests, very granular, and the following statement certainly rings true:
    => " Tests that are close to the code and structure are possibly not the best tests to keep around. They’ll need to be changed with every code modification or refactoring."

    Often the only thing a test does is mock all objects that interact within a method, and then just verify that all interactions were called. Those tests do not provide any value, and constantly break when making minor changes.
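    A minimal sketch of the kind of test described above (Python with the standard library's unittest.mock; the service and its collaborators are hypothetical names, not from any codebase mentioned here):

    ```python
    from unittest.mock import Mock

    # Hypothetical production code: a service that delegates to two collaborators.
    class OrderService:
        def __init__(self, repository, notifier):
            self.repository = repository
            self.notifier = notifier

        def place(self, order):
            self.repository.save(order)
            self.notifier.send(order)

    # The test mocks every collaborator and only verifies the interactions.
    # It passes today, but breaks as soon as place() is refactored (e.g. to
    # batch saves or notify asynchronously), even when the observable
    # behaviour is unchanged -- it pins the implementation, not a requirement.
    def test_place_calls_save_and_send():
        repository, notifier = Mock(), Mock()
        service = OrderService(repository, notifier)
        service.place("order-42")
        repository.save.assert_called_once_with("order-42")
        notifier.send.assert_called_once_with("order-42")
    ```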

    • Remon Sinnema

      December 1, 2012 at 6:15 pm

      It takes skill to write good unit tests. That does not mean that the concept holds no value, however. If that were the case, then we shouldn't write any software at all anymore, since I've seen my share of bad code ;-)

      Instead of getting rid of unit tests, observe what sucks about your current suite, and keep making improvements until the pain goes away and you actually start seeing benefits.

      And practice! Too many developers only write code/tests at their day job. You need to practice in a setting where there's nothing at stake, so you can fail safely and learn from your failures. Take a cue from the sports and music worlds:
      http://projects.ict.usc.edu/itw/gel/EricssonDeliberatePracticePR93.pdf

      BTW, the Global Day of Coderetreat is an excellent opportunity to learn with other developers:
      http://coderetreat.org/events/event/search?q=netherlands

  5. Wouter Scherphof

    December 2, 2012 at 5:30 pm

    Unit tests don't contribute to the accrual of business value, only to the amount of code created to deliver it. And code comes at a cost: both the direct cost of writing it and the recurring cost of maintaining it. If a unit test fails, what does that tell you? What part of the business is impacted by that failure? You just can't tell. If you can do without unit tests, don't create them.
    Acceptance tests are directly related to business requirements. If an acceptance test fails, we know exactly what the business impact is. Without acceptance tests, we can't tell whether requirements are met. Acceptance tests have real business value. Our customers pay for them. Willingly.
