How to find gaps in regression test coverage



  • I've been tasked with identifying gaps in the regression test coverage of a large C#-based data management system comprising a SQL database, a server application, multiple web clients, multiple Windows Forms clients, and a SOAP-based web service API.

    A large number of manual black-box regression tests have evolved over the system's 15+ years, but there are gaps in the coverage. We are in the process of automating these test cases with Selenium- and White-based test frameworks.

    I wonder if anyone can suggest a good analysis technique to identify the gaps in coverage.

    I'm also keen not to duplicate existing white-box integration test coverage of business logic when we come to automate the ~150 manual regression test scripts.



  • Two basic indicators of coverage are:

    1. Code Coverage: The analysis of which parts of the code were exercised by tests. In C#, there are many tools, such as dotCover, OpenCover, and NCover. Check out a description of them here.

    2. Mutation Testing: The analysis of possible changes to the code (deliberately introduced bugs) that do not trigger a failure in any test. In C#, as far as I know, the only mature tool is VisualMutator.
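
    To make the second idea concrete, here is a hand-rolled illustration of what a mutation tool such as VisualMutator does automatically: flip an operator in the production code and check whether any test fails. The class, method, and test names are invented for this sketch, and NUnit is assumed as the test framework.

    ```csharp
    using NUnit.Framework;

    // Original production code.
    public static class DiscountCalculator
    {
        public static decimal Apply(decimal price, decimal percent) =>
            price - (price * percent / 100m);
    }

    // What a mutation tool would produce: the subtraction flipped to addition.
    public static class DiscountCalculatorMutant
    {
        public static decimal Apply(decimal price, decimal percent) =>
            price + (price * percent / 100m);
    }

    [TestFixture]
    public class DiscountTests
    {
        // A precise assertion: run against the mutant it fails, so the mutant is "killed".
        [Test]
        public void TenPercentOffHundredIsNinety() =>
            Assert.AreEqual(90m, DiscountCalculator.Apply(100m, 10m));

        // A weak assertion: it still passes against the mutant, so the mutant
        // survives — which is exactly the kind of coverage gap you are looking for.
        [Test]
        public void ResultIsPositive() =>
            Assert.Greater(DiscountCalculator.Apply(100m, 10m), 0m);
    }
    ```

    Every surviving mutant points at behaviour that no regression test actually checks.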

    For the "manual" test cases, you can look at the options for instrumentation for your tech stack, in particular the options for tracing.
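
    A minimal sketch of that idea using System.Diagnostics tracing is below. The trace source name, log file, and the CustomerService/MergeCustomers entry point are hypothetical; the point is simply to log which entry points a manual run touches so the log can be compared with the full list afterwards.

    ```csharp
    using System.Diagnostics;

    public static class CoverageTrace
    {
        // Hypothetical trace source; a coverage tool or log aggregator could
        // listen to it instead of the plain text file used here.
        private static readonly TraceSource Source =
            new TraceSource("RegressionCoverage", SourceLevels.Information);

        static CoverageTrace()
        {
            Source.Listeners.Add(new TextWriterTraceListener("manual-run-trace.log"));
        }

        public static void Hit(string entryPoint)
        {
            Source.TraceEvent(TraceEventType.Information, 0, entryPoint);
            Source.Flush(); // keep the log current while the manual script runs
        }
    }

    public class CustomerService
    {
        public void MergeCustomers(int sourceId, int targetId)
        {
            // Record that this entry point was exercised during the manual run.
            CoverageTrace.Hit(nameof(MergeCustomers));
            // ... existing business logic ...
        }
    }
    ```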

    Alternatively, you can create a Requirements Traceability Matrix and use it as a reference for brainstorming risks in the uncovered areas.
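
    If the matrix is kept as data (even a spreadsheet export), the gap report itself is only a few lines of LINQ. The record shapes, requirement IDs, and test IDs below are invented for illustration.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    public record Requirement(string Id, string Description);
    public record TestCase(string Id, string[] RequirementIds);

    public static class TraceabilityReport
    {
        public static void Main()
        {
            // Hypothetical extracts from the requirements list and the ~150 manual scripts.
            var requirements = new List<Requirement>
            {
                new("REQ-001", "Customer merge keeps audit history"),
                new("REQ-002", "SOAP API rejects malformed requests"),
                new("REQ-003", "WinForms client honours field-level permissions"),
            };

            var tests = new List<TestCase>
            {
                new("RT-017", new[] { "REQ-001" }),
                new("RT-042", new[] { "REQ-001", "REQ-002" }),
            };

            var covered = tests.SelectMany(t => t.RequirementIds).ToHashSet();

            // Requirements with no test pointing at them are the coverage gaps.
            foreach (var gap in requirements.Where(r => !covered.Contains(r.Id)))
                Console.WriteLine($"No regression test covers {gap.Id}: {gap.Description}");
        }
    }
    ```

    The same cross-reference also highlights duplication: requirements already covered by the white-box integration tests need not be re-covered when the manual scripts are automated.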



