Dynamic whitebox testing and coverage metrics for Java systems?
morde
What coverage measures/metrics are useful for unit testing and for integration testing of Java or other object-oriented systems?

Context: We would like to establish a better test process. Our software systems are (mostly) developed and maintained by external providers, including the implementation of change requests. At present, they do the development and (hopefully) some testing, and deliver the resulting software to us; we then do the system testing. We would now like to specify measures that the provider must fulfill before we accept the software and begin system testing.

There are some techniques, such as statement testing and decision/branch testing, for which tools exist (e.g. Cobertura, EclEmma) and which seem relatively easy to use. These could certainly be somewhat helpful. Unfortunately, they seem insufficient for object-oriented systems, since the complexity of object-oriented systems mostly stems from the relations between classes and objects, not from complex control flows. Required statement coverage (e.g. 100%) or branch coverage can be achieved relatively easily, and yet many quality problems could remain that these tests cannot identify.

Are there other techniques and corresponding coverage measures which are more adequate? If yes, can you recommend tools?

Update: After thinking about it, and without having found better measures/metrics for object-oriented systems, I am contemplating the following approach:

- Require statement coverage of 100%, perhaps with justified exceptions for some classes or (groups of) methods.
- Depending on the project, perhaps also require decision/branch coverage of 100% for all classes/methods not excluded in the previous item.
- In addition to these measures/metrics, and prior to development, perform design reviews to gain confidence in the approach the provider chose.
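To make the statement-vs-branch gap mentioned above concrete, here is a small self-contained Java sketch (the class, method, and amounts are hypothetical): a single test input can execute every statement of a method yet exercise only one of its two branch outcomes. A 100% statement-coverage requirement would not flag this, but a branch-coverage tool such as Cobertura or EclEmma would report the untaken branch.

```java
// Hypothetical example: 100% statement coverage without full branch coverage.
public class DiscountCalculator {

    // Orders of 100.00 or more (10000 cents) get a 10% discount.
    // Amounts are in cents to avoid floating-point rounding issues.
    public static long discountedCents(long amountCents) {
        long result = amountCents;
        if (amountCents >= 10000) {
            result = amountCents * 90 / 100;
        }
        return result; // the "no discount" outcome adds no statement of its own
    }

    public static void main(String[] args) {
        // This single call executes every statement above, so a statement
        // coverage tool reports 100% -- yet only the "true" outcome of the
        // if is taken; a defect on the untested "false" path would slip by.
        System.out.println(discountedCents(15000)); // prints 13500

        // Full branch coverage requires a second input below the threshold:
        System.out.println(discountedCents(5000));  // prints 5000
    }
}
```

This is also why 100% statement coverage is comparatively easy to reach: one well-chosen input per method often suffices, while branch (and especially object-interaction) defects remain untested.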
I have tested Java enterprise systems for about 10 years, both web and desktop clients. Each of the systems I tested had a different requirements model, but evaluating coverage metrics (as they apply to deciding whether the product is ready to ship) was always a major concern.

The most effective coverage evaluation in our experience was a two-pronged approach: thorough unit requirement testing (as in TDD) combined with exploratory testing of the user stories at the system or integration level. That requires heavy communication within the team when deriving the unit-level requirements, both initially and as the requirements "evolve".

Your description of the relationship with the development contractors is a bit concerning and is probably the weakest point of your process. I would suggest incorporating more frequent communication and interim deliveries of the component software from the development teams, so that you can evaluate the implementation at the system level as early as possible.

The combination of unit requirement coverage and exploratory evaluation of the user stories at the system level should give you the best coverage indication.
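As a minimal sketch of what "unit requirement testing" can look like in practice, the following example ties each assertion to one clause of a (hypothetical) requirement. The requirement, class, and method names are all invented for illustration; in a real project this would be a JUnit test derived from an agreed requirement document, so that a coverage report over the tests doubles as a rough requirement-coverage report.

```java
// Hypothetical requirement: "orders of 50.00 or more ship free,
// otherwise a flat fee of 4.99 applies" (amounts in cents).
public class ShippingFeeTest {

    // Unit under test (normally in its own production class).
    static long shippingFeeCents(long orderCents) {
        return orderCents >= 5000 ? 0 : 499;
    }

    public static void main(String[] args) {
        // One check per requirement clause, including the boundary.
        check(shippingFeeCents(5000) == 0, "boundary: exactly 50.00 ships free");
        check(shippingFeeCents(9999) == 0, "above threshold ships free");
        check(shippingFeeCents(4999) == 499, "below threshold pays flat fee");
    }

    static void check(boolean ok, String requirement) {
        if (!ok) throw new AssertionError("Requirement violated: " + requirement);
        System.out.println("OK: " + requirement);
    }
}
```

Tests written this way give the contractor an unambiguous acceptance target, and the exploratory system-level testing then concentrates on the interactions the unit tests cannot see.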