Measuring code coverage in end-to-end tests?
I've been hearing about ideas for measuring code coverage in end-to-end tests (e.g., Selenium tests), and I really don't see a reason for it, even if it is technically feasible.

There's a concept called the test pyramid, which says you should have the largest number of tests at the unit level, a medium number at the integration level, and the smallest number at the end-to-end (Selenium) level, because end-to-end tests are the hardest to maintain, the slowest to execute, and the least stable (since the SUT is not isolated). So there's little chance you will cover all the code with end-to-end tests. Also, there may be areas in the lower layers of the code that have not yet been exposed through, say, an API (e.g., because the feature will only become accessible in the next release) and can therefore be tested only with unit tests.

I think the idea of Selenium or end-to-end tests is to verify different business scenarios, so they are more about function/feature coverage.

Does measuring code coverage in end-to-end tests give any value in addition to code coverage for unit tests?
Bogopo:
I always like to get code coverage for my functional tests, but not because I want to hit a certain percentage of code coverage. I like it because:

- It points me to areas of the code that are not covered. Some areas of the code are very difficult to unit/integration test without having the entire system in place and doing end-to-end tests, so I like to compare the coverage from unit/integration tests with the coverage from my functional tests and see whether there are things that should or could be covered in the end-to-end tests that would be more difficult to cover in the earlier stages.
- I want to know which of my tests are equivalent, so I can look at what is covered by each test and see whether there are tests I can eliminate or consolidate to be more efficient.
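Both comparisons above boil down to set arithmetic on covered lines. Here is a minimal sketch in Python, assuming the coverage data has already been exported (e.g., from whatever coverage tool you use) as per-file sets of covered line numbers; the file names, test names, and data below are all hypothetical:

```python
# Hypothetical coverage data: file path -> set of covered line numbers.
unit_coverage = {"billing.py": {1, 2, 3, 10}, "auth.py": {1, 2}}
e2e_coverage = {"billing.py": {1, 2, 3, 10, 11, 12}, "auth.py": {1, 2, 3}}


def only_in_e2e(unit, e2e):
    """Lines reached by the end-to-end suite but not by the unit suite."""
    return {
        path: lines - unit.get(path, set())
        for path, lines in e2e.items()
        if lines - unit.get(path, set())
    }


def equivalent_tests(per_test):
    """Group tests whose covered-line sets are identical (candidates to consolidate)."""
    groups = {}
    for name, files in per_test.items():
        key = frozenset((path, line) for path, lines in files.items() for line in lines)
        groups.setdefault(key, []).append(name)
    return [sorted(names) for names in groups.values() if len(names) > 1]


# Code that only the E2E suite exercises:
print(only_in_e2e(unit_coverage, e2e_coverage))
# -> {'billing.py': {11, 12}, 'auth.py': {3}}

# Per-test coverage lets you spot redundant tests:
per_test = {
    "test_checkout": {"billing.py": {1, 2, 3}},
    "test_checkout_again": {"billing.py": {1, 2, 3}},
    "test_login": {"auth.py": {1, 2}},
}
print(equivalent_tests(per_test))
# -> [['test_checkout', 'test_checkout_again']]
```

In practice you would feed this from real coverage reports (most tools can merge or export run data), but the comparison logic itself stays this simple.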