Equal or different test data for higher test levels
It's just a small issue, but one that regularly pops into my mind when writing test scripts.
Imagine you have an application that calculates car tax: a REST service with a method for calculating the tax, and an HTML interface with a screen to enter the required input and display the tax to pay. The HTML interface makes use of the REST service and does nothing more than some input validation, passing the values to the REST service, and displaying the tax to pay.
Let's assume you have developed several service tests for the REST service, e.g. using SoapUI: some with invalid input (weight = 0 kg or price = €0) and one with valid input (weight = 900 kg, price = €25,000).
The HTML interface is tested for usability, browser compatibility and other relevant quality attributes, which do not apply to the service tests. The system test, e.g. using Protractor, contains some test cases concerning input validation, like an empty field for the weight of the car. You also want to verify that the tax calculation displays the correct amount to pay for valid input. Would you use the same test data as in the service test, because the expected results are already available, or would you deliberately choose different test data in order to implicitly test the business logic more thoroughly?
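To make the test data concrete, here is a minimal sketch in Python. The tax rule in `calculate_tax` (5% of the price plus a weight surcharge above 1000 kg) is entirely assumed for illustration; only the test values themselves (weight = 0, price = €0, and the valid 900 kg / €25,000 case) come from the scenario above.

```python
# Hypothetical tax rule, assumed for illustration only; the real
# calculation is not given in the scenario.
def calculate_tax(weight_kg: int, price_eur: int) -> float:
    if weight_kg <= 0 or price_eur <= 0:
        raise ValueError("weight and price must be positive")
    surcharge = max(0, weight_kg - 1000) * 0.10  # heavy cars pay extra
    return price_eur * 0.05 + surcharge

# The invalid partitions from the service tests should be rejected.
for weight, price in [(0, 25000), (900, 0)]:
    try:
        calculate_tax(weight, price)
        raise AssertionError("expected invalid input to be rejected")
    except ValueError:
        pass  # rejected as expected

# The valid case: under the assumed 5% rule, €25,000 yields €1,250.
assert calculate_tax(900, 25000) == 1250.0
```

The question in the text then becomes: does the system test reuse (900, 25000), for which the expected result is already known, or pick a fresh pair such as (1200, 30000) to exercise a different partition?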
Let's assume two situations:
You know the unit tests concerning the tax calculation have a coverage of about 80% and you know the developers are aware of techniques like equivalence partitioning. A decent base for the test pyramid.
The unit tests concerning the tax calculation have a coverage of just 40%, and there is no time or budget to write more unit tests and raise the coverage; unfortunately, a common situation.
In the latter situation, you could, of course, extend the service tests to increase coverage at a higher level, but even then you have to decide whether you want to deliberately use different input values for the system test.
Personally, as the "service test" is the end-to-end test and the expensive one, I'd just put in some simple values that I know are good. As this is a single test with fixed parameters, I'd just pick some numbers. I'd likely pick different numbers than in the lower-level tests if those had fixed values, just for the slight increase in coverage. (Though hopefully the lower-level tests could use some property-based testing, so these exact values would matter much less.)
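A hand-rolled sketch of what such property-based checks at a lower level might look like; a real project would use a library like Hypothesis (Python) or fast-check (JavaScript). The tax rule and the properties checked here are assumptions for illustration, not part of the original scenario.

```python
import random

# Hypothetical tax rule, assumed for illustration only.
def calculate_tax(weight_kg: int, price_eur: int) -> float:
    if weight_kg <= 0 or price_eur <= 0:
        raise ValueError("weight and price must be positive")
    return price_eur * 0.05 + max(0, weight_kg - 1000) * 0.10

random.seed(42)  # fixed seed for reproducible runs
for _ in range(200):
    weight = random.randint(1, 3000)
    price = random.randint(1, 100_000)
    tax = calculate_tax(weight, price)
    # Property 1: any valid input yields a positive tax.
    assert tax > 0
    # Property 2: a more expensive car never pays less tax.
    assert calculate_tax(weight, price + 1) >= tax
```

With invariants like these checked over many generated inputs at the unit or service level, the single fixed value chosen for the end-to-end test carries far less of the verification burden.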
For me, end-to-end tests tend to be just happy-path verifications. If there are known regression cases I want to handle, I'd have tests for those too, but where possible I'd push them lower down the pyramid (e.g. if I know it's something the API should handle, I'd write an API test, not an end-to-end test).
The idea of using end-to-end tests to add coverage that should have been achieved lower down the test pyramid just screams "broken" to me. I'd want to add the tests closer to the bottom of the pyramid; building out breadth at the top of the test pyramid is almost always wrong.