Resources on implementing a well-balanced test pyramid
Demir
The test pyramid is a concept developed by Mike Cohn and described in his book "Succeeding with Agile". In short, end-to-end tests (often called UI tests) cut across all the layers of the application, while unit tests cover single components in isolation.
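As a concrete sketch (the function and rule here are hypothetical, not from any specific codebase): a unit test pins down one component's behavior in isolation, whereas an end-to-end test would have to drive the same rule through the UI and every layer beneath it.

```python
# Hypothetical component: a single validation function, the target of a unit test.
def is_valid_quantity(value: str) -> bool:
    """Accept strings that represent positive integers only."""
    return value.isdigit() and int(value) > 0


# Unit test: exercises just this component -- no UI, no services, no database.
def test_quantity_validation():
    assert not is_valid_quantity("-1")   # negative input rejected
    assert not is_valid_quantity("abc")  # non-numeric input rejected
    assert is_valid_quantity("3")        # positive integer accepted
```

An end-to-end test for the same rule would type "-1" into a form and check for an error message, which is far slower and more brittle than the asserts above.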
While the idea is sound and logical, I have found it hard to implement a well-balanced test pyramid, particularly when different teams are responsible for different levels of tests. For instance, in our teams, devs focus on unit tests while testers focus on end-to-end tests. Service-level tests are done by testers, devs, or both, depending on knowledge of the architecture, testing skills, available time, and so on.
Therefore, I'm looking for resources (books, blogs) that could help me answer:
1. How do you make sure you do not duplicate test cases at different levels? Sure, when you verify how the system handles invalid input from a user, you may discover different problems at the unit, service, and UI levels. Does that mean you want to replicate all unit tests at higher levels as well? I would like to avoid that, since end-to-end tests take longer and their number should therefore be limited.
2. Following on from the above, how do you recognize redundant test cases?
3. When a tester discovers a case at the end-to-end (UI) level that could easily be tested at the unit level (and is missing from the unit test suite), should she add it to the unit test suite?
I understand this is a broad area, so I'm more interested in hands-on experiences and lessons learned than in silver-bullet solutions.
I've been wondering the same thing for the last couple of years. In my organization we've begun using BDD style testing to help with our pyramid problems. Right now we have about 800 automated acceptance tests, 3000 manual tests, 1500 integration/service tests, and 1000 unit tests. Obviously not ideal.
I've realized that a lot of those manual tests, automated or not, could be covered more efficiently by integration or even unit tests, but our QA analysts have no idea what those lower-level tests are doing. Using BDD to describe the behavior being tested helps QA analysts, SDETs, developers, etc. all get on the same page about what is being tested. In most cases it doesn't really matter to anybody what the implementation of a specific test is, just that some behavior is functioning as intended.
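To illustrate the idea (this is a sketch with a made-up domain, not our actual suite): the Given/When/Then wording is what everyone agrees on, while the layer at which it runs stays an implementation detail. Expressed in plain Python rather than a BDD framework, it might look like:

```python
# Stand-in for whichever layer actually implements the behavior
# (could back onto a unit, a service call, or a UI driver).
def submit_order(quantity: str) -> bool:
    """Accept the order only for a positive integer quantity."""
    return quantity.isdigit() and int(quantity) > 0


# BDD-style specification: the test name and comments carry the shared
# Given/When/Then behavior description that QA and devs both read.
def test_order_is_rejected_when_quantity_is_not_positive():
    # Given an order with an invalid quantity
    quantity = "-1"
    # When the order is submitted
    accepted = submit_order(quantity)
    # Then the order is rejected
    assert accepted is False
```

Tools like Cucumber or pytest-bdd let you keep the Given/When/Then text in its own feature file, so non-programmers can review the behavior without reading the step implementations.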
As for your specific questions, I think this helps us avoid #1 and #2 before they become an issue. Regarding test duplication, certain things like input validation are genuine requirements at multiple levels (service, web), and we prefer to verify them at both levels with unit tests where possible.
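For example (the validator names and the 0–100 rule are hypothetical): when a rule must hold at both the web and service layers, one cheap unit test per layer covers it without a slow end-to-end test that crosses both.

```python
# Hypothetical validators for two layers that each enforce their own check.
def web_layer_validate(raw: str) -> bool:
    """Form-level check before the request reaches the service."""
    return raw.strip().isdigit()


def service_layer_validate(value: int) -> bool:
    """Business-rule check inside the service, independent of the web tier."""
    return 0 < value <= 100


# One fast unit test per layer; no duplicated end-to-end test needed.
def test_web_layer_rejects_non_numeric_input():
    assert not web_layer_validate("abc")
    assert web_layer_validate(" 42 ")


def test_service_layer_rejects_out_of_range_values():
    assert not service_layer_validate(0)
    assert not service_layer_validate(101)
    assert service_layer_validate(50)
```

The end-to-end suite then only needs a single happy-path journey through both layers, rather than re-checking every invalid input through the UI.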
For #3, when a tester discovers a defect, she can write a BDD test for it, which will then be implemented at the appropriate level as determined by a developer or SDET (see the Depth of Test link below).
Here are a couple of things I've read about this topic: