How to handle bottlenecks in agile testing procedures



  • We have a problem in our development team: development throughput is much higher than what our QA members can test.

    Our test team found itself in a situation where a new feature needs to be tested every two days.

    So we have to do functional, regression, exploratory, and other testing. This loop takes a lot of time because any change or commit to the source code forces us to repeat everything.

    The testers are now waiting on the developers to run their tests mid-sprint and again before the end of the sprint. Are there best practices for such bottlenecks in an agile environment?

    • Should the problem rather be solved from the Product Owner side, i.e. by planning fewer tasks in sprint planning?

    • Should tasks be rejected in order to avoid this bottleneck?

    • Should sprint planning be trimmed even further?

    I already described the problem in the retro; the proposed solution was a shorter sprint plan. But depending on the effort involved, we still run into this bottleneck over and over again.

    On top of that, we also write and review the test automation. We simply can't keep up with running the "normal" tests while simultaneously adapting the automated tests or creating new ones; all of it together is no longer achievable.

    • So should the execution of the test automation also be included in the sprint planning?
    • But wouldn't this even worsen the problem?
    • Where exactly should the priorities lie now: writing and adapting the test automation, or just the normal testing, i.e. exploratory and functional?
    • Which is more important: white-box or black-box testing?

    Small teams with few testers and many tasks face these problems in particular. On the one hand, we all know that in the agile world speed is everything, but it is easy to forget that QA is sometimes simply overburdened. We are only human!

    Even if the Product Owner is looking for solutions and everything has been raised in the retro, not everything can be solved. Depending on the size of the project, there is always a bottleneck somewhere, and yes, it is somewhat frustrating!



  • I found myself in that very same position last year. With a team of around 12 devs and only 2 QAs, we used up all our time testing features and couldn't do the (IMHO) more important work of improving processes, making our test infrastructure better, and carrying out longer-term initiatives that would improve things from a quality standpoint. Plus, on a more selfish note, it was pretty lousy career-wise, since we couldn't do anything else.

    What I did was raise the issue with my manager: I explained the problem and how serious it was, then tried to come up with a roadmap to fix it. With my manager's approval, I got the entire team together and started working on the whole "shift-left testing" approach.

    1. First, the two of us trained every developer on how to read our reporting tools, how to run tests, and what to look for when a test fails (to tell apart flakiness, an automation bug, and an actual defect).

    2. Then I created pipelines and scripts to make running tests as simple for them as possible. We even went as far as trying to link everything with Slack so a regression could be triggered from chat... but had to drop this due to other limitations with our Jenkins connection.

    3. Developers would be in charge of testing their features. At first they would send us PRs for unit/int/e2e test cases so we could check they were covering all important scenarios.

    4. Alongside that, we kept working on our frameworks and code to make everything as easy to expand and understand as possible.
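
    The triage rule from step 1 can be sketched as a tiny helper. This is a hedged illustration, not the author's actual tooling: the function name `classify_reruns` and the rerun-based heuristic are assumptions for the sake of the example.

    ```python
    # Sketch of the failure-triage idea: rerun a failing test a few times.
    # All runs failing suggests a real defect (or automation bug); a mix of
    # passes and failures suggests flakiness.

    def classify_reruns(results):
        """Classify a test from its rerun outcomes (True = pass, False = fail)."""
        if all(results):
            return "pass"
        if not any(results):
            return "consistent failure"  # likely a real defect or automation bug
        return "flaky"

    if __name__ == "__main__":
        print(classify_reruns([True, True, True]))     # pass
        print(classify_reruns([False, False, False]))  # consistent failure
        print(classify_reruns([True, False, True]))    # flaky
    ```

    In practice this kind of check is usually delegated to a test-runner plugin (e.g. rerun-on-failure support in the runner), but even a crude rule like this gives developers a starting point for reading a red build.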

    After about six months of that, the team does task estimation with testing taken into account, and everyone is comfortable enough with our frameworks to review QA/test PRs, so I'm not even a bottleneck there (the other QA left, so I'm the only one now). The best part is that I hardly ever write tests anymore: I build the infrastructure and frameworks we need, keep pushing for CI/CD, and only sit down to write tests when some huge feature comes along.

    Your mileage will certainly vary, and I was really lucky to have a team that wants to take ownership of every aspect of the features they build, so they were pretty much on board with the transition. But that is how I handled that very common scenario.


