BDD in Validation: good tools?

  • Context: For 2.5 years, I wrote Java-based desktop software at software company A (the supplier). The software also has a server and a web component. The software was custom-built for company B (the client). After a short period of not being involved with the project, I left company A and joined company B.

    The software is being developed in a scope-to-budget kind of way, where business analysis is done by the supplier. There is no clear list of requirements, and the relationship is very open and trusting.

    The problem: A new module of the software is currently in its client validation phase. The client does free-testing, but is unable to provide a conclusive overview of

    • what has been tested; what specific test steps were used?
      • Currently, it is very difficult to write a good bug report. "I clicked around and suddenly: error message!"
    • what are the zones which were not tested (due to blocking bugs), and what did we plan to test?
    • did we understand the functional scope correctly?

    As the software was developed scope-to-budget, we do not even have a good overview of all the features that have been developed. Currently, there is a constant back-and-forth between client and supplier. The client does free-testing, discovers ten-odd bugs, some of which are blocking. The supplier then delivers a new version, after which the client does some more free-testing and discovers more bugs (some of which could also have been found in the old version).

    The solution: I want to solve this issue by introducing Gherkin-style tests, which may or may not be automated. The goal is two-fold:

    • Force us (the client) to describe how we think functionality should work. This ensures we are indeed reporting bugs, and not just misunderstanding the functional scope.
    • Make it clear how well we cover the full software
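
    For example, a scenario written this way forces us to state the expected behaviour up front (the feature and step names below are hypothetical, purely for illustration):

    ```gherkin
    Feature: Invoice export
      Scenario: Export a single approved invoice
        Given an approved invoice exists
        When the user clicks 'Export'
        Then a PDF of the invoice is downloaded
    ```

    If the supplier disagrees with the "Then" line, that is a scope misunderstanding, not a bug.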

    My question: Are there any good tools to manage Gherkin-syntax (Given-When-Then) test cases in manual client-side validation? Something which can manage Features and all of the different scenarios? And, ideally, something that can easily record the result of a manual testing run?

  • Great idea: a manual test runner and reporter for Cucumber.

    It seems someone has had similar ideas and put them online:

    • Cucumbumler, see http:// /manual-cucumber-tests/ [as of 9/9/2019 the site is infested with a trojan. If/when the site is cleaned, this edit can be reverted] - for a concept

    If this doesn't work out of the box, it should not be too hard to create something similar from scratch. With a little bit of coding effort, you can catch each Given/When/Then step and have the runner ask a question on the console to verify "did it pass?" yourself.
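
    As a minimal from-scratch sketch of that idea (not Cucumber itself; the class name and the sample feature text are made up for illustration): extract the Given/When/Then steps from a feature file and prompt the tester on the console for each one.

    ```java
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Scanner;

    // Minimal manual Gherkin runner sketch: parses feature text for
    // executable steps and asks the tester on the console whether each passed.
    public class ManualGherkinRunner {

        // Return only the executable steps (Given/When/Then/And/But) from feature text.
        static List<String> extractSteps(String featureText) {
            List<String> steps = new ArrayList<>();
            for (String line : featureText.split("\n")) {
                String t = line.trim();
                if (t.startsWith("Given ") || t.startsWith("When ") || t.startsWith("Then ")
                        || t.startsWith("And ") || t.startsWith("But ")) {
                    steps.add(t);
                }
            }
            return steps;
        }

        public static void main(String[] args) {
            // Hypothetical feature text; in practice this would be read from a .feature file.
            String feature = String.join("\n",
                    "Feature: Invoice export",
                    "  Scenario: Export a single approved invoice",
                    "    Given an approved invoice exists",
                    "    When the user clicks 'Export'",
                    "    Then a PDF of the invoice is downloaded");

            List<String> steps = extractSteps(feature);
            Scanner in = new Scanner(System.in);
            int passed = 0;
            for (String step : steps) {
                System.out.println(step + "  -- did it pass? [y/n]");
                if (in.nextLine().trim().equalsIgnoreCase("y")) passed++;
            }
            System.out.printf("%d/%d steps passed%n", passed, steps.size());
        }
    }
    ```

    From here it is a small step to write each result into a structured file instead of (or in addition to) the console summary.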

    If you need to cut up the test runs because you have multiple testers in a single test run, you can have the runner output the results in JSON instead of HTML, and later combine all the JSON files into a single report with cucumber-reporting. I have successfully used this technique to report on parallel runs in the past.
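
    If you want to combine the per-tester files yourself before feeding them to a reporter, a naive sketch is enough, assuming each result file holds a single JSON array (the shape Cucumber's JSON output produces) and is well-formed:

    ```java
    import java.util.List;

    // Naive merger for Cucumber-style JSON result files.
    // Assumption: each input string is one well-formed JSON array, e.g. "[{...},{...}]".
    // Merging then amounts to concatenating the array contents into one array.
    public class JsonResultMerger {

        static String merge(List<String> jsonArrays) {
            StringBuilder sb = new StringBuilder("[");
            boolean first = true;
            for (String json : jsonArrays) {
                // Strip the surrounding [ ... ] of each file's array.
                String body = json.trim();
                body = body.substring(1, body.length() - 1).trim();
                if (body.isEmpty()) continue;  // skip empty runs
                if (!first) sb.append(",");
                sb.append(body);
                first = false;
            }
            return sb.append("]").toString();
        }

        public static void main(String[] args) {
            // Two hypothetical per-tester result files, merged into one array.
            String merged = merge(List.of(
                    "[{\"id\":\"tester-a\"}]",
                    "[{\"id\":\"tester-b\"}]"));
            System.out.println(merged);
            // prints [{"id":"tester-a"},{"id":"tester-b"}]
        }
    }
    ```

    In a real setup you would read the strings from the testers' output files; for anything beyond this trivial concatenation, use a proper JSON library rather than string handling.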
