Building trust in automated front-end testing

  • I recently came across a Google Tech Talk during which the speakers seemed to stress that trust in a test suite is crucial to the overall software development process. Here's the link.

    Can anyone share their opinion on this? If you agree that trust in a test suite is important, what do you do in your team to build that trust? I thought of asking the devs to sit down and code-review whatever tests I wrote for the features they originally built. However, I'm not sure that's the best approach.


  • elefont,

    Trust in a test suite ultimately comes down to a few things:

    • does it miss important problems it should have caught?
    • does it report errors that aren't really errors?
    • is it brittle?
    • does it report useful information? (this is the most important factor, in my view).

    Dev code reviews can help, but they're probably no better or worse than in-team code reviews. Part of what you're looking for here is an automated test suite that runs regularly, is reliable, and reports useful information. Exactly what constitutes those qualities will vary with the organization and the team.

    Some things you might want to consider:

    • how hard/easy is it to add new tests? Can you add new tests to your regression suites every time a bug slips out to the customers without needing to spend ridiculous amounts of time coding? (if you can do it with a few lines of data in a test data file, more power to you)
    • how hard/easy is it to update the tests when the application GUI changes? If you're effectively rebuilding your tests every time someone changes the tab order, you probably won't have a high-trust test suite.
    • how hard/easy is it to update the tests when the process flow through the application changes? If it can be done quickly and easily, chances are your test suite is going to be more trusted than if it's a major rewrite.
    • what data is your test suite communicating and to whom? Beyond the number of tests run/passed/failed, consider known failures (where something is reported but hasn't been fixed yet), coverage statistics (which may or may not be useful, depending on your application), and an overall summary of tests run/passed/failed over time (an ever-growing number of tests which mostly pass tends to build confidence).
    • What data are you communicating and to whom? No matter how good your test automation is, people are going to look at you (and/or your team, depending on the structure of your company) first, and impute trust based on how much they trust the information you give them.
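    On the "add a test with a few lines of data" point, here's a minimal sketch of the data-driven idea. Everything in it is hypothetical: `validate_discount` stands in for whatever your application does, and the inline CSV stands in for a regression data file checked into the suite, where each escaped bug becomes one new row instead of new test code.

    ```python
    # Data-driven regression check: each escaped bug becomes one row of
    # data rather than new test code. validate_discount and the rows are
    # hypothetical stand-ins for your application's logic.
    import csv
    import io

    def validate_discount(price, percent):
        """Toy system-under-test: apply a percentage discount."""
        if not (0 <= percent <= 100):
            raise ValueError("percent out of range")
        return round(price * (100 - percent) / 100, 2)

    # In practice these rows would live in a regression.csv file; adding
    # a case for a newly escaped bug is a one-line edit.
    REGRESSION_DATA = """price,percent,expected
    100.00,10,90.00
    19.99,0,19.99
    50.00,100,0.00
    """

    def run_regression():
        """Run every data row; return the rows that no longer pass."""
        failures = []
        for row in csv.DictReader(io.StringIO(REGRESSION_DATA)):
            got = validate_discount(float(row["price"]), float(row["percent"]))
            if got != float(row["expected"]):
                failures.append((row, got))
        return failures

    failures = run_regression()
    print(failures)  # an empty list means every regression row still passes
    ```

    Most test frameworks support this directly (parameterized tests fed from a file), but even the hand-rolled loop above keeps "add a regression test" down to editing one data row.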
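    For the GUI-change and flow-change bullets, the usual defence is to confine selectors and navigation steps to one layer, so a renamed field or reordered screen is a one-place edit instead of a rewrite across every test. A minimal page-object sketch, with a fake driver standing in for a real WebDriver (all names here are illustrative):

    ```python
    # Page-object sketch: selectors live in one class, so a GUI change
    # (renamed field, new tab order) is a one-place edit. FakeDriver is a
    # hypothetical stand-in for a real browser driver.
    class FakeDriver:
        def __init__(self):
            self.fields = {}

        def type(self, selector, text):
            self.fields[selector] = text

        def click(self, selector):
            self.fields["clicked"] = selector

    class LoginPage:
        # All selectors in one place; only these lines change with the GUI.
        USER = "#username"
        PASSWORD = "#password"
        SUBMIT = "#login-btn"

        def __init__(self, driver):
            self.driver = driver

        def login(self, user, password):
            # One method per user-visible action; if the flow changes
            # (say, a second confirmation step), only this method changes.
            self.driver.type(self.USER, user)
            self.driver.type(self.PASSWORD, password)
            self.driver.click(self.SUBMIT)

    # Tests talk only to the page object, never to raw selectors:
    driver = FakeDriver()
    LoginPage(driver).login("alice", "secret")
    print(driver.fields["clicked"])  # -> #login-btn
    ```

    The same idea scales up to process-flow changes: if each screen and each workflow step has exactly one home in the suite, a flow change is a targeted edit rather than the major rewrite mentioned above.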
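    And for the reporting bullets, one concrete way to keep known failures from polluting the headline number is to bucket them separately in the summary, so a "fail" only demands attention when it's new. A sketch, with hypothetical test names and a hypothetical `KNOWN_FAILURES` set standing in for whatever your bug tracker feeds you:

    ```python
    # Results summary that separates "known failures" (reported but not
    # yet fixed) from new failures, so the number people trust isn't
    # polluted by issues everyone already knows about. Names are
    # hypothetical.
    from collections import Counter

    KNOWN_FAILURES = {"test_export_pdf"}  # tracked bugs, not yet fixed

    results = [
        ("test_login", "pass"),
        ("test_export_pdf", "fail"),   # known, tracked in the bug system
        ("test_search", "fail"),       # new failure: this needs attention
    ]

    def summarize(results):
        """Count outcomes, routing known failures to their own bucket."""
        counts = Counter()
        new_failures = []
        for name, outcome in results:
            if outcome == "fail" and name in KNOWN_FAILURES:
                counts["known_fail"] += 1
            else:
                counts[outcome] += 1
                if outcome == "fail":
                    new_failures.append(name)
        return counts, new_failures

    counts, new = summarize(results)
    print(dict(counts), new)
    ```

    Tracking that summary over time gives you exactly the confidence-building trend described above: a growing test count with a mostly-green history, plus an explicit, honest accounting of what's known to be broken.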

    None of this is "you must do this"; it's food for thought. Any method of building trust you come up with will be unique to you, your team, and your company.
