How to combine exploratory testing with test automation creatively?



  • Background: There is definite value in the automated execution of scripted tests to find regression defects.

    On the other hand, there is tremendous value in exploratory testing: going beyond any scripted tests by designing and verifying tests 'on the fly'.

    Thought process: Both provide value in different forms, largely independently of each other. However, I wonder whether we can get something even more powerful by combining them in a few more creative ways.

    Example: I tried introducing 'randomness' into long user-journey tests by randomly selecting a sub-step from a couple of available alternatives. For instance, for the payment method in long scenarios, we chose randomly among credit card, debit card, points, or coupon (a minimal sketch of this idea follows below).

    This sometimes gave us unique defects or strange application behavior that we generally don't see otherwise.
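    Here is a minimal sketch of that randomized-sub-step idea in Python. The step functions are hypothetical stand-ins for whatever your framework actually calls; the one piece worth copying is logging the random seed, so any strange behavior the random walk uncovers can be replayed deterministically.

    ```python
    import random

    # Hypothetical step stubs -- stand-ins for your framework's real actions.
    def pay_by_credit_card(order): print("paid by credit card")
    def pay_by_debit_card(order):  print("paid by debit card")
    def pay_by_points(order):      print("paid by points")
    def pay_by_coupon(order):      print("paid by coupon")

    PAYMENT_STEPS = [pay_by_credit_card, pay_by_debit_card,
                     pay_by_points, pay_by_coupon]

    def checkout_journey(order, seed=None):
        # Record the seed so any defect found by the random choice
        # can be reproduced later with the same seed.
        seed = seed if seed is not None else random.randrange(2**32)
        print(f"journey seed: {seed}")
        rng = random.Random(seed)
        # ... earlier journey steps ...
        rng.choice(PAYMENT_STEPS)(order)   # pick one payment sub-step at random
        # ... remaining journey steps ...

    checkout_journey(order={"id": 42})
    ```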

    Any further creative ideas?



  • You've brought a bit of the human factor (randomness) to automated tests. What if you try to bring some flavor of automation to your exploratory tests?

    In general, I think the problem stems from thinking that test automation is only about automating tests, or, to be more precise, about automating whole regression tests. Instead, think of choosing automated tools the way a carpenter does when building a wardrobe, or a doctor when diagnosing a patient, or a data scientist when looking for certain trends in data. There's a great interview with James Bach about this mindset change.

    You can always automate part of the process and leave the rest to exploration and to what humans are better at. For example, people are better at critical thinking, while computers are better at repetitive, boring tasks:

    • To check whether your application looks consistent, automate navigating along every link in your application and taking a screenshot of each page the crawl lands on; then leave the evaluation of screen consistency to a human (a minimal sketch follows this list).
    • To check whether your HTML emails render properly in different email clients (Outlook, etc.), automate the process of rendering your emails in those clients and taking screenshots; then leave the evaluation to a human. This is what, for instance, Litmus does.
    • To find data your backend system has processed incorrectly, automate extracting the data from logs (a parser tool) and grouping similar data together (Excel), then scan manually through the data in Excel to find, for instance, truncated sentences (a sketch of the extraction step also follows below).
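    A minimal sketch of the crawl-and-screenshot idea, assuming Selenium with Chrome; BASE_URL, the page cap, and the output folder are placeholders for your own setup. The point is to automate only the boring navigation, not the judgement.

    ```python
    import os
    from urllib.parse import urlparse

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    BASE_URL = "https://example.test/"   # placeholder: your application's root

    os.makedirs("shots", exist_ok=True)
    driver = webdriver.Chrome()
    to_visit, seen = [BASE_URL], set()

    while to_visit and len(seen) < 100:  # cap to keep the crawl finite
        url = to_visit.pop()
        if url in seen:
            continue
        seen.add(url)
        driver.get(url)
        # One screenshot per page; a human later reviews the folder
        # for visual consistency.
        driver.save_screenshot(f"shots/{len(seen):04d}.png")
        for link in driver.find_elements(By.TAG_NAME, "a"):
            href = link.get_attribute("href")
            # Stay inside the application under test.
            if href and urlparse(href).netloc == urlparse(BASE_URL).netloc:
                to_visit.append(href)

    driver.quit()
    ```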
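    And a sketch of the log-extraction step from the last bullet. The log line format ("PROCESSED: <id> <payload>") and the file names are assumptions; adapt the regex to whatever your backend actually writes. Emitting a payload-length column is one cheap trick: sorting by it in Excel makes truncated sentences jump out.

    ```python
    import csv
    import re

    # Assumed log line format: "... PROCESSED: <record-id> <payload>"
    pattern = re.compile(r"PROCESSED: (\S+) (.+)")

    with open("backend.log") as log, open("records.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["record_id", "payload", "payload_length"])
        for line in log:
            match = pattern.search(line)
            if match:
                record_id, payload = match.groups()
                # A human scans the resulting CSV for anomalies.
                writer.writerow([record_id, payload, len(payload.strip())])
    ```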

    This is what has worked for me. For more examples of using tools, see "A Context-Driven Approach to Automation in Testing" by James Bach and Michael Bolton, especially the section "Third: Explore the many ways to use tools!".


