Comparing test coverage metrics to identify an increase/decrease after implementing automation
Bogopo:
I would like to compare test coverage for projects before and after implementing an automation framework, to see whether coverage has increased or decreased.
I have collected test coverage metrics for all projects.
Would it just be the average test coverage for similar projects before vs. the average test coverage for projects after implementing the automation framework?
Would it need to be weighted?
How is this best achieved?
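As a rough starting point, the before/after comparison could be sketched as below. An unweighted average treats every project equally, while weighting by project size (lines of code is assumed here as the weight; effort or test count would work the same way) keeps one small project from skewing the result. All project names and numbers are invented for illustration.

```python
# Hypothetical data: project -> (coverage %, size in lines of code).
before = {"proj_a": (72.0, 10_000), "proj_b": (65.0, 40_000)}
after = {"proj_a": (78.0, 12_000), "proj_b": (70.0, 41_000)}

def simple_avg(projects):
    """Unweighted mean coverage: every project counts equally."""
    return sum(cov for cov, _ in projects.values()) / len(projects)

def weighted_avg(projects):
    """Coverage weighted by project size, so larger projects count more."""
    total_loc = sum(loc for _, loc in projects.values())
    return sum(cov * loc for cov, loc in projects.values()) / total_loc

print(f"simple:   {simple_avg(before):.1f}% -> {simple_avg(after):.1f}%")
print(f"weighted: {weighted_avg(before):.1f}% -> {weighted_avg(after):.1f}%")
```

If the two numbers diverge noticeably, the weighted figure is usually the more honest one, since it reflects how much of the overall codebase is actually covered.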
Test coverage and the implementation of an automation framework are two different things; in my view, they are not directly linked.
Automation frameworks won't write the test scripts on their own. People write the test scripts, which they later execute using the automation tool/framework. The coverage of those scripts will only be as good as the domain knowledge, experience, thought process, and test ideas of the people writing them.
Yes, one advantage of an automation framework is repeatability: when executing the same tasks time and again, or during regression testing, people may miss something, whereas the scripts will do exactly what they are programmed to do, the same way every time.
So you may find that having the tool/script handle the tedious repetition frees your testers to refresh their minds, review the coverage of the existing tests, generate new ideas, and improve the scripts to gain more coverage.
You can compare this the way you mentioned: run a project without automation, then run the same or a similar project with automation, and see whether it gives your team more time to think and improve.