What are the proper metrics and methodologies for measuring how well a QA department is doing?



  • I'm trying to propose meaningful changes to how we measure how well the QA department in our company is doing. I've gone through similar Stack Exchange pages that cover measuring a team's or individual's performance, but most of them tend to move in the direction of "metrics are bad". I agree with that approach to a certain degree, but there must be a way to measure how well the QA department, the QA teams, and the QA engineers are doing.

    Here's what we currently measure (not a comprehensive list):

    1. How many functional test cases have been automated - Regression Automation coverage by different teams.
    2. How much increase in Automation coverage has been achieved each quarter
    3. Number of defects identified in production
    4. Releases that were done on time

    The problem with the above metrics is that although the numbers look good on paper and help senior leadership sleep well at night, they aren't of value to me. The functional automation numbers are calculated from a checkbox field in ALM and Jira for each test case (Automated: Yes/No), so teams happily misreport to show a gradual increase.

    The leadership set an 80% automation goal for the end of 2018, and voilà: mission accomplished.
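    For context, here is roughly how that coverage number gets produced - a minimal sketch (the records and field name are hypothetical, not the actual ALM/Jira schema). It's just a ratio of self-reported checkboxes, which is why it's so easy to game:

    ```python
    # Minimal sketch of how the "automation coverage" number is derived.
    # The records below stand in for test cases exported from ALM/Jira;
    # the 'automated' flag is the self-reported checkbox described above.
    test_cases = [
        {"id": "TC-101", "automated": True},
        {"id": "TC-102", "automated": True},
        {"id": "TC-103", "automated": False},
    ]

    def automation_coverage(cases):
        """Percentage of test cases whose 'Automated' checkbox is ticked.

        Note: this says nothing about whether the automated tests run,
        pass, or cover anything meaningful -- ticking the box is enough.
        """
        automated = sum(1 for c in cases if c["automated"])
        return 100.0 * automated / len(cases)

    print(f"Automation coverage: {automation_coverage(test_cases):.1f}%")
    ```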

    Most of the automated functional test cases are unstable: plagued by the monolithic architectures of the apps and services they are meant to test, environment issues, poorly written tests (poor coding standards, no code reviews), QA engineers overburdened with back-to-back projects and no room for self-reflection, and so on.
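    If instability itself is the problem, one candidate metric (my suggestion, not something we measure today) is a per-test flakiness rate over recent CI runs - a sketch with made-up run history:

    ```python
    from collections import defaultdict

    # Hypothetical run history: (test_name, passed) tuples from recent CI runs.
    runs = [
        ("test_login", True), ("test_login", False), ("test_login", True),
        ("test_checkout", True), ("test_checkout", True),
    ]

    def flakiness(run_history):
        """Fraction of failing runs per test; a test that sometimes passes
        and sometimes fails on the same code is likely flaky."""
        totals, fails = defaultdict(int), defaultdict(int)
        for name, passed in run_history:
            totals[name] += 1
            if not passed:
                fails[name] += 1
        return {name: fails[name] / totals[name] for name in totals}

    for name, rate in flakiness(runs).items():
        print(f"{name}: {rate:.0%} of recent runs failed")
    ```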

    I have an opportunity to propose meaningful changes, but I'm struggling to find the right methodologies and metrics. Using the wrong metrics can be dangerous and discourage good engineers, yet the leadership wants to grade all QA engineers, managers, and teams on some sensible metrics. Operating in a metric-free regime appears to be out of the question.

    Here are some possible metrics I can think of. Are there any potential pitfalls with them, and how do we go about measuring them?

    • Are we (the QA department) doing better or worse compared to the previous quarter? I'm guessing that the number of PROD issues might serve as one indicator (see the escape-rate sketch after this list)
    • How well is a team adhering to the QA test pyramid (working towards the right mix: Unit/Component as the biggest share of total tests, then Service, UI, and non-functional tests)
    • How well are the test plans designed
    • How to identify and recognize individuals/teams that are doing the right thing
    • How good or bad is our hiring process? Are leaders doing the right thing?
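
    For the first bullet, a raw PROD bug count is easy to misread; a defect escape rate normalizes it against everything found before release. A minimal sketch (the quarterly numbers are made up):

    ```python
    def escape_rate(found_in_prod, found_before_release):
        """Share of all defects that escaped to production.

        A raw PROD bug count penalizes busy quarters; normalizing by the
        total defects found keeps the comparison fair across quarters.
        """
        total = found_in_prod + found_before_release
        return found_in_prod / total if total else 0.0

    # Illustrative quarter-over-quarter comparison (made-up numbers):
    q1 = escape_rate(found_in_prod=12, found_before_release=188)
    q2 = escape_rate(found_in_prod=9, found_before_release=141)
    print(f"Q1: {q1:.1%}  Q2: {q2:.1%}")  # both 6.0%
    ```

    Note how Q2 has fewer raw PROD bugs but the same escape rate - the raw count alone would have flattered the team.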

    I must be missing a few other important areas. Please let me know what you think.



    1. How many functional test cases have been automated - Regression Automation coverage by different teams.

      • First you must have a requirements traceability matrix to check that your test cases cover all your requirements; only then does this number mean anything (see the RTM sketch after this list). But even then, it says nothing about the performance of QA.
    2. How much increase in Automation coverage has been achieved each quarter

      • Why? You only need to increase automation coverage if you add or change requirements.
    3. Number of defects identified in production

      • Measure this and you'll start getting "bugs" like "This doesn't look good on Chrome", "the error message should be bold", blah, blah - the count gets noisy fast.
    4. Releases that were done on time

      • It's not your QA's fault if they saw a bug and halted the release schedule. Enforce quality, not speed; pushing releases out on time at any cost will surely increase your item #3 (number of defects identified in production).
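
    On the traceability point from item 1, an RTM can be as simple as a mapping from requirement IDs to test case IDs; the useful output is the list of uncovered requirements rather than a percentage. A minimal sketch (all IDs made up):

    ```python
    # Minimal requirements traceability check.
    requirements = {"REQ-1", "REQ-2", "REQ-3"}
    rtm = {
        "REQ-1": ["TC-101", "TC-102"],
        "REQ-2": ["TC-103"],
    }

    # A requirement counts as covered only if at least one test maps to it.
    covered = {req for req, tests in rtm.items() if tests}
    uncovered = requirements - covered
    print(f"Covered: {len(covered)} of {len(requirements)} requirements")
    print(f"Uncovered: {sorted(uncovered)}")  # -> ['REQ-3']
    ```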

    Are we (QA department) doing better or worse compared to the previous quarter - I'm guessing that the number of PROD issues might serve as one of the ways - If your production bugs are few to none (except, of course, some edge-case bugs), then hell yeah, we're doing better than the previous quarter!

    How well is a team adhering to the QA Test Pyramid (Working towards the right mix - Unit/Component (bigger percentage of total tests), Service, UI and Non functional tests) - Depends on the team's development schedule and your QAs' output.
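    If you do want to track pyramid adherence, the crude version is just the distribution of tests by layer against a target mix - a sketch (layer names, counts, and targets are illustrative, not a standard):

    ```python
    # Illustrative test counts by layer; the target mix is an assumption.
    counts = {"unit": 400, "service": 80, "ui": 120}
    target = {"unit": 0.70, "service": 0.20, "ui": 0.10}

    total = sum(counts.values())
    for layer, n in counts.items():
        print(f"{layer:8s} actual {n / total:5.1%}  target {target[layer]:5.1%}")
    # A UI-heavy mix (20% here vs a 10% target) is the classic
    # inverted-pyramid smell.
    ```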

    How well are the test plans designed - This is on you bruh! 😛

    How to identify and recognize individuals/teams that are doing the right thing - If their products have few to zero bugs in prod.

    How good or bad is our hiring process Are leaders doing the right thing - In interviews, stop asking Ms. Universe questions or very specific syntax trivia. Focus more on their strategy and logic: how would they handle testing a given feature, and how would they manage their time when there are 100 devs and 3 QAs? 😛 And last but not least, tell your technical recruiter to stop googling Java/Python-related keywords and ignorantly pasting them into job requirements. 😄


