QA Methodology: Do you retest every change on a new build or only changed code?


  • QA Engineer

    I am leading a new QA team at my company and we are small (myself and one other person). We are currently testing releases compiled by over 10 developers, so the ratio of developers to QA is skewed. What happens is that we are constantly in a release cycle, and developers are constantly checking in code that will make it into the next release. The problem with this is that releases tend to get extremely large and require extensive testing. A typical release cycle is over two months, whereas we are trying to get this down to under a month.

    One thing we are doing to help with this is pushing for way more automated QA testing. We are getting there, but we are definitely not in a position to reliably run automated regression on every release.

    A major complaint from the development team is that the way we do our STLC is slowing down release cycles. Right now, when we get a build of the software, we do the following:

    • Confirm installation steps
    • Run regression
    • Test new functionality
    • Confirm bug fixes
    • Push to User Acceptance Testing
    • Confirm no issues with clients
    • Sign off on RC and ready the release for production

    The development team wants us to not run this cycle every time we get a build (multiple times per release version) and instead only re-test items that specifically fail between builds and basically pick up where we left off on the previous release candidate and continue on the new one.

    Since the release candidate comes in a packaged install file, I am a little hesitant to simply ignore the rest of the changes and follow their advice.

    I am new to the field, and I am up against a team of developers with many, many more years of experience than I have in SDLC/STLC.

    What do you guys think? How do you guys handle this on your teams?



  • I know this problem way too well. There's no "right" answer, unfortunately, but there are some things you can do to help with this problem.

    • Dependency map - do you have a list of application features that have heavy dependencies and tend to break when changes occur in other areas? If you know changes in feature X tend to break feature Y, you know you always need to check feature Y when there are changes to feature X. On the flip side, if feature Z is nicely encapsulated and doesn't break other places, you might not need to test elsewhere for fixes to feature Z. (There's a rough sketch of this idea after the list.)
    • Consider your breakage history - which areas of the system are most fragile? Which ones are most stable? If you don't feel you've got this knowledge, you should be able to retrieve it from your organization's issue tracking system or talk to project managers and others to find the information.
    • Think about your overall project life cycle - it sounds as though you're working through a waterfall system where the code fixes are thrown over the wall for the testers to sign off on (apologies if this is incorrect). How can you work with the development team to improve the feedback between your team and theirs? How can they make your life easier? How can you make theirs easier? Both teams have the same goal: you want a good product going to the customers. My experience is that approaching devs with "How can we make this work better?" usually gets a good response. Another good conversation starter is "We could do this in our team, but it will take hours. I think if one of your folks does this, it will be a few minutes' work once, and then we will only need a few minutes each time we test it." (This is really handy when asking a dev to add some hooks to make automation easier - but it can also be helpful if you're looking for things like context information or a change listing to be sent with each build notification.)
    • Can you get a change listing? - if your build notification includes a list of files changed for that build, you have a starting point. Combined with your knowledge of fragile and robust areas, that gives you information you can use to determine whether skipping parts of the regression is going to be high or low risk (or somewhere in between). (The sketch after this list shows one way to combine the two.)
    • You are not the gatekeeper - The test team isn't really the decider on whether or not the system can be released. Usually, you're providing information to the project management or product manager about the state of the product. This mindset allows you to tell project/product management, "We can skip testing feature Y for this build, but there's a high risk of problems because changes to feature X often cause issues in feature Y" and so forth. In my experience, even if the test team is supposedly the gatekeeper, in practice they will be overridden - something that I'm all too familiar with.
    • Focus on your role - The actual role of a test team is to provide information to the developers, project/product managers, and in some cases customers about the state of the application. That state information consists of the things that you've tested, the problems you've found that have been fixed, the problems you've found that haven't been fixed and why, and the things you haven't tested and why. This approach means that the test team no longer has sole responsibility for the quality of the application - it's shared between devs, testers, and project/product management.

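    To make the dependency-map and change-listing ideas a bit more concrete, here is a rough sketch in Python of how the two can be combined into a regression scope for a build. Every feature name, path prefix, and mapping in it is hypothetical; the real tables would come from your own breakage history and build notifications, and this is only one possible shape for them.

    ```python
    # Rough sketch: derive a regression scope for a build from its change listing.
    # All feature names, path prefixes, and mappings here are made up for
    # illustration; build the real tables from your own breakage history and
    # build notifications.

    # Features that tend to break when another feature changes
    # (mined from your issue tracker / breakage history).
    FEATURE_DEPENDENCIES = {
        "feature_x": {"feature_y"},           # changes to X often break Y
        "billing": {"reports", "invoicing"},
        "feature_z": set(),                   # well encapsulated, rarely breaks others
    }

    # Map source paths (as they appear in the build notification) to features.
    PATH_TO_FEATURE = {
        "src/featurex/": "feature_x",
        "src/billing/": "billing",
        "src/featurez/": "feature_z",
    }

    def features_touched(changed_files):
        """Translate a change listing into the set of features that were modified."""
        touched = set()
        for path in changed_files:
            for prefix, feature in PATH_TO_FEATURE.items():
                if path.startswith(prefix):
                    touched.add(feature)
        return touched

    def regression_scope(changed_files):
        """Changed features plus whatever the dependency map says they tend to break.

        Only follows dependencies one level deep; make it transitive if your
        breakages chain across features.
        """
        scope = features_touched(changed_files)
        for feature in list(scope):
            scope |= FEATURE_DEPENDENCIES.get(feature, set())
        return scope

    if __name__ == "__main__":
        # Example change listing from a build notification (hypothetical paths).
        changed = ["src/featurex/login.py", "src/billing/tax.py"]
        print(sorted(regression_scope(changed)))
        # ['billing', 'feature_x', 'feature_y', 'invoicing', 'reports']
    ```

    Even a crude script like this gives you something written down to point at when you decide (and have to explain) which parts of regression you're skipping for a given build.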
    This is a long way from being a complete list, but hopefully it will give you some options to work with.


