Making sense of performance regressions



  • Imagine a situation in performance regression testing where commit A severely degrades page performance but does not trigger a failure - it lands just under the failure threshold.

    Commit B degrades page performance only a very small amount but, added on top of commit A, triggers a failure.

    In this situation, commit A is the one that requires attention, not commit B.

    What are some techniques to deal with this issue?

    (As an example - you are testing page load times and have a 3-second limit. Your baseline was a page load of 1 second. Commit A pushes it up to 2.9 seconds. Commit B pushes it up to 3.1 seconds.)
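    One common technique for this problem is to enforce a per-commit regression budget in addition to the absolute limit, so a commit like A is flagged for its own large delta even though it stays under the limit. Below is a minimal sketch of that idea; the measurements come from the example above, while the 0.5-second per-commit budget is a hypothetical tuning parameter, not something from the question.

    ```python
    # Sketch: flag commits both on absolute load time and on the size of
    # the regression each commit introduces relative to its parent.

    LIMIT = 3.0        # absolute page-load limit in seconds (from the example)
    DELTA_LIMIT = 0.5  # per-commit regression budget in seconds (hypothetical)

    # (commit name, measured page load time in seconds), oldest first
    measurements = [
        ("baseline", 1.0),
        ("A", 2.9),
        ("B", 3.1),
    ]

    def analyse(measurements, limit, delta_limit):
        """Return a report line for every commit that breaks either rule."""
        reports = []
        for (_, prev_time), (name, time) in zip(measurements, measurements[1:]):
            delta = time - prev_time
            if delta > delta_limit:
                reports.append(
                    f"commit {name}: regressed {delta:.1f}s on its own "
                    f"(exceeds {delta_limit:.1f}s per-commit budget)"
                )
            if time > limit:
                reports.append(
                    f"commit {name}: absolute load time {time:.1f}s "
                    f"exceeds {limit:.1f}s limit"
                )
        return reports

    for line in analyse(measurements, LIMIT, DELTA_LIMIT):
        print(line)
    ```

    With these numbers, commit A is flagged for its 1.9-second regression and commit B for crossing the absolute limit, so the report directs attention to A as well as B.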



  • It's not exactly clear what question you are asking, but let me take a stab.

    I would deal with it by creating a bug report. In it, I would mention what you are seeing in commit A and commit B, and that the combination of the two pushes performance past the prescribed limits.

    From a QA point of view, it's not important at all in which commit the "blame" lies - only the fact that you now have a failure according to your current definitions. Let the developers debug both commits and draw the proper conclusions.


