How long should a test wait to assume that the result remains fixed?
We had a defect where, under certain conditions, the application's state changed into something undesired. The defect has been fixed, and we added a test:
- Set up state
- Action that triggered the wrong state change
- Check state is still the same as setup
- Wait some seconds
- Check state is still the same and nothing changed
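The steps above can be sketched roughly as follows. This is a minimal illustration, not the actual test suite; `set_up_state`, `trigger_action`, and `get_state` are hypothetical stand-ins for the real application hooks, and the wait is shortened so the sketch runs quickly.

```python
import time

SETTLE_SECONDS = 0.1  # placeholder for "some seconds" -- the open question here

def set_up_state():
    # Hypothetical: put the application into the known-good state.
    return {"status": "active"}

def trigger_action(app):
    # Hypothetical: perform the action that used to corrupt the state.
    pass

def get_state(app):
    # Hypothetical: read the current application state.
    return app

def test_state_stays_fixed():
    app = set_up_state()
    expected = dict(get_state(app))
    trigger_action(app)
    assert get_state(app) == expected   # immediate check
    time.sleep(SETTLE_SECONDS)          # wait "some seconds"
    assert get_state(app) == expected   # delayed check: nothing changed
```

The whole question reduces to picking a defensible value for `SETTLE_SECONDS`.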
My question is: how long do we have to wait so that the test doesn't generate false positives?
In some situations, if the test environment restarts, the services have a cold start of a few seconds, and the event queue that triggers the state change could be overloaded.
Theoretically the state could still change after a minute, due to the async nature of the services, but I don't want to add a test that always waits a minute or longer, hence my question. Are we approaching this incorrectly, or are there alternative options?
irl
Great question, Niels.
Given that it will depend on the situation, I would approach this as a data-gathering exercise: investigate the distribution of outcomes over many repeated runs. If 99% of the time the state is still correct after 1 minute (i.e. it doesn't change again after that point), and the business considers a 1% failure-and-re-run rate acceptable, then use 1 minute. Do the same calculation for a 10-second wait. Get the business to determine the right answer.
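One way to turn those repeated runs into a wait time is a simple percentile calculation: record, for each run, how long after the action the state was last observed to change, then pick the smallest wait that covers the agreed percentage of runs. A sketch, with made-up sample data:

```python
import math

# Seconds after the action at which the state was last seen to change,
# one value per repeated run (fabricated example data).
observed_change_times = [0.4, 0.7, 1.1, 1.3, 2.0, 2.4, 3.1, 4.8, 7.5, 55.0]

def wait_for_percentile(samples, percentile):
    """Smallest wait (seconds) covering `percentile` percent of observed runs,
    using the nearest-rank method."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(percentile / 100 * len(ordered)))
    return ordered[rank - 1]

# 90% of runs had settled within this many seconds:
print(wait_for_percentile(observed_change_times, 90))  # → 7.5
```

With real data, the business can then weigh a shorter wait (faster suite, more re-runs) against a longer one.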
Usually I would hope to address this with a polling wait rather than a fixed wait; however, in this case that doesn't help, because the answer can change from right to wrong after it has already been right initially.
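A related variant that does keep some of polling's benefit: poll throughout the window and fail as soon as the state is wrong, rather than sleeping once and checking only at the end. The total wait is unchanged, but a transient flip (right, then wrong, then right again) is caught instead of slipping past a single end-of-window check. A sketch, where `get_state` is a hypothetical accessor for the application state:

```python
import time

def assert_state_stable(get_state, expected, window=1.0, interval=0.05):
    """Assert that get_state() equals `expected` for the whole window,
    checking every `interval` seconds and failing at the first deviation."""
    deadline = time.monotonic() + window
    while time.monotonic() < deadline:
        assert get_state() == expected, "state changed during the window"
        time.sleep(interval)
    assert get_state() == expected  # final check at the end of the window

# Usage with a trivially constant state:
assert_state_stable(lambda: "active", "active", window=0.2)
```

This doesn't answer how long the window should be, but it makes whatever window you choose stricter.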
One additional thought: this may belong to a class of issues that lend themselves to production monitoring rather than development testing. If the issue does happen in production, set up monitoring to capture the occurrences and their frequency. With more data you may be able to formulate better plans or approaches to mitigate the issue and/or change your testing approach to allow for it.