Non-functional tests: in which CI stage are they best used?
Within the Continuous Integration stages, it is mostly clear what is to be done where: unit tests in development, static analysis with SonarQube if required, or REST API testing via SoapUI.
- various forms of automated tests
- user acceptance testing (UAT)
Smoke tests and user acceptance testing (UAT) can be performed in a staging environment. Smoke tests check for essential service functionality, and UAT is performed from the perspective of an end user. So far, so good. But at which test level should one define the non-functional test procedures?
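To make the smoke-test idea concrete, here is a minimal sketch: ping a few essential endpoints of the deployed service and report which ones answer. The endpoint paths and the local stub server are hypothetical stand-ins; a real run would point at your staging URL.

```python
# Minimal smoke-test sketch (assumption: the service exposes HTTP endpoints;
# the paths and the stub server below are hypothetical stand-ins).
import http.server
import threading
import urllib.request

def smoke_check(base_url, endpoints, timeout=5):
    """Return {endpoint: ok} where ok means the endpoint answered with 2xx."""
    results = {}
    for path in endpoints:
        try:
            with urllib.request.urlopen(base_url + path, timeout=timeout) as resp:
                results[path] = 200 <= resp.status < 300
        except OSError:  # HTTPError/URLError/timeouts all count as failures
            results[path] = False
    return results

class StubHandler(http.server.BaseHTTPRequestHandler):
    """Stand-in for the service under test: only /health answers 200."""
    def do_GET(self):
        self.send_response(200 if self.path == "/health" else 404)
        self.end_headers()
    def log_message(self, *args):  # keep the demo output quiet
        pass

if __name__ == "__main__":
    # Demo against the local stub instead of a real staging deployment.
    server = http.server.HTTPServer(("127.0.0.1", 0), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    base = f"http://127.0.0.1:{server.server_address[1]}"
    print(smoke_check(base, ["/health", "/missing"]))
    server.shutdown()
```

A check like this is cheap enough to run after every deployment to staging, before any deeper test suite starts.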
Let's take load and performance tests: I can use them after the developer stage, in staging, to simulate certain load scenarios in advance. At the same time, I can run the same procedure weekly in production as a performance check, with a report to the stakeholders. Penetration tests: on the one hand, I can already identify security issues at the developer stage with SoapUI and OWASP tooling; on the other hand, they can also run in the staging environment, or, where it would probably make the most sense, in production. All in all, it is a question of definition when to use which non-functional means.
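A load scenario of the kind described above can be sketched very simply: fire a number of requests across concurrent workers and report latency percentiles. The `request_fn` below is a hypothetical stand-in for a call to the system under test; real load tooling (JMeter, Gatling, Locust, etc.) adds ramp-up, think time, and reporting on top of this idea.

```python
# Hedged sketch of a basic load check: N calls across W concurrent workers,
# reporting latency statistics. request_fn is a hypothetical stand-in for a
# call against the system under test.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def run_load(request_fn, total_requests=100, workers=10):
    """Run request_fn total_requests times across workers; return latency stats in seconds."""
    def timed_call(_):
        start = time.perf_counter()
        request_fn()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = sorted(pool.map(timed_call, range(total_requests)))
    return {
        "median": statistics.median(latencies),
        "p95": latencies[int(0.95 * (len(latencies) - 1))],
        "max": latencies[-1],
    }

if __name__ == "__main__":
    # Demo with a sleeping stub instead of real HTTP calls.
    stats = run_load(lambda: time.sleep(0.01), total_requests=50, workers=5)
    print(stats)
```

The weekly production check mentioned above would then amount to comparing these numbers against an agreed threshold, e.g. failing the run when `stats["p95"]` exceeds the SLA value.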
Some questions I would like to ask you here:
- Who has had good experiences with this kind of testing?
- When do you use your non-functional tests?
- How did you set up your CI / Staging environment?
- Which individual steps do you use, and when?
Many thanks, Mornon!
The goal of testing is to prove that certain kinds of problems are absent from the product. But what if a test fails and uncovers that there is a problem? You want to know that as soon as possible, so that the problem can be rectified without costing the company too much money in terms of downtime, lost revenue due to late delivery, etc.
On the other hand, the different deployment environments also exist for a reason. The development environment may change too frequently for some tests, or it may have too limited a data set to work with. And when testing in the production environment, problems may be found too late, and you certainly don't want to run tests there that might corrupt or destroy valuable data.
You have to find a balance there, and that balance does not come in the form of a blanket statement that all non-functional tests belong in environment X. Rather, for each test (or set of tests) you should determine in which environment it makes the most sense to execute it, given the tooling (or lack thereof) that you have for performing the tests, with the goal of finding problems as soon as possible without being hindered by the characteristics of the environment.
For example, tests that are performed by static code analysis or fast automatic tests can easily be done in the development environment. Running those tests doesn't take any effort from the testers beyond possibly setting up the tooling to run them automatically.
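For illustration, "fast automatic tests" here means something like the following: a self-contained unit test that runs in well under a second and can therefore execute on every commit in the development stage. The function under test is a made-up example.

```python
# Hedged sketch of a fast unit test suitable for the development stage.
# normalize_username is a hypothetical example function, not from any
# real codebase.
import unittest

def normalize_username(raw):
    """Example function under test: trim whitespace and lowercase."""
    return raw.strip().lower()

class NormalizeUsernameTest(unittest.TestCase):
    def test_strips_and_lowercases(self):
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_empty_input(self):
        self.assertEqual(normalize_username(""), "")

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=0)
```

Because such tests have no external dependencies, a CI server can run the whole suite on each push with essentially no effort from the testers.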
On the other hand, if the tests need to be done manually or they take longer to run, then the development environment may be too fast-paced for you and it would be better to perform those tests on the more stable staging environment.
Testing in production should be avoided as much as possible, because if you find problems there you are, by definition, too late. The main exception would be tests that depend on the exact configuration of the machines used in the production environment, like penetration tests.