Strategies for automating testing of features that are only in certain environments
Say we have a dev, an integration, a pre-production and a production environment.
We have features A-G that are deployed everywhere, while new feature H is so far only on dev and int.
We want to run our nightly tests across all environments (aside from production). We have a core set of 'reliable' tests that runs; however, as we add new tests to it for feature H, they'll fail on pre-prod, where H isn't deployed yet.
How to mitigate this?
I see one option being 'feature' tests that you turn on per environment, although given that we currently tag tests with @proven or similar and pull them into Jenkins, I'm not quite sure how to handle this. Still mulling.
There are several ways you can work with this situation.
- No-work - let the tests for the new feature fail until the feature is migrated to the staging environments. This can be an option if the tests don't block builds or aren't reported directly to management.
- "smart" detection - as Dale suggested, use some coding to detect whether the feature is detected and log a warning or message if it isn't present. This keeps the visibility of whether the feature has migrated without failing the test run.
- environment-specific configuration - as Dale suggested, have a configuration file that defines the features available in that environment and skips tests that don't match it. This is a bit more work because once the feature migrates, you have to update your configuration files to enable the feature tests (see the second sketch below this list).
- versioned tests - this is similar to the environment-specific configuration, but leverages some flavor of version control system. If your test base is kept in source control, you can branch or snapshot your tests per version and have each environment pull the tests from the appropriate version. This also requires manual updates as version changes are made to the environments, but it can be very useful if you have to run your tests against multiple versions of the application. (In my last position I maintained nightly automation runs on three versioned branches - the development branch and the last two release branches - plus at-need runs of another three branches for emergency releases only, and only if the changes touched core code. This used a lot of virtual machines configured per major version and spun up as needed.)
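For the "smart" detection option, a minimal sketch along these lines might help, assuming a JUnit 5 style suite; the "featureH" tag, the test.baseUrl property and the /feature-h/health probe are all made-up names, and the same idea works with TestNG or Cucumber hooks - the point is to turn "feature missing" into a skip rather than a failure:

```java
// A hedged sketch only: "featureH", test.baseUrl and the /feature-h/health probe
// are hypothetical names - swap in whatever your application actually exposes.
import static org.junit.jupiter.api.Assumptions.assumeTrue;

import java.net.HttpURLConnection;
import java.net.URL;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

@Tag("featureH")
class FeatureHTests {

    // Base URL of the environment under test, passed in by the nightly Jenkins job.
    private final String baseUrl = System.getProperty("test.baseUrl", "http://localhost:8080");

    @BeforeEach
    void requireFeatureH() {
        // If feature H isn't deployed in this environment, mark the tests as
        // skipped (and say why) instead of letting them fail.
        assumeTrue(featureHIsPresent(),
                "Feature H not deployed in this environment - skipping");
    }

    private boolean featureHIsPresent() {
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(baseUrl + "/feature-h/health").openConnection();
            conn.setRequestMethod("GET");
            return conn.getResponseCode() == 200;
        } catch (Exception e) {
            return false; // unreachable or missing endpoint -> treat as "not present"
        }
    }

    @Test
    void featureHDoesSomething() {
        // ... the actual feature H checks go here ...
    }
}
```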
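And for the environment-specific configuration option, one possible way to wire it up (again JUnit 5 flavoured; the file names, the "features" key and the test.env property are all invented for illustration):

```java
// Hedged sketch: each environment gets a small properties file listing the features
// deployed there, e.g. dev.properties / int.properties contain "features=A,B,C,D,E,F,G,H"
// while preprod.properties contains "features=A,B,C,D,E,F,G".
import static org.junit.jupiter.api.Assumptions.assumeTrue;

import java.io.InputStream;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;

final class EnvironmentFeatures {

    private static final Set<String> FEATURES = load();

    private static Set<String> load() {
        // The Jenkins job for each environment passes -Dtest.env=dev|int|preprod
        // so the matching properties file is picked up from the test resources.
        String env = System.getProperty("test.env", "dev");
        Properties props = new Properties();
        try (InputStream in =
                EnvironmentFeatures.class.getResourceAsStream("/" + env + ".properties")) {
            if (in != null) {
                props.load(in);
            }
        } catch (Exception ignored) {
            // Fall through to an empty feature set; feature tests then skip rather than fail.
        }
        return new HashSet<>(Arrays.asList(props.getProperty("features", "").split(",")));
    }

    /** Call from a @BeforeAll/@BeforeEach to skip tests for features not deployed here. */
    static void requireFeature(String feature) {
        assumeTrue(FEATURES.contains(feature),
                "Feature " + feature + " not enabled in this environment - skipping");
    }

    private EnvironmentFeatures() {
    }
}
```

With something like this, the Jenkins job for each environment can keep pulling the same @proven-tagged suite and just pass the environment name; once H reaches pre-prod, only that environment's properties file needs to change. If you're on JUnit 5 with Maven, something like `mvn test -Dgroups=proven -Dtest.env=preprod` is one way a per-environment job could combine tag filtering with the config file.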
It doesn't really matter which option you choose as long as it works for you and your team. My experience is that the more complex the application and the environments in use, the more complex the automation setup needs to be. At my last employer, there were generally three major versions of the system in active development: the development branch where new features were built, the current major release branch, and the prior major release branch ("maintenance releases" - bug fixes only). In addition, older versions would get emergency release branches for critical bug fixes (which, depending on the customer, could be anything from problems with financial reporting to irritating but non-critical problems the customer was sick of dealing with).
Whatever you choose, be consistent with it and make sure it's documented so everyone on the team knows how things work.