Should the environment for performance tests be isolated?
I've been tasked with arranging performance tests for a Web application to measure response time and memory consumption in different load scenarios.
However, our current test environment is shared across many test teams. This is acceptable for functional tests, but I suspect that for performance tests it may produce misleading results, e.g. spikes in response time caused by other teams' activity.
Creating a separate end-to-end test environment seems too expensive an option right now for several reasons, so I wonder whether a shared environment can give any picture of performance, even a rough one. What kinds of tests or performance measures could be investigated in an environment that is not isolated?
Taking user246's answer one step further: it is usually best to run performance tests multiple times and average the results, while also keeping an eye on the standard deviation.
Most of us test complex systems where performance is not 100% repeatable and is affected by many semi-random variables and processes.
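As a minimal sketch of what "average the results and look at the standard deviation" might look like in practice (the sample response times and the 2-sigma outlier rule here are illustrative assumptions, not anything prescribed above):

```python
import statistics

# Hypothetical response times (ms) from repeated runs of the same
# performance test; in practice these would come from your load tool.
response_times_ms = [212.0, 198.5, 240.1, 205.3, 391.7, 201.9, 215.4, 208.8]

mean = statistics.mean(response_times_ms)
stdev = statistics.stdev(response_times_ms)  # sample standard deviation

# A run far outside mean +/- 2*stdev is a candidate "noise spike",
# e.g. another team hammering the shared environment during that run.
outliers = [t for t in response_times_ms if abs(t - mean) > 2 * stdev]

print(f"mean={mean:.1f} ms, stdev={stdev:.1f} ms, outliers={outliers}")
```

A large standard deviation relative to the mean is itself a useful signal: it tells you how noisy the shared environment is before you try to compare two builds.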
The tricky part is deciding on the magic numbers: how many repetitions, and the duration of each test. This should be decided using statistical methods, and sometimes should be left dynamic and calculated on the fly: you run tests until the results have converged enough for your needs (look up statistical significance on Wikipedia).
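The "calculated on the fly" idea can be sketched as a loop that keeps repeating the test until the estimate of the mean is stable enough. This is one possible stopping rule (standard error of the mean below a chosen fraction of the mean); `run_test_once`, the 5% threshold, and the simulated measurement are all assumptions for illustration:

```python
import random
import statistics

def run_test_once():
    """Hypothetical stand-in for one performance-test run; replace
    this with a real measurement, e.g. timing an HTTP request."""
    return random.gauss(200.0, 20.0)  # simulated response time in ms

def measure_until_stable(max_runs=200, rel_error=0.05, min_runs=10):
    """Repeat the test until the standard error of the mean drops
    below rel_error * mean, i.e. the estimate has converged."""
    samples = []
    while len(samples) < max_runs:
        samples.append(run_test_once())
        if len(samples) >= min_runs:  # need a few samples before judging
            mean = statistics.mean(samples)
            sem = statistics.stdev(samples) / len(samples) ** 0.5
            if sem < rel_error * mean:
                return mean, len(samples)
    return statistics.mean(samples), len(samples)

random.seed(1)
mean, n = measure_until_stable()
print(f"converged after {n} runs, mean ~ {mean:.1f} ms")
```

On a quiet environment this stops early; on a noisy shared environment the standard deviation stays high, so the loop automatically demands more repetitions before trusting the average.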