Performance-testing systems on virtual machines that normally run on physical machines
My employer runs some of our systems on physical machines with attached hard drives. I am charged with performance-testing those systems. For cost reasons, I've been asked to test those systems running on virtual machines (using Xen) attached to a SAN. This is clearly not an apples-to-apples comparison. Some systems use a lot of disk I/O, and so the SAN issue is especially worrisome. Rather than responding with "can't be done" or "not reliable", I want to recommend what is possible.
Here are some things that come to mind or that I've found with Google searches:
- Measure SAN speed vs. hard drive and calculate a ratio
- Borrow a physical machine long enough to run a benchmark, do the same on a virtual machine, and calculate a ratio
- Even if you can't predict absolute performance on physical machines, you may be able to predict relative performance (i.e. whether the candidate release will be faster or slower than what's currently in production)
- Measure multiple times at different times of day to mitigate resource contention issues, i.e. conflicts with other virtual machines running on the same physical machine or with other clients using the SAN.
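The repeated-measurement idea above can be sketched quickly. This is a minimal, hypothetical example: `run_benchmark` is a stand-in workload (it just burns some CPU so the sketch is runnable anywhere) and you would replace it with your real disk I/O benchmark; contention from neighboring VMs or other SAN clients shows up as a large spread between runs.

```python
import statistics
import time

def run_benchmark():
    """Stand-in workload; replace with your real disk I/O benchmark.
    (Hypothetical: this just burns CPU so the sketch runs anywhere.)"""
    start = time.perf_counter()
    sum(i * i for i in range(200_000))
    return time.perf_counter() - start

def repeated_runs(n=5, pause=0.0):
    """Run the benchmark n times and summarize the spread.

    A high stdev relative to the mean suggests contention from other
    tenants; in practice you would spread the runs across hours or days
    rather than using a short pause.
    """
    samples = []
    for _ in range(n):
        samples.append(run_benchmark())
        time.sleep(pause)
    return {
        "mean": statistics.mean(samples),
        "stdev": statistics.stdev(samples),
        "min": min(samples),
        "samples": samples,
    }

results = repeated_runs()
print(f"mean={results['mean']:.4f}s stdev={results['stdev']:.4f}s")
```

The minimum of the samples is often the most useful single number here, since it approximates the least-contended run.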
Are there other things you can do to mitigate differences between physical machine performance and virtual machine performance in an environment similar to mine? I am particularly interested in actual experiences rather than educated guesses.
carriann
I can't give a hard list, but I can offer a few pointers:
- I'd definitely start with a baseline: run the same benchmark on a physical system and on a virtualized clone of that system to get a rough ratio for your VM performance hit. It won't be precise, but it will be a useful data point.
- You'll want to know your SAN configuration - particularly the location of the physical data stores. When I was running a large amount of functional automation against virtual systems, I ran into bottlenecks caused by (among other things):
  - the amount of I/O running through the bus;
  - the server being configured to run everything off a single SAN platter;
  - other I/O-heavy virtual systems on the same SAN platter and bus (such as a continuous integration build server and a testing database);
  - more virtual systems on the host than it had capacity for;
  - and more.
- If at all possible, monitor I/O and other performance load factors on the virtualization server - ideally capture a baseline with the server under normal load during the time frame you plan to use for performance testing, then capture the same metrics again while the performance tests are running.
- As the other commenters have mentioned, you're not going to get results that you can directly compare to live systems running on hardware unless you run your tests against the same hardware with the same configuration. What you will get is an indication of whether build X performs better or worse than build Y and what the application bottlenecks might be (this won't be as accurate either, because you'll have to filter out any impact from the virtual server, but it will point you to potential problems).
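To make the baseline-ratio idea from the first bullet concrete, here is a minimal sketch. All the numbers are made up for illustration: the benchmark is assumed to have taken 100 s on physical hardware and 140 s on the VM, giving a 1.4x slowdown that can be used to roughly project VM measurements back onto hardware.

```python
def performance_ratio(physical_seconds, virtual_seconds):
    """Slowdown factor from running the same benchmark on both platforms."""
    return virtual_seconds / physical_seconds

def estimate_physical(virtual_seconds, ratio):
    """Rough projection of physical-hardware time from a VM measurement."""
    return virtual_seconds / ratio

# Hypothetical numbers: benchmark took 100 s on metal, 140 s on the VM.
ratio = performance_ratio(100.0, 140.0)  # 1.4x slowdown on the VM

# A candidate build measured at 210 s on the VM projects to roughly
# 150 s on physical hardware under this (very rough) linear model.
print(estimate_physical(210.0, ratio))
```

Treat the projection as a sanity check only: as the answer notes, the ratio is workload-dependent and won't hold for I/O patterns that stress the SAN differently from the benchmark.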
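For the monitoring suggestion, one lightweight starting point on Linux is to diff `/proc/diskstats` counters before and after a test run. The snapshot strings below are made-up samples so the sketch runs anywhere; on a real host you would read `/proc/diskstats` at each point instead, and the device name `sda` is an assumption.

```python
def parse_diskstats(text, device):
    """Return (sectors_read, sectors_written) for one device.

    In the kernel's /proc/diskstats format, after major, minor, and the
    device name, field 3 of the counters is sectors read and field 7 is
    sectors written (indices 5 and 9 after split()).
    """
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[2] == device:
            return int(fields[5]), int(fields[9])
    raise ValueError(f"device {device!r} not found")

# Hypothetical snapshots taken before and after a test run; on a real
# Linux host, read open("/proc/diskstats").read() at each point.
before = "   8       0 sda 12000 300 480000 9000 7000 120 240000 15000 0 8000 24000"
after  = "   8       0 sda 12900 310 516000 9600 7600 130 264000 16100 0 8600 25700"

r0, w0 = parse_diskstats(before, "sda")
r1, w1 = parse_diskstats(after, "sda")
print(f"sectors read: {r1 - r0}, sectors written: {w1 - w0}")
```

Comparing these deltas between a quiet baseline window and a window with your tests running shows how much of the I/O load is actually yours versus your neighbors'.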