Any good test plan/cycles or experiences without using crowdsourcing?
I'm trying to rethink the test cycle we use at my company and am wondering if anyone could offer leads, examples, experiences, etc. on how you test without the use of crowdsourced data. The products we make are strictly for professionals whose time on our products is limited and expensive, so a crowdsourced data model could earn us a bad reputation like we've seen with our competitors. Our tests require tons of manual testing across a variety of permutations for a single function, and I'd like to make things more efficient. Think of it like this:

main test 1: "test graphic functionality"
    test 1: test on Windows
        sub test 1a: test on 32-bit machine
        sub test 1b: test on 64-bit machine
    test 2: test on Mac
        sub test 2a: test on 32-bit machine
            sub sub-test 2ai: test on 32-bit machine using an "example add-on"
        sub test 2b: test on 64-bit machine

We currently use JIRA, and I'm looking into Zephyr right now, but I'm not liking it much because of its lack of modularity (i.e. once we come up with more products and as systems get more advanced, adding sub test cases becomes more difficult to manage).

Edited in response to answers: Thank you for the input, Stacy and Kate! I mulled this over a bit and thought that, on the other hand, it may be how we are organizing each project and how we are creating test cycles that is the reason for my conundrum. Currently, for each 'app' we release, we support functionality on many different 'platforms' (e.g. 4 different OSes), and for each OS we support both 32-bit and 64-bit versions. We also port each 'app' to run on 4 drastically different 'programs' on each OS (and each program has new versions coming out all the time). And as if that weren't enough, each 'app' can run at 4 different quality 'resolutions' (i.e. standard def, high def, etc.), all while expecting each permutation to have the same functionality and quality.
Unfortunately, it may be the high bar we've set for ourselves that leaves us stuck testing every single permutation individually, to make sure none of those permutations turns our release into a stop-ship... And seemingly, something always goes wrong with each release at some weird combination of the above (be it a regression or otherwise), so it's hard to cut corners on any of these requirements. And with the future fast approaching, I can only imagine more permutations emerging to the point where it becomes too much to handle. That's where I'm really stuck. So I'm trying to brainstorm ways to organize all of these releases into projects and test cycles and so on, since this high standard we've set for ourselves has made all these 'feature-like' combinations 'necessary' for our end users to be happy. Thoughts on that?
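To put a number on the explosion described above, here is a minimal Python sketch that enumerates the full test matrix. The dimension values are illustrative placeholders (the question only gives counts: 4 OSes, 2 bitnesses, 4 programs, 4 resolutions), not your actual product names:

```python
from itertools import product

# Illustrative dimensions matching the counts in the question;
# the specific names are made up for the sketch.
oses = ["Windows", "Mac", "Linux", "OtherOS"]
bitnesses = ["32-bit", "64-bit"]
programs = ["ProgramA", "ProgramB", "ProgramC", "ProgramD"]
resolutions = ["SD", "HD", "FullHD", "UHD"]

# The full Cartesian product is every permutation you'd have to test.
matrix = list(product(oses, bitnesses, programs, resolutions))
print(len(matrix))  # 4 * 2 * 4 * 4 = 128 combinations per app, per release
```

Even before new program versions are factored in, that is 128 configurations per app, which is why enumerating and prioritizing the matrix explicitly (rather than hand-maintaining nested sub-tests) tends to scale better.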
I can't speak to JIRA or Zephyr, but my experience with TestLink is that you can define environments and easily copy tests - or nest test suites indefinitely deep. I'd suggest you rethink some of your layering. For instance, with the example you've given, consider a per-environment test suite with the following structure:

Feature X Test Suite
    Windows test suite containing tests a: 32-bit; b: 64-bit; Windows 8; etc.
    Mac test suite containing tests a: 32-bit; b: 32-bit with extra add-on; c: 64-bit
    Linux test suite...

You should be able to copy or share tests where the test data is the same between different test suites.

The next thing I'd recommend is looking at automation for end-to-end functional testing. Obviously you're not going to automate look and feel, but you should be able to automate login and access-level testing as well as all the commonly used functions. For that, there are any number of solutions out there, ranging from free to extremely expensive. Selenium is very popular for web-based environments - as you might have noticed, there are tons of Selenium-related questions here. Any good test management tool should be able to integrate with automation, although the level of integration varies. Some of the big-box solutions - HP Quality Center with QTP, SmartBear's QA Complete with TestComplete, Microsoft's Team Foundation Server with Visual Studio and Test Manager - give full integration. With others you have to build plugins or use workarounds, but all of them offer some level of support.

The way I'd look at automation in your situation is to have a set of predefined test environments, preferably set up as virtual systems over which you have complete control. The automation should be able to check which supported environment it's running against, restore whatever database you're using to a known starting point, perform your tests, then check data changes against the application database before cleaning up and leaving the system ready for the next run.
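The check-environment / restore / test / verify / clean-up loop above can be sketched in a few lines of Python. Everything here is hypothetical: the SUPPORTED set and the three callbacks (restore_database, run_feature_tests, verify_data) are placeholders you would implement against your own stack, not a real framework's API:

```python
import platform

# Hypothetical set of environments the automation knows how to run on.
SUPPORTED = {("Windows", "64bit"), ("Darwin", "64bit"), ("Linux", "64bit")}

def current_environment():
    """Detect the host OS and bitness, e.g. ('Linux', '64bit')."""
    return (platform.system(), platform.architecture()[0])

def run_cycle(env, restore_database, run_feature_tests, verify_data):
    """Run one automated test cycle on a known, supported environment.

    The three callbacks are placeholders for your own implementations.
    """
    if env not in SUPPORTED:
        raise RuntimeError(f"Unsupported environment: {env}")
    restore_database()                 # reset the database to a known starting point
    results = run_feature_tests(env)   # perform the tests
    verify_data(results)               # check data changes against the app database
    restore_database()                 # clean up, leaving the system ready for the next run
    return results
```

The point of the structure is that the same cycle runs unchanged on every virtual system; only `env` and the callback implementations differ per configuration.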
To cover all your environments, you're probably going to need more than one tool - I have yet to work with an automation tool that runs on both Windows and Mac, for instance. I hope this helps you make some decisions.