Systematic approach to test case generation
I have two legacy applications with huge source code bases that should be integrated into one application. As my knowledge of the applications' domain is quite limited and there is no testing documentation (only exploratory testing has been done so far), I want to prepare a set of test cases covering the existing application functionality. Afterwards, I want to use these test sets after the merge of the applications to verify that, from a functional point of view, the new app works the same way as the old apps and that the code changes didn't cause any regressions.
I researched this topic and found some methodologies that might be useful:
ACC (Attributes - Components - Capabilities): a largely manual process whose results depend strongly on the experience of the test engineer (https://code.google.com/p/test-analytics/wiki/AccExplained)
MBT (Model-Based Testing): test cases are generated automatically by a specialized tool from a model of the system; test case quality depends on the quality of the model, i.e. anything we didn't model won't get tested
BDT (Behavior-Driven Testing): test cases are derived from user story acceptance criteria; again, a manual process
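To make the MBT point concrete, here is a minimal sketch of the idea: a small state-machine model from which test cases are generated by walking the transitions. The workflow, state names, and events are purely illustrative assumptions, not taken from any real application or MBT tool.

```python
from collections import deque

# Toy model of a login workflow. States, events, and transitions are
# illustrative assumptions only -- not taken from a real application.
MODEL = {
    "logged_out": {"login_ok": "logged_in", "login_bad": "logged_out"},
    "logged_in": {"open_report": "viewing_report", "logout": "logged_out"},
    "viewing_report": {"close": "logged_in"},
}

def generate_test_cases(start, max_steps):
    """Breadth-first walk of the model: every event sequence of up to
    max_steps transitions becomes one abstract test case."""
    cases = []
    queue = deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if path:
            cases.append(path)
        if len(path) < max_steps:
            for event, nxt in MODEL.get(state, {}).items():
                queue.append((nxt, path + [event]))
    return cases

cases = generate_test_cases("logged_out", 2)
```

The key property (and limitation) shows up immediately: the generator can only produce sequences that exist in `MODEL`, so any behavior left out of the model is never exercised.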
What would be the best methodology to use in my case?
Sure, you could use any of these, others out there, or roll your own. To start, I think you should ask a few questions and get answers to them.
- What test output is needed? This could range from "just test it and call it good" to some kind of report, which would mean the testing needs to produce results matching that reporting criteria.
- How can I functionally break down the applications into something tangible? This could be workflows, use cases, requirements, test objectives, a code breakdown, etc.
- What is the main point of each piece of functionality? This usually involves the market the product was designed for in the first place. This is your "user focus" question, to make sure your testing lines up with users' expectations.
- Is there any process that must be adhered to? Based on what you've said about your applications so far, I think the answer is no, but it's always good to make sure your testing lines up with a standard delivery process so that you can schedule proper testing alongside development. I'm sure someone at some point in the development life cycle has release criteria.
After you have all of that information, you can see which "artifacts" will be most important for you. By "artifacts" I mean anything that stays around to help in the future: test cases (manual/automated), requirements, use cases, workflows, bug report summaries, functional breakdowns, etc.
Once you know what you need to end up with, choosing a method becomes much easier. Then you can start by creating those artifacts for the existing applications, and afterwards merge those artifacts for the future application. At that point, both automated and manual tests can be extrapolated from the information you've gathered.
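Since the stated goal is proving the merged app behaves like the old ones, one concrete form those automated tests can take is a golden-master (characterization) harness: record the legacy app's outputs for a set of inputs before the merge, then assert the merged app reproduces them. The sketch below is a minimal illustration under assumptions; `legacy_app` and `merged_app` are hypothetical stand-ins for calls into the old and new code bases.

```python
import json

# Hypothetical stand-ins for the real entry points of the old and
# merged applications; here both just total an order of (qty, price) pairs.
def legacy_app(order):
    return {"total": sum(qty * price for qty, price in order)}

def merged_app(order):
    return {"total": sum(qty * price for qty, price in order)}

def record_golden(inputs, app):
    """Run the legacy app once and keep its outputs as the baseline."""
    return {json.dumps(i): app(i) for i in inputs}

def check_regressions(inputs, app, golden):
    """Return the inputs where the new app disagrees with the baseline."""
    return [i for i in inputs if app(i) != golden[json.dumps(i)]]

inputs = [[(1, 2.5)], [(2, 3.0), (1, 1.5)]]
golden = record_golden(inputs, legacy_app)
mismatches = check_regressions(inputs, merged_app, golden)
```

The appeal for your situation is that no domain knowledge is needed to write the baseline: the legacy applications themselves define the expected behavior, and every disagreement after the merge is either a regression or a deliberate change you then document.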