Where to start with introducing a testing framework
I hope this won't come across as too broad a question.
If you've been tasked with introducing a testing process for a web application developed in an 'agile' environment and released in two-week cycles, but there's currently no framework in place and no documentation of existing features, test plans, or much else for that matter, where do you start?
I've been thinking about this, and I think the following are good starting points to consider:
- Identify existing features: rank each one by its risk of breaking due to change and by the impact if it does break (a rough scoring sketch follows this list).
- For high/medium risk/impact features: identify how each feature should work, create test cases detailing the intended functionality, create test data, identify where, when, and how the testing should be done, and add it all to a test plan.
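To make the ranking idea concrete, this is roughly the kind of scoring I have in mind (the feature names and scores below are made up purely for illustration):

```python
# Rough sketch: rank existing features by risk of breaking and impact if broken.
# Scores are illustrative guesses (1 = low, 3 = high); real values would come from
# talking to the developers and looking at past failures.
features = {
    "checkout":       {"risk": 3, "impact": 3},
    "product search": {"risk": 2, "impact": 3},
    "user login":     {"risk": 1, "impact": 3},
    "wishlist":       {"risk": 2, "impact": 1},
}

# Simple priority = risk * impact; the highest-scoring features get test cases first.
ranked = sorted(features.items(),
                key=lambda item: item[1]["risk"] * item[1]["impact"],
                reverse=True)

for name, scores in ranked:
    print(f"{name}: priority {scores['risk'] * scores['impact']}")
```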
To clarify, the objective is to introduce a framework that can be used to generate functionality tests for new features and regression tests for existing, undocumented features, not to test and document things for the sake of it.
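For example, the kind of regression check I'd want the framework to support might be as simple as this (pytest and requests are just my assumptions for illustration; the base URL, the /search endpoint, and the expected behaviour are placeholders):

```python
# Minimal sketch of a regression test for an existing, undocumented feature.
# Assumes pytest + requests; BASE_URL and the /search endpoint are placeholders.
import requests

BASE_URL = "https://test.example.com"

def test_product_search_returns_results():
    """Product search should return HTTP 200 and mention the searched term."""
    response = requests.get(f"{BASE_URL}/search", params={"q": "widget"}, timeout=10)
    assert response.status_code == 200
    assert "widget" in response.text.lower()
```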
I'm going to go out on a limb here and say that what you have is an ad-hoc development process rather than an agile one.
Here's where I'd start, assuming that you have the ability to work with the programmers and project/application management on this (you can still get a lot of this information even if you don't).
- Who - Who is the intended and actual user for the application? The user base creates its own requirements by virtue of who they are: an application used by trained people in an intranet environment has different requirements than one used by the general public on the internet at large.
- What - Survey the functionality of the application. I'd do this in two major passes: one for vertical slices (features and modules), and one for layers (presentation, business logic, data storage, etc.). This is also where I'd start mapping out what information needs to be passed between features and between layers; there's a rough sketch of what such a map might look like after this list. In this context, I'd consider a module to be a collection of related features: for instance, in a web store there'd be a catalog module for browsing products and a cart module for making purchases. Features in the catalog module might be a product search or product filtering; in the cart module, features might include one-click orders for logged-in customers who've saved their payment and billing/delivery information, support for different payment options, and so forth.
- When - Find and map out time/flow dependencies. This also covers the most common paths through the application, something you need to know because the problems most likely to affect your users will be on those common paths. That's where you'd want to start testing (the sketch after this list notes these paths as well).
- Where - The environment where the application "lives" has an impact - as does your test environment. You're going to want to mimic the live environment to some extent, but in a way that gives you control over the application for testing purposes. You're also going to want to document any aspects of your test environment that can't mimic the live one, and the risks associated with those gaps (for instance, test environments frequently don't employ SSL).
- Why - What is the underlying purpose of the application? Every app, big or small, exists for a purpose - to solve a problem (for a definition of problem that amounts to "do something someone wants" or "do something better than anything else out there"). Knowing the purpose gives you somewhere to aim.
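To show what I mean by mapping modules, features, the information that flows between them, and the common paths, here's the sort of rough inventory the web-store example above might produce (all names and flows here are illustrative, not a prescription for how yours should look):

```python
# Rough inventory sketch for the web-store example: modules, their features,
# what information they need from elsewhere, and which user paths are most common.
# Everything here is illustrative.
application_map = {
    "catalog": {
        "features": ["product search", "product filtering"],
        "needs": [],  # browsable without login in this example
    },
    "cart": {
        "features": ["one-click order", "payment options"],
        "needs": ["logged-in user", "saved payment details", "product data from catalog"],
    },
}

# Most common paths first - this is where testing effort should start.
common_paths = [
    ["search for product", "view product", "add to cart", "check out"],
    ["log in", "one-click order"],
]
```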
That would be my first pass. After I'd put together something with the information I'd gathered (format optional - you can use mind maps, bullet points in a document, spreadsheets, or something more formal - whatever works for you), I'd start digging into more detail:
- Previous failures - If there's any kind of issue tracking going on, survey it. You'll want to know what kinds of problems have been reported and where they cluster (this gives you an idea of where the more fragile areas of the application might be - or where the most heavily trafficked areas are); a small example of pulling this out of a tracker export follows after this list. You might even get lucky and find information about the impact of failures - including financial impact. This can also give you a feel for what is considered a serious issue with the application and what isn't.
- Module dependencies - In my experience, there are always dependencies and assumptions. For a web application these include the browsers it works with, connectivity and the like. Modules will often depend on user login information being stored somewhere accessible. If there are any modules that shouldn't be accessible until some other activity has been completed, you need to find them and note this - because they'll usually depend on information the previous activity has provided.
- Feature dependencies - this works similarly to module dependencies, but tends to require more detailed information. If you have them, application help files/pages can be useful for this. I've gone through the process of surveying a large application to dig the information out myself and it's not something I'd want to repeat if I could avoid it - but sometimes this is the only way you can get the data you need.
- Development process - You need to know how new features get added to the system, and where your role sits. It's almost certainly not as a gatekeeper: testers generally provide information about the state of the system under test, allowing people closer to the business needs to decide whether or not it's ready to go live (which, I would note, won't prevent it going live in a state where it's not ready - contractual requirements, opportunity costs, and a whole lot more outside tester-world can weigh more heavily than the potential costs of problems that reach users).
- Internal communications - As much as possible, you need to be talking to the business-focused people (usually project and application management) as well as the programmers and any other testers. It saves a lot of misunderstandings. I've had a lot of success telling people "I'm lazy. I don't want to have to do things over - I want to get them right the first time". Also, the people who worked on a feature will have in-depth knowledge of how that feature is supposed to work - that makes them invaluable resources to you and your team.
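As a concrete illustration of the 'previous failures' survey: if your issue tracker can export to CSV (most can, though the column names below are assumptions - adjust them to whatever your tracker actually produces), even a tiny script will show where reported problems cluster:

```python
# Sketch: count reported issues per module/component from a tracker export to see
# where problems cluster. Assumes a CSV with a "component" column; change the
# column name to match your tracker's export.
import csv
from collections import Counter

def issues_by_component(path):
    with open(path, newline="", encoding="utf-8") as f:
        return Counter(row["component"] for row in csv.DictReader(f))

for component, count in issues_by_component("issues.csv").most_common():
    print(f"{component}: {count} reported issues")
```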
This is by no means a complete list - it's more of a quick and dirty set of ideas that can be used as a starting point.