How to implement a QA test process for a large-scale application?
irl
Suppose there is a large-scale application with thousands of functional points that has never had a QA team or even a single QA person. When a QA person or team starts working with this kind of product, what should be done first? How should they implement and improve the QA process? The current situation:
- Developers test their code.
- If any bug is reported by clients or the product owner, it's fixed and retested by developers.
- There is no proper documentation.
This question was asked in a written interview. Unfortunately, I was unable to answer it because I had never faced this kind of problem in practice. I would be glad if anybody could help me answer it.
jeanid
This is - sadly - rather more common than anyone here would like. It's where I was when I started at my current position: two major applications, both stable, but the company had never had dedicated test specialists before.
The first thing I did was make sure that everyone knew there weren't going to be any quick changes. No matter how skilled a person is, they need time to familiarize themselves with the software they're testing, and anything large and semi-documented is going to have a lot of implicit requirements that everyone "knows" but that have never been formalized.
From there, I took a multi-pronged approach, consisting of:
- Becoming familiar with the application(s) and their requirements by reviewing any documentation I could find, exploring the application(s), talking to developers and project managers, and working through any documented regression tests.
- Testing bug fixes, initially with the other team members checking over what I'd done to confirm that I hadn't missed anything (the software I work with has more possible combinations than can be tested, and early on I had no idea how many options could impact what I was testing). That also built familiarity with the system.
- Documenting anything I felt needed more explanation. My rule of thumb for whether I document something is to ask myself if I'm likely to need to remember this later or if someone else could find it useful. If I answer yes to either of those questions, I document. If I'm not sure, I document. Typically, I'll use bullet-point notes and park the documentation I create somewhere accessible to the whole team, then invite them to update and modify as they see fit. These notes can - and do - become the core of support documentation, particularly on projects to add functionality to the system.
- Building a repository of test data. The exact form of the test data varies depending on the application. For Windows applications I'll often end up with a collection of databases and data directories. For web applications it varies: the web application I support in my current position is designed in a way that makes separate repositories of test data impractical. Instead, I keep master lists of which customers use which settings and use those on the test site.
- As I become more familiar with the application(s) and the issues that arise, I start building a regression master list. The one I have now is in wiki form on the company intranet, with a page for each module listing the configurations that need to be checked when that module changes, and which other modules are impacted by those changes. The cross-linking is... interesting, to say the least. (For instance, changing whether or not an organization is defined for Option A changes the flow through the new hire wizard, the layout of the payroll data entry module, several menus, the format of the data sent by the payroll submission module, and a number of reports.)
- At this point - which could be anything from a few months after hire to well over a year - I've got enough knowledge of the system to build an automation plan and start working towards automated regression.
- While I'm doing all this, I start by plugging myself into the existing processes, usually as a tag-along at the end. Once I'm familiar with them, I'll make suggestions to improve things, which will typically be implemented in an incremental approach.
- When I started here, much of the issue management was handled by spreadsheet, there was no consistent version management, and the two sub-teams used completely different processes and tools.
- The first step towards a consistent process was getting the issue reporting and management onto the same tool. That's now almost done.
- The second was getting version control consistently in use. That's now done.
- The third was building a full set of development, test, and staging web environments that mirrored the production web environment as much as possible. That's in progress.
- The team process is continually evolving. Initially, the driving force was largely me and my manager (I'm the sole test specialist for a team of ten). Now the whole team is involved, and when any of us find something the process isn't handling, we look for a way to improve it.
- For the desktop application, I wound up building a set of virtual machines to cover the environments I needed to work with.
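To make the "master lists of which customers use which settings" idea above concrete, here's a minimal sketch in Python. The customer names and setting flags are entirely invented for illustration; the point is just that a small lookup over the master list tells you which known data sets cover a given configuration.

```python
# Hypothetical master list mapping test-site customers to the feature
# settings they use. Names and settings are invented for illustration.
CUSTOMER_SETTINGS = {
    "acme":    {"option_a": True,  "multi_org": False},
    "globex":  {"option_a": True,  "multi_org": True},
    "initech": {"option_a": False, "multi_org": False},
}

def customers_with(**required):
    """Find test customers whose settings match the required values,
    so each configuration under test has a known data set behind it."""
    return sorted(
        name for name, settings in CUSTOMER_SETTINGS.items()
        if all(settings.get(key) == value for key, value in required.items())
    )

print(customers_with(option_a=True))  # → ['acme', 'globex']
```

Even something this simple beats rediscovering, per release, which customer's data exercises which configuration.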
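The regression master list's cross-linking can also be kept in machine-readable form. A sketch, assuming a simple "module X impacts module Y" map (module names loosely echo the payroll example above but are invented): a breadth-first walk of the map yields everything that needs regression testing after a change.

```python
from collections import deque

# Hypothetical impact map: changing the key module means the listed
# modules also need regression checks. Names are illustrative only.
IMPACTS = {
    "org_options": ["new_hire_wizard", "payroll_entry", "menus"],
    "payroll_entry": ["payroll_submission", "reports"],
    "payroll_submission": ["reports"],
    "new_hire_wizard": [],
    "menus": [],
    "reports": [],
}

def regression_scope(changed_module):
    """Breadth-first walk of the impact map: every module reachable
    from the changed one belongs on the regression list."""
    seen = set()
    queue = deque([changed_module])
    while queue:
        for impacted in IMPACTS.get(queue.popleft(), []):
            if impacted not in seen:
                seen.add(impacted)
                queue.append(impacted)
    return sorted(seen)

print(regression_scope("org_options"))
```

Keeping the map as data rather than only as wiki prose means the "what do I retest?" question can eventually feed straight into test selection.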
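As for the automated regression step: the first automated checks don't need a framework rollout; a thin suite that pins down behaviour the team currently relies on is enough to start. A sketch using Python's standard-library `unittest`, with an invented pay-calculation function standing in for real application logic:

```python
import unittest

def net_pay(gross, tax_rate):
    """Invented stand-in for real application logic:
    net pay rounded to the cent."""
    if not 0 <= tax_rate < 1:
        raise ValueError("tax rate must be in [0, 1)")
    return round(gross * (1 - tax_rate), 2)

class PayrollRegression(unittest.TestCase):
    # Each test pins down behaviour the team relies on today, so a
    # later change that breaks it is caught before it reaches clients.
    def test_typical_pay(self):
        self.assertEqual(net_pay(1000.00, 0.25), 750.00)

    def test_zero_tax_passes_gross_through(self):
        self.assertEqual(net_pay(1234.56, 0.0), 1234.56)

    def test_invalid_rate_rejected(self):
        with self.assertRaises(ValueError):
            net_pay(1000.00, 1.5)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(PayrollRegression)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("regression suite passed:", result.wasSuccessful())
```

A handful of such tests per module, grown alongside the regression master list, is what the automation plan eventually formalizes.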
Obviously, exactly what you do in any given situation will depend on the existing processes and the nature of the application under test, but these kinds of approaches are general enough to be adapted to most environments.