What approaches to testing are recommended in an agile environment?
In an agile environment, for sprints with small updates, what approaches to testing are recommended? Do we just do exploratory testing after the SUT has been built, before releasing it, or do we do both exploratory testing and also run through some functional or regression test cases?

Additional info: we only have test cases for manual testing. We don't have any automated test cases yet, as we're still working on creating them for a future sprint.

UPDATE: Thanks for all your input. Unfortunately, we do not have any automated test cases at this time, as we're still creating them. Because of this, what we did on past SU (small update) sprints was to manually test each task as soon as it reached the "ready for testing" column. Then, once the small update was built on the second-to-last day of the sprint, everyone did manual and exploratory testing of workflows and some critical functional test cases. During downtime in the sprint, whenever there was no task available in the testing column, testers just did ad hoc testing around the application. We have about 7 testers on our team, so it's hard to keep them all busy.
briley:
As Suchit said, it depends a lot on the nature of the update. I've seen a one-line code change trigger a full regression because that one line happened to be in one of the core calculation engines.

My suggestion for any development process, whether agile or not, is to have multiple suites of automated regression tests running at minimum on a daily basis. These tests should cover the core functionality of the application (or applications) and run against the most recent build available. That way, no matter what happens in development, you have no more than 24 hours between someone introducing a regression to core functionality and it being detected.

If you don't already have that level of automation, it will take time to build up to it. In that case, I'd recommend getting a small slice of core functionality, preferably one that's used all the time (logging on is a good one, because you'll need that later anyway), into automated regression, then building on that. The automation process should ideally be its own development project - although I've yet to be fortunate enough to work somewhere where this occurs!

While you're building the automation, I'd suggest identifying your steel thread/happy path test cases for the application, as well as which of them are going to be exercised in the process of getting to the new or changed functionality (for instance, if your application requires a valid login to function, every test will start with logging in, so there's no need to specifically regress that), and building a lightweight manual regression to run every time until the automation is in place to remove the need.

In software, short of fixing typos in captions, there really is no such thing as a trivial change. I've seen a misplaced "not" break an entire system.
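To make the "small slice first" idea concrete, here's a minimal sketch of what that first automated login smoke suite could look like, using Python's standard unittest module. The AuthService class is a hypothetical stand-in for the real system under test - in practice you would replace it with an API client or UI driver pointed at the latest build:

```python
import unittest


class AuthService:
    """Hypothetical stand-in for the system under test.

    In a real suite this would be an API client or UI driver
    hitting the most recent build, not an in-memory fake.
    """

    def __init__(self):
        # Hard-coded test credentials for illustration only.
        self._users = {"qa_user": "s3cret"}

    def login(self, username, password):
        # Return a session token on success, None on failure.
        if self._users.get(username) == password:
            return "token-" + username
        return None


class SmokeLoginTests(unittest.TestCase):
    """Steel-thread smoke tests intended to run against every daily build."""

    def setUp(self):
        self.auth = AuthService()

    def test_valid_login_returns_session(self):
        self.assertEqual(self.auth.login("qa_user", "s3cret"), "token-qa_user")

    def test_invalid_password_is_rejected(self):
        self.assertIsNone(self.auth.login("qa_user", "wrong"))

    def test_unknown_user_is_rejected(self):
        self.assertIsNone(self.auth.login("nobody", "s3cret"))


if __name__ == "__main__":
    unittest.main()
```

Once a slice like this runs green against each daily build, you extend it one core workflow at a time, which is exactly the "build on that" step above.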
If you've got comprehensive automated regression running daily, you can focus your manual testing on the areas most impacted by the change, with exploration around them, and know that any other critical regressions will be found by your automation. It's really a matter of making the best use of the resources available to you.

This disconnected ramble is brought to you by insufficient caffeine too early in the morning - I hope there's something here you can use.