Testing an internal shared library which is used by more than one product
We shipped a product. We then got a request to develop another product that would have its own release cycle. It was decided to start this development in the same branch as the first product and to reuse its core engine (with some required changes to the core, and the rest written as 'extensions', mostly as separate assemblies). We then shipped this second product.

At this juncture, we would like to avoid splitting the two products into their own development branches and instead keep everything under the same branch. In this sense, we would treat the core as an internal shared library consumed by the other products, almost like a third-party library.

Question 1: If we are currently developing ProductA and need to make changes to the core engine for a new feature, do we also need to test ProductB at the same time to make sure no breaking changes have been made that negatively impact ProductB? Note that the two products may be on their own release cycles, but this does not mean those cycles will not overlap in time.

What I am trying to achieve is:

- Code reuse and some elegance in the design and architecture (instead of separate full copies of the code, each with its own implementation)
- No branching unless we have to
- Avoiding a constant 'break-fix' cycle where a change to the core breaks something for the 'other' product, which we may not start on until months down the road. Finding the breakage NOW would allow us to adjust the core so that it adapts to all the products that use it.

Question 2: Is it too much to ask that Test treat the 'core' as a product in its own right? What types of tests - basic 'smoke tests'? Integration tests? Or should the full suite of functional tests for each product be run each time a change to the core is made?

Question 3: Isn't this something we should be asking of QA? Or should it really be handled only by developers and THEIR tests (unit tests, maybe some integration tests)?
Question 1 - Regression testing the product you aren't working on is essential because both products are using the same core code. This is where you'd want a good automated regression library so you can know whether a change to the core engine for ProductA has negatively impacted ProductB. Ideally, you'd be running your regression on either a per-build or per-day basis so that you have no more than 24 hours of lag between a breaking change and becoming aware of it.

Question 2 - In this case it depends on how much of the interaction between the core engine and the products using it is exposed. If the engine has an API you can exercise without spinning up the products, you can build regression that exercises only the API, and limit your product regression to end-to-end testing to ensure that changes haven't negatively impacted flow through the applications. If you don't have any visibility into the core engine, your test team is going to find it very difficult to treat it as a product in its own right.

Question 3 - I would say that it needs to be covered by all parties. Developer unit tests and integration tests will help to catch breaking changes quickly, but without regression tests checking end-to-end functionality and (if at all possible) API functionality of the core engine, there are still business cases that can be missed.

To give an illustration, at my previous employer I worked with a suite of applications that handled specialized point-of-sale and web store transactions (among other things). There were multiple independent modules that exercised the tax calculation engine, but due to the specialized nature of the modules, there was no guarantee that a change that worked correctly in module A would do so in modules B, C, D... Unit and integration tests couldn't cover the scenarios because of the way information was passed through the system.
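To make the API-level idea concrete, here is a minimal sketch of a core-only regression check. Everything in it is hypothetical (the real core is presumably a .NET assembly; `CoreEngine` and `calculate` are invented stand-ins): the point is that the test pins down the contract both products depend on, without launching either product.

```python
class CoreEngine:
    """Hypothetical stand-in for the shared core's public API."""

    def calculate(self, amount: float, rate: float) -> float:
        # Behavior both products rely on: rate applied, rounded to cents.
        return round(amount * rate, 2)


def test_core_api_contract():
    engine = CoreEngine()
    # A core change made for ProductA that breaks this contract fails
    # here, within one build, instead of surfacing in ProductB months later.
    assert engine.calculate(100.0, 0.07) == 7.0
    assert engine.calculate(0.0, 0.07) == 0.0


test_core_api_contract()
print("core API regression passed")
```

Run per-build (or at worst nightly), a suite like this keeps the lag between a breaking core change and its detection inside the 24-hour window described above.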
As a result, the test team built and maintained extensive tax regression automation so that each possible tax scenario was exercised at least once in each module.
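A cross-module regression matrix like that can be sketched by crossing one list of scenarios with one list of modules, so every scenario runs through every module at least once. The module and scenario definitions below are invented for illustration; the real modules fed data to the tax engine in their own specialized ways.

```python
import itertools

# Invented stand-ins for two independent modules that both exercise the
# tax calculation; each feeds the calculation through its own code path.
def pos_module(scenario):
    return scenario["amount"] * scenario["rate"]

def web_store_module(scenario):
    # Illustrative differing path: amounts handled in cents internally.
    return (scenario["amount"] * 100) * scenario["rate"] / 100

MODULES = {"pos": pos_module, "web_store": web_store_module}
SCENARIOS = [
    {"name": "standard_rate", "amount": 100.0, "rate": 0.07, "expected": 7.0},
    {"name": "tax_exempt", "amount": 100.0, "rate": 0.0, "expected": 0.0},
]

def run_tax_regression():
    """Cross every scenario with every module and collect failures.

    Passing in module A says nothing about module B, so each
    (module, scenario) pair is checked explicitly.
    """
    failures = []
    for (mod_name, mod), scenario in itertools.product(MODULES.items(), SCENARIOS):
        got = mod(scenario)
        if abs(got - scenario["expected"]) > 1e-9:
            failures.append((mod_name, scenario["name"], got))
    return failures

print(run_tax_regression())  # an empty list means every pair passed
```

The design choice worth copying is the explicit product of scenarios and modules: adding a new tax scenario automatically exercises it in every module, which is exactly the guarantee the unit and integration tests above could not give.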