Can existing tests and data effectively compare old and new systems?



  • This idea was presented to me recently:

    You have a complex legacy ETL system, written in Java and running in a TEST environment, that no one understands completely. You write some tests that run it and compare its output against expected outputs hardcoded into the tests.

    But you also run your tests against a rewrite of the system in a DEV environment, and you compare the output of the legacy system in TEST with the output of the new system in DEV. If the rewrite is correct, the outputs should match.
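
    In code, the comparison boils down to something like the sketch below. This is a minimal illustration only; the file paths and class name are hypothetical, not from our actual system:

      // Minimal black-box comparison: read the ETL output produced by each
      // environment and report the first divergence. Paths are hypothetical.
      import java.io.IOException;
      import java.nio.file.Files;
      import java.nio.file.Path;
      import java.util.List;

      public class LegacyVsRewriteComparison {

          // Hypothetical locations where each environment drops its ETL output.
          private static final Path LEGACY_OUTPUT  = Path.of("/mnt/test-env/etl/output.csv");
          private static final Path REWRITE_OUTPUT = Path.of("/mnt/dev-env/etl/output.csv");

          public static void main(String[] args) throws IOException {
              List<String> legacy  = Files.readAllLines(LEGACY_OUTPUT);
              List<String> rewrite = Files.readAllLines(REWRITE_OUTPUT);

              if (legacy.equals(rewrite)) {
                  System.out.println("PASS: rewrite matches legacy output");
                  return;
              }
              // Report the first diverging line to make triage easier.
              int limit = Math.min(legacy.size(), rewrite.size());
              for (int i = 0; i < limit; i++) {
                  if (!legacy.get(i).equals(rewrite.get(i))) {
                      System.out.printf("FAIL at line %d:%n  legacy : %s%n  rewrite: %s%n",
                              i + 1, legacy.get(i), rewrite.get(i));
                      return;
                  }
              }
              System.out.printf("FAIL: line counts differ (legacy=%d, rewrite=%d)%n",
                      legacy.size(), rewrite.size());
          }
      }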

    So the question is: is this second technique a good idea long-term? Short-term it's convenient, because you don't have to hardcode all your application's logic into your tests. You kind of skip understanding the business logic, and you get a black-box comparison for free. But long-term, what's in DEV will be migrated to TEST, and what's in TEST will be gone; new development will happen in DEV. In the future, running the suite will just compare this release's development with last release's development.

    Someone proposed to me that this was still useful, but I'm not so sure. Something doesn't seem right to me. Some problems I see are:

    1. You have to keep both TEST and DEV pointed at the same input data all the time (see the sketch after this list).
    2. You lose the value of readable test specifications of behavior.
    3. As soon as the legacy system is gone, you lose a lot of implicit behavior comparison from the old "source of truth" legacy system. You're now mostly just comparing against yourself.
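
    For point 1, the discipline I mean might look something like this hypothetical sketch, where both environments resolve their input from a single pinned snapshot identifier, so the comparison never runs against divergent data:

      // Hypothetical sketch: one pinned snapshot ID decides which input both
      // environments consume, so TEST and DEV cannot silently drift apart.
      import java.nio.file.Path;

      public class PinnedInputs {

          // Single source of truth for the input snapshot used by both runs.
          static final String SNAPSHOT_ID = "2024-01-15";

          static Path inputFor(String env) {
              // e.g. /mnt/test-env/input/2024-01-15 and /mnt/dev-env/input/2024-01-15
              return Path.of("/mnt", env + "-env", "input", SNAPSHOT_ID);
          }

          public static void main(String[] args) {
              System.out.println("TEST reads: " + inputFor("test"));
              System.out.println("DEV reads:  " + inputFor("dev"));
          }
      }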

    Has anyone heard of examples of this, or articles on it? I'd be interested to know whether it has a name, and to hear about the pros and cons. I couldn't find anything on Martin Fowler's site.



  • Yes, this is a useful strategy for converting from one system to another as a one-time event (it might span a couple of iterations, but the idea is very short-term).

    You could effectively view this as BDD with 'half (the tests) already done': you already have tests against data that works; you create a new back-end version and see if it does the same.

    As I stated, I consider this part of a conversion; it would not usually be practical to keep running, maintaining, and comparing both systems post-conversion.

    In practice, what you often run into is that older test suites have grown over time through poor practices: testing everything through the UI, lacking good unit and integration tests, and so on. Sometimes you need to start over with new tests that are more appropriate for the (probably) agile environment you are now in.

    Frequently, the real challenge is explaining that to the business in terms they can relate to, and ensuring you have executive buy-in. Otherwise you end up in a really bad place where you say 'we need far fewer UI tests' and the business replies 'no way, we care about the ones we have and won't drop any of them'. It's a long road to demonstrate how well-written unit and integration tests can support having fewer UI tests, but it is a road you must travel, because real evidence trumps trying to change opinions.

    On "As soon as the legacy system is gone, you lose a lot of implicit behavior comparison from the old 'source of truth' legacy system": I think that means you need tests against the legacy system that capture that currently implicit behavior, so you can check whether the new system meets those requirements.
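
    As a rough illustration, a characterization (golden-master) test records the legacy output once, before the old system is decommissioned, and pins the new system to it. Everything here (the golden file path, the runEtl() placeholder) is hypothetical:

      // Hypothetical golden-master sketch: the legacy output was captured once
      // to a checked-in file; runEtl() stands in for invoking the new pipeline.
      import java.io.IOException;
      import java.nio.file.Files;
      import java.nio.file.Path;
      import java.util.List;

      public class CharacterizationTest {

          // Golden file recorded from the legacy system before retiring it.
          private static final Path GOLDEN = Path.of("src/test/resources/golden/output.csv");

          public static void main(String[] args) throws IOException {
              List<String> expected = Files.readAllLines(GOLDEN);
              List<String> actual = runEtl();

              if (!expected.equals(actual)) {
                  throw new AssertionError("New system diverged from recorded legacy behavior");
              }
              System.out.println("PASS: behavior matches the recorded legacy output");
          }

          // Placeholder for invoking the rewritten ETL job end to end.
          private static List<String> runEtl() {
              return List.of("id,amount", "1,100");
          }
      }

    This way, the 'source of truth' survives the legacy system's retirement as recorded data rather than as a running dependency.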

    At the end of the day, I say:

    Convert, test, and go live with the new system. That just moves the challenge to having good tests, and I think that's a good thing.


