Can anyone recommend a tool for integration tests for multi-machine system?


  • QA Engineer

    I've inherited a collection of desktop applications that currently have, for all intents and purposes, no automated test coverage. These applications are typically deployed as a suite, installed on different machines, and communicate via a complex, undocumented, proprietary protocol. They also communicate with vendors via socket or serial connections to send and receive data.

    I want to introduce automated integration testing for these applications, but I haven't found many tools that advertise this capability. TestComplete offers what it calls "distributed testing," but after using it in a free trial I'm not really impressed with how it works; it just doesn't seem to have this use case in mind. I'm still working with SmartBear to see if there's something that can simplify what I'm trying to do.

    My question: can anyone recommend a tool for testing a suite of applications across multiple machines, assuming that a test on one machine might require an action on another machine?

    I realize I should test in isolation and simulate/mock the other machines, but the communication between the machines is complicated enough that that approach is a non-starter for me. I can revisit it if there aren't any good tools for doing this, but then it would no longer be a good integration test in my eyes. I do intend to simulate the vendors' data feeds, since they are generally simple and, because we don't have the vendors' software, there's no other option.
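    Simulating a vendor feed can be as small as a socket server that replays canned messages to whichever application connects. Here's a minimal sketch; the port and the message payloads are placeholders, so you'd substitute your vendor's actual framing and data.

    ```python
    # Minimal vendor-feed simulator: a TCP server that replays canned
    # messages to each connecting client. Port and payloads below are
    # hypothetical placeholders, not any real vendor's protocol.
    import socket
    import threading

    CANNED_MESSAGES = [b"HEARTBEAT\n", b"QUOTE,ACME,101.25\n"]  # placeholder payloads

    def feed(conn):
        """Replay the canned messages to one client, then close."""
        with conn:
            for msg in CANNED_MESSAGES:
                conn.sendall(msg)

    def serve(host="127.0.0.1", port=9000):
        """Accept clients forever, feeding each one in its own thread."""
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=feed, args=(conn,), daemon=True).start()
    ```

    For serial-connected vendors the same idea applies with a virtual serial port pair instead of a TCP socket.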



  • I'm not aware of a tool that can do this across multiple machines, and I agree with you that TestComplete's capabilities in this regard are less than ideal; the master-slave setup is fragile at best. What I used to do in my previous job (with TestComplete) for a similar kind of situation was:

    - Install as many of the applications as possible on one machine. The configuration was more complex, as they were designed to run on different systems, but it could be managed.
    - Wherever possible, test communication that involved a common database by checking the database rather than by driving multiple applications.
    - If multiple machines were necessary, use the SysInternals tools to invoke the required applications on the other system and treat them as "dumb" for the purposes of the test: data would be sent and the returned data captured, but no attempt was made to monitor the remote application. After the test completed, the remote application's log was inspected for errors.
    - Where possible, send communication directly to the target application via its API rather than invoking it from a different application.

    We chose those methods because the more moving parts you have to hook together for a test, the more points of failure you've got. The few times we used TestComplete for this, we found we had more false positives than actual bugs caught, whereas the API-based and single-machine setups typically caught any regressions in the communication between applications.

    I know this isn't the answer you're looking for, but it is a potential solution to your issue. Also, the protocol for communication between your applications isn't exactly undocumented - the code is the only documentation there is. You could build your own documentation via code inspection, port sniffing to capture the traffic between applications, any logging of messages sent and received, and so forth, and use that information to write an API-based test (which would be preferable in this situation - fewer possible points of failure).
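    One lightweight way to capture that traffic, if the applications' endpoints are configurable, is to point one application at a small logging proxy that sits between the two and records every byte in each direction. A minimal sketch, assuming TCP and made-up addresses and ports:

    ```python
    # Minimal logging TCP proxy: record the bytes flowing each way
    # between two applications to help reconstruct an undocumented
    # protocol. Listen and upstream addresses are placeholders.
    import socket
    import threading

    def pump(src, dst, log, direction):
        """Copy bytes from src to dst, recording (direction, chunk) in log."""
        while True:
            chunk = src.recv(4096)
            if not chunk:
                break
            log.append((direction, chunk))
            dst.sendall(chunk)

    def proxy(listen_port, upstream_host, upstream_port, log):
        """Accept one client, bridge it to the upstream application, log both directions."""
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", listen_port))
        srv.listen()
        client, _ = srv.accept()
        upstream = socket.create_connection((upstream_host, upstream_port))
        threading.Thread(target=pump, args=(client, upstream, log, "->"),
                         daemon=True).start()
        pump(upstream, client, log, "<-")
    ```

    The captured log then gives you concrete request/response pairs to assert against in an API-based test, without either application knowing the proxy is there. (Wireshark or a similar sniffer does the same job without touching endpoint configuration.)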


