Is it a good practice to compare responses with benchmark responses in API Test Automation?
Bogopo last edited by
Is it good practice, when writing automated tests for APIs, to compare the complete response of a request against a benchmark response, instead of writing specific assertions to verify only the required values? What if there are too many values to verify? Is there a clean, maintainable way to do this?
Please share your thoughts so I can learn what standard is commonly followed industry-wide.
It depends on the service. If I'm testing an endpoint whose data doesn't change, or changes rarely, I would assert the full response against a benchmark response.
For services where the data may change between runs, I would instead write some sort of validation script that compares the response values against the database.
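One way to sketch the benchmark approach: store the expected response as a fixture and diff the live response against it, skipping fields that legitimately vary between runs (timestamps, request IDs, and so on). This is a minimal, hypothetical example; the field names and the `diff_against_benchmark` helper are assumptions, not part of any particular framework.

```python
# Hypothetical helper: compare an API response (already parsed into a dict)
# against a stored benchmark, ignoring fields that vary run to run.
VOLATILE_FIELDS = {"timestamp", "request_id"}  # assumed volatile field names

def diff_against_benchmark(response: dict, benchmark: dict, ignore=VOLATILE_FIELDS):
    """Return a list of (key, expected, actual) mismatches."""
    mismatches = []
    for key in benchmark:
        if key in ignore:
            continue  # skip fields expected to differ on every run
        if response.get(key) != benchmark[key]:
            mismatches.append((key, benchmark[key], response.get(key)))
    return mismatches

# Benchmark would normally be loaded from a fixture file checked into the repo.
benchmark = {"id": 42, "name": "widget", "status": "active", "timestamp": "2020-01-01"}
response  = {"id": 42, "name": "widget", "status": "inactive", "timestamp": "2024-06-01"}

print(diff_against_benchmark(response, benchmark))
```

In a test you would assert that the returned list is empty; a non-empty list gives you a readable report of exactly which fields drifted from the benchmark, which scales better than one assertion per field when there are many values to check.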
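A rough sketch of that validation-script idea, using an in-memory SQLite database as a stand-in for the service's real data store (the table name, columns, and `validate_response_against_db` helper are all assumptions for illustration):

```python
import sqlite3

def validate_response_against_db(conn, response_rows):
    """Compare API response rows with the current DB state; return mismatches."""
    db_rows = {row[0]: row[1] for row in conn.execute("SELECT id, name FROM users")}
    mismatches = []
    for row in response_rows:
        expected = db_rows.get(row["id"])
        if expected != row["name"]:
            mismatches.append((row["id"], expected, row["name"]))
    return mismatches

# In-memory fixture standing in for the service's actual database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

# Response the API under test would have returned (second row is stale).
api_response = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bobby"}]
print(validate_response_against_db(conn, api_response))
```

Because the expected values are queried at test time rather than frozen in a fixture, the test stays valid as the underlying data changes, at the cost of coupling the test to the database schema.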