If you have no endpoint documentation at all, things are really bad. I would use the following approach:
examine known clients which use the API (see the proxy sketch after this list)
extract all possible invocations the client can make
guess what the client is missing but might still be supported by the server; for example, if you have order/create, there is a chance the server has order/update as well
guess the meaning of the data that is sent to the API
guess the data types and ranges that are implied
prepare tests based on these guesses
Or, if you have the server code:
examine the server code to extract the API
Or, if you have the server binary:
decompile the binary
extract the API from the decompiled code
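To support the first two steps, here is a minimal sketch of a logging pass-through proxy in Node.js; the target host, the port, and the idea of pointing the client at it are assumptions for illustration, not a definitive tool:

// Pass-through proxy that records every method + path the client invokes,
// building an inventory of the API while the client is exercised.
const http = require('http');

const TARGET_HOST = 'api.example.com'; // assumption: the server the client talks to
const seen = new Set();

http.createServer((clientReq, clientRes) => {
    const key = `${clientReq.method} ${clientReq.url.split('?')[0]}`;
    if (!seen.has(key)) {
        seen.add(key);
        console.log('discovered endpoint:', key); // the growing API inventory
    }
    // Forward the request unchanged and stream the response back.
    const proxyReq = http.request({
        host: TARGET_HOST,
        path: clientReq.url,
        method: clientReq.method,
        headers: { ...clientReq.headers, host: TARGET_HOST },
    }, (proxyRes) => {
        clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
        proxyRes.pipe(clientRes);
    });
    clientReq.pipe(proxyReq);
}).listen(8080, () => console.log('Point the client at http://localhost:8080'));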
I like how Michael Bolton (and James Bach) explain it in their Agile Testing Quadrants:
Investigate Mysteries & Tell compelling bug stories
So no, it is not correct, and it is probably worth investigating further. If it happened once, it will probably happen again.
So, what we typically call an intermittent problem is: a mysterious
and undesirable behavior of a system, observed at least once, that we
cannot yet manifest on demand.
Our challenge is to transform the intermittent bug into a regular bug
by resolving the mystery surrounding it. After that it’s the
programmer’s headache.
Read more: How to Investigate Intermittent Problems from James Bach's Blog
Bugs also tend to cluster, so figuring this one out might also catch some other issues. Once it is found you will be rewarded, at least with a good story others can learn from in the future.
Do try to weigh risk against effort: instead of pouring in infinite resources, decide how you could prevent or detect the problem better in the future. Better logging and monitoring will hopefully make it easier to solve the mystery next time.
One bad request might not be such a big deal, but a loss of data that happened once is. Could the bad request lead to a chain of events breaking something else?
Story of data loss:
I once had a data-loss incident that, after a short mystery hunt, we called a fluke. We thought it was due to the testing environment using old, badly migrated data, or something else we didn't fully understand. Then we released to production and one of our clients lost the same data. In the end it was caused by a triple click on a control, which lost a reference needed for coupling data. On save, the connecting reference would be removed, not just for that record, but for all the data.
After it happened in production, it took me two full days to reproduce it. Thankfully we had good backups; still, next time I would rather spend those two days before releasing to production.
Here you have Web API samples from ASP.NET that you can learn from:
ASP.NET Web API Samples
The following code samples, which use the Google APIs Client Library for .NET, are available for the YouTube Data API.
For example, the first code sample calls the API's playlistItems.list method to retrieve a list of videos uploaded to the channel associated with the request.
.NET Code Samples - YouTube Data API
Maybe this would be helpful too while learning:
Let's Build an API Checking Framework
The solution was this: I had to work out, from the parameters, the shape of the JSON object whose data the endpoint receives.
In my case, I created a class matching the type of object coming in the JSON:

Public Function booking(ByVal data As Booking.JsonData) As String
Your solution from the comment section:
pm.test('Check nested Id data type', () => {
    _.each(pm.response.json().values, (topLevelItem) => {
        _.each(topLevelItem.values, (nestedItem) => {
            pm.expect(nestedItem.id).to.be.a('string');
        });
    });
});
has a few problems:
if there are no values, those loops won't run, so there's nothing to check, and the test will therefore pass
you mention an id property, but I see no id property in the example JSON in your question
your check is named Check nested Id data type, but you're asking (from the comment section):
There is some condition in backend, if that condition is true then only "List" will show in response. so, I just wanted to verify that if it's true then it's showing in response.
All in all, people are confused because it seems you're asking more than just one question.
Going back to the problem of checking that the List property is in the response, you can check that in JavaScript and Postman like so:
const resBody = pm.response.json();
pm.test("Response has 'List' property", function () {
    pm.expect(resBody.Documents[0]).to.have.ownProperty('List');
});
You also might want to check it's an object:
pm.test("'List' property is object", function () {
pm.expect(resBody.Documents[0]).to.be.an('object');
});
And you said "only" List:
pm.test("'Documents' has only one property", function () {
pm.expect(Object.keys(resBody.Documents[0]).length).to.eq(1);
});
All three checks might be problematic because they consider only the first item of the Documents array. I'll leave the rest to you; you can edit these checks to cover all Documents array items if there are more, as in the sketch below.
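For example, a sketch that loops over every item (reusing resBody from above):

pm.test("All 'Documents' items have 'List' property", function () {
    resBody.Documents.forEach((doc) => {
        // same ownProperty check as above, applied to each array item
        pm.expect(doc).to.have.ownProperty('List');
    });
});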
I recommend going through the Postman documentation; they actually mention all these checks there (and much more) and give examples. You can check https://learning.postman.com/docs/writing-scripts/test-scripts/ and also https://www.chaijs.com/api/bdd/ .
A few thoughts:
Automation code should be reviewed, so I don't think that's a con
Not sure why failing test cases would keep your build from compiling. Even if the test code lives in the same repo as the application code, your build scripts should ignore the test code when compiling deliverable code. There might be a failure further down your build chain in a CI system when tests run, but that's separate from blocking the build itself.
You missed one advantage of having the test code in the same repo as the application code: it's versioned in unison. That is, if you're disciplined, a checkout of any point in history will work, since it uses the DTOs from that era and the assertions match them. If the DTO changes in the future, the tests will get the new version, and the checkout will contain tests that have been updated to deal with it (e.g. extra assertions for new fields).
There's another advantage: having the test code in the same language as the application under test means developers can contribute without knowing another language or framework. Ideally, if they change the DTO, they'll update the tests they break.
In my current shop, the APIs are written in Java. Unit and integration tests are in Groovy (so they can use the Java classes), while end-to-end functional tests are in Python. For most of our projects, test code lives in the same repo as the application code. The end-to-end tests have the exact concerns you mention, even though we tend to use POJOs for almost everything - there's a fair amount of time re-implementing what's already been done in Java/Groovy (especially asserting over the shape of request/response objects).
In any case, try something, figure out what works/what doesn't and iterate.
At some point in their lives, projects begin to require support and testing. In most cases this leads to splitting deployments across dev - stage - prod, where each is a separate environment with its own version of the code.
dev: the developer's unstable environment, used for development. Errors, temporary downtime, manual edits and so on are allowed.
stage: the pre-prod environment, used for testing and error detection. Errors are allowed, downtime is not; roll-out is fully automated via CI.
prod: the "battle" environment. Users come here. Outages and bugs found here are grounds for cutting the bonuses of programmers / testers / devops.
Accordingly, each environment has its own database, its own configuration, and so on. Typically this is a collection of separate virtual machines. There can be several environments of each type, depending on the goals; it is also acceptable to spin up a sub-dev environment with containers directly on the developer's machine.
Also, a separate branch is created in the repository for each environment, and a commit to it should trigger automatic deployment of a fresh release to that environment. With prod, as a rule, this is not automated, but I think that is purely a matter of preference (see the sketch below).
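Purely as an illustration, a hypothetical GitLab CI sketch of branch-per-environment deployment; the branch names, the deploy script, and the manual prod gate are all assumptions:

stages:
  - deploy

deploy_dev:
  stage: deploy
  script: ./deploy.sh dev     # hypothetical deployment script
  only:
    - dev

deploy_stage:
  stage: deploy
  script: ./deploy.sh stage
  only:
    - stage

deploy_prod:
  stage: deploy
  script: ./deploy.sh prod
  when: manual                # prod usually gets a human approval gate
  only:
    - main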
You must have access data (a username, password or token); request the API URL with that data from your controller:

use Zend\Http\Client;

$client = new Client();
$client->setUri('http://api.fooo.com');
$client->setMethod('POST');
$client->setParameterPost(array(
    'username' => 'bar',
    'password' => 'bar',
));
$response = $client->send();
if ($response->isSuccess()) {
    // success
}
Example of a simple upload method, without checks and other extras:

[HttpPost, DisableRequestSizeLimit, Route("file")]
public IActionResult Post(IFormFile file)
{
    // Create a file with the uploaded file's name and copy the stream into it
    using (var fstream = new FileInfo(file.FileName).Create())
    {
        file.CopyTo(fstream);
        return Ok();
    }
}
Now look carefully in Postman: the name of the IFormFile parameter (in the example: file) and the value of the KEY field in the row with the uploaded file must match. If these values differ, you'll get null in the controller instead of the uploaded file stream. I'll add that for manual testing I prefer Swagger; it's a little more convenient, as it generates correct requests by itself based on your controller parameters.
You need to apply the following to your framework:
The body for the POST call should come from a model class. Use a Java serialization/deserialization library like Jackson or Gson for this (RestAssured also has this feature built in); creating complex JSON becomes easier.
Create a RequestSpecBuilder for the POST call, so that you can reuse the specification every time you make a POST call.
Right now these are the two points that come to mind.
As @Mache says, you check each value individually:
var jsonData = pm.response.json();
pm.test("Verify Json values", function () {
    pm.expect(jsonData.data.id).to.equal(2);
    pm.expect(jsonData.data.first_name).to.equal("Janet");
    pm.expect(jsonData.data.last_name).to.equal("Weaver");
    // and so on and so on
});
The better option:
Create test cases for each assertion
var jsonData = pm.response.json();
pm.test("Verify data ID", function () {
    pm.expect(jsonData.data.id).to.equal(2);
});
pm.test("Verify first_name", function () {
    pm.expect(jsonData.data.first_name).to.equal("Janet");
});
pm.test("Verify last_name", function () {
    pm.expect(jsonData.data.last_name).to.equal("Weaver");
});
This will give you a better view of what actually went wrong or what is missing in your JSON response body.
Another option:
If you really wish to compare the full body, you can create a variable with the expected outcome in a pre-request script like so:
var expectedJsonBody = {
    "data": {
        "id": 2,
        "first_name": "Janet",
        "last_name": "Weaver",
        "avatar": "https://s3.amazonaws.com/uifaces/faces/twitter/josephstein/128.jpg"
    }
};
pm.environment.set("expectedJsonBody", JSON.stringify(expectedJsonBody));
In your request's tests, you then compare the response body with that variable.
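A minimal sketch, assuming the variable name set above:

const expected = JSON.parse(pm.environment.get("expectedJsonBody"));
pm.test("Full body matches expected", function () {
    // deep comparison of the whole parsed response against the stored expectation
    pm.expect(pm.response.json()).to.deep.equal(expected);
});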
I found the problem!

// All clients
app.get('/users', (request, response) => {   // parameters declared as request/response
    const sql = 'SELECT * FROM users';
    connection.query(sql, (error, results) => {
        if (error) throw error;
        if (results.length > 0) {
            res.json(results);   // BUG: 'res' is not defined here; it should be 'response'
        } else {
            res.send('There are no results for your search');   // same problem here
        }
    });
});
The problem came from naming the parameters inconsistently: I declared the response parameter as response and then used it as res. As you can tell, I'm still very new at this. Thanks for the answer regarding the ports.
Unit tests are the responsibility of the dev team, not the QA team. They normally run during the service build phase and require a lot of specialized knowledge (mocking real objects, for example).
What QA usually does is use the public interface to ensure the service provides the functionality it declares.
The best-known tool for testing REST is SoapUI; however, since REST operates over HTTP, you can use any tool that supports HTTP.
You should also consider performance testing of your REST services. SoapUI supports some level of performance testing, but I would still suggest JMeter, since it lets you build load scenarios in a more flexible way.
What is not convenient is that REST services do not usually expose the interface model they use (unlike web services, which communicate via SOAP), so you should either push your devs to provide a full specification of the service interface or suggest they use a REST service description framework like Swagger (OpenAPI) or one of its alternatives.
This will simplify the process and let you rebuild test clients automatically on any change in the service interface.
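For a flavour of what such a specification looks like, here is a tiny hypothetical OpenAPI (Swagger) fragment; the service, path, and fields are made up:

openapi: 3.0.0
info:
  title: Orders API          # hypothetical service
  version: "1.0"
paths:
  /orders/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: The order with the given id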
That's a broad question. I recommend taking one or two resources and starting to explore from there. You can find some ideas about API testing here, and since you ask specifically about security, you can focus on OWASP API Security.
In general, some ideas relating to security:
authentication: which endpoints may be used only when authenticated? are there some that do not implement it correctly? what kind of authentication do we implement? (see the sketch after this list)
authorization: are there resources some users must not have access to? which ones? which users are we talking about?
data leakage: also related to the previous two points; do we leak data through errors? 500s with stack traces all over the response body are common; is that a problem? what data do we leak like this? what about headers? some technologies send custom headers (e.g. X-Powered-By) with a concrete product name and even version.
mass assignment: can we set more properties than we are supposed to?
denial of service: do we need rate limiting or some other method of preventing this attack?
injections: SQL injection, command injection
etc.
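To make the authentication point concrete, a minimal Postman sketch for a request sent deliberately without credentials (the accepted status codes and the stack-trace smell check are assumptions about the API under test):

pm.test("Endpoint rejects unauthenticated requests", function () {
    // 401 or 403, depending on how the API signals missing credentials
    pm.expect(pm.response.code).to.be.oneOf([401, 403]);
});
pm.test("Error body contains no stack trace", function () {
    // crude smell check for leaked exception details in the error response
    pm.expect(pm.response.text()).to.not.include('Exception');
});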
I'm not a security expert, so my view is rather limited. What I've come across has mostly been a problem in one of these areas:
wrong authentication
wrong authorization
mass assignment
overly verbose error messages
I've even seen a situation where an entire database of highly sensitive medical data was stolen because an endpoint didn't implement any authentication or authorization, so simply incrementing an id in the URL could give you all the resources. So it's wise to pay attention to these basics first, because many such attacks are very simple and could be carried out by virtually anyone with a browser.