    SOFTWARE TESTING

    magpie

    @magpie

    QA Engineer

    Reputation: 3
    Posts: 29918
    Profile views: 4
    Followers: 0
    Following: 0

    Best posts made by magpie

    • RE: Asp.net core authorization check in tests

      To check that the page is available to a non-logged-in user, it is enough to verify that the response code is 2xx:

      [TestFixture]
      public class HomeControllerTest
      {
          private readonly WebApplicationFactory<Startup> _factory;

          public HomeControllerTest()
          {
              this._factory = new CustomWebApplicationFactory<Startup>();
          }

          [Test]
          public async Task IndexPage_ForNonLoggedUser_ReturnsPageContent()
          {
              // Arrange
              var client = this._factory.CreateClient();

              // Act
              var response = await client.GetAsync("/");

              // Assert
              response.EnsureSuccessStatusCode(); // Status code 200-299
          }
      }
      

      But checking that a page requiring authorization redirects a non-logged-in user is more involved: you need to turn off automatic redirection in the client and verify that the response redirects to the login page.

      [Test]
      public async Task AboutPage_ForNonLoggedUser_RedirectsToLoginPage()
      {
          // Arrange
          var client = this._factory.CreateClient(
              new WebApplicationFactoryClientOptions
              {
                  AllowAutoRedirect = false
              });

          // Act
          var response = await client.GetAsync("/Home/About");

          // Assert
          Assert.AreEqual(HttpStatusCode.Redirect, response.StatusCode);
          StringAssert.StartsWith("http://localhost/Identity/Account/Login", response.Headers.Location.OriginalString);
      }
      

      (I have seen examples online that check for a 403 response; maybe that worked in earlier versions of ASP.NET Core, but on 2.1 you need the redirect check.)

      Exactly the same tests apply to Razor Pages; they are no different from the checks above.

      Basically, there is a detailed description in the documentation, and there is also a link to a test application with xUnit.

      posted in Automated Testing
    • Correct way to check server response data (pytest)

      Hello! What is the best way to check the values of a dictionary?

      I wrote a test with pytest that sends a request to the API with a certain set of data and receives a dictionary like this in response:

      {'user': '1', 'objects': [{'id': '1', 'event': [{'type': 'something', 'timestamp': '1522991335319'}]}], 'reached': True}
      

      The values of the user, id, type, and reached keys must match the values I sent in the request; the value of the timestamp key does not interest me.

      If you check the values directly through assert, then checks on deeply nested objects, such as type, do not look good in the code:

      assert response["objects"][0]["event"][0]["type"] == "something"
      

      How can I check the values of such a response more cleanly?
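One common approach (a sketch of my own, not from the thread; `strip_keys` is a helper name I made up) is to recursively drop the keys you don't care about and compare the whole structure against an expected dictionary in a single assert, letting pytest's diff output pinpoint any mismatching nested value:

```python
def strip_keys(obj, ignored):
    """Recursively drop ignored keys from nested dicts/lists."""
    if isinstance(obj, dict):
        return {k: strip_keys(v, ignored) for k, v in obj.items() if k not in ignored}
    if isinstance(obj, list):
        return [strip_keys(v, ignored) for v in obj]
    return obj

def test_response_matches_expected():
    # In a real test this would come from the API call under test.
    response = {
        'user': '1',
        'objects': [{'id': '1', 'event': [{'type': 'something',
                                           'timestamp': '1522991335319'}]}],
        'reached': True,
    }
    expected = {
        'user': '1',
        'objects': [{'id': '1', 'event': [{'type': 'something'}]}],
        'reached': True,
    }
    # One assert compares the whole structure with timestamps removed.
    assert strip_keys(response, {'timestamp'}) == expected
```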

      posted in API Testing

    Latest posts made by magpie

    • Multistage docker build for Python distroless image

      This is my Dockerfile for a distroless image, similar to https://github.com/GoogleContainerTools/distroless/blob/main/examples/python3-requirements/Dockerfile

      FROM python:3.9-slim AS build-venv
      RUN python3 -m venv /venv 
      # other installation steps go here
      RUN /venv/bin/pip install --upgrade pip setuptools wheel
      # installing from requirements.txt etc.
      

      Copy the virtualenv into a distroless image

      FROM gcr.io/distroless/python3-debian11
      COPY --from=build-venv /venv /venv
      ENTRYPOINT ["/venv/bin/python3"]

      I'm trying to just get into a Python shell (with all the dependencies installed), but running docker run -it my-distroless gives me this error:

      docker: Error response from daemon: failed to create shim task: 
      OCI runtime create failed: runc create failed: 
      unable to start container process: exec: "/venv/bin/python3": 
      stat /venv/bin/python3: no such file or directory: unknown.
      

      But when I replace the base image with debian:11-slim, everything works as expected.

      FROM debian:11-slim AS build
      RUN apt-get update && \
          apt-get install --no-install-suggests --no-install-recommends --yes python3-venv gcc libpython3-dev && \
          python3 -m venv /venv
      # the rest is the same
      

      Are there only certain "compatible" base images I should use for my distroless builds, or what else could be the reason?
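A likely cause (my assumption, not confirmed in the thread): python3 -m venv creates /venv/bin/python3 as a symlink to the interpreter of the build image. In python:3.9-slim that is /usr/local/bin/python3, a path that does not exist in the distroless image, whereas debian:11-slim installs Python under /usr/bin, which matches the distroless layout. You can check what the symlink resolves to in the build stage:

```dockerfile
FROM python:3.9-slim AS build-venv
RUN python3 -m venv /venv && \
    # Print the real path of the venv's interpreter; if this path is missing
    # in the runtime image, exec fails with "no such file or directory".
    readlink -f /venv/bin/python3
```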

      posted in Continuous Integration and Delivery (CI/CD)
    • Missing some subscriptions in Azure DevOps UI when using automatic service principal

      I am trying to select a subscription I have access to in another tenant from my Azure DevOps UI, where I am connected to the Azure tenant's AAD as a member with external login and certain permissions/roles.

      In this case I want to select a subscription that I have created a resource group and an app service in so that I can create the deployment pipeline using a pre-configured template in Azure DevOps.

      Previously, my account on Azure DevOps was user1@company.com and the account in Azure portal was user1@company.onmicrosoft.com as it was a different AAD. I have since added user1@company.com to the AAD of the Azure portal where the subscription resides and given it some permissions to access these subscriptions. MFA is set up on both accounts.

      The really frustrating thing is that I did get it working temporarily last night: I could both select the subscription in Azure DevOps and log in when prompted with the user1@company.com account. Today, however, it seems to have reverted to missing the subscriptions from the additional tenant.

      It is also an issue when I try to set up a new service connection, but I assume that depends on the same permissions being in place.

      Thank you for any help you can provide.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How can I access additional services in my container?

      According to https://learn.microsoft.com/en-us/archive/blogs/waws/things-you-should-know-web-apps-and-linux#you-can-only-expose-one-port-to-the-outside-worldapplies-to-web-app-for-containers , Web App for Containers can only expose one port to the outside world, and Azure detects port 3000 first.

      A reverse proxy inside the container will solve your problem. Testing from outside the container will not work.

      As a test, you could change port 3000 to 8082. The backend /services/* would then be available, but the frontend would not.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Deployment with manual confirmation of each change

      I see two scenarios here:

      1. A script compares old and new. You could check that output in a dedicated stage, e.g. stage: compare, and then use when: manual to apply the changes once you are happy with them.
      2. You could do step 1 for each file separately, so you have a stage per file, like stage: compare_file1 and so on.
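A minimal .gitlab-ci.yml sketch of scenario 1 (job names and scripts are illustrative, not from the question):

```yaml
stages:
  - compare
  - apply

compare:
  stage: compare
  script:
    - ./diff-old-vs-new.sh      # prints the diff to the job log for review

apply:
  stage: apply
  when: manual                  # runs only after someone reviews and clicks "play"
  script:
    - ./apply-changes.sh
```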
      posted in Continuous Integration and Delivery (CI/CD)
    • Force jenkins job to fail if stage did not run long enough

      I am using Jenkins to build an rpm for some custom software.

      During the build there must be a race condition that only seems to appear when building the RPM, which results in the job finishing successfully, but the software not being built correctly.

      The most obvious way to tell the build is a false positive is to check how quickly the build stage finished.

      During an actually successful run, the build stage will take approximately an hour. When a false positive occurs it will finish in less than 25 minutes.

      The end goal, of course, is to fix the race condition, but in the meantime, preventing the pipeline from producing a bad RPM while reporting success would be a great help.
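As a sketch of a possible guard (the 25-minute threshold comes from the question; the stage name and build command are placeholders), a scripted-pipeline stage could time itself and call error() when it finishes too quickly:

```groovy
// Scripted pipeline sketch: fail the run if the build stage finishes
// suspiciously fast, which in this job indicates the race condition hit.
stage('Build RPM') {
    long start = System.currentTimeMillis()
    sh './build-rpm.sh'               // hypothetical build step
    long minutes = (System.currentTimeMillis() - start) / 60000
    if (minutes < 25) {
        error("Build stage finished in ${minutes} min (< 25); treating as a false positive.")
    }
}
```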

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Azure pipelines Docker@2 build command does not pass through build args

      I found the solution by trial and error: my issue was that the scope of the ARG instruction was wrong.

      I had in my Dockerfile:

      FROM base_image_name AS build
      ARG BuildNumber=0.1.0.0
      

      ...

      FROM build AS publish

      ...

      RUN echo Build Number: ${BuildNumber}

      Instead, what fixed the issue on Azure was:

      FROM base_image_name AS build
      

      ...

      FROM build AS publish
      ARG BuildNumber=0.1.0.0

      ...

      RUN echo Build Number: ${BuildNumber}

      I do not know why this built on my local machine but failed on Azure; with the scoping fixed, it now builds in both places.
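This matches the documented Dockerfile behavior: an ARG declared inside a stage is scoped to that stage, and its scope ends at the next FROM, so later stages must re-declare it. A minimal illustration (stage names are placeholders):

```dockerfile
ARG BuildNumber=0.1.0.0        # before the first FROM: usable only in FROM lines

FROM debian:11-slim AS build
ARG BuildNumber                # re-declare to pull the value into this stage
RUN echo "build: ${BuildNumber}"

FROM build AS publish
ARG BuildNumber                # scope resets at every FROM, so declare again here
RUN echo "publish: ${BuildNumber}"
```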

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: GitLab "Groups" for permissions only?

      I have followed this pattern in the past for LDAP synchronisation, where a "permission" group was synched with LDAP and then invited into a "projects" group with a specific role.

      This used the https://docs.gitlab.com/ee/administration/auth/ldap/ldap_synchronization.html feature.

      There are a few benefits over adding users directly:

      1. you can manage identities exclusively in the IdP (e.g. Active Directory) - onboarding/offboarding/privilege escalation all done from the IdP.
      2. You make simple declarative statements about which (user) groups need to be in which (project) groups, with given roles.
      3. This makes it easier to think about the authn/authz model and indeed makes it feasible to write declarative infrastructure as code for the configuration of the projects themselves -- which group can merge, which branches are protected, etc.

      On the topic of a "senior review" group that will never produce code: I agree this is a tricky permission to model. You want someone who can approve merge requests and perhaps merge them (though merging effectively creates a commit and therefore "produces code"). Granting that is privilege escalation, when what you really want is true role-based authorization. I'm not sure you can do that yet with GitLab.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Arguments in docker_compose.yml throwing error, but not with docker run

      Why in your docker-compose.yml are you setting entrypoint, when your docker run command does not have a --entrypoint option? You've also removed the -d option from the invocation, and you're passing the remaining options as individual arguments rather than the single string you're using in the docker run invocation.

      The equivalent to your docker run command line would be:

        postgres-2:
          image: livingdocs/postgres:14.4
          container_name: postgres-SLAVE
          ports:
            - "5434:5432"
          command:
            - standby
            - -d
            - "host=host.docker.internal port=5433 user=postgres target_session_attrs=read-write"
      

      I've removed the volumes stanza, since you're not using any volumes on the docker run command line.

      posted in Continuous Integration and Delivery (CI/CD)
    • GitLab Container Registry: errors: denied: requested access to the resource is denied [...] error parsing HTTP 401 response body

      When I run podman push (the equivalent of docker push) to push my image to GitLab's container registry, I get the following error.

      errors: denied: requested access to the resource is denied [...] error parsing HTTP 401 response body: unexpected end of JSON input: ""

      You can see it all here,

      ❯ podman push localhost/acme-web-release registry.gitlab.com/evancarroll/acme-backend
      Getting image source signatures
      Error: trying to reuse blob sha256:1e05dc5a6784f6e7375fe1f73dc2b02f7d184bc53a0150daf06062dcbfde02d4 at destination: checking whether a blob sha256:1e05dc5a6784f6e7375fe1f73dc2b02f7d184bc53a0150daf06062dcbfde02d4 exists in registry.gitlab.com/evancarroll/acme-backend: errors:
      denied: requested access to the resource is denied
      error parsing HTTP 401 response body: unexpected end of JSON input: ""
      

      I've confirmed that I've run podman login (the podman analog of docker login) with a Personal Access Token that grants write_registry.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How to assign an ACL for each S3 Bucket in a tuple/list using Terraform?

      I just found a solution: I added count to every resource:

      resource "aws_s3_bucket" "this" {
          count = length(var.s3_bucket_names)
          bucket = var.s3_bucket_names[count.index]
          tags = var.tags
      }
      

      resource "aws_s3_bucket_acl" "this_acl" {
          count = length(aws_s3_bucket.this)
          bucket = aws_s3_bucket.this[count.index].id
          acl = var.acl
      }

      posted in Continuous Integration and Delivery (CI/CD)