    inna

    @inna

    Reputation: 3 · Posts: 29961 · Profile views: 4 · Followers: 0 · Following: 0

    Best posts made by inna

    • RE: Positive and negative test cases for a man moving from top of the building to bottom?

      In my view, the test cases would be something like:

      1. Man uses stairs made of concrete to go down (positive)
      2. Man uses stairs made of wood or rope (positive)
      3. Man is going down and the steps are incomplete (negative)
      4. Man is going down via rope or stairs and it breaks halfway (negative)
      5. Man is using rope-and-wood stairs and the rope cannot hold his weight (negative)
      6. There is just one floor above the ground and no stairs are present (negative)
      7. Man is falling from the building (negative)
      posted in Manual Testing
    • RE: Switching from running Java autotests in Jenkins to TFS

      It's not clear which repository you are migrating from to TFS. It's also not clear why you would use Microsoft technologies (as part of TFS) when there are version control systems that have proven themselves to be excellent, for example Git or SVN.

      But if you really want to use Visual Studio Test Professional, the Microsoft documentation says this is possible: https://docs.microsoft.com/en-us/vsts/build-release/test/continuous-test-java

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Create and delete users before and after each API integration test

      I had a similar situation in one of my projects. A separate test database is created before the tests start: the structure of the prod database is copied into it and it is filled with fake data. After the tests it is removed again.

      The tests access the database through a mock object, so the test database is substituted for the prod one.

      Sometimes you need to run non-destructive tests against existing records in a prod or stage database, or it is not convenient to regenerate the test data; in those cases you can simply switch which database is used.
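
      A minimal sketch of the create-and-tear-down approach, using sqlite3 from the standard library as a stand-in for the real database engine; the schema, fake row, and function names are invented for illustration, not taken from the poster's project:

      # test_users_api.py - hypothetical example
      import sqlite3
      import unittest


      def create_test_db() -> sqlite3.Connection:
          """Build a throwaway database with the same structure as prod, filled with fake data."""
          conn = sqlite3.connect(":memory:")
          conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")  # copied structure
          conn.execute("INSERT INTO users (name) VALUES ('fake-user')")           # fake data
          conn.commit()
          return conn


      def count_users(conn: sqlite3.Connection) -> int:
          """Code under test; in the real project this would normally receive the prod connection."""
          return conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]


      class UsersApiTest(unittest.TestCase):
          def setUp(self) -> None:
              # Before each test: create the test database instead of touching prod.
              self.conn = create_test_db()

          def tearDown(self) -> None:
              # After each test: remove it again.
              self.conn.close()

          def test_count_users(self) -> None:
              self.assertEqual(count_users(self.conn), 1)


      if __name__ == "__main__":
          unittest.main()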

      posted in API Testing

    Latest posts made by inna

    • Skip terraform resource if it exists

      I'm getting an error creating a secret because it was already created manually:

      Error: error creating Secrets Manager Secret: ResourceExistsException: The operation failed because the secret  already exists.
      

      Is there any way to tell Terraform to skip creating a resource if it already exists?

      posted in Continuous Integration and Delivery (CI
      inna
      inna
    • RE: Why are Release and Build pipeline separated?

      A build process is likely to be linked to code changes, such as committing a set of changes into source control or merging changes into a main branch.

      Builds typically take care of anything which should be consistent across all environments, or where there is no meaningful value in repeating it per environment; for example, static code analysis (lint), coding style checks, and anything else which is specific to the code. A build process might also generate some kind of artefact, such as a zip/archive; that archive may be published up to an artefact registry somewhere, with a version number too.

      Deployment happens per environment, ideally with minimal variation, such as environment-specific configuration parameters and the target resources. For example, it may be useful to deploy into an ephemeral environment for development testing, then into some other test environments for QA or integration testing, and finally into at least one production environment (possibly more than one if the strategy involves a passive failover environment, which should obviously be identical to the active production environment).

      The difference boils down to looking at each task performed in the process and asking whether that task should happen just once, when the code is committed and/or merged, or whether it should happen every time the code or artefact is deployed and applied to each target environment.
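
      A rough sketch of that split; every function, artefact, and environment name below is made up for illustration and not tied to any particular CI system:

      def build(commit_sha: str) -> str:
          """Build stage: runs once per commit/merge and is the same for every environment."""
          print(f"lint + unit tests for {commit_sha}")
          artefact = f"app-{commit_sha}.zip"          # versioned artefact
          print(f"publish {artefact} to the artefact registry")
          return artefact


      def deploy(artefact: str, environment: str) -> None:
          """Release stage: runs once per target environment; only configuration differs."""
          print(f"deploy {artefact} to {environment} using {environment}-specific parameters")


      if __name__ == "__main__":
          artefact = build("abc1234")                 # happens once
          for env in ("dev", "qa", "prod"):           # happens per environment
              deploy(artefact, env)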

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: When OnPrem with Kubernetes, what is the recommended way to do file storage buckets?

      You have many options. The following all provide an S3-compatible object storage API; a short client sketch follows the list:

      • If you are using OpenShift, you can install https://www.redhat.com/en/technologies/cloud-computing/openshift-data-foundation , which includes the Noobaa object storage service. This provides ObjectBucket and ObjectBucketClaim resources that are analogous to PersistentVolume and PersistentVolumeClaim resources.

      • You could run MinIO (https://min.io/), which can be deployed on Kubernetes via its operator (https://github.com/minio/minio-operator/blob/master/README.md) or a Helm chart.

      • You can run your own Ceph cluster, which includes the Ceph Object Gateway: https://docs.ceph.com/en/quincy/radosgw/index.html

      • You could run OpenStack Swift: https://docs.openstack.org/swift/latest/
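
      Whichever option you pick, clients talk to it the same way they would talk to S3, just with the endpoint overridden. A minimal boto3 sketch; the endpoint URL, credentials, and bucket name below are placeholders, not real values:

      import boto3

      # Point the standard S3 client at the on-prem, S3-compatible endpoint.
      s3 = boto3.client(
          "s3",
          endpoint_url="https://objectstore.example.internal:9000",  # placeholder gateway URL
          aws_access_key_id="PLACEHOLDER_ACCESS_KEY",
          aws_secret_access_key="PLACEHOLDER_SECRET_KEY",
      )

      s3.create_bucket(Bucket="test-bucket")
      s3.put_object(Bucket="test-bucket", Key="hello.txt", Body=b"hello from on-prem storage")
      print(s3.get_object(Bucket="test-bucket", Key="hello.txt")["Body"].read())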

      posted in Continuous Integration and Delivery (CI/CD)
    • How can I make host docker images available to k8s Deployment?

      I have a k8s pod that is running docker. My issue is that, for the docker images on my host machine to be available within the pod, I either have to run the image as a pod or run a docker pull script at the beginning of my deployment.

      Docker's pull API rate-limits after some time, and these images are not meant to be run without inputs, so I would like to know if there's another way to either expose the host's docker or get access to those images.

      I am currently trying to expose the local docker socket, but that does not appear to give the Deployment access to the local docker instance. I'm testing by running docker images inside the Deployment and on the host to see if the image lists match, which they don't.

      volumeMounts:
          - mountPath: /var/run/docker.sock:ro
            name: docker-sock
      ...
      volumes:
          - name: docker-sock
            hostPath:
                path: /var/run/docker.sock
      

      I'm also mounting with minikube like so:

      minikube start --memory='5000MB' --mount --mount-string='/:/k8s' --cpus='4'
      
      posted in Continuous Integration and Delivery (CI/CD)
    • Does an AWS service automatically assume a needed IAM role?

      Suppose I create a role with an AssumeRolePolicyDocument allowing an AWS service (e.g. s3.amazonaws.com) to assume the role. Do I need to tell the service in some way to assume the role, or will it automatically assume the role if it needs a permission that the role grants?

      The motivation for this question is S3 Inventory, where according to the docs the S3 principal is what's accessing resources: https://docs.aws.amazon.com/AmazonS3/latest/userguide/configure-inventory.html

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: What is the information returned by docker inspect for .Mounts?

      This is the read or write mode of the mount in the container. You can run this command for more detail in the output:

          docker inspect --format="{{json .Mounts}}" base-hostSrcNv
      

      You can also use the jq tool (apt install jq) to get nicely formatted JSON output:

          docker inspect --format="{{json .Mounts}}" base-hostSrcNv | jq
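
      The same information can also be read programmatically; a small sketch using Python's standard library (the container name base-hostSrcNv is taken from the command above):

      import json
      import subprocess

      # Ask docker for the Mounts array of the container and parse it as JSON.
      out = subprocess.run(
          ["docker", "inspect", "--format", "{{json .Mounts}}", "base-hostSrcNv"],
          capture_output=True, text=True, check=True,
      ).stdout

      for mount in json.loads(out):
          # "RW" is True for read-write mounts and False for read-only ones;
          # "Mode" carries the mode string (such as "ro") for bind mounts.
          print(mount["Source"], "->", mount["Destination"], "read-write:", mount["RW"])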
      
      posted in Continuous Integration and Delivery (CI/CD)
    • How can I set let's encrypt certificates with Ansible?

      I'm trying to get a Let's Encrypt certificate for my domain with Ansible. I have been reading https://www.digitalocean.com/community/tutorials/how-to-acquire-a-let-s-encrypt-certificate-using-ansible-on-ubuntu-18-04 , which is a bit outdated, and https://docs.ansible.com/ansible/latest/collections/community/crypto/acme_certificate_module.html#ansible-collections-community-crypto-acme-certificate-module .

      My playbook is a mix of what I found in the tutorial mentioned above and the documentation.

      ---
      - name: "Create required directories in /etc/letsencrypt"
        file:
          path: "/etc/letsencrypt/{{ item }}"
          state: directory
          owner: root
          group: root
          mode: u=rwx,g=x,o=x
        with_items:
          - account
          - certs
          - csrs
          - keys

      - name: Generate let's encrypt account key
        openssl_privatekey:
          path: "/etc/letsencrypt/account/account.key"

      - name: Generate let's encrypt private key with the default values (4096 bits, RSA)
        openssl_privatekey:
          path: "/etc/letsencrypt/keys/domain.me.key"

      - name: Generate an OpenSSL Certificate Signing Request
        community.crypto.openssl_csr:
          path: "/etc/letsencrypt/csrs/domain.me.csr"
          privatekey_path: "/etc/letsencrypt/keys/domain.me.key"
          common_name: www.domain.me

      # Create challenge
      - name: Create a challenge for domain.me using an account key file.
        acme_certificate:
          acme_directory: "https://acme-v02.api.letsencrypt.org/directory"
          acme_version: 2
          account_key_src: "/etc/letsencrypt/account/account.key"
          account_email: "email@mail.com"
          terms_agreed: yes
          challenge: "http-01"
          src: "/etc/letsencrypt/csrs/domain.me.csr"
          dest: "/etc/letsencrypt/certs/domain.me.crt"
          fullchain_dest: "/etc/letsencrypt/certs/domain.me-fullchain.crt"
        register: acme_challenge_domain_me

      - name: "Create .well-known/acme-challenge directory"
        file:
          path: "project/dir/path/.well-known/acme-challenge"
          state: directory
          owner: root
          group: root
          mode: u=rwx,g=rx,o=rx

      - name: "Implement http-01 challenge files"
        copy:
          content: "{{ acme_challenge_domain_me['challenge_data'][item]['http-01']['resource_value'] }}"
          dest: "project/dir/path/{{ acme_challenge_domain_me['challenge_data'][item]['http-01']['resource'] }}"
        with_items:
          - "domain.me"
          - "www.domain.me"
        when: acme_challenge_domain_me is changed and domain_name|string in acme_challenge_domain_me['challenge_data']

      - name: Let the challenge be validated and retrieve the cert and intermediate certificate
        acme_certificate:
          acme_directory: "https://acme-v02.api.letsencrypt.org/directory"
          acme_version: 2
          account_key_src: "/etc/letsencrypt/account/account.key"
          account_email: "email@mail.com"
          challenge: "http-01"
          src: "/etc/letsencrypt/csrs/domain.me.csr"
          cert: "/etc/letsencrypt/certs/domain.me.crt"
          fullchain: "/etc/letsencrypt/certs/domain.me-fullchain.crt"
          chain: "/etc/letsencrypt/certs/domain.me-intermediate.crt"
          remaining_days: "60"
          data: "{{ acme_challenge_domain_me }}"
        when: acme_challenge_domain_me is changed

      When I run the playbook, I'm getting this error:

      fatal: [web_server]: FAILED! =>

      {
        "changed": false,
        "msg": "Failed to validate challenge for dns:www.domain.me: Status is \"invalid\". Challenge http-01: Error urn:ietf:params:acme:error:connection: \"xxx.xxx.x.ip: Fetching http://www.domain.me/.well-known/acme-challenge/NRkTQSpAVbWtjFNq206YES55lEoHHinHUn9cjR7vm7k: Connection refused\".",
        "other": {
          "authorization": {
            "challenges": [
              {
                "error": {
                  "detail": "xxx.xxx.x.ip: Fetching http://www.domain.me/.well-known/acme-challenge/NRkTQSpAVbWtjFNq206YES55lEoHHinHUn9cjR7vm7k: Connection refused",
                  "status": 400,
                  "type": "urn:ietf:params:acme:error:connection"
                },
                "status": "invalid",
                "token": "NRkTQSpAVbWtjFNq206YES55lEoHHinHUn9cjR7vm7k", 
                "type": "http-01",
                "url": "https://acme-v02.api.letsencrypt.org/acme/chall-v3/103702154687/UdA36w",
                "validated": "2022-04-30T16:01:32Z",
                "validationRecord": [
                  {
                    "addressUsed": "xxx.xxx.x.ip",
                    "addressesResolved": ["xxx.xxx.x.ip"],
                    "hostname": "www.domain.me",
                    "port": "80",
                    "url": "http://www.domain.me/.well-known/acme-challenge/NRkTQSpAVbWtjFNq206YES55lEoHHinHUn9cjR7vm7k"
                  }
                ]
              }
            ],
            "expires": "2022-05-07T15:57:28Z",
            "identifier": {
              "type": "dns",
              "value": "www.domain.me"
            },
            "status": "invalid", 
            "uri": "https://acme-v02.api.letsencrypt.org/acme/authz-v3/103702154687"},
            "identifier": "dns:www.domain.me"
          }
        }
      

      The command ufw status gives:

      To                         Action      From
      --                         ------      ----
      OpenSSH                    ALLOW       Anywhere
      80                         ALLOW       Anywhere
      5432/tcp                   ALLOW       Anywhere
      443/tcp                    ALLOW       Anywhere
      OpenSSH (v6)               ALLOW       Anywhere (v6)
      80 (v6)                    ALLOW       Anywhere (v6)
      5432/tcp (v6)              ALLOW       Anywhere (v6)
      443/tcp (v6)               ALLOW       Anywhere (v6)
      

      The nginx configuration is:

      upstream project {
          server unix:///tmp/project.sock;
      }
      

      server {
          listen 443 ssl;
          server_name www.domain.me;
          ssl_certificate /etc/letsencrypt/certs/domain.me.crt;
          ssl_certificate_key /etc/letsencrypt/keys/domain.me.key;

          listen 80;
          server_name domain.me www.domain.me;
          charset utf-8;
          client_max_body_size 4M;
          return 302 https://$server_name$request_uri;

          # Serving static files directly from Nginx without passing through uwsgi
          location /app/static/ {
              alias /home/admin/project/app/static/;
          }

          location / {
              # kill cache
              add_header Last-Modified $date_gmt;
              add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
              if_modified_since off;
              expires off;
              etag off;
              uwsgi_pass project;
              include /home/admin/project/uwsgi_params;
          }

          #location /404 {
          #    uwsgi_pass project;
          #    include /home/admin/project/uwsgi_params;
          #}
      }

      Could you help me understand where the problem is coming from and how to solve it?

      I'm not sure if my mistakes are coming from the playbook, the Nginx settings, or somewhere else, so I apologize if the question isn't perfectly targeted. It's my first time doing this, so please include details and explanations to help me understand.

      Thank you.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Docker vs Virtualisation

      First, Docker is just a company. 😃

      There are two methods of isolating things:

      • Methods that isolate the kernel.
      • Methods that do not isolate the kernel.

      For all intents and purposes, methods that do not isolate the kernel are called "containerization", while those that do are called "virtualization". In industry, almost 100% of the use cases of "containerization" refer to Linux containerization, so it is mostly correct to say that containers are a Linux thing. One more point of confusion: many non-Linux systems that support "native" containerization do so with a virtual machine, which means you have the native kernel (like Darwin/BSD) running on the host and a Linux kernel running in a virtual machine which hosts just the container environment. As a rule of thumb,

      • Containerization is always less secure: it is vulnerable to kernel-level exploits.
      • Containerization is always faster: there is less context switching and no hypervisor overhead.

      It's not true that just because something does not virtualize the kernel, it is not isolated from the host. While it's true that Linux containers are just processes, and are thus,

      • Visible from the host
      • Subject to any kernel level resource optimizations, like memory deduplication

      containerized processes must still

      • Run in different namespaces which, barring a kernel-level exploit, isolate them from other processes on the machine
      • (Usually) run in isolated cgroups, subject to different quotas and limits.

      As a last point, just to drive it home: because

      • Containerization typically refers to the implementation in Linux.
      • Linux has no native concept of containerization itself, only providing cgroups (resource control) and namespaces (isolation).
      • A container is just a native process.

      Then we tend to say that any process on Linux that makes use of namespaces is running in a container, more so if it is also using cgroups.
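
      A tiny way to see this for yourself (Linux only): the sketch below just lists the namespace links of the current process; run it once on the host and once inside any container, and the inode numbers will differ even though both are ordinary processes.

      import os

      # Each entry under /proc/self/ns names the namespace this process lives in.
      for ns in sorted(os.listdir("/proc/self/ns")):
          print(ns, "->", os.readlink(f"/proc/self/ns/{ns}"))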

      As a final point: typically when you hear "Docker image", people mean an OCI-compliant image, which is what everyone uses.


      When you see

      FROM <image>
      

      In a Dockerfile, what you're actually saying is that you want to, in git parlance, clone a working set of stuff and build on top of it. This stuff does not include a kernel, but it will include everything else, because you will not have access to the host system's files (the container is in a different namespace and isolated). For example, a container must include its own copy of glibc if it needs one, and a Debian container must include apt and the other Debian utilities that constitute a "core" system.

      posted in Continuous Integration and Delivery (CI/CD)
    • terraform apply - how to use wildcard or range for indexes?

      I have 10 virtual machines to be re-created. I want to do it in two groups: first the 5 machines [0-4], then the second 5 machines [5-9].

      terraform apply -replace=module.ci.azurerm_windows_virtual_machine.vm[0-4]
      

      But it does not work; it says:

      │ Index key must be followed by a closing bracket.
      

      Is there any way to use a wildcard, or to target a group of machines in general, in the command above?

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: What happens when two jobs produce artifacts with conflicting locations?

      https://gitlab.com/gitlab-org/gitlab/-/issues/324412

      Given that bug, the rule of thumb is basically this:

      A job can rely on an artifact produced by a previous job only as long as no job before that one produced the same artifact.

      The bug report suggests using the job artifacts settings described at https://docs.gitlab.com/ee/ci/pipelines/job_artifacts.html to download only specific artifacts.

      posted in Continuous Integration and Delivery (CI/CD)