    guilherme (@guilherme)

    Reputation: 2
    Posts: 30026
    Profile views: 2
    Followers: 0
    Following: 0

    Best posts made by guilherme

    • RE: What is the difference between test strategy and test plan?
      Test Plan vs. Test Strategy

      1. A test plan is derived from the Software Requirement Specification (SRS) and describes in detail the scope of testing and the different activities performed in testing. A test strategy is a high-level document describing the way testing is carried out.
      2. A test plan is project level. A test strategy is organization level.
      3. A test plan describes the whole of the testing activities in detail - the techniques used, schedule, resources, etc. A test strategy describes the high-level test design techniques to be used, environment specifications, etc.
      4. A test plan is prepared by the test lead or test manager. A test strategy is generally prepared by the project manager.
      5. Components: the major components of a Test Plan include Test Plan ID, test environment, features to be tested, entry/exit criteria, status, type of testing, a brief introduction, etc. The major components of a Test Strategy include scope, objective, business issues, risks, testing approach, testing deliverables, defect tracking, training, automation, etc.
      6. A test plan usually exists individually. A test strategy is divided into multiple test plans that are then handled independently.
      posted in Automated Testing
    • RE: [X-Ray for Jira][Configuration error]: "view-issue-section.error.custom-fields-not-configured.title"

      Xray presents that error message when the Jira custom fields it needs aren't set up.

      The vendor has a knowledge base article that addresses this: https://confluence.xpand-it.com/display/ProductKB/%5BXray+Server%5D+Xray+Custom+Fields+Not+Configured

      Hope this helps!

      posted in Automated Testing

    Latest posts made by guilherme

    • Validating kubernetes manifest with --dry-run and generateName

      We're using ArgoCD to manage deployments, and I'm in the process of sorting out the config repository. I'm planning on having pull requests for any change to the config, and I want to validate the configuration to ensure it's not breaking anything. I've done some looking around, and it looks like the main options are kubeval, kubeconform, or using --dry-run with kubectl.

      Because kubectl actually connects to the cluster and has the cluster perform the validation, I prefer this approach, as it should catch every possible error. However, I'm running into a problem.

      One of the resources uses generateName, which is not compatible with kubectl apply, so if I try to validate using kubectl apply -f manifest.yaml --dry-run=server I get the error cannot use generate name with apply. To get around this, I tried kubectl create -f manifest.yaml --dry-run=server instead, but then I get a load of errors about resources already existing (understandably).
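
      For clarity, here's roughly what that looks like (manifest.yaml stands in for my real config; the exact AlreadyExists messages vary by resource, and the deployment name below is made up):

      $ kubectl apply -f manifest.yaml --dry-run=server
      error: cannot use generate name with apply
      $ kubectl create -f manifest.yaml --dry-run=server
      Error from server (AlreadyExists): error when creating "manifest.yaml": deployments.apps "my-app" already exists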

      So how can I do this? I can't use apply, and I can't use create. Are there any other options? Also, does anyone know what Argo uses to validate? If I push something invalid, it reports an error before it is even told to sync.

      posted in Continuous Integration and Delivery (CI/CD)
    • Does `helm upgrade` use rolling restarts for `deployments`? If not, what is the default?

      I ask because:

      • I want to know what is the default helm upgrade behavior
      • I might need to change the default helm upgrade behavior

      Does helm upgrade use rolling restarts for deployments? If not, what is the default?

      If helm upgrade is not the thing that controls this behavior for deployments, please say what does. (I suspect the Deployment itself controls what happens during a helm upgrade, but I am not sure, so I am asking.)
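
      For reference, I know I can inspect the rollout strategy a chart has set on a given deployment with something like this (the deployment name and namespace are placeholders):

      kubectl get deployment my-app -n my-namespace -o jsonpath='{.spec.strategy}'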

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Docker port mapping across several IPs on same NIC results in error

      There are several ways to address this particular problem.

      Using a dynamic proxy

      Using a dynamic proxy is nice because you only need a single IP address and port. Incoming requests are directed to the appropriate container based on metadata about the request (such as the hostname, particular headers, the request path, etc.).

      This is actually what I'm doing at home: I have a number of web services running on my local system and I want to access them by name. I'm handling this with https://traefik.io/, a dynamic proxy designed to work well with systems like Docker. Configuration is primarily via Docker labels.

      So for example, if I start the Traefik container like this:

      docker run -d --name traefik \
          -v /var/run/docker.sock:/var/run/docker.sock \
          -p 80:80 \
          -p 127.0.0.1:8080:8080 \
          docker.io/traefik:v2.8 \
          --api.insecure=true \
          --providers.docker
      

      (That -p 127.0.0.1:8080:8080 is exposing the Traefik dashboard on http://localhost:8080.)

      Then I can make a web service available at grafana.local by starting it like this:

      docker run -d --name grafana \
          -l 'traefik.http.routers.grafana.rule=Host(`grafana.local`)' \
        docker.io/nginx:mainline
      

      Now any host on my network can access that service at http://grafana.local (assuming I have arranged for the hostname "grafana.local" to map to the address of the host running the traefik proxy).
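
      On my network that's just a hosts-file entry on each client (or a DNS record); 192.168.1.175 below stands in for the address of the host running Traefik:

      # /etc/hosts on a client machine
      192.168.1.175   grafana.local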

      I've used docker run commands here, but you can obviously set this all up using docker-compose instead.
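
      A rough (untested) compose equivalent of the two containers above would look like this:

      services:
        traefik:
          image: docker.io/traefik:v2.8
          command:
            - --api.insecure=true
            - --providers.docker
          ports:
            - "80:80"
            - "127.0.0.1:8080:8080"
          volumes:
            - /var/run/docker.sock:/var/run/docker.sock
        grafana:
          image: docker.io/nginx:mainline
          labels:
            - "traefik.http.routers.grafana.rule=Host(`grafana.local`)"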

      Using multiple IP addresses

      "It sounds a bit like docker can not share the same port even when it belongs to several IPs on the host."

      That is in fact not true. Consider this: on my host, I have the following addresses assigned on eth0:

      $ ip -4 addr show eth0
      2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
          altname eno2
          altname enp0s31f6
          inet 192.168.1.175/24 brd 192.168.1.255 scope global dynamic noprefixroute eth0
             valid_lft 83583sec preferred_lft 83583sec
          inet 192.168.1.180/24 scope global secondary eth0
             valid_lft forever preferred_lft forever
          inet 192.168.1.181/24 scope global secondary eth0
             valid_lft forever preferred_lft forever
          inet 192.168.1.182/24 scope global secondary eth0
             valid_lft forever preferred_lft forever
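
      (In case you're wondering how the secondary addresses got there: they can be added with iproute2, as below, or persistently via your network configuration.)

      sudo ip addr add 192.168.1.180/24 dev eth0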
      

      Let's start up a few containers:

      docker run -d --name darkhttpd  -p 192.168.1.180:80:8080 docker.io/alpinelinux/darkhttpd
      docker run -d --name nginx -p 192.168.1.181:80:80 docker.io/nginx:mainline
      docker run -d --name httpd -p 192.168.1.182:80:80 docker.io/httpd:2.4
      

      We can see that the containers are all running successfully:

      $ docker ps
      CONTAINER ID   IMAGE                   COMMAND                  CREATED          STATUS          PORTS                        NAMES
      283c8e061ebc   httpd:2.4               "httpd-foreground"       16 seconds ago   Up 15 seconds   192.168.1.182:80->80/tcp     httpd
      0d2960c07212   nginx:mainline          "/docker-entrypoint.…"   18 seconds ago   Up 18 seconds   192.168.1.181:80->80/tcp     nginx
      68e56ed8c180   alpinelinux/darkhttpd   "darkhttpd /var/www/…"   18 seconds ago   Up 18 seconds   192.168.1.180:80->8080/tcp   darkhttpd
      

      And I get different services depending on which IP address I use:

      $ curl 192.168.1.180
      (darkhttpd's directory listing for /)
      ...
      $ curl 192.168.1.181
      (nginx's "Welcome to nginx!" page)

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Publish python package into private repository behind VPN

      So I found a similar question (about installing, but it applies to publishing as well): https://stackoverflow.com/questions/73624685/install-python-package-from-private-artifactory-behind-vpn

      Basically, you can either set up a GH Actions runner within the VPN or use a proxy to access your resources.

      I ended up using the answer from the comments:

      "my strategy would be to create a source code release zip file and publish it on GitHub and then download it from within the VPN to push it to the internal repository." (github.com/marketplace/actions/create-github-release)

      and still run a separate job for publishing into the internal repository.
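
      A rough sketch of that split (job names and steps are placeholders; the actual publish command depends on your internal repository):

      jobs:
        release:
          runs-on: ubuntu-latest
          steps:
            - uses: actions/checkout@v3
            # build the source release zip and publish it as a GitHub release here
        publish-internal:
          needs: release
          runs-on: [self-hosted]  # a runner inside the VPN
          steps:
            - run: echo "download the release zip and push it to the internal repository"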

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How to run a task from a playbook to a specific host

      This is a duplicate of this StackOverflow question: https://stackoverflow.com/questions/31912748/how-to-run-a-particular-task-on-specific-host-in-ansible

      But I will combine and summarize all the answers here.

      • If your "other_host" is in the playbooks inventory then you can use the https://docs.ansible.com/ansible/latest/user_guide/playbooks_conditionals.html keyword: when: inventory_hostname is "other_host". Which will only run that task once and only for "other_host".

      • If your playbook's inventory does not include "other_host" then you can use the delegate_to keyword (https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html): delegate_to: other_host_ip. Note that you have to use the IP or DNS name of the machine unless you use the add_host module (https://docs.ansible.com/ansible/latest/collections/ansible/builtin/add_host_module.html#add-host-module). The task will still be iterated for EVERY host in the playbook's inventory, but it will be executed on other_host. (See the sketch after this list.)
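
      A minimal sketch combining both approaches (the host name, IP, and messages are made up):

      - hosts: all
        tasks:
          - name: Run only on other_host (present in the inventory)
            ansible.builtin.debug:
              msg: "runs once, on other_host only"
            when: inventory_hostname == "other_host"

          - name: Iterated for every inventory host, but executed on other_host
            ansible.builtin.debug:
              msg: "delegated"
            delegate_to: 192.0.2.10  # IP/DNS name, unless added via add_host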

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How can I make host docker images available to k8s Deployment?

      As the Minikube README describes (https://github.com/kubernetes/minikube/blob/0c616a6b42b28a1aab8397f5a9061f8ebbd9f3d9/README.md#reusing-the-docker-daemon), you can reuse the Docker daemon from Minikube with eval $(minikube docker-env).

      So to use an image without uploading it, you can follow these steps:

      1. Set the environment variables with eval $(minikube docker-env)
      2. Build the image with the Docker daemon of Minikube (e.g. docker build -t my-image .)
      3. Set the image in the pod spec to the build tag (e.g. my-image)
      4. Set the imagePullPolicy (https://kubernetes.io/docs/concepts/containers/images/#updating-images) to Never, otherwise Kubernetes will try to download the image.

      Important note: You have to run eval $(minikube docker-env) on each terminal you want to use, since it only sets the environment variables for the current shell session.
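
      Put together, a minimal transcript (the image name is a placeholder):

      # in each new terminal session:
      eval $(minikube docker-env)
      # build directly against Minikube's Docker daemon:
      docker build -t my-image .
      # then, in the pod spec:
      #   image: my-image
      #   imagePullPolicy: Never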

      posted in Continuous Integration and Delivery (CI/CD)
    • How can I get an installation of k3s working with KubeApps?

      How can I get an install of KubeApps running on K3s? What are the minimal required steps?

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: k3s: The connection to the server localhost:8080 was refused - did you specify the right host or port?

      This problem is likely caused by a bad ~/.kube/config; perhaps you have a file left over from a different Kubernetes install (minikube) or an older k3s. If the server is local, you can fix this by running these commands:

      mkdir -p ~/.kube
      sudo k3s kubectl config view --raw | tee ~/.kube/config
      chmod 600 ~/.kube/config
      

      The contents of ~/.kube/config need to have the same information as /etc/rancher/k3s/k3s.yaml from when the server was started (the keys, IP, and ports).


      Note: in order to tell kubectl to use this config file, set KUBECONFIG (see https://devops.stackexchange.com/a/16044/18965):

      export KUBECONFIG=~/.kube/config
      

      You should persist this by setting it in ~/.profile or ~/.bashrc.
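
      For example, assuming bash:

      echo 'export KUBECONFIG=~/.kube/config' >> ~/.bashrc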

      posted in Continuous Integration and Delivery (CI/CD)
    • Unable to connect from my local system to ec2 instance created by terraform script

      Following is the source code:

      variable "ec2_instance_type_name" {
          type    = string
          default = "t2.nano"
      }
      

      terraform {
      required_providers {
      aws = {
      source = "hashicorp/aws"
      version = "~> 3.27"
      }
      }
      }

      provider "aws" {
      alias = "us"
      region = "us-east-1"
      }

      provider "aws" {
      alias = "eu"
      region = "eu-west-1"
      }

      data "aws_ami" "amazon_2" {
      provider = aws.eu
      most_recent = true

      filter { 
          name = "name"
          values = ["amzn2-ami-kernel-*-hvm-*-x86_64-gp2"]
      } 
      owners = ["amazon"]
      

      }

      data "http" "myip" {
      url = "http://ipv4.icanhazip.com"
      }

      resource "aws_vpc" "docdb_peer" {
      provider = aws.eu
      cidr_block = "172.32.0.0/16"
      enable_dns_support = true
      enable_dns_hostnames = true
      }

      resource "aws_internet_gateway" "gw_connect" {
      provider = aws.eu
      vpc_id = aws_vpc.docdb_peer.id
      }

      resource "aws_security_group" "vpc_sg" {
      provider = aws.eu
      vpc_id = aws_vpc.docdb_peer.id
      name = "vpc-connect"
      description = "VPC Connect"

      ingress {
          cidr_blocks = ["${chomp(data.http.myip.body)}/32"]
          from_port   = 22
          to_port     = 22
          protocol    = "tcp"
      } 
      
      egress {
          from_port   = 0
          to_port     = 0
          protocol    = "-1"
          cidr_blocks = ["0.0.0.0/0"]
      }
      

      }

      resource "aws_subnet" "main" {
      provider = aws.eu
      vpc_id = aws_vpc.docdb_peer.id
      availability_zone = "eu-west-1a"
      cidr_block = "172.32.0.0/20"
      map_public_ip_on_launch = true
      }

      resource "aws_instance" "tunnel-ec2" {
      provider = aws.eu
      vpc_security_group_ids = ["${aws_security_group.vpc_sg.id}"]
      subnet_id = aws_subnet.main.id
      ami = data.aws_ami.amazon_2.id
      instance_type = var.ec2_instance_type_name
      key_name = "ireland_ofc_new"
      depends_on = [aws_internet_gateway.gw_connect]
      }

      I try to SSH into the instance using the key pair's .pem file, and it just times out. Another EC2 instance, which I created manually, works just fine. Please help me resolve the issue.
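
      For completeness, this is how I'm connecting (the public IP placeholder comes from the console; ec2-user is the default user for Amazon Linux 2):

      ssh -i ireland_ofc_new.pem ec2-user@<instance-public-ip>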

      posted in Continuous Integration and Delivery (CI/CD)
    • validating map of objects

      I am trying to validate a map of objects that is typed as below:

      variable "ecs_config_map" {
        type = map(object({
          cpu     = number
          memory  = number
          desired = number
      
      capabilities = list(string)
      launch_type  = string
      

      }))
      }

      The value for the variable looks like:

      ecs_config_map = {
        driver = {
          cpu     = 256
          memory  = 512
          desired = 0

          capabilities = ["FARGATE"]
          launch_type  = "FARGATE"
        }

        aggregator = {
          cpu     = 256
          memory  = 512
          desired = 0

          capabilities = ["FARGATE"]
          launch_type  = "FARGATE"
        }
      }


      Now, I want to perform some basic validation, but I cannot seem to get the syntax right.

      validation {
        condition     = contains(["EC2", "FARGATE", "EXTERNAL"], var.ecs_config_map[*]["launch_type"])
        error_message = "Only EC2, FARGATE, EXTERNAL are allowed values for launch_type."
      }
      

      This threw: Invalid value for "list" parameter: list of bool required.


      Debugging in the terraform console showed:

      > type(var.ecs_config_map)
      map(
          object({
              capabilities: list(string),
              cpu: number,
              desired: number,
              launch_type: string,
              memory: number,
          }),
      )
      > type(var.ecs_config_map["driver"])
      object({
          capabilities: list(string),
          cpu: number,
          desired: number,
          launch_type: string,
          memory: number,
      })
      > type(var.ecs_config_map[*])
      tuple([
          map(
              object({
                  capabilities: list(string),
                  cpu: number,
                  desired: number,
                  launch_type: string,
                  memory: number,
              }),
          ),
      ])
      

      which indicated that my problem came from how I was iterating over the objects in the variable: the splat expression var.ecs_config_map[*] converts the whole map into a single-element tuple containing the map of objects, rather than iterating over its values.


      Then I tried to use a for expression to perform the validation:

      validation {
        condition     = can(for task in var.ecs_config_map : contains(["EC2", "FARGATE", "EXTERNAL"], task["launch_type"]))
        error_message = "Only EC2, FARGATE, EXTERNAL are allowed values for launch_type."
      }
      

      And that threw: The condition for variable "ecs_config_map" can only refer to the variable itself, using var.ecs_config_map.

      for_each did not work either.


      Am I messing up the syntax of the validation somewhere? Or am I asking too much from terraform by attempting validation of a complex type?
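
      For reference, the shape I'd expect to work is something like this, using alltrue to reduce the per-object checks to a single bool (untested - this is exactly the part I can't get right):

      validation {
        condition = alltrue([
          for task in var.ecs_config_map :
          contains(["EC2", "FARGATE", "EXTERNAL"], task.launch_type)
        ])
        error_message = "Only EC2, FARGATE, EXTERNAL are allowed values for launch_type."
      }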

      posted in Continuous Integration and Delivery (CI/CD)