    SOFTWARE TESTING


    kalena (@kalena)

    Reputation: 1 · Posts: 29954 · Profile views: 1 · Followers: 0 · Following: 0

    Best posts made by kalena

    • How do I create a loop within my selenium python script to select every option in a dropdown menu

      I'm new to Python, and to coding for that matter. Currently I have a Python script that selects one option within the dropdown menu, but I would like the script to repeat and select the next option each time. There are about 50 different options in the dropdown.

      # Imports this snippet needs (not shown in the original post):
      import time
      from selenium.webdriver.support.ui import WebDriverWait, Select

      l1 = "Hong Kong, China (Chrome, Canary, Firefox)"

      urlTextBox     = "url"
      dropdownOption = "location"
      # The submit button's XPath was garbled in the post; this value is a
      # placeholder, not the original selector.
      submitBtn = ".//*[@id='submit']"
      homeBtn   = ".//*[@id='nav']/li[1]/a"

      urlTextBoxElement = WebDriverWait(driver, 10).\
          until(lambda driver: driver.find_element_by_id(urlTextBox))

      dropdownOptionElement = WebDriverWait(driver, 10).\
          until(lambda driver: driver.find_element_by_id(dropdownOption))

      submitBtnElement = WebDriverWait(driver, 10).\
          until(lambda driver: driver.find_element_by_xpath(submitBtn))

      urlTextBoxElement.send_keys(webTeamPage)  # webTeamPage: defined elsewhere in the script
      Select(dropdownOptionElement).select_by_visible_text(l1)
      submitBtnElement.click()
      time.sleep(3)
      homeBtnElement = WebDriverWait(driver, 10).\
          until(lambda driver: driver.find_element_by_xpath(homeBtn))
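      The per-option loop the question asks for can be sketched roughly like this (a hedged sketch, not the original script: select_every_option, its default arguments, and the submit XPath are assumptions; the selenium import is deferred into the function so the sketch stays importable without a browser installed):

```python
def select_every_option(driver, dropdown_id="location",
                        submit_xpath=".//*[@id='submit']"):
    # Hedged sketch: iterate over every option in the dropdown instead of
    # selecting a single hard-coded one.
    from selenium.webdriver.support.ui import WebDriverWait, Select

    # Grab the option labels once up front.
    dropdown = WebDriverWait(driver, 10).until(
        lambda d: d.find_element_by_id(dropdown_id))
    labels = [opt.text for opt in Select(dropdown).options]

    for label in labels:
        # Re-find the dropdown each time, since submitting may reload the page.
        dropdown = WebDriverWait(driver, 10).until(
            lambda d: d.find_element_by_id(dropdown_id))
        Select(dropdown).select_by_visible_text(label)
        WebDriverWait(driver, 10).until(
            lambda d: d.find_element_by_xpath(submit_xpath)).click()
```

      Reading the labels first and re-locating the element on each pass avoids stale-element errors after the page reloads.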
      
      posted in Automated Testing

    Latest posts made by kalena

    • RE: How to override global "environment {}" Jenkins Variables in a stage?

      See the Jenkins Pipeline documentation on setting environment variables dynamically: https://www.jenkins.io/doc/book/pipeline/jenkinsfile/#setting-environment-variables-dynamically . I think this will answer your question. Here is the example snippet from those docs:

      pipeline {
          agent any 
          environment {
              // Using returnStdout
              CC = """${sh(
                      returnStdout: true,
                      script: 'echo "clang"'
                  )}""" 
              // Using returnStatus
              EXIT_STATUS = """${sh(
                      returnStatus: true,
                      script: 'exit 1'
                  )}"""
          }
          stages {
              stage('Example') {
                  environment {
                      DEBUG_FLAGS = '-g'
                  }
                  steps {
                      sh 'printenv'
                  }
              }
          }
      }
      

      So your code might end up looking something like this:

      pipeline {
          agent any 
          environment {
          mongo_url = """${->
              switch(env.BRANCH_NAME) {
                  // Groovy case labels need a colon; regex cases are used here
                  // because literal strings like 'bugfix/*' would not glob-match.
                  case ~/bugfix\/.*/:
                  case ~/feature\/.*/:
                  case 'development':
                  case ~/hotfix\/.*/:
                      return 'YOUR_DEV_MONGO_URL_HERE'
                  case 'staging':
                      return 'YOUR_STAGING_MONGO_URL_HERE'
              }
          }"""
          }
      }
      
      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How to access elements of a variable in ansible

      For example, given variables service and management, each with a control list whose first element has an ipa field:

      result: "{{ service.control.0.ipa }},{{ management.control.0.ipa }}"
      

      gives

      result: 192.168.101.81,10.100.78.81
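      The dotted Jinja2 lookup service.control.0.ipa is plain index access; an illustrative Python analogue, with made-up data shaped like those variables:

```python
# Made-up data shaped like the Ansible variables above (illustrative only).
service = {"control": [{"ipa": "192.168.101.81"}]}
management = {"control": [{"ipa": "10.100.78.81"}]}

# Jinja2's service.control.0.ipa resolves like this indexing chain:
result = "{},{}".format(service["control"][0]["ipa"],
                        management["control"][0]["ipa"])
print(result)  # 192.168.101.81,10.100.78.81
```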
      
      posted in Continuous Integration and Delivery (CI/CD)
    • CoreDNS is not working after installation of microk8s

      I have installed MicroK8s on my 64-bit Raspbian and enabled the DNS server with

      microk8s enable dns
      

      CoreDNS continuously crashes:

      microk8s.kubectl describe pod coredns-64c6478b6c-snkxx --namespace=kube-system
      

      Namespace: kube-system
      Priority: 2000000000
      Priority Class Name: system-cluster-critical
      Node: raspberrypi4-docker3/192.168.0.129
      Start Time: Fri, 12 Aug 2022 18:28:54 +0200
      Labels: k8s-app=kube-dns
      pod-template-hash=64c6478b6c
      Annotations: cni.projectcalico.org/podIP: 10.1.174.245/32
      cni.projectcalico.org/podIPs: 10.1.174.245/32
      priorityClassName: system-cluster-critical
      Status: Running
      IP: 10.1.174.245
      IPs:
      IP: 10.1.174.245
      Controlled By: ReplicaSet/coredns-64c6478b6c
      Containers:
      coredns:
      Container ID: containerd://194f8be3d70a7141148c63d2dd92c24e47e40af6c0fdb6d582885a60d38d8e62
      Image: coredns/coredns:1.8.0
      Image ID: docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e
      Ports: 53/UDP, 53/TCP, 9153/TCP
      Host Ports: 0/UDP, 0/TCP, 0/TCP
      Args:
      -conf
      /etc/coredns/Corefile
      State: Waiting
      Reason: CrashLoopBackOff
      Last State: Terminated
      Reason: StartError
      Message: failed to create containerd task: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for procHooks process caused: openat2 /sys/fs/cgroup/kubepods/burstable/podfb1a1616-e10f-4916-9df8-160a92cf4340/194f8be3d70a7141148c63d2dd92c24e47e40af6c0fdb6d582885a60d38d8e62/memory.max: no such file or directory: unknown
      Exit Code: 128
      Started: Thu, 01 Jan 1970 01:00:00 +0100
      Finished: Fri, 12 Aug 2022 18:32:26 +0200
      Ready: False
      Restart Count: 6
      Limits:
      memory: 170Mi
      Requests:
      cpu: 100m
      memory: 70Mi
      Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
      Readiness: http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
      Environment:
      Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hs9ph (ro)
      Conditions:
      Type Status
      Initialized True
      Ready False
      ContainersReady False
      PodScheduled True
      Volumes:
      config-volume:
      Type: ConfigMap (a volume populated by a ConfigMap)
      Name: coredns
      Optional: false
      kube-api-access-hs9ph:
      Type: Projected (a volume that contains injected data from multiple sources)
      TokenExpirationSeconds: 3607
      ConfigMapName: kube-root-ca.crt
      ConfigMapOptional:
      DownwardAPI: true
      QoS Class: Burstable
      Node-Selectors:
      Tolerations: CriticalAddonsOnly op=Exists
      node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
      node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
      Events:
      Type Reason Age From Message


      Normal Scheduled 4m42s default-scheduler Successfully assigned kube-system/coredns-64c6478b6c-snkxx to raspberrypi4-docker3
      Normal Pulled 4m41s (x2 over 4m42s) kubelet Container image "coredns/coredns:1.8.0" already present on machine
      Normal Created 4m41s (x2 over 4m42s) kubelet Created container coredns
      Warning Failed 4m41s (x2 over 4m42s) kubelet Error: failed to create containerd task: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for procHooks process caused: openat2 /sys/fs/cgroup/kubepods/burstable/podfb1a1616-e10f-4916-9df8-160a92cf4340/coredns/memory.max: no such file or directory: unknown
      Warning BackOff 4m39s (x2 over 4m40s) kubelet Back-off restarting failed container
      Normal Pulled 2m32s (x4 over 3m51s) kubelet Container image "coredns/coredns:1.8.0" already present on machine
      Normal Created 2m32s (x4 over 3m51s) kubelet Created container coredns
      Warning Failed 2m32s (x4 over 3m51s) kubelet Error: failed to create containerd task: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for procHooks process caused: openat2 /sys/fs/cgroup/kubepods/burstable/podfb1a1616-e10f-4916-9df8-160a92cf4340/coredns/memory.max: no such file or directory: unknown
      Warning BackOff 2m30s (x13 over 3m51s) kubelet Back-off restarting failed container

      How can I solve this CrashLoopBackOff problem?

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Continuous deployment question

      Your question is a little confusing and open to several interpretations.

      First, for Jenkins: as a matter of good practice, you should always run your pipelines on Jenkins agents.

      The deploy stage is there to, well, as the name says, deploy somewhere. How you deploy depends entirely on what you are deploying and where: you can use an API, a CLI, SSH, or even SFTP. You can also store your artifact in an artifact server (Nexus, Docker Hub, Artifactory) and then tell your target server to pull it from there.

      Wrapping up: in the deploy stage, put the code that deploys the artifact (.exe, .war, .zip...) wherever you want. The deploy stage runs inside an agent/runner, and that runner connects to the destination system via CLI, API, etc. to deploy the artifact.
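      As a rough illustration, such a deploy stage might look like this (the server name, path, and artifact are hypothetical; the scp call stands in for whatever API, CLI, or SFTP mechanism your target actually needs):

```groovy
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                // Hypothetical: copy the built artifact to a target server
                // over SCP; replace with your own API/CLI/SFTP call.
                sh 'scp target/app.war deploy-user@example-server:/opt/tomcat/webapps/'
            }
        }
    }
}
```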

      posted in Continuous Integration and Delivery (CI/CD)
    • How to properly escape brackets in ArgoCD?

      I have an ArgoCD + Argo Workflows setup. In my manifest files I am trying to escape double brackets, since I have some Helm apps. If I manually edit the Argo Workflow file with the expression below, it works. BUT if I push any changes, my app gets degraded with the following error: cannot validate Workflow: templates.main.steps failed to resolve {{steps.scheduler.outputs.result}}. This is what my config looks like:

          templates:
            - name: main
              steps:
              - - name: scheduler
                  template: scheduler
                - name: step-1
                  templateRef:
                    name: step-1
                    template: my-templates
                  when: '"{{`{{steps.scheduler.outputs.result}}`}}" =~ example'
      

      Any ideas? I think this might be ArgoCD-related, since the error only appears there and I can run the workflow manually from Argo.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Docker Push Container to Azure ACR "unauthorized: authentication required"

      spottedmahn had the answer for me in this thread (though it's far from the top answer, and the other answers are much more specific to the Azure DevOps UI): https://stackoverflow.com/questions/55495223/push-docker-image-task-to-acr-fails-in-azure-unauthorized-authentication-requi

      The image name needs to be all lowercase. You can't just lowercase the push command; the tag used at build time must be lowercase too.

      $ docker build -t arcticacr.azurecr.io/sftp01/sftptest:0.02 -f Dockerfile .
      $ az login
      $ az acr login --name arcticacr
      $ docker push arcticacr.azurecr.io/sftp01/sftptest:0.02
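      If your image name starts out mixed-case, tr can normalize it before building and pushing (the mixed-case tag below is a made-up illustration, not from the original answer):

```shell
# Hypothetical mixed-case tag; registries require lowercase repository names.
IMAGE="ArcticACR.azurecr.io/SFTP01/SftpTest:0.02"
LOWER=$(printf '%s' "$IMAGE" | tr '[:upper:]' '[:lower:]')
echo "$LOWER"   # arcticacr.azurecr.io/sftp01/sftptest:0.02
```

      docker build -t "$LOWER" ... and docker push "$LOWER" then succeed where the mixed-case tag would be rejected.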
      
      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Why does the Rancher Security Group use TCP Port 10256?

      According to https://github.com/rancher/rke/issues/212 it is used for kube-proxy.

      The Kubernetes network proxy runs on each node. This reflects services as defined in the Kubernetes API on each node and can do simple TCP, UDP, and SCTP stream forwarding or round robin TCP, UDP, and SCTP forwarding across a set of backends. Service cluster IPs and ports are currently found through Docker-links-compatible environment variables specifying ports opened by the service proxy. There is an optional addon that provides cluster DNS for these cluster IPs. The user must create a service with the apiserver API to configure the proxy.

      References
      https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/

      posted in Continuous Integration and Delivery (CI/CD)
    • Terraform: Why is null_resource's remote-exec not connecting to aws_instance via SSH?

      I've been going through answers to similar questions on this Stack Exchange and on Stack Overflow, and reading through documentation, all day ...

      Here's the .tf file I'm executing:

      # defines the AWS provider
      provider "aws" {
        # credentials path: ~/.aws/credentials
        profile = var.aws_profile
      }
      resource "aws_key_pair" "key_pair" {
        # variable's default value: "id_rsa"
        key_name   = var.aws_key_pair_name
        # variable's default value: public key of "id_rsa"
        public_key = var.aws_key_pair_public
      }
      resource "aws_security_group" "security_group" {
        ingress {
          from_port = 22
          to_port = 22
          protocol = "tcp"
          cidr_blocks = ["0.0.0.0/0"]
        }
        egress {
          from_port = 0
          to_port = 0
          protocol = "-1"
          cidr_blocks = ["0.0.0.0/0"]
        }
        tags = {
          # variable's default value: "security-group-1"
          Name = var.aws_security_group_tags_name
        }
      }
      resource "aws_instance" "instance" {
        # variable's default value: ID of the Ubuntu AMI
        ami = var.aws_instance_ami
        # variable's default value: "t2.micro"
        instance_type = var.aws_instance_type
        associate_public_ip_address  = true
        key_name = aws_key_pair.key_pair.key_name
        vpc_security_group_ids = [aws_security_group.security_group.id]
        tags = {
          # variable's default value: "ec2-instance-1"
          Name = var.aws_instance_tags_name
        }
      }
      resource "null_resource" "instance" {
        provisioner "remote-exec" {
          connection  {
            type = "ssh"
            host = aws_instance.instance.public_ip
            # variable's default value: "ubuntu", Ubuntu AMI's default system user account
            user = var.aws_instance_user_name
          # variable's default value: "~/.ssh/id_rsa"
          # the private key matching the public key provided to aws_key_pair.key_pair
          private_key = file(var.aws_key_pair_private_path)
            timeout = "20s"
          }
          inline = ["echo 'remote-exec message'"]
        }
        provisioner "local-exec" {
          command = "echo 'local-exec message'"
        }
      }
      

      I tried executing it with the permissions of the private key's file set to 400 and 600. It's returning the following error in both cases:

      aws_instance.instance (remote-exec): Connecting to remote host via SSH...
      aws_instance.instance (remote-exec):   Host: 54.82.23.158
      aws_instance.instance (remote-exec):   User: ubuntu
      aws_instance.instance (remote-exec):   Password: false
      aws_instance.instance (remote-exec):   Private key: true
      aws_instance.instance (remote-exec):   Certificate: false
      aws_instance.instance (remote-exec):   SSH Agent: true
      aws_instance.instance (remote-exec):   Checking Host Key: false
      aws_instance.instance (remote-exec):   Target Platform: unix
      aws_instance.instance: Still creating... [1m0s elapsed]
      ╷
      │ Error: remote-exec provisioner error
      │ 
      │   with aws_instance.instance,
      │   on main.tf line 63, in resource "aws_instance" "instance":
      │   63:   provisioner "remote-exec" {
      │ 
      │ timeout - last error: SSH authentication failed (ubuntu@54.82.23.158:22): ssh: handshake failed: ssh: unable to
      │ authenticate, attempted methods [none publickey], no supported methods remain
      

      This is despite the fact that the following command connects to the EC2 instance successfully:

      ubuntu:~/projects/course-1/project-1$ ssh -i "id_rsa" ubuntu@ec2-54-163-199-195.compute-1.amazonaws.com
      

      What am I missing? Is there a better approach?

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Make a readiness probe to fail when there is a newer version of the app being rolled out

      If your pods start fast, perhaps use the Recreate deployment strategy (https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#recreate-deployment), which makes sure the "old" pods are terminated before the new ones are created.

      All existing Pods are killed before new ones are created when .spec.strategy.type==Recreate.
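      For reference, a minimal Deployment fragment showing where the strategy goes (the names and image are illustrative, not from your setup):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # illustrative name
spec:
  strategy:
    type: Recreate        # all existing Pods are killed before new ones start
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0   # illustrative image
```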

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Create AWS SG and use it

      Those two resources may not be created in the right order, since no dependency is declared between them: Terraform doesn't know that the security group named "my-sg" is the same one it is creating.

      You could use a reference to the sg resource in your instance declaration:

      resource "aws_security_group" "my_sg" {
        vpc_id = aws_vpc.mainvpc.id
        name   = "my_sg"
        ingress {
          cidr_blocks = ["0.0.0.0/0"]
          from_port   = 22
          to_port     = 22
          protocol    = "tcp"
        }
      }

      resource "aws_instance" "my_new_instance" {
        ami             = "AMI-ID"
        instance_type   = "t2.micro"
        security_groups = [aws_security_group.my_sg.name]
      }

      or you can declare an explicit dependency with depends_on: https://www.terraform.io/language/meta-arguments/depends_on

      resource "aws_security_group" "my_sg" {
        vpc_id = aws_vpc.mainvpc.id
        name   = "my_sg"
        ingress {
          cidr_blocks = ["0.0.0.0/0"]
          from_port   = 22
          to_port     = 22
          protocol    = "tcp"
        }
      }

      resource "aws_instance" "my_new_instance" {
        ami             = "AMI-ID"
        instance_type   = "t2.micro"
        security_groups = ["my_sg"]   # must match the security group's name
        depends_on      = [aws_security_group.my_sg]
      }

      If you reference the other resource directly (example 1), Terraform can determine the dependency by itself and wait for the security group to be created before creating the instance.

      posted in Continuous Integration and Delivery (CI/CD)