    emmalee

    @emmalee

    Reputation: 0 · Posts: 29938 · Profile views: 2 · Followers: 0 · Following: 0

    Best posts made by emmalee

    This user hasn't posted anything yet.

    Latest posts made by emmalee

    • forward http request to pod running on worker node

      Context: I am trying to set up a Kubernetes cluster on my PC using VirtualBox. Here's the setup (diagram not included): I am able to launch pods from the control plane, as well as send HTTP requests to the pods.

      Here CP01 is the master/control plane node, W01 is worker node 1, and W02 is worker node 2.

      I initialized the control plane with:

      master] kubeadm init --apiserver-advertise-address 10.5.5.1 --pod-network-cidr 10.244.0.0/16 --cri-socket unix:///var/run/cri-dockerd.sock
      worker] kubeadm join 10.5.5.1:6443  --token jv5pxe.t07snw8ewrbejn6i   --cri-socket  unix:///var/run/cri-dockerd.sock      --discovery-token-ca-cert-hash sha256:10fc6e3fdc2085085f1ea1a75c9eb4316f13759b0d3773377db86baa30d8b972
      

      I am able to create a deployment as follows:

      [root@cp01 ~]# cat run.yaml
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx-deployment
        labels:
          app: nginx
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx:1.14.2
              ports:
              - containerPort: 80
      

      Here's the LoadBalancer service:

      [root@cp01 ~]# cat serv.yaml
      apiVersion: v1
      kind: Service
      metadata:
        name: hello-world
      spec:
        type: LoadBalancer
        selector:
          app: nginx
        ports:
          - name: http
            protocol: TCP
            port: 80
            targetPort: 80
      

      From the cp01 node, I am able to reach both pods via the load balancer:

      [root@cp01 ~]# kubectl get services
      NAME          TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
      hello-world   LoadBalancer   10.96.81.128        80:31785/TCP   3m56s
      [root@cp01 ~]# curl 10.96.81.128 | grep Welcome
      Welcome to nginx!
      Welcome to nginx!

      I have the following two queries:

      Q1: What settings do I need to apply to make flannel work without the enp0s3 interface? I don't need external connectivity on w01 and w02, but if I disable enp0s3 on w01 and w02, the flannel pods on the workers start failing. Here's how I reproduced the issue on w01:

      [root@cp01 ~]# kubectl get pods -A
      NAMESPACE      NAME                           READY   STATUS    RESTARTS        AGE
      kube-flannel   kube-flannel-ds-mp8zs          1/1     Running   8 (31m ago)     41m
      kube-flannel   kube-flannel-ds-p5kwj          1/1     Running   2 (12h ago)     3d1h
      kube-flannel   kube-flannel-ds-wqpwl          1/1     Running   0               24m
      kube-system    coredns-565d847f94-xddkq       1/1     Running   1 (12h ago)     15h
      kube-system    coredns-565d847f94-xl7pj       1/1     Running   1 (12h ago)     15h
      kube-system    etcd-cp01                      1/1     Running   2 (12h ago)     3d1h
      kube-system    kube-apiserver-cp01            1/1     Running   2 (12h ago)     3d1h
      kube-system    kube-controller-manager-cp01   1/1     Running   2 (12h ago)     3d1h
      kube-system    kube-proxy-9f4xm               1/1     Running   2 (12h ago)     3d1h
      kube-system    kube-proxy-dhhqc               1/1     Running   2 (12h ago)     3d1h
      kube-system    kube-proxy-w64gc               1/1     Running   1 (2d16h ago)   3d1h
      kube-system    kube-scheduler-cp01            1/1     Running   2 (12h ago)     3d1h
      [root@cp01 ~]# ssh w01 'nmcli con down enp0s3'
      [root@cp01 ~]# kubectl delete pod -n kube-flannel kube-flannel-ds-mp8zs
      pod "kube-flannel-ds-mp8zs" deleted
      [root@cp01 ~]# kubectl delete pod -n kube-flannel kube-flannel-ds-wqpwl
      pod "kube-flannel-ds-wqpwl" deleted
      [root@cp01 ~]# kubectl get pods -A
      NAMESPACE      NAME                           READY   STATUS             RESTARTS        AGE
      kube-flannel   kube-flannel-ds-2kqq5          0/1     CrashLoopBackOff   2 (25s ago)     45s
      kube-flannel   kube-flannel-ds-kcwk6          1/1     Running            0               49s
      kube-flannel   kube-flannel-ds-p5kwj          1/1     Running            2 (12h ago)     3d1h
      kube-system    coredns-565d847f94-xddkq       1/1     Running            1 (12h ago)     15h
      kube-system    coredns-565d847f94-xl7pj       1/1     Running            1 (12h ago)     15h
      kube-system    etcd-cp01                      1/1     Running            2 (12h ago)     3d1h
      kube-system    kube-apiserver-cp01            1/1     Running            2 (12h ago)     3d1h
      kube-system    kube-controller-manager-cp01   1/1     Running            2 (12h ago)     3d1h
      kube-system    kube-proxy-9f4xm               1/1     Running            2 (12h ago)     3d1h
      kube-system    kube-proxy-dhhqc               1/1     Running            2 (12h ago)     3d1h
      kube-system    kube-proxy-w64gc               1/1     Running            1 (2d16h ago)   3d1h
      kube-system    kube-scheduler-cp01            1/1     Running            2 (12h ago)     3d1h
      

      Here's the reason:

      [root@cp01 ~]# kubectl logs -n kube-flannel kube-flannel-ds-2kqq5
      Defaulted container "kube-flannel" out of: kube-flannel, install-cni-plugin (init), install-cni (init)
      I1005 08:17:59.211331       1 main.go:207] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: version:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true ifaceCanReach: subnetFile:/run/flannel/subnet.env publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
      W1005 08:17:59.211537       1 client_config.go:614] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
      E1005 08:17:59.213916       1 main.go:224] Failed to create SubnetManager: error retrieving pod spec for 'kube-flannel/kube-flannel-ds-2kqq5': Get "https://10.96.0.1:443/api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-2kqq5": dial tcp 10.96.0.1:443: connect: network is unreachable
      

      Q2: I plan to send HTTP requests to the load balancer via the enp0s3 interface on the cp01 node. Do I need to:

      • reset the cluster and run kubeadm init again using the 0.0.0.0 IP, or
      • is there a way to accomplish this without disturbing the existing configuration/setup (using an Ingress)?

      Please advise. I have started learning Kubernetes recently, so please excuse me if I missed some basic concepts of the Kubernetes world while framing these queries. Please let me know if something is unclear.

      UPDATE:

      Q2: I tried initializing the kubeadm node via:

      [root@cp01 ~]# kubeadm init --apiserver-advertise-address 0.0.0.0  --pod-network-cidr 10.244.0.0/16 --cri-socket unix:///var/run/cri-dockerd.sock
      

      But with this method, worker nodes are unable to join the cluster from the 10.5.5.0/24 (enp0s8) network. They are able to join the cluster with:

      kubeadm join 192.168.29.73:6443 --cri-socket  unix:///var/run/cri-dockerd.sock   --token 6srhn0.1lyiffiml08qcnfw --discovery-token-ca-cert-hash sha256:ab87e3e04da65176725776c08e0f924bbc07b26d0f8e2501793067e477ab6379
      
      posted in Continuous Integration and Delivery (CI/CD)
    • How do I list pods sorted by label version in Kubernetes?

      I need to list some pods sorted by version and get the last one.

      I'm trying to do something like:

      kubectl get pods \
        --namespace my-namespace \
        --selector "app.kubernetes.io/name=my-cool-pod" \
        --sort-by='.items[*].metadata.labels["app.kubernetes.io/version"]' \
        --output jsonpath="{.items[-1:].metadata.name}"
      

      But it is not working.

      Also, my pod labels are:

          labels:
            app.kubernetes.io/instance: my-instance
            app.kubernetes.io/managed-by: my-manager
            app.kubernetes.io/name: my-name
            app.kubernetes.io/version: x.y.z
      

      How do I list pods sorted by version?

      Thanks

      posted in Continuous Integration and Delivery (CI/CD)
    • In Jenkins, how do I prevent users from selecting the first (default) element together with other options in an extended choice parameter?

      I have a Jenkins job with a multi-select extended choice parameter containing a list of elements. My requirement is to allow users to select multiple values excluding the first element; that is, a user should not be able to select the first element together with other elements of the parameter. I am using a Jenkinsfile to create the parameter.

      [Screenshot: the multi-select parameter, with 'None' as the first option]

      As shown above, users should not be able to select 'None' together with any other element of the parameter. Does anyone know how to do this?

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Why is AWS ALB not talking to an ingress controller?

      Assuming everything is running and working fine on AWS, the key pieces of information you mentioned above are the DNS record and the Ingress.

      I would assume you have defined name-based virtual hosting (https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting).

      Then you'd also need to add, in AWS Route53, the appropriate DNS records pointing to the ALB.
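
      For illustration, a minimal name-based virtual hosting Ingress might look like the sketch below; the host, Service name, and port are placeholders rather than values from your setup, and it assumes the AWS Load Balancer Controller is installed in the cluster:

      # Hypothetical Ingress routing app.example.com to a backend Service
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: example-ingress
        annotations:
          kubernetes.io/ingress.class: alb
      spec:
        rules:
        - host: app.example.com          # must match the Route53 record
          http:
            paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: my-service       # placeholder backend Service
                  port:
                    number: 80

      The Route53 record for app.example.com (an alias or CNAME) would then point at the ALB provisioned for this Ingress.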

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Azure DevOps solution for max execution time

      If you separate your tasks into different jobs, each job gets 60 minutes independently of the next job when running on Microsoft-hosted agents, and you can manually set a longer timeout if you use a self-hosted agent for a job.

      For example, my automated tests take longer than 60 minutes, so I set a timeoutInMinutes property of 360 minutes on the job. I am using a self-hosted agent, which is why I am not limited to 60 minutes. More details on agents: https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops&tabs=browser

      - job: RunTests
        timeoutInMinutes: 360
        displayName: Automated tests Job
        variables:
        - group: eStrategyURLs
        - name: Env
          value: 'UAT'
        ....
        steps:
        - checkout: QA-Automation
          path: s/$(qaTestCodeRepoName)

        - template: '..\SharedResources\CypressTests.Template.yml'
          parameters:
            emailList: $(Build.RequestedForEmail)

      ...

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Rename Terraform template script

      Yes, you can move it in the state.

      Something like:

      terraform state mv template_file.server1 template_file.new_name
      

      Once you move it in the Terraform state, you can rename it in the file.

      data "template_file" "new_name" {
          template = file("${path.module}/script1.ps1")
          vars = {}
      }
      

      If you run terraform plan, you shouldn't see any changes.

      You will also need to update any references to the old name to point to the new name.

      If you are using a version of Terraform greater than 0.12, you might want to remove the template_file data source completely and replace it with a call to the templatefile function (https://www.terraform.io/language/functions/templatefile).
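
      A minimal sketch of that replacement, assuming the script takes no variables (as in the data source above; local.script1 is just an illustrative name):

      # Hypothetical local value rendering the script with the built-in templatefile() function
      locals {
        script1 = templatefile("${path.module}/script1.ps1", {})
      }

      References that used data.template_file.new_name.rendered would then use local.script1 instead.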

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Does Jenkins 2.289.2 have a customizable workspace?

      Your first question is a duplicate of this Stack Overflow question: https://stackoverflow.com/questions/34854377/how-to-change-workspace-and-build-record-root-directory-on-jenkins

      As for your second question: each machine has its own setting, so the Jenkins controller and the Jenkins agent(s) will each have their own setting.

      You can change the workspace with the ws step (https://www.jenkins.io/doc/pipeline/steps/workflow-durable-task-step/#ws-allocate-workspace), but I have never seen anyone do it.
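
      As a rough sketch, in a scripted Pipeline the ws step takes a directory and a block; the path below is just a placeholder:

      node {
          // run the enclosed steps inside a custom workspace directory
          ws('/var/jenkins/custom-ws') {
              sh 'pwd'
          }
      }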

      I can't answer your last question because I don't have a Jenkins instance with that plugin to look at, but it is something that can be configured. I see an example for this setting in the plugin's README: https://github.com/jenkinsci/configuration-as-code-plugin

      EDIT: Thanks to https://devops.stackexchange.com/users/13379/ian-w we now know that those setters were moved to system properties. With that information I was able to research JCasC further and found that you can set the rawBuildsDir if you use the restricted flag, but you can't set the workspaceDir unless you drop into a Groovy init file. You can find all the information in this GitHub issue: https://github.com/jenkinsci/configuration-as-code-plugin/issues/151

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How do I upgrade or pin a Terraform Provider?

      Upgrading to the newest version

      You can upgrade the provider with

      terraform init -upgrade
      

      This works as long as you don't have the version specified anywhere in a required_providers block.

      Pinning with a required_providers block

      You can pin or specify any version constraint (https://www.terraform.io/language/expressions/version-constraints) with a required_providers block, like this:

      terraform {
        required_providers {
          aws = {
            source = "registry.terraform.io/hashicorp/aws"
            version = "4.14.0"
          }
        }
      }
      provider "aws" {
        region = "us-east-1"
      }
      

      Terraform pre 0.13

      Prior to Terraform version 0.13, you could specify the version in the provider block:

      provider "aws" {
        version = "4.14.0"
        region = "us-east-1"
      }
      

      Find the official docs at:

      • https://learn.hashicorp.com/tutorials/terraform/provider-versioning#upgrade-the-aws-provider-version
      • https://www.terraform.io/language/providers/requirements
      posted in Continuous Integration and Delivery (CI/CD)
    • RE: validating map of objects

      You can validate your variable with the following expression:

      validation {
        condition     = length([for launch_type in values(var.ecs_config_map)[*].launch_type : launch_type if !contains(["EC2", "FARGATE", "EXTERNAL"], launch_type)]) == 0
        error_message = "Each launch_type must be one of EC2, FARGATE, or EXTERNAL."
      }

      This might seem complex, so let's break it down.

      We can extract the launch types from ecs_config_map with the following for expression:

      output "launch_types" {
        value = [for launch_type in values(var.ecs_config_map)[*].launch_type : launch_type]
      }
      

      The output of this would be something like:

      launch_types = [
        "FARGATE",
        "FARGATE",
      ]
      

      Moving on, we want to filter out the launch types which are not in the allowed types array (["EC2", "FARGATE", "EXTERNAL"]). We can do that as follows:

      output "not_allowed_launch_types" {
        value = [for launch_type in values(var.ecs_config_map)[*].launch_type: launch_type if !contains(["EC2", "FARGATE", "EXTERNAL"], launch_type)]
      }
      

      If the input is correct, this should output an empty array; otherwise the output will contain the incorrect launch types.

      The final step is to check whether we have any incorrect launch types. We can do this with the length function: if the length of the array of incorrect launch types is greater than 0, we have invalid input, so the condition requires that length to be 0.

      posted in Continuous Integration and Delivery (CI/CD)
    • Issue running a Docker container on ECS

      I am trying to run a container on ECS, but there is one specific image which does not run and gives me an error. I can see on the EC2 container instance that the image is pulled and it tries to run, but it gives the following error:

      019ee43da7b91c68126e0d671" status="CREATED"
      level=info time=2022-05-05T09:36:40Z msg="Container change also resulted in task change" desiredStatus="RUNNING" knownStatus="CREATED" task="cb091374b6ca45629c4f41bfed9c16fb" container="toxic-container" runtimeID="a6801c731cfa6ddc54ae42e0022d679cd12feab019ee43da7b91c68126e0d671"
      level=info time=2022-05-05T09:36:40Z msg="Starting container" task="cb091374b6ca45629c4f41bfed9c16fb" container="toxic-container" runtimeID="a6801c731cfa6ddc54ae42e0022d679cd12feab019ee43da7b91c68126e0d671"
      level=error time=2022-05-05T09:36:41Z msg="Error transitioning container" container="toxic-container" runtimeID="a6801c731cfa6ddc54ae42e0022d679cd12feab019ee43da7b91c68126e0d671" nextState="RUNNING" error="Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: \"/\": permission denied: unknown" task="cb091374b6ca45629c4f41bfed9c16fb"
      level=info time=2022-05-05T09:36:41Z msg="Handling container change event" task="cb091374b6ca45629c4f41bfed9c16fb" container="toxic-container" runtimeID="a6801c731cfa6ddc54ae42e0022d679cd12feab019ee43da7b91c68126e0d671" status="RUNNING"
      level=warn time=2022-05-05T09:36:41Z msg="Error starting/provisioning container[%s (Runtime ID: %s)];" task="cb091374b6ca45629c4f41bfed9c16fb" container="toxic-container" runtimeID="a6801c731cfa6ddc54ae42e0022d679cd12feab019ee43da7b91c68126e0d671" error="Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: \"/\": permission denied: unknown"
      

      Steps to reproduce it

      1. I created an ECS cluster on Amazon, selecting EC2 Linux + Networking
      2. Cluster name: sample
      3. Selected an on-demand instance
      4. EC2 instance type: t2.large
      5. Number of instances: 1
      6. Key pair: my key
      7. Created a new VPC
      8. Security group with SSH and port 5000 open
      9. Selected ecsInstanceRole, then clicked Create

      For the task definition:

      1. Created a new task definition
      2. EC2
      3. Task definition name: server
      4. Required compatibilities: EC2
      5. Clicked Add Container: a. name: server-container, b. image: quay.io/codait/max-toxic-comment-classifier, c. memory limit: 4096, d. port mapping: host port 5000, container port 5000
      6. Clicked Add

      The task is now created, so I click Run Task. It starts, pulls the image (I can see the image is pulled), but when it runs it gives me the above error.

      The Docker image works perfectly on my local system, and it is a public image.

      How can I fix it?

      posted in Continuous Integration and Delivery (CI/CD)