    elysha (@elysha)

    Reputation: 0 | Posts: 30049 | Profile views: 1 | Followers: 0 | Following: 0
    Best posts made by elysha

    This user hasn't posted anything yet.

    Latest posts made by elysha

    • Ansible - Configuring Supermicro servers via Redfish API

      I plan to configure Supermicro servers via the Redfish API. Ansible provides the following modules for the Redfish API:

      • https://docs.ansible.com/ansible/latest/collections/community/general/redfish_command_module.html
      • https://docs.ansible.com/ansible/latest/collections/community/general/redfish_config_module.html

      Supermicro offers a Redfish Reference Guide (https://www.supermicro.com/manuals/other/RedfishRefGuide.pdf) that describes its Redfish REST API in detail. However, the Ansible modules use a set of predefined commands to apply configuration rather than URIs.

      Can someone explain how to map the API options to the Ansible modules? E.g., how can I use the Ansible modules to set the hostname or configure NTP settings?
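
      For illustration, this is the kind of mapping I imagine: the module's category/command style for BIOS attributes, and a direct Redfish call via ansible.builtin.uri for settings the modules don't seem to cover. It is only a sketch; the BMC variables (bmc_host, bmc_user, bmc_password), the attribute name, and the Managers/1/NetworkProtocol path are my assumptions and need to be checked against the Supermicro guide:

      # Sketch only: variables, attribute names and resource paths are assumptions.
      - name: Set a BIOS attribute using the module's command style
        community.general.redfish_config:
          category: Systems
          command: SetBiosAttributes
          bios_attributes:
            QuietBoot: "Disabled"          # illustrative attribute/value
          baseuri: "{{ bmc_host }}"
          username: "{{ bmc_user }}"
          password: "{{ bmc_password }}"

      - name: Configure NTP by PATCHing the Redfish resource directly
        ansible.builtin.uri:
          url: "https://{{ bmc_host }}/redfish/v1/Managers/1/NetworkProtocol"
          method: PATCH
          user: "{{ bmc_user }}"
          password: "{{ bmc_password }}"
          force_basic_auth: true
          validate_certs: false            # BMCs often ship self-signed certs
          body_format: json
          body:
            NTP:
              ProtocolEnabled: true
              NTPServers: ["0.pool.ntp.org", "1.pool.ntp.org"]
          status_code: [200, 204]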

      posted in Continuous Integration and Delivery (CI/CD)
    • What is the PF field in k9s?

      In k9s (the Kubernetes management tool), what is the PF column for pods? What does the Ⓕ value mean?

      screenshot

      posted in Continuous Integration and Delivery (CI/CD)
    • Automatic builds based on commit and deploy

      I am trying to figure out how to get my Ubuntu Docker server to react to code commits in a Git repo, then build and deploy an image based on them.

      I have a Git repo in Azure DevOps that contains an Angular project (with a working Dockerfile).

      I found Docker Hub automated builds (https://docs.docker.com/docker-hub/builds/), but that is a paid feature.

      I have previously created build pipelines in Azure, which work well (albeit very slowly). I am wondering whether it is possible to get my Ubuntu VPS Docker host to react to Git commits on the main branch, pull the latest code, build the Docker image, and replace the running container on the (local) server.

      Just to be clear, I want to do all this on my own VPS. Any guides or hints to keywords that I should be looking for are much appreciated.
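
      For context, the rough shape of what I have in mind on the VPS is a small poll-and-deploy script run from cron; a sketch, where the repo path /opt/app and the myapp image/container names are placeholders:

      #!/bin/sh
      # Rebuild and redeploy only when origin/main has new commits.
      set -e
      cd /opt/app
      git fetch origin main
      if [ "$(git rev-parse HEAD)" != "$(git rev-parse origin/main)" ]; then
          git merge --ff-only origin/main
          docker build -t myapp:latest .
          docker rm -f myapp 2>/dev/null || true
          docker run -d --name myapp --restart unless-stopped -p 80:80 myapp:latest
      fi

      Ideally a push-style trigger (a webhook from Azure DevOps hitting the VPS) would replace the polling, which is part of what I'm asking about.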

      posted in Continuous Integration and Delivery (CI/CD)
    • kubernetes - trouble adding node to cluster

      By following the information in https://www.youtube.com/watch?v=o6bxo0Oeg6o&t=1s, I was able to get a control plane running on my kmaster VM.

      jason@kmaster:~$ kubectl get pods -A
      NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
      kube-system   calico-kube-controllers-59697b644f-tpxnn   1/1     Running   0          18h
      kube-system   calico-node-88l8f                          1/1     Running   0          18h
      kube-system   coredns-565d847f94-vpzwg                   1/1     Running   0          18h
      kube-system   coredns-565d847f94-wkv4p                   1/1     Running   0          18h
      kube-system   etcd-kmaster                               1/1     Running   0          18h
      kube-system   kube-apiserver-kmaster                     1/1     Running   0          18h
      kube-system   kube-controller-manager-kmaster            1/1     Running   0          18h
      kube-system   kube-proxy-wd2gh                           1/1     Running   0          18h
      kube-system   kube-scheduler-kmaster                     1/1     Running   0          18h
      

      Here are my network interfaces:

      jason@kmaster:~$ ip a
      1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      2: ens18:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
          link/ether e6:ec:4b:b8:37:7a brd ff:ff:ff:ff:ff:ff
          altname enp0s18
          inet 10.0.10.118/24 brd 10.0.10.255 scope global ens18
             valid_lft forever preferred_lft forever
          inet6 2600:8802:5700:46d::164d/128 scope global dynamic noprefixroute
             valid_lft 4863sec preferred_lft 2163sec
          inet6 2600:8802:5700:46d:e4ec:4bff:feb8:377a/64 scope global dynamic mngtmpaddr noprefixroute
             valid_lft 86385sec preferred_lft 14385sec
          inet6 fe80::e4ec:4bff:feb8:377a/64 scope link
             valid_lft forever preferred_lft forever
      3: docker0:  mtu 1500 qdisc noqueue state DOWN group default
          link/ether 02:42:bb:62:1c:c5 brd ff:ff:ff:ff:ff:ff
          inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
             valid_lft forever preferred_lft forever
      4: calida7207728a2@if3:  mtu 1500 qdisc noqueue state UP group default
          link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet6 fe80::ecee:eeff:feee:eeee/64 scope link
             valid_lft forever preferred_lft forever
      5: cali919c5dc3a63@if3:  mtu 1500 qdisc noqueue state UP group default
          link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
          inet6 fe80::ecee:eeff:feee:eeee/64 scope link
             valid_lft forever preferred_lft forever
      6: cali0657a847784@if3:  mtu 1500 qdisc noqueue state UP group default
          link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
          inet6 fe80::ecee:eeff:feee:eeee/64 scope link
             valid_lft forever preferred_lft forever
      7: tunl0@NONE:  mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
          link/ipip 0.0.0.0 brd 0.0.0.0
          inet 192.168.189.0/32 scope global tunl0
             valid_lft forever preferred_lft forever
      

      I am now trying to add a node to the cluster by using the following command:

      sudo kubeadm join 10.0.10.118:6443 --token          --discovery-token-ca-cert-hash sha256:
      Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
      To see the stack trace of this error execute with --v=5 or higher
      

      As you can see, it says to define which one I wish to use by setting the 'criSocket' field in the kubeadm configuration file. However, when I try to edit the /var/run/cri-dockerd.sock file, it says it's not a regular file:

      cri-dockerd.sock is not a regular file (use -f to see it)
      

      Here is my kubeadm config print:

      jason@kmaster:~$ kubectl get pods -A --watch
      NAMESPACE     NAME                                       READY   STATUS    RESTARTS        AGE
      kube-system   calico-kube-controllers-59697b644f-tpxnn   1/1     Running   2 (6m56s ago)   16d
      kube-system   calico-node-88l8f                          1/1     Running   2 (6m56s ago)   16d
      kube-system   coredns-565d847f94-vpzwg                   1/1     Running   2 (6m51s ago)   16d
      kube-system   coredns-565d847f94-wkv4p                   1/1     Running   2 (6m51s ago)   16d
      kube-system   etcd-kmaster                               1/1     Running   2 (6m56s ago)   16d
      kube-system   kube-apiserver-kmaster                     1/1     Running   2 (6m55s ago)   16d
      kube-system   kube-controller-manager-kmaster            1/1     Running   2 (6m56s ago)   16d
      kube-system   kube-proxy-wd2gh                           1/1     Running   2 (6m56s ago)   16d
      kube-system   kube-scheduler-kmaster                     1/1     Running   2 (6m56s ago)   16d
      ^Cjason@kmaster:~$ kubeadm config print init-defaults
      apiVersion: kubeadm.k8s.io/v1beta3
      bootstrapTokens:
      - groups:
        - system:bootstrappers:kubeadm:default-node-token
        token: abcdef.0123456789abcdef
        ttl: 24h0m0s
        usages:
        - signing
        - authentication
      kind: InitConfiguration
      localAPIEndpoint:
        advertiseAddress: 1.2.3.4
        bindPort: 6443
      nodeRegistration:
        criSocket: unix:///var/run/containerd/containerd.sock
        imagePullPolicy: IfNotPresent
        name: node
        taints: null
      ---
      apiServer:
        timeoutForControlPlane: 4m0s
      apiVersion: kubeadm.k8s.io/v1beta3
      certificatesDir: /etc/kubernetes/pki
      clusterName: kubernetes
      controllerManager: {}
      dns: {}
      etcd:
        local:
          dataDir: /var/lib/etcd
      imageRepository: registry.k8s.io
      kind: ClusterConfiguration
      kubernetesVersion: 1.25.0
      networking:
        dnsDomain: cluster.local
        serviceSubnet: 10.96.0.0/12
      scheduler: {}
      

      Under nodeRegistration it says criSocket: unix:///var/run/containerd/containerd.sock.

      In /var/run/ I see cri-dockerd.sock:

      jason@kmaster:/var/run$ ls -la
      total 28
      srw-rw----  1 root docker    0 Nov 15 00:25 cri-dockerd.sock
      

      How can I add my node to the cluster?
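
      Based on the error, I'm guessing kubeadm wants a JoinConfiguration file that picks one of the two CRI sockets; a sketch, assuming I go with containerd rather than cri-dockerd (token and hash are placeholders):

      # kubeadm-join.yaml -- token and CA cert hash are placeholders
      apiVersion: kubeadm.k8s.io/v1beta3
      kind: JoinConfiguration
      discovery:
        bootstrapToken:
          apiServerEndpoint: "10.0.10.118:6443"
          token: "<token>"
          caCertHashes:
            - "sha256:<hash>"
      nodeRegistration:
        criSocket: unix:///var/run/containerd/containerd.sock

      and then run sudo kubeadm join --config kubeadm-join.yaml on the worker (or, equivalently, pass --cri-socket on the kubeadm join command line). Is that the right way to resolve the 'multiple CRI endpoints' error?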

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How do you securely deploy large number of Kubernetes components in isolation?

      Q1a) How do you scale the Kubernetes administration/control plane? How do you scale from 1 etcd server to 10 etcd servers for example?

      You won't. etcd needs to be fast, especially with a growing cluster, and three members is what you'll want to have. Usually, as a result, you would also run three replicas each of the kube-apiserver, controller-manager, and scheduler.

      Q1b) In a large organization where there are different business units, do you deploy one K8S instance (active/passive) for each business unit, or multiple K8S instances serving the entire organization?

      Large organizations would usually go with several clusters. Don't put all your eggs in the same basket: make sure you can upgrade some of your clusters without impacting the others, and allow end users to implement their own disaster recovery by managing resources in "sibling" clusters -- a kind of active/passive setup without designating either side as passive, planning instead for each of your clusters to double in size overnight should you need it.

      Which doesn't mean they would be small clusters. You could easily have hundreds of workers (my current customer's largest cluster has between 350 and 400 nodes, with cluster-autoscaler adjusting the size based on requested resources). But as much as possible, you want to avoid beasts like those: the monitoring or logging stack would consume a lot, requiring larger nodes to host infra components, and operations on those components would become slower and more painful. Better to have two small clusters than one large one.

      Here, automation would be critical. Your team probably can't afford to micro-manage 40, 80, or 200 clusters, so you would have to figure out a way to roll out changes to your clusters with little effort, and consistently. This might involve tools such as Ansible, Terraform, Tekton, or ArgoCD.

      Q2) How do you reconcile multiple instances of Kubernetes to get a master view in order to monitor all the instances of containers running on Kubernetes?

      I wouldn't. The more you grow, the more metrics and data you collect, the more complicated it becomes to monitor everything from a single point, and the more likely rule evaluation will eventually lag or miss alerts.

      Better to deploy one Prometheus (or more) per cluster, self-monitoring that cluster. Then pick two or three "ops" clusters where you deploy another Prometheus whose job is to make sure the Prometheus instances in your other clusters work as expected (the server can be queried, Alertmanager is present, Alertmanager can relay alerts, ...); this could be pretty much all based on the blackbox exporter.
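
      As a rough sketch of what that "ops" Prometheus could scrape, a blackbox-exporter probe of the other clusters' Prometheus health endpoints might look like this (the exporter address and target URLs are assumptions):

      scrape_configs:
        - job_name: remote-prometheus-health
          metrics_path: /probe
          params:
            module: [http_2xx]
          static_configs:
            - targets:
                - https://prometheus.cluster-a.example.com/-/healthy
                - https://prometheus.cluster-b.example.com/-/healthy
          relabel_configs:
            - source_labels: [__address__]
              target_label: __param_target
            - source_labels: [__param_target]
              target_label: instance
            - target_label: __address__
              replacement: blackbox-exporter:9115   # assumed exporter service address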

      Configure Alertmanagers to centralize alerts into a single place (Rocket.Chat, a Slack channel, or something like Opsgenie).
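
      For instance, a minimal Alertmanager route sending everything to one Slack channel could be as simple as this sketch (webhook URL and channel are placeholders):

      route:
        receiver: ops-slack
        group_by: ['alertname', 'cluster']
      receivers:
        - name: ops-slack
          slack_configs:
            - api_url: https://hooks.slack.com/services/XXX/YYY/ZZZ
              channel: '#k8s-alerts'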

      Going further, you could look at solutions such as Thanos: you may aggregate metrics from several Prometheus instances, although you'll need lots of RAM to run the Thanos components and a reliable S3 bucket collecting those metrics in one location. It's definitely not something I would recommend for monitoring purposes, but it can be nice for metrology and making cool Grafana dashboards.

      posted in Continuous Integration and Delivery (CI
      E
      elysha
    • How to run a task from a playbook on a specific host

      I'm writing an Ansible playbook to manage backups, and I want two different tasks:

      - name: Setup local machine for backup
        cron:
          cron_file: /etc/cron.d/backup
          hour: 4
          minute: 0
          job: /root/do_backup.sh
          state: present
          name: backup
      
      - name: Setup backup server for new machine
        shell:
          cmd: "mkdir /backups/{{ inventory_hostname }}"

      Is it possible to tell Ansible that the second task is intended to be executed on another machine from my inventory?

      I don't want a dedicated playbook, because some later tasks should be executed after the task on the backup server.
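
      For what it's worth, Ansible's delegate_to keyword looks like what I'm after; a sketch, where backupserver is a placeholder host name from my inventory:

      - name: Setup backup server for new machine
        ansible.builtin.shell:
          cmd: "mkdir -p /backups/{{ inventory_hostname }}"
        delegate_to: backupserver   # runs on the backup host; inventory_hostname still names the current play host

      Is that the intended way to do it within a single playbook?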

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Kubernetes Job Metrics in Prometheus

      I think what you need may be an anti-pattern, but I'm unsure.

      Metrics are used to measure things, and metrics are correlated with time.

      In your example, the job's output is a constant and not correlated with a time.

      Metrics are often (!) measurements of the health (of the state) of a system rather than the output (product) of a system.

      The job's duration, CPU, memory, success|failure etc. are conventional (!) measurements.

      While it's entirely reasonable to want to capture time-series data (from Jobs), it may be (!) that a database or some other persistence mechanism would be a better sink for your data.

      Answering your question as stated: Jobs are challenging because they may not live long enough to be scraped by Prometheus as part of its 'pull' mechanism.

      Batch jobs are a valid use case for the Pushgateway (https://prometheus.io/docs/practices/pushing/), so your Jobs could push metrics to the Pushgateway to ensure that these are captured.
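
      For example, a Job's final step could push a metric with nothing more than curl; a sketch in which the Pushgateway address, job name, and metric are illustrative:

      echo "my_batch_records_processed 42" | \
        curl --data-binary @- http://pushgateway.monitoring.svc:9091/metrics/job/my_batch_job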

      If the logs produced by your Jobs are persisted beyond the life of the Job, another approach is to derive metrics by parsing those logs. This approach biases towards aggregation of log data; the exemplar is counting HTTP 500s in log entries to determine a failure rate.

      posted in Continuous Integration and Delivery (CI/CD)
    • How to not start the entrypoint command on "docker-compose up"?

      I have multiple Docker containers in a project and I use docker-compose up -d to start containers.

      This is my docker-compose.yml file:

      version: "3"
      services:
        httpd:
          image: 'nginx:stable-alpine'
          ports:
            - '80:80'
          volumes:
            - ./laravel:/var/www/html
            - ./.docker-config/nginx/nginx.conf:/etc/nginx/conf.d/default.conf:ro
          depends_on:
            - php
            - mysql
          networks:
            - backstage
      

        php:
          build:
            context: ./.docker-config/dockerfiles
            dockerfile: php.dockerfile
          volumes:
            - ./laravel:/var/www/html:delegated
          networks:
            - backstage

        mysql:
          image: mysql:5.7
          env_file:
            - ./.docker-config/mysql/mysql.env
          ports:
            - '33060:3306'
          networks:
            - backstage

        composer:
          build:
            context: ./.docker-config/dockerfiles
            dockerfile: composer.dockerfile
          volumes:
            - ./laravel:/var/www/html
          networks:
            - backstage

        artisan:
          build:
            context: ./.docker-config/dockerfiles
            dockerfile: php.dockerfile
          volumes:
            - ./laravel:/var/www/html
          entrypoint: ["php", "/var/www/html/artisan"]
          depends_on:
            - mysql
          networks:
            - backstage

        npm:
          image: node:14-alpine
          working_dir: /var/www/html
          entrypoint: ["npm"]
          volumes:
            - ./laravel:/var/www/html
          networks:
            - backstage

        phpunit:
          build:
            context: ./.docker-config/dockerfiles
            dockerfile: php.dockerfile
          volumes:
            - ./laravel:/var/www/html
          entrypoint: ["vendor/bin/phpunit"]
          networks:
            - backstage

      As you can see, I defined an entrypoint for the phpunit container, but I don't want phpunit to start when I run docker-compose up -d.

      How can I do that?
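
      From what I've read, Compose service profiles might be meant for exactly this; a sketch, assuming a Compose version that supports profiles (the profile name "test" is arbitrary):

      phpunit:
        build:
          context: ./.docker-config/dockerfiles
          dockerfile: php.dockerfile
        volumes:
          - ./laravel:/var/www/html
        entrypoint: ["vendor/bin/phpunit"]
        profiles: ["test"]   # skipped by a plain "docker-compose up -d"
        networks:
          - backstage

      That way the service would only start when the profile is activated (docker-compose --profile test up) or when it is targeted explicitly (docker-compose run --rm phpunit). Is that the right approach, or is there a better one?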

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How to verify docker sha256 on hub.docker with docker?

      Use this command

      docker images --digests
      

      You'll get this output

      REPOSITORY    TAG       DIGEST                                                                    IMAGE ID       CREATED        SIZE
      hello-world   latest    sha256:80f31da1ac7b312ba29d65080fddf797dd76acfb870e677f390d5acba9741b17   feb5d9fea6a5   8 months ago   13.3kB
      
      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Unable to authenticate with Terraform AWS provider

      If you have multiple profiles in your credentials file, you need to state which profile you want to use; otherwise the provider will use [default]:

      credentials file:

      [default]
      aws_access_key_id = xxxxxxxx
      aws_secret_access_key = xxxxxxxx
      [profile2]
      aws_access_key_id = yyyyyyyy
      aws_secret_access_key = yyyyyyyyy
      

      terraform code:

      provider "aws" {
         region = var.aws_region
         shared_credentials_file = "/Users/samueldare/.aws/credentials"
         profile = "profile2"
      }
      

      As stated in the Terraform AWS provider documentation: https://registry.terraform.io/providers/hashicorp/aws/latest/docs#shared-configuration-and-credentials-files

      posted in Continuous Integration and Delivery (CI/CD)