    juvenalb

    @juvenalb

    Reputation: 0
    Posts: 29235
    Profile views: 2
    Followers: 0
    Following: 0


    Best posts made by juvenalb

    This user hasn't posted anything yet.

    Latest posts made by juvenalb

    • RE: How to lock a user using ansible?

      ... mostly achieves the lock behavior, but it also creates the user. The state property only has possible values present and absent ...

      Right, that would be the best approach and expected behavior.

      You can gather a fact about whether the user exists with ansible.builtin.getent and then run ansible.builtin.user conditionally.

      And also right, that would be the recommended approach.

      In other words, there is currently no way to configure this within a single task without falling back to the shell module, custom scripts, or custom modules.
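      A minimal sketch of that two-step pattern, assuming the lock itself is done with the user module's password_lock option; the username alice and the surrounding play are hypothetical:

      - name: Check whether the account already exists
        ansible.builtin.getent:
          database: passwd
          key: alice        # hypothetical username
          fail_key: false   # do not fail the task when the key is missing

      - name: Lock the account only if it already exists
        ansible.builtin.user:
          name: alice
          password_lock: true
        when: getent_passwd['alice'] is not none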

      Similar Q&A

      • https://serverfault.com/a/1035610/448950
      posted in Continuous Integration and Delivery (CI/CD) • juvenalb
    • which path has to be specified in httpGet handler in kubernetes?

      I just started learning Kubernetes, and I am confused about the path parameter of the httpGet handler in Kubernetes probes. Can anyone explain it, please?
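      For context, here is the kind of probe spec I mean; path is the URL path the kubelet requests on the container (the values below are just an example):

      livenessProbe:
        httpGet:
          path: /healthz        # the HTTP path the kubelet GETs, e.g. http://<pod-ip>:8080/healthz
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10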

      posted in Continuous Integration and Delivery (CI/CD) • juvenalb
    • Escape quotes and commas in Docker volume paths using bind-mount syntax

      In order to write scripts that are as bullet-proof as possible and don't break on edge cases (no matter how unlikely), how do I escape commas and quotes in Docker bind-mount paths when using the --mount syntax?

      Note that https://docs.docker.com/storage/bind-mounts/ says everyone is "encouraged to use" --mount instead of -v; for the -v case, I have a similar question about how to escape : in paths.

      For example,

      cd /Users/name
      mkdir te,\"st
      touch te,\"st/file.txt
      docker run -it --rm --mount 'type=bind,source=/Users/name/te??st,target=/usr/test' alpine ash
      

      Where ?? needs to be the comma , and double quote " from the te,"st directory created above.

      • I tried wrapping the value after source= in double quotes just to test if it will accept a comma, but get error bare " in non-quoted-field. This attempt was based on the info-box "Escape values from outer CSV parser" in the https://docs.docker.com/storage/volumes/#choose-the--v-or---mount-flag section of Docker storage volumes docs. But this field clearly doesn't like double quotes, at all, ever.
      • Single quotes, double commas, and backslashes in the volume path don't seem to work, with errors such as invalid field 'st' must be a key=value pair.
      • With this, I am out of guesses. I tried a few random stabs in the dark to see if Docker would do environment variable expansion instead of the shell, but nothing is working.

      Is it simply impossible to work with paths or volumes that contain a comma or a double quote in their names? Can I instruct Docker to use environment variables somehow so it does the environment variable expansion instead of the shell?

      Or, do I need to use a compose.yaml file to get around this limitation so I can present the parts/pieces of the mounts in a different encoding/format?
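      For example, with the compose long syntax, source and target are separate YAML keys, so there should be no CSV parsing to fight. An untested sketch, using the te,"st directory from above:

      services:
        test:
          image: alpine
          command: ash
          volumes:
            - type: bind
              source: '/Users/name/te,"st'   # plain YAML string, no CSV splitting involved
              target: /usr/test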

      posted in Continuous Integration and Delivery (CI/CD) • juvenalb
    • How does one perform systems testing against multiple interdependent machines?

      I'm looking for a platform that allows me to run tests against multiple, closely-coupled systems. We deploy different products to Red Hat family machines - think Rocky Linux, CentOS, Fedora, RHEL, etc. This is done using SaltStack, which allows us to completely set up blank VMs (in development they are built on QEMU using Vagrant, on a linux host). To date, regression testing has been done by hand, and the extent of our automation has been Test Kitchen ensuring that highstates return clean, coupled with some basic InSpec profiles in some cases (is port listening, etc).

      It has become necessary to perform more testing as these systems become more complicated. For example, take a cluster of three VMs, which each have services used by the others, all heavily configured with SaltStack. It seems that the standard advice for testing is to test each system individually with tools like Chef's InSpec, or testinfra. However, I'm looking for a way to test the cluster overall.

      I'm looking for a way to test all my services in different cases. For example, I should be able to kick off a test that:

      1. Highstates one blank machine
      2. Uses a client VM to access the public service of the first (i.e. a basic MySQL query)
      3. Highstates a second server VM
      4. Tests both services again
      5. Disables a network interface on one VM to test split-brain functionality

      ... etc

      I could do all of this with a long bash or Python script and many SSH commands to each VM, but I would prefer to not reinvent the wheel. Is there a testing framework that can run tests on multiple machines, in order as I define, that allows me to test the whole system in aggregate?
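      For illustration, this is the kind of hand-rolled driver I would rather not maintain myself (hostnames, states, and the query are placeholders):

      #!/usr/bin/env bash
      set -euo pipefail

      # 1. Highstate the first blank machine
      ssh server1 "sudo salt-call state.highstate"

      # 2. From a client VM, exercise the public service of the first server
      ssh client1 "mysql -h server1 -e 'SELECT 1'"

      # 3. Highstate a second server VM
      ssh server2 "sudo salt-call state.highstate"

      # 4. Test both services again
      ssh client1 "mysql -h server1 -e 'SELECT 1' && mysql -h server2 -e 'SELECT 1'"

      # 5. Disable a network interface on one VM to test split-brain behaviour
      ssh server2 "sudo ip link set eth1 down"
      ssh client1 "mysql -h server1 -e 'SELECT 1'"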

      posted in Continuous Integration and Delivery (CI/CD) • juvenalb
    • RE: Vscode/pytest gives me an error when importing

      I'm guessing you created and activated the venv in your terminal?

      You need to tell vscode about the venv. It's pretty good at finding and using them, but sometimes it detects the wrong one.

      At the bottom of the vscode window you should see the environment listed. If it's the wrong one, click it and select the right one.

      (Screenshot: the Python venv selector in the vscode task bar)

      Also, your launch.json and setup file seem wrong.

      In launch.json, program should probably be set to your script, or to python3 with ${file}.

      In your setup file, scripts should be scripts=bin/my-cool-program. Then you can change program in launch.json to my-cool-program. You should also be able to run it from your terminal like $ my-cool-program -q data.
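      Putting that together, a launch.json entry along those lines might look roughly like this (the configuration name is arbitrary, and the args just mirror the example above):

      {
        "version": "0.2.0",
        "configurations": [
          {
            "name": "Python: current file",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "args": ["-q", "data"],
            "console": "integratedTerminal"
          }
        ]
      }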

      This is a great way to learn, but it's not how I would create a CLI program today in Python. I think you would be better off using poetry and click. Here is an introduction to "modern" Python: https://cjolowicz.github.io/posts/hypermodern-python-01-setup/

      posted in Continuous Integration and Delivery (CI/CD) • juvenalb
    • Cannot start Kubernetes Dashboard

      I'm trying to install a Kubernetes cluster with the Dashboard on Ubuntu 20.04 LTS using the following commands:

      swapoff -a
      

      Remove following line from /etc/fstab

      /swap.img none swap sw 0 0

      sudo apt update
      sudo apt install docker.io
      sudo systemctl start docker
      sudo systemctl enable docker

      sudo apt install apt-transport-https curl
      curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
      echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" >> ~/kubernetes.list
      sudo mv ~/kubernetes.list /etc/apt/sources.list.d
      sudo apt update
      sudo apt install kubeadm kubelet kubectl kubernetes-cni

      sudo kubeadm init --pod-network-cidr=192.168.0.0/16

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config

      kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
      kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

      kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml

      kubectl proxy --address 192.168.1.133 --accept-hosts '.*'

      But when I open http://192.168.1.133:8001/api/v1/namespaces/default/services/https:kubernetes-dashboard:https/proxy

      I get:

      {
        "kind": "Status",
        "apiVersion": "v1",
        "metadata": {},
        "status": "Failure",
        "message": "services \"kubernetes-dashboard\" not found",
        "reason": "NotFound",
        "details": {
          "name": "kubernetes-dashboard",
          "kind": "services"
        },
        "code": 404
      }
      

      I tried to list the pods:

      root@ubuntukubernetis1:~# kubectl get pods --all-namespaces
      NAMESPACE              NAME                                         READY   STATUS              RESTARTS       AGE
      kube-flannel           kube-flannel-ds-f6bwx                        0/1     Error               11 (29s ago)   76m
      kube-system            coredns-6d4b75cb6d-rk4kq                     0/1     ContainerCreating   0              77m
      kube-system            coredns-6d4b75cb6d-vkpcm                     0/1     ContainerCreating   0              77m
      kube-system            etcd-ubuntukubernetis1                       1/1     Running             1 (52s ago)    77m
      kube-system            kube-apiserver-ubuntukubernetis1             1/1     Running             1 (52s ago)    77m
      kube-system            kube-controller-manager-ubuntukubernetis1    1/1     Running             1 (52s ago)    77m
      kube-system            kube-proxy-n6ldq                             1/1     Running             1 (52s ago)    77m
      kube-system            kube-scheduler-ubuntukubernetis1             1/1     Running             1 (52s ago)    77m
      kubernetes-dashboard   dashboard-metrics-scraper-7bfdf779ff-sdnc8   0/1     Pending             0              75m
      kubernetes-dashboard   dashboard-metrics-scraper-8c47d4b5d-2sxrb    0/1     Pending             0              59m
      kubernetes-dashboard   kubernetes-dashboard-5676d8b865-fws4j        0/1     Pending             0              59m
      kubernetes-dashboard   kubernetes-dashboard-6cdd697d84-nmpv2        0/1     Pending             0              75m
      root@ubuntukubernetis1:~#
      

      Checking kube-flannel pod logs:

      kubectl logs -n kube-flannel kube-flannel-ds-f6bwx -p
      Defaulted container "kube-flannel" out of: kube-flannel, install-cni-plugin (init), install-cni (init)
      I0724 14:49:57.782499       1 main.go:207] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: version:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true ifaceCanReach: subnetFile:/run/flannel/subnet.env publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
      W0724 14:49:57.782676       1 client_config.go:614] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
      E0724 14:49:57.892230       1 main.go:224] Failed to create SubnetManager: error retrieving pod spec for 'kube-flannel/kube-flannel-ds-f6bwx': pods "kube-flannel-ds-f6bwx" is forbidden: User "system:serviceaccount:kube-flannel:flannel" cannot get resource "pods" in API group "" in the namespace "kube-flannel"
      

      Do you know how I can fix the issue?

      posted in Continuous Integration and Delivery (CI/CD) • juvenalb
    • RE: How can I find what options I can set with a `helm install` that the chart provides?

      You can use helm show values to see what options the chart provides:

      helm show values mysql-operator/mysql-innodbcluster
      

      What you'll get is a YAML file; here is part of that output:

      tls:
        useSelfSigned: false
      #  caSecretName:
      #  serverCertAndPKsecretName:
      #  routerCertAndPKsecretName:
      

      You can set the useSelfSigned option as tls.useSelfSigned like this:

      helm install mycluster mysql-operator/mysql-innodbcluster --set tls.useSelfSigned=true
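
      If there are many values to override, you can also dump the defaults to a file, edit them, and pass the file back with -f instead of individual --set flags:

      helm show values mysql-operator/mysql-innodbcluster > values.yaml
      # edit values.yaml (e.g. set tls.useSelfSigned: true), then:
      helm install mycluster mysql-operator/mysql-innodbcluster -f values.yaml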
      
      posted in Continuous Integration and Delivery (CI/CD) • juvenalb
    • Unable to ssh into EC2 instance to the peered VPC which is in different region

      It's a very simple setup in which there are two EC2 instances, each in a different region but under the same account, and their VPCs are peered.

      Requester EC2 EC2Virginia has the following details:

      Public IP: 34.201.42.246
      Private IP: 172.30.3.42
      Subnet is in US-east-1a
      VPC CIDR: 172.30.0.0/16
      Allow requester VPC to resolve DNS of hosts in accepter VPC to private IP addresses is enabled for both requester and accepter.
      Route: Destination is 172.31.0.0/16 using peering connection
      Security group accepts all inbound and outbound traffic
      

      Accepter EC2 EC2Ireland has the following details:

      Public IP: 3.248.183.191
      Private IP: 172.31.28.244
      Subnet is in eu-west-1c
      VPC CIDR: 172.31.0.0/16
      Allow requester VPC to resolve DNS of hosts in accepter VPC to private IP addresses is enabled for both requester and accepter.
      Route: Destination is 172.30.0.0/16 using peering connection
      Security group accepts all inbound and outbound traffic
      

      I am trying to ssh from EC2Virginia to EC2Ireland and it is failing.

      [ec2-user@ip-172-30-3-42 ~]$ ssh -i "irelandconnect.pem" ec2-user@172.31.28.244
      ssh: connect to host 172.31.28.244 port 22: Connection timed out
      

      I executed the following route command on EC2Virginia:

      [ec2-user@ip-172-30-3-42 ~]$ routel
               target            gateway          source    proto    scope    dev tbl
              default         172.30.0.1                                     eth0 
      169.254.169.254                                                        eth0 
          172.30.0.0/ 20                     172.30.3.42   kernel     link   eth0 
            127.0.0.0          broadcast       127.0.0.1   kernel     link     lo local
           127.0.0.0/ 8            local       127.0.0.1   kernel     host     lo local
            127.0.0.1              local       127.0.0.1   kernel     host     lo local
      127.255.255.255          broadcast       127.0.0.1   kernel     link     lo local
           172.30.0.0          broadcast     172.30.3.42   kernel     link   eth0 local
          172.30.3.42              local     172.30.3.42   kernel     host   eth0 local
        172.30.15.255          broadcast     172.30.3.42   kernel     link   eth0 local
                  ::/ 96     unreachable                                       lo 
      ::ffff:0.0.0.0/ 96     unreachable                                       lo 
          2002:a00::/ 24     unreachable                                       lo 
         2002:7f00::/ 24     unreachable                                       lo 
         2002:a9fe::/ 32     unreachable                                       lo 
         2002:ac10::/ 28     unreachable                                       lo 
         2002:c0a8::/ 32     unreachable                                       lo 
         2002:e000::/ 19     unreachable                                       lo 
         3ffe:ffff::/ 32     unreachable                                       lo 
              fe80::/ 64                                   kernel            eth0 
                  ::1              local                   kernel              lo local
      fe80::5c:7dff:fea5:1d59              local                   kernel            eth0 local
            multicast                           
      

      Please help resolve this issue.

      posted in Continuous Integration and Delivery (CI/CD) • juvenalb
    • RE: TerraForm separate stages

      I don't think there are any best practices because it would depend on both your company structure as well as how your developers manage their code in SCM (gitflow, trunk, etc).

      Let's start with the statements that seem to contradict each other. Quoted like that, they do seem to tell different stories. But let's look at the start of the paragraph the first quote came from.

      When Terraform is used to manage larger systems, teams should use multiple separate Terraform configurations that correspond with suitable architectural boundaries within the system so that different components can be managed separately and, if appropriate, by distinct teams.

      So it's saying that when you have multiple teams using the same Terraform directory (aka configuration), then...

      Workspaces alone are not a suitable tool for system decomposition, because each subsystem should have its own separate configuration and backend, and will thus have its own distinct set of workspaces.

      Which makes sense: each team will need different things from its workspaces, and since all the teams would be sharing them, it kind of falls apart. In conclusion, both separate directories and workspaces can be used to separate environments in Terraform, but each approach has limitations. The tutorial you linked explains it even better:

      To separate environments with potential configuration differences, use a directory structure. Use workspaces for environments that do not greatly deviate from one another, to avoid duplicating your configurations.

      The second part of your question, about recommendations, can get somewhat complicated depending on what level of CI/CD and automation your company currently has. If your infra is small and the number of changes every month is minimal, then it's probably fine to just check out the code, make the change, and deploy it locally.

      But if you are a large organization with hundreds of changes a week, then you are probably going to want to automate the deployments. A common way of doing that is to trigger CI/CD on new PRs or on changes to certain branches. From there you can have your CI/CD engine switch workspaces based on the branch name. Or, more commonly for older companies that used Terraform before workspaces existed, you can just set the state path to something like ${env}/tf.state, because that has much the same effect as using workspaces.
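      As a rough sketch of that branch-per-workspace idea (the branch variable is whatever your CI engine exposes, so treat the names as placeholders):

      terraform init
      terraform workspace select "$BRANCH_NAME" || terraform workspace new "$BRANCH_NAME"
      terraform plan -out=tfplan
      terraform apply tfplan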

      I'm a Terraform and https://github.com/gruntwork-io/terratest/releases/tag/v0.28.5 contributor. I'm also the author of https://github.com/DontShaveTheYak/cf2tf and the https://github.com/DontShaveTheYak/terraform-module-template .

      posted in Continuous Integration and Delivery (CI/CD) • juvenalb
    • "ipv6_cidr_block": all of `ipv6_cidr_block,ipv6_ipam_pool_id` must be specified

      Usage: import

      Version: Terraform v1.1.9

      • provider registry.terraform.io/hashicorp/aws v4.11.0

      I have something like this defined:

        assign_generated_ipv6_cidr_block = "true"
        cidr_block                       = "10.0.0.0/16"
        enable_dns_hostnames             = "true"
        enable_dns_support               = "true"
        instance_tenancy                 = "default"
        ipv6_cidr_block                      = "2600:0c00:eaa:a$$$::/56"
        ipv6_cidr_block_network_border_group = "$region"
        # ipv6_netmask_length                  = "56"
      

      But I keep getting:

      ╷
      │ Error: Missing required argument
      │ 
      │   with aws_vpc.$vpc_Name,
      │   on main.tf line 7, in resource "aws_vpc" "$vpc_Name":
      │    7:   ipv6_cidr_block                      = "2600:0c00:eaa:a$$$::/56"
      │ 
      │ "ipv6_cidr_block": all of `ipv6_cidr_block,ipv6_ipam_pool_id` must be specified
      

      I've seen this:

      ipv6_cidr_block - can be set explicitly or derived from IPAM using ipv6_netmask_length.

      ipv6_ipam_pool_id - conflicts with assign_generated_ipv6_cidr_block.

      ipv6_netmask_length - conflicts with ipv6_cidr_block. This can be omitted if the IPAM pool has allocation_default_netmask_length set.

      IPAM is not defined for the account:

      aws ec2 describe-ipam-pools
      - IpamPools: []
      

      And this happens whether "ipv6_netmask_length" is set or not.

      Please, what am I doing wrong and how do I set it right?

      Thanks for the help.

      posted in Continuous Integration and Delivery (CI/CD) • juvenalb