
    derk

    @derk

    Reputation: 0
    Posts: 29785
    Profile views: 1
    Followers: 0
    Following: 0


    Best posts made by derk

    This user hasn't posted anything yet.

    Latest posts made by derk

    • How does Github Actions work with docker containers?

      Consider this GA workflow:

      name: My GA Workflow

      on: push

      jobs:
        myJobName:
          runs-on: ubuntu-latest
          container: cypress/included:10.6.0
          steps:
            - name: Ensure tools are available
              shell: bash
              run: |
                apt-get update &&
                apt-get install -y unzip zstd netcat

            # and so on...
      

      I would like to have a crystal-clear understanding of what happens there. Currently I reckon:

      1. GA will run an ubuntu-latest virtual machine with Docker Engine pre-installed.
      2. It will pull and run cypress/included:10.6.0.
      3. All the steps will run inside the Cypress docker container, not on the Ubuntu machine.
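
      In plain docker terms, I picture steps 2 and 3 as roughly the following (my own sketch; the container name, mount path, and flags are illustrative, not the runner's exact invocation):

      docker pull cypress/included:10.6.0
      # keep the container alive so individual steps can run inside it
      docker run -d --name job --entrypoint tail \
        -v "$PWD:/__w/repo/repo" \
        cypress/included:10.6.0 -f /dev/null
      # each `run:` step is then executed inside the container
      docker exec job bash -c 'apt-get update && apt-get install -y unzip zstd netcat'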

      Is that correct?

      posted in Continuous Integration and Delivery (CI/CD)
    • Rationale for using Docker to containerize applications

      I'm trying to get a better understanding of the reasons to use [and not use] Docker based on specific use cases.

      From my current understanding, Docker helps to isolate applications and their dependencies within containers. This is useful to ensure consistent reproducible builds in varied environments.

      However, I'm struggling to understand the rationale of using Docker where the environments are essentially the same, and the applications are relatively simple.

      Say I have the following:

      • a cloud VM instance (DigitalOcean, Vultr, Linode, etc.) with 1 GB of RAM running Ubuntu 20.
      • a Node.js Express app (nothing too complicated)

      The following issues come to the fore:

      1. Dockerizing this application will produce an image that is ~100 MB after optimization (without optimization, probably 500 MB or higher based on my research). The app could be 50 KB in size, but the Docker image needed to run it is larger by a factor of up to 10,000 or more (the kind of Dockerfile I mean is sketched after this list). This seems very unreasonable from an optimization standpoint.

      2. I have to push this container image to a hub before I can use Docker to consume it. So that's 500 MB up to the hub, and then 500 MB down to my VM instance; about 1 GB of bandwidth per build in total. Multiply this by the number of times the build needs to be updated, and you could be approaching terabytes of bandwidth usage.

      3. I read in a DigitalOcean tutorial (https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-22-04) that before I can run my container image, I have to do the following:

      docker pull ubuntu
      

      This pulls an Ubuntu image. But I'm already on Ubuntu, so does this mean I'm running a container running Ubuntu inside an existing VM that is itself running Ubuntu? This appears to be needless duplication, but I'd appreciate clarification.

      4. The Docker Desktop installation instructions for Linux (https://docs.docker.com/desktop/install/linux-install/) specify that I should have 4 GB of RAM. This means I have to use more expensive VM instances even when my application does not necessarily require them.
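
      For reference, the kind of "optimized" Dockerfile I have in mind (a sketch; the node:20-alpine base and server.js entry point are my assumptions):

      # slim base image keeps the final size near ~100 MB
      FROM node:20-alpine
      WORKDIR /app
      COPY package*.json ./
      # install production dependencies only
      RUN npm ci --omit=dev
      COPY . .
      EXPOSE 3000
      # hypothetical entry point
      CMD ["node", "server.js"]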

      How exactly does containerization [using Docker or similar] optimize and enhance the DevOps experience, especially on an ongoing basis?

      I'm not quite getting it but I'm open to clarification.

      posted in Continuous Integration and Delivery (CI/CD)
    • Does docker engine (not Desktop) support Linux containers on Windows 11?

      I've installed docker engine according to the following instructions: https://docs.docker.com/engine/install/binaries/#install-server-and-client-binaries-on-windows

      I'm trying to run Linux containers on Windows 11, without Docker Desktop:

      PS C:\> docker run -d -p 80:80 docker/getting-started
      Unable to find image 'docker/getting-started:latest' locally
      latest: Pulling from docker/getting-started
      docker: no matching manifest for windows/amd64 10.0.22000 in the manifest list entries.
      See 'docker run --help'.
      

      I believe I'm getting the above error because dockerd is configured for Windows containers:

      PS C:\> docker info  -f '{{.OSType}}/{{.Architecture}}'
      windows/x86_64
      

      I've tried to use DockerCli.exe -SwitchLinuxEngine; however, it doesn't seem to be installed:

      PS C:\> DockerCli.exe
      DockerCli.exe : The term 'DockerCli.exe' is not recognized as the name of a cmdlet ...
      

      How can I switch to Linux containers?

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Is it possible to log into a new EC2 instance for the first time using a non-default user?

      The users who are present in a newly launched EC2 instance are the ones that exist in the image you launch it from.

      The images provided by the Linux distros tend to have only one default user, but you can build your own custom image with other users and launch your EC2 servers from your image(s).

      And there's cloud-init config that can create a user at launch time, too (as a comment pointed out already).
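
      A minimal sketch of such user data, passed at launch (the user name, key, AMI ID, and instance type are all placeholders):

      #cloud-config
      users:
        - name: alice                  # hypothetical non-default user
          groups: sudo
          shell: /bin/bash
          ssh_authorized_keys:
            - ssh-ed25519 AAAA...      # your public key

      Saved as user-data.yaml, it can be supplied like this:

      aws ec2 run-instances \
        --image-id ami-xxxxxxxx \
        --instance-type t3.micro \
        --user-data file://user-data.yaml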

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Helm Error: INSTALLATION FAILED : manifests contain a resource that already exists

      You can see everything Helm has installed with:

      helm list --all-namespaces
      

      That should return something like this:

      ❯ helm list --all-namespaces
      NAME                    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                   APP VERSION 
      my-mysql-operator       mysql           1               2022-05-26 12:18:28.652497947 -0500 CDT deployed        mysql-operator-2.0.4    8.0.29-2.0.4
      

      The problem here is that the release my-mysql-operator is currently installed into the mysql namespace. In order to delete it so that you can recreate it, run:

      helm uninstall -n mysql my-mysql-operator
      

      Then you should be able to re-run your original helm install command.
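
      For illustration, the reinstall would then be something like this (the repo alias and chart name are assumptions based on the listing above):

      helm install my-mysql-operator mysql-operator/mysql-operator --namespace mysql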

      posted in Continuous Integration and Delivery (CI/CD)
    • How can I unevenly distribute pods across Instance Groups

      What I'd like to do is distribute 10% of my pods to one instance group and the other 90% to the other group.

      I would like to experiment with using different AWS instance types (AMD, graviton, etc) but I only want to put a limited number of pods onto these instances. Ideally, I'd like to do it by service.

      I've looked into PodTopologySpread, but it appears to aim for a roughly even distribution within a range. Alternatively, I could set the pod scheduler to prefer the different instance type and just limit how many nodes I can run, but that doesn't work well if multiple services are in the instance group. The final option is to have two Deployment objects with different replica counts but the same service selectors. That would work, but seems like a lot of work just to tweak pod distributions.
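
      A sketch of that final option (my label values, image, and the 9/1 split are illustrative; both Deployments carry the same app label, so one Service selects across both):

      # split.yaml -- apply with: kubectl apply -f split.yaml
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: myapp-main
      spec:
        replicas: 9                    # ~90% of pods
        selector:
          matchLabels:
            app: myapp
        template:
          metadata:
            labels:
              app: myapp
          spec:
            nodeSelector:
              node-group: main         # assumed label on the existing instance group
            containers:
              - name: myapp
                image: myapp:1.0       # hypothetical image
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: myapp-graviton
      spec:
        replicas: 1                    # ~10% of pods
        selector:
          matchLabels:
            app: myapp
        template:
          metadata:
            labels:
              app: myapp
          spec:
            nodeSelector:
              node-group: graviton     # assumed label on the experimental group
            containers:
              - name: myapp
                image: myapp:1.0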

      How can I distribute a small number of pods to different instance types?

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How to connect from Docker container to the host?

      If the container needs to connect to the host or to a remote service, you need its IP or domain. Any other workaround will only work on your laptop or is useful for academic purposes; it isn't valid in real scenarios with real users. For example: a container in AWS that needs to connect to an Azure MySQL.

      In your case, you need the local IP. As I explained here (https://stackoverflow.com/a/52213178/3957754), you could get the IP before running docker-compose up:

      export MACHINE_HOST_IP=$(hostname -I | awk '{print $1}')
      docker-compose up ...
      

      And in your compose file, use this value. For example, if you have a MySQL on your host and your container needs it, you could use:

      version: '3.2'
      services:
        wordpress:
          environment:
            DB_HOST: ${MACHINE_HOST_IP}:3306
            DB_USER: usr_wordpress
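
      A quick sanity check after startup (a sketch; it assumes the service is named wordpress, as above):

      export MACHINE_HOST_IP=$(hostname -I | awk '{print $1}')
      docker-compose up -d
      docker-compose exec wordpress sh -c 'echo "$DB_HOST"'   # should print <host-ip>:3306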
      

      Review:

      • https://stackoverflow.com/questions/52173352/what-is-the-difference-between-publishing-808080-and-80808080-in-a-docker-run/52213178#52213178
      • https://stackoverflow.com/questions/63136530/how-to-connect-api-with-web-spa-through-docker/63207679#63207679
      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How to use the Shared Workspace plugin in a Jenkins pipeline?

      Considering that the plugin hasn't been updated in 7 years, it almost certainly doesn't have Pipeline support; Pipelines were likely either nonexistent or not widely used the last time that plugin received an update.

      You can still try to use it in a Pipeline job. Jenkins has a Snippet Generator (https://www.jenkins.io/doc/book/pipeline/getting-started/#snippet-generator) that automatically generates Pipeline code from Freestyle job steps in the web UI. However, there is no guarantee that the Pipeline code output by this tool will work identically to the equivalent Freestyle configuration.

      posted in Continuous Integration and Delivery (CI/CD)
    • Create AWS SG and use it

      I am trying to create a security group (SG) using Terraform and then use it for an AWS instance.

      My config looks like this:

      resource "aws_security_group" "my_sq" {
        vpc_id = aws_vpc.mainvpc.id
        name = "my_sg"
        ingress {
          cidr_blocks = [
            "0.0.0.0/0"
          ]
          from_port = 22
          to_port = 22
          protocol = "tcp"
        }
      

      }

      resource "aws_instance" "my_new_instance" {
      ami = "AMI-ID"
      instance_type = "t2.micro"
      security_groups = ["my_sg"]
      }
      }

      I tried assigning the SG by name and by id. When I run terraform plan, everything is all right. When I apply the settings with terraform apply, I see this error:

      │ Error: Error launching instance, possible mismatch of Security Group IDs and Names.
      

      How do I use the new SG which I created in the config file?

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Ansible no user $HOME by default - so how do I run commands

      I assume that you are referring to the verbose output of ansible-playbook -v (https://docs.ansible.com/ansible/latest/cli/ansible-playbook.html#cmdoption-ansible-playbook-v), something like:

      TASK [Task] *****************************************************************************************************************
      task path: taskFile:
      ...
       ESTABLISH ... CONNECTION FOR USER: {{ ansible_user }}
       EXEC /bin/sh -c 'echo ~{{ ansible_user }} && sleep 0'
       EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/{{ ansible_user }}/.ansible/tmp `"&& mkdir "` echo /home/{{ ansible_user }}/.ansible/tmp/ansible-tmp-1234567890 `" && echo ansible-tmp-1234567890="` echo /home/{{ ansible_user }}/.ansible/tmp/ansible-tmp-1234567890 `" ) && sleep 0'
      Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/find.py
      ...
      

      "Can I get Ansible to use another directory ..."

      According to an older configuration guide (https://docs.ansible.com/ansible/2.3/intro_configuration.html#local-tmp), it is possible to change that value. The configuration parameter DEFAULT_LOCAL_TMP (https://docs.ansible.com/ansible/latest/reference_appendices/config.html#default-local-tmp) might still be available in the latest version:

      "When Ansible gets ready to send a module to a remote machine ... The default location is a subdirectory of the user's home directory. If you'd like to change that, you can do so by altering this setting."

      According to a further Q&A (https://devops.stackexchange.com/questions/10703/), for the remote side it should be the remote_tmp parameter of the sh shell plugin (https://docs.ansible.com/ansible/latest/collections/ansible/builtin/sh_shell.html#parameters).
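
      A minimal sketch of overriding both temp locations in ansible.cfg (the /opt/ansible-tmp path is illustrative):

      # ansible.cfg (sketch); /opt/ansible-tmp is an illustrative path
      [defaults]
      # temp dir on the control node
      local_tmp = /opt/ansible-tmp
      # temp dir on the managed hosts (read by the sh shell plugin)
      remote_tmp = /opt/ansible-tmp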

      posted in Continuous Integration and Delivery (CI/CD)