SOFTWARE TESTING

    Anderson

    @Anderson

    Reputation: 1
    Posts: 29931
    Profile views: 3
    Followers: 0
    Following: 0


    Best posts made by Anderson

    • RE: How to get logs with failed tests from the server (pytest)?

      The documentation suggests doing this using hooks. Sample code (this should go in your conftest.py file):

      import os.path
      import pytest

      @pytest.hookimpl(tryfirst=True, hookwrapper=True)
      def pytest_runtest_makereport(item, call):
          # execute all other hooks to obtain the report object
          outcome = yield
          rep = outcome.get_result()

          # we only look at actual failing test calls, not setup/teardown
          if rep.when == "call" and rep.failed:
              mode = "a" if os.path.exists("failures") else "w"
              with open("failures", mode) as f:
                  # let's also access a fixture for the fun of it
                  if "tmpdir" in item.fixturenames:
                      extra = "(%s)" % item.funcargs["tmpdir"]
                  else:
                      extra = ""

                  f.write(rep.nodeid + extra + "\n")
      

      https://docs.pytest.org/en/latest/example/simple.html#post-process-test-reports-failures
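
      As a quick illustration (a hypothetical test, not part of the linked docs): with the hook above in conftest.py, any failing test appends its node ID to the failures file.

      # test_sample.py -- hypothetical failing test
      def test_always_fails():
          assert False

      # After running `pytest test_sample.py`, the "failures" file
      # contains a line like:
      # test_sample.py::test_always_fails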

      posted in Automated Testing

    Latest posts made by Anderson

    • RE: Options to build a production server as a solo python developer?

      One option that you could consider is using a continuous deployment tool like Jenkins, Travis CI, or GitLab CI. These tools can automatically deploy your code to a production server whenever you push your code to a certain branch in your git repository. This can save you the time and effort of manually deploying your code to your production server every time you make a change.
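
      For instance, a minimal GitLab CI sketch (the job name and deploy script are assumptions for illustration, not from the question):

      # .gitlab-ci.yml -- hypothetical deploy job, runs only on pushes to main
      deploy_production:
        stage: deploy
        script:
          - ./deploy.sh   # your own script, e.g. rsync/ssh to the production server
        only:
          - main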

      Another option is to use a service like Heroku, which makes it easy to deploy and manage your web applications. With Heroku, you can simply push your code to a git repository, and Heroku will automatically build and deploy your application.
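
      The Heroku flow is roughly (standard Heroku CLI commands; the app name is a placeholder):

      heroku create my-app   # provisions the app; "my-app" is a placeholder
      git push heroku main   # Heroku builds and deploys on push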

      Ultimately, the best option will depend on your specific needs and preferences, so it may be worth experimenting with different approaches to see what works best for you.

      posted in Continuous Integration and Delivery (CI/CD)
    • How to tell helm not to deploy a resource or remove it if a value is set to "false"?

      I am working on an HPA template that should be applied only if the enabled value is set to true. Currently, when setting enabled to false, it creates an empty object in the rendered YAML, which is then applied with an error stating that there is no apiVersion defined. How can I tell Helm not to apply the HPA template if the value is set to false, or skip the resource templating altogether?

      values.yaml:

      # hpa
      hpa:
        enabled: false
        maxReplicas: 10
        minReplicas: 2
        metrics:
          - type: Resource
            resource:
              name: cpu
              target:
                type: Utilization
                averageUtilization: 70
          - type: Resource
            resource:
              name: memory
              target:
                type: Utilization
                averageUtilization: 70
      

      hpa.yaml:

      {{- template "common.hpa" (list . "service.deployment") -}}
      {{- define "service.deployment" -}}
      {{- end -}}
      

      _hpa.yaml:

      {{- define "common.hpa.tpl" -}}
      {{ if .Values.hpa.enabled }}
      ---
      apiVersion: autoscaling/v2beta2
      kind: HorizontalPodAutoscaler
      metadata:
        creationTimestamp: null
        name: {{ required "serviceName value is required" $.Values.serviceName }}
        namespace: {{ required "namespace value is required" $.Values.namespace }}
      spec:
        maxReplicas: {{ .Values.hpa.maxReplicas }}
        minReplicas: {{ .Values.hpa.minReplicas }}
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: {{ required "serviceName value is required" $.Values.serviceName }}
        metrics:
      {{ toYaml .Values.hpa.metrics | indent 4 }}
      {{- end -}}
      {{- end -}}
      

      {{- define "common.hpa" -}}
      {{- include "common.util.merge" (append . "common.hpa.tpl") -}}
      {{- end -}}

      _util.yaml:

      {{- /*
      common.util.merge will merge two YAML templates and output the result.
      This takes an array of three values:
      - the top context
      - the template name of the overrides (destination)
      - the template name of the base (source)
      */}}
      {{- define "common.util.merge" -}}
      {{- $top := first . -}}
      {{- $overrides := fromYaml (include (index . 1) $top) | default (dict ) -}}
      {{- $tpl := fromYaml (include (index . 2) $top) | default (dict ) -}}
      {{- toYaml (merge $overrides $tpl) -}}
      {{- end -}}
      

      Output from running helm template:

      ---
      # Source: service/templates/hpa.yaml
      {}
      

      Error message when doing a helm install:

      Error: UPGRADE FAILED: error validating "": error validating data: [apiVersion not set, kind not set]
      helm.go:84: [debug] error validating "": error validating data: [apiVersion not set, kind not set]
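
      One direction that may help (an untested sketch based on the templates above, not a confirmed fix): guard the wrapper template itself, so that nothing is rendered (not even the --- document separator) when the flag is false, instead of guarding only inside common.hpa.tpl:

      {{- define "common.hpa" -}}
      {{- if (first .).Values.hpa.enabled }}
      {{- include "common.util.merge" (append . "common.hpa.tpl") -}}
      {{- end }}
      {{- end -}}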
      
      posted in Continuous Integration and Delivery (CI/CD)
    • How to curl elastic or kibana api for alerts?

      I am learning to use the ELK stack. Both Kibana and Elasticsearch are installed on my localhost. I just learned how to install Metricbeat and how to set up alerts. When setting up the alerts, I used the index connector and called my index testconnector.

      I see the alerts showing up in my web browser when I go to http://localhost:5601/app/observability/alerts.

      Is there a way for me to get the same information via a REST API? I tried all these endpoints, but they all say "no handler found for uri":

      curl -X GET -k -u elasticuser:elasticpass "http://localhost:9200/api/index_management/indices"
      curl -X GET -k -u elasticuser:elasticpass "http://localhost:9200/api/alert"
      curl -X GET -k -u elasticuser:elasticpass "http://localhost:9200/api/alert/_search"
      curl -X GET -k -u elasticuser:elasticpass "http://localhost:9200/api/alert/_find"
      curl -X GET -k -u elasticuser:elasticpass "http://localhost:9200/alert/_search"
      curl -X GET -k -u elasticuser:elasticpass "http://localhost:9200/alert/_find"
      curl -X GET -k -u elasticuser:elasticpass "http://localhost:9200/kibana/api/alerting"
      curl -X GET -k -u elasticuser:elasticpass "http://localhost:9200/testconnector/_search"
      

      If anyone can tell me how to get the alerts (not the rules) through a REST API, that will be great!
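
      A sketch of where such an API would live (assumptions for illustration: Kibana, not Elasticsearch, serves the alerting HTTP APIs, so the port is 5601; the rule-finding endpoint shown here exists in recent versions, while the exact alert endpoint may vary):

      # Kibana APIs are served on 5601, not on Elasticsearch's 9200:
      curl -X GET -u elasticuser:elasticpass -H 'kbn-xsrf: true' \
        "http://localhost:5601/api/alerting/rules/_find"
      # Alerts written by an index connector land in the configured index,
      # so they can also be queried directly from Elasticsearch:
      curl -X GET -u elasticuser:elasticpass \
        "http://localhost:9200/testconnector/_search?pretty"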

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Rationale for using Docker to containerize applications

      I'm struggling to understand the rationale of using Docker where the environments are essentially the same, and the applications are relatively simple.

      In reality, it is highly unlikely that any development environment on any project would ever be anywhere near the same as staging/production.

      • Services running in staging/production will nearly always be physically hosted and managed somewhere which is not intended to be operated interactively by a human day-to-day, with an appropriate IT/security profile to match;
      • The nature of development work, and even internal build/testing typically requires a different IT profile to that of a production server.
      • Developers rarely have control over the underlying infrastructure or the organisation's IT/security policies.

      There are many ways in which the IT profile of developer environments, including build agents and even test machines, can deviate from production:

      • Users/permissions or other security settings.
      • Installed tools, SDKs, runtimes, debug/test tools, OS features/packages, and other dependencies operating with debug/test configurations enabled.
      • Environment variables
      • Filesystem structure and the content of files in globally shared directories
      • Configurations of globally-installed dependencies such as web servers.

      Furthermore, consider the nature of physical devices and VMs:

      • They are stateful and mutable
      • Every change to any aspect of a device or VM, including installed software and configuration changes, potentially affects its entire state for all processes running on it.
      • Physical devices and VMs typically run many processes/services concurrently; it would usually not be considered economical to dedicate a whole server or VM to a single running process.

      What containers provide:

      • Isolation from the host device/VM and from an organisation's IT, Network and Infrastructure policies.
      • Isolation from each other - for example, consider the issue of requiring multiple versions of globally-installed runtime dependencies, or modifications to shared host resources such as environment variables or local files (see the sketch after this list).
      • Developers typically have full control over their choices of container images and the networking/orchestration inside the container runtime.
      • Images are based on immutable layers, meaning it is not possible for the state of any layer in an image to change, so a published image should always be a good, known, valid starting point.
      • The size of a parent image tends to be inconsequential because there's typically no reason to duplicate nor to re-download it unless a new version of that parent image is published.
      • A container is its own thin, mutable layer on top of an image, usually negligible in size, and uses the parent image for all dependencies.
      • If a container ends up in an invalid state, it can be disposed of and replaced with a fresh, clean container almost instantly, simply by recreating the thin container layer.
      • The cheap, light-weight nature of containers makes it very efficient to run a single process per-container.
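
      To make the version-isolation point above concrete, a minimal sketch (the images and versions are illustrative):

      # Two services needing different Node.js versions run side by side,
      # each isolated in its own container:
      docker run --rm node:14-alpine node --version   # v14.x
      docker run --rm node:18-alpine node --version   # v18.x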
      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Why does stripping executables in Docker add ridiculous layer memory overhead?

      Each directive in your Dockerfile adds another layer to the image. So anything you do -- removing files, stripping binaries, etc -- is only going to increase the size of the image.
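
      A minimal illustration of that point (a hypothetical Dockerfile, not from the question): deleting a file in a later RUN adds a layer but cannot shrink the earlier one.

      FROM debian:bullseye-slim
      RUN dd if=/dev/zero of=/bigfile bs=1M count=100   # this layer adds ~100MB
      RUN rm /bigfile   # new layer; the ~100MB stays in the layer above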

      It looks like you're trying to overcome this issue by using a multi-stage build, but that's not doing you any good: those two COPY directives are introducing effectively the same changes introduced by the find command in the previous stage.

      The way to solve this is by discarding the old layers, generally by creating a new image that reflects the state of the top layer only. This is called "squashing" the image, and there are various ways of doing this. Here's one mechanism that works. For this example, I'm using this Dockerfile (based on your linked example) to build squashtest:base:

      FROM docker.io/parrotsec/core:base-lts-amd64

      RUN export DEBIAN_FRONTEND=noninteractive && \
          apt-get -q -y update --no-allow-insecure-repositories \
          && apt-get -y upgrade --with-new-pkgs \
          && apt-get -y install --no-install-recommends \
          aria2=1.35.0-3 \
          apparmor=2.13.6-10 \
          apparmor-utils=2.13.6-10 \
          auditd=1:3.0-2 \
          curl \
          debsums=3.0.2 \
          gawk=1:5.1.0-1 \
          git \
          iprange=1.0.4+ds-2 \
          jq=1.6-2.1 \
          libdata-validate-domain-perl=0.10-1.1 \
          libdata-validate-ip-perl=0.30-1 \
          libnet-idn-encode-perl=2.500-1+b2 \
          libnet-libidn-perl=0.12.ds-3+b3 \
          libregexp-common-perl=2017060201-1 \
          libtext-trim-perl=1.04-1 \
          libtry-tiny-perl=0.30-1 \
          localepurge=0.7.3.10 \
          locales \
          miller=5.10.0-1 \
          moreutils=0.65-1 \
          p7zip-full=16.02+dfsg-8 \
          pandoc=2.9.2.1-1+b1 \
          preload=0.6.4-5+b1 \
          python3-pip=20.3.4-4+deb11u1 \
          rkhunter=1.4.6-9 \
          symlinks=1.4-4 \
          && apt-get install -y --no-install-recommends --reinstall ca-certificates=* \
          && apt-get -y autoremove \
          && apt-get -y clean \
          && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \
          && rm -f /var/cache/ldconfig/aux-cache \
          && find -P -O3 /var/log -depth -type f -print0 | xargs -0 truncate -s 0 \
          && localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8 \
          && localepurge \
          && symlinks -rd / \
          && apt-get -y purge --auto-remove localepurge symlinks \
          && find -P -O3 /etc/ /usr/ -type d -empty -delete

      1. Build the base image.

        docker build -t squashtest:base -f Dockerfile.base .
        

        This produces the following:

        $ docker image ls squashtest:base
        REPOSITORY   TAG       IMAGE ID       CREATED             SIZE
        squashtest   base      58dff2c40a28   About an hour ago   786MB
        
      2. Build a new image squashtest:stripped with stripped binaries using this Dockerfile:

        FROM squashtest:base

        RUN find -P -O3 /usr/bin/ /usr/local/bin \
            -type f -not -name strip -and -not -name dbus-daemon \
            -execdir strip -v --strip-unneeded '{}' \; || :

        Which produces:

        $ docker image ls squashtest:stripped
        REPOSITORY   TAG        IMAGE ID       CREATED             SIZE
        squashtest   stripped   42aa25ebc0c7   About an hour ago   997MB
        

        At this point, the image consists of the following layers:

        $ docker image inspect squashtest:stripped | jq '.[0].RootFS'
        {
          "Type": "layers",
          "Layers": [
            "sha256:7e203d602b1c20e9cf0b06b3dd3383eb36bc2b25f6e8064d9c81326dfdc67143",
            "sha256:1fc5866a0b6b7a23a246acfd46b4c513b4a188d2db2d8a26191989a4a18c74d3",
            "sha256:cc3a9d1a7f9222eee31b688d887c79745e20389ecfe0fe208349c73cfd172b4a"
          ]
        }
        
      3. We can collapse these into a single layer like this:

        docker run --rm squashtest:stripped \
          tar -C / -cf- --exclude=./dev --exclude=./sys \
          --exclude=./proc  . |
          docker import - squashtest:imported
        

        This produces:

        $ docker image ls squashtest:imported
        REPOSITORY   TAG        IMAGE ID       CREATED          SIZE
        squashtest   imported   6f036f16d477   46 seconds ago   626MB
        

        We've saved 160MB off the base image.

      There are other ways to squash a Docker image; there are a number of tools on GitHub (https://github.com/goldmann/docker-squash, https://github.com/jwilder/docker-squash, https://github.com/qwertycody/Bash_Docker_Squash) that accomplish something similar. docker build also has an experimental --squash flag (https://docs.docker.com/engine/reference/commandline/build/) if you enable experimental features, but that doesn't appear to accomplish much when I try it.

      I would argue that for the 160MB we've managed to save here, the effort isn't worth it. Unless you're running Docker in an extremely constrained environment, that's going to be nothing but a drop in the bucket.

      In fact, the strip operation in your Dockerfile is mostly pointless: distributions generally strip binaries by default; you can verify this by running file on all the binaries in /usr/bin and /bin on docker.io/parrotsec/core:base-lts-amd64:

      $ docker run -it --rm docker.io/parrotsec/core:base-lts-amd64 bash
      # apt -y install file
      # file /bin/* /usr/bin/* | grep ELF  | grep -v stripped
      

      That last command returns zero results: all the binaries have been stripped.

      posted in Continuous Integration and Delivery (CI
      A
      Anderson
    • RE: Does docker engine (not Desktop) support Linux containers on Windows 11?

      Using the following guide, I'm able to use docker on Windows without Docker Desktop. It may not be exactly true to my original question but meets my use case. https://dev.to/bowmanjd/install-docker-on-windows-wsl-without-docker-desktop-34m9

      Summarising the guide (in case it disappears from the internet; the Ubuntu commands are collected in the sketch after this list):

      • Install WSL and check it's version 2: https://docs.microsoft.com/en-us/windows/wsl/install-win10#step-2---update-to-wsl-2
      • Install a Linux distribution: https://aka.ms/wslstore (I went with Ubuntu)
      • Add a non-root user (optional)
    • Upgrade any packages to the latest (e.g. on Ubuntu, use sudo apt update && sudo apt upgrade)
      • Install docker engine to your Linux distro: https://docs.docker.com/engine/install/
    • (Ubuntu) Switch over to using legacy iptables: sudo update-alternatives --config iptables
      • (Ubuntu) Add your user to the docker group sudo usermod -aG docker $USER
      • Start the docker daemon sudo service docker start
      • Run the hello-world container: docker run --rm hello-world
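
      Collected as a shell sketch (the Ubuntu commands from the list above; the Docker Engine install step itself follows the linked docs):

      sudo apt update && sudo apt upgrade
      # install Docker Engine per https://docs.docker.com/engine/install/
      sudo update-alternatives --config iptables   # interactive; pick iptables-legacy
      sudo usermod -aG docker $USER
      sudo service docker start
      docker run --rm hello-world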

      I won't accept my own answer for now, in the hope that someone finds a better way.

      posted in Continuous Integration and Delivery (CI/CD)
    • Does Jenkins essentially function like a package manager for your software product?

      I'm a relatively new IT Ops guy in a software (web) development company. Recently I deployed a virtual machine on OpenStack, because some developer needs it, and then I installed their application (written by our developers, not third party application) on that newly deployed server using Jenkins.

      So basically, what I did was install an application automatically on a server using Jenkins. This feels like installing software on a Linux PC using a package manager like APT on Ubuntu, where everything is handled automatically by the package manager.

      So, is the purpose of Jenkins to function like some automatic software installer? Is Jenkins essentially a package manager?

      posted in Continuous Integration and Delivery (CI/CD)
    • known_hosts module reports changed when nothing has changed

      Why does my task report say that it has changed when nothing has changed?

      - hosts: localhost
        become: false
        gather_facts: false
        tasks:
          - name: Remove non existing host key from known_hosts file
            known_hosts:
              name: 192.168.122.230
              key: 192.168.122.230 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCbi2hyrvpTRKC37NOm46n4zCPBb9r6cKk8BPrN2eppFu/0PJlB4D+nRI5epHzs5+1LhPkppbOGLC2VIRl3phMDQci3RIszhEZx4lAyX/HAkM+2zdNJlH2WWs3cbw804B4iqfCvy/Gch5ZXl4qEfpVqMURCr/XjaMQETzbjbfgOoyYxw8M/5Kq8VQy+DzqxNNzPi4ElcFQztxxrKDFPwuDplFdxw3YK+iQ4JHxlLWSfgtwsFhg7Z7uM8/efP7ocB23i2GmmG67yM/T/8uAld9t73V8icfe9WnRk2WVY69p4TzC3tMl2KmUDVm5AwvH+FNm/67E9t2inWHgKZacdOaOrgJ7SimPz0ILYDKd4hXg4whz3vdp21M/acjX3jA+fiwx6/GDIofKhyWOP3SwaiprqHZb+rWxerIOZx1IeuIRDZBH5Hjz7UlE5yg1xnqPXXzrFMj9rsKp9S5VB3HGGDfuOU7VymhZiTHIAuGM+weV6r2cOjn5HgdqkU6ABuchMAJvzaj9a3E07Rzk6h/lgWfy5VT/yl7DA7sM0/YSqKPJKgxbstoaOAZl35SDxAx978T0xlomIxaJUehRefK+G1GgPeLMmk0QtpX1dMH8bD4qvKGoLQG1qeJ4W4HrnoTsGLCxsN5/ek3rnqCekYOSiJ/q9+sZyhcLN1hwrDrrFK5fRUw==
              state: absent
            register: reg_known_hosts

          - name: Show known_hosts register
            debug:
              var: reg_known_hosts
      

      TASK [Remove non existing host key from known_hosts file] ***************************************
      changed: [localhost]
      
      TASK [Show known_hosts register] ****************************************************************
      ok: [localhost] => {
          "reg_known_hosts": {
              "changed": true,
              "diff": {
                  "after": "192.168.122.230 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIzlnSq5ESxLgW0avvPk3j7zLV59hcAPkxrMNdnZMKP2\n",
                  "after_header": "/home/sxkx/.ssh/known_hosts",
                  "before": "192.168.122.230 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIzlnSq5ESxLgW0avvPk3j7zLV59hcAPkxrMNdnZMKP2\n",
                  "before_header": "/home/sxkx/.ssh/known_hosts"
              },
              "failed": false,
              "gid": 1000,
              "group": "sxkx",
              "hash_host": false,
              "key": "192.168.122.230 ssh-rsa ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCbi2hyrvpTRKC37NOm46n4zCPBb9r6cKk8BPrN2eppFu/0PJlB4D+nRI5epHzs5+1LhPkppbOGLC2VIRl3phMDQci3RIszhEZx4lAyX/HAkM+2zdNJlH2WWs3cbw804B4iqfCvy/Gch5ZXl4qEfpVqMURCr/XjaMQETzbjbfgOoyYxw8M/5Kq8VQy+DzqxNNzPi4ElcFQztxxrKDFPwuDplFdxw3YK+iQ4JHxlLWSfgtwsFhg7Z7uM8/efP7ocB23i2GmmG67yM/T/8uAld9t73V8icfe9WnRk2WVY69p4TzC3tMl2KmUDVm5AwvH+FNm/67E9t2inWHgKZacdOaOrgJ7SimPz0ILYDKd4hXg4whz3vdp21M/acjX3jA+fiwx6/GDIofKhyWOP3SwaiprqHZb+rWxerIOZx1IeuIRDZBH5Hjz7UlE5yg1xnqPXXzrFMj9rsKp9S5VB3HGGDfuOU7VymhZiTHIAuGM+weV6r2cOjn5HgdqkU6ABuchMAJvzaj9a3E07Rzk6h/lgWfy5VT/yl7DA7sM0/YSqKPJKgxbstoaOAZl35SDxAx978T0xlomIxaJUehRefK+G1GgPeLMmk0QtpX1dMH8bD4qvKGoLQG1qeJ4W4HrnoTsGLCxsN5/ek3rnqCekYOSiJ/q9+sZyhcLN1hwrDrrFK5fRUw==",
              "mode": "0600",
              "name": "192.168.122.230",
              "owner": "sxkx",
              "path": "/home/sxkx/.ssh/known_hosts",
              "size": 97,
              "state": "file",
              "uid": 1000
          }
      }
      

      Looking at the register, the diff property shows that before and after are identical, yet the task still reports that it made changes.

      Something else I found out is that if I completely empty my .ssh/known_hosts file and run the playbook it will say ok: [localhost].

      What I'm thinking is that when I specify a key to remove, the module looks up all the keys belonging to that host, makes sure the key in question is absent, but the before/after diff still holds a key (the ed25519 key), and so it marks the task as changed. It looks like, when removing a key, the known_hosts module expects all keys for that host to be removed.

      I can work around it with the changed_when property, but I'd rather understand why Ansible says it changed when nothing changed at all.

      This will work around the issue I'm having:

      - name: Remove non existing host key from known_hosts file
        known_hosts:
          name: 192.168.122.230
          key: 192.168.122.230 ssh-rsa ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCbi2hyrvpTRKC37NOm46n4zCPBb9r6cKk8BPrN2eppFu/0PJlB4D+nRI5epHzs5+1LhPkppbOGLC2VIRl3phMDQci3RIszhEZx4lAyX/HAkM+2zdNJlH2WWs3cbw804B4iqfCvy/Gch5ZXl4qEfpVqMURCr/XjaMQETzbjbfgOoyYxw8M/5Kq8VQy+DzqxNNzPi4ElcFQztxxrKDFPwuDplFdxw3YK+iQ4JHxlLWSfgtwsFhg7Z7uM8/efP7ocB23i2GmmG67yM/T/8uAld9t73V8icfe9WnRk2WVY69p4TzC3tMl2KmUDVm5AwvH+FNm/67E9t2inWHgKZacdOaOrgJ7SimPz0ILYDKd4hXg4whz3vdp21M/acjX3jA+fiwx6/GDIofKhyWOP3SwaiprqHZb+rWxerIOZx1IeuIRDZBH5Hjz7UlE5yg1xnqPXXzrFMj9rsKp9S5VB3HGGDfuOU7VymhZiTHIAuGM+weV6r2cOjn5HgdqkU6ABuchMAJvzaj9a3E07Rzk6h/lgWfy5VT/yl7DA7sM0/YSqKPJKgxbstoaOAZl35SDxAx978T0xlomIxaJUehRefK+G1GgPeLMmk0QtpX1dMH8bD4qvKGoLQG1qeJ4W4HrnoTsGLCxsN5/ek3rnqCekYOSiJ/q9+sZyhcLN1hwrDrrFK5fRUw==
          state: absent
        register: reg_known_hosts
        changed_when: reg_known_hosts.diff.before != reg_known_hosts.diff.after
      

      Ansible version: ansible [core 2.13.2]

      What is going on here?

      UPDATE - 18-Aug-2022

      I will start by saying that I'm new to Ansible and I'm not a Python programmer.

      When reading the source code for the known_hosts module I found the following.

      Line: 228

      def search_for_host_key(module, host, key, path, sshkeygen):
          '''search_for_host_key(module,host,key,path,sshkeygen) -> (found,replace_or_add,found_line)
          Looks up host and keytype in the known_hosts file path; if it's there, looks to see
          if one of those entries matches key. Returns:
          found (Boolean): is host found in path?
          replace_or_add (Boolean): is the key in path different to that supplied by user?
          found_line (int or None): the line where a key of the same type was found
          if found=False, then replace is always False.
          sshkeygen is the path to ssh-keygen, found earlier with get_bin_path
          '''
          # ...
      

      Looking at the comment at the top of the function I can conclude that:

      • found will equal True because I have a key for that host (key of type ed25519).
      • replace_or_add will equal True because the key is different from the one found (key of type ed25519).
      • found_line will be None because no key of the same type was found.

      With that in mind I think we can have a look at the enforce_state function.

      Line: 117

      def enforce_state(module, params):
      

      ...

      Next, add a new (or replacing) entry

      if replace_or_add or found != (state == "present"):
          try:
              inf = open(path, "r")
          except IOError as e:
              if e.errno == errno.ENOENT:
                  inf = None
              else:
                  module.fail_json(msg="Failed to read %s: %s" % (path, str(e)))
          try:
              with tempfile.NamedTemporaryFile(mode='w+', dir=os.path.dirname(path), delete=False) as outf:
                  if inf is not None:
                      for line_number, line in enumerate(inf):
                          if found_line == (line_number + 1) and (replace_or_add or state == 'absent'):
                              continue  # skip this line to replace its key
                          outf.write(line)
                      inf.close()
                  if state == 'present':
                      outf.write(key)
          except (IOError, OSError) as e:
              module.fail_json(msg="Failed to write to file %s: %s" % (path, to_native(e)))
          else:
              module.atomic_move(outf.name, path)

          params['changed'] = True

      I think the culprit lies within this if block. From what I can make out from the comment in the previous function, replace_or_add is True, and found is True, which is not equal to state == "present". Within this if block it reads the known_hosts file, loops over the lines, and when a line number matches found_line it continues the loop; otherwise it writes that line to a temporary file. However, no matter what happens inside the loop, later in the if block it always sets params['changed'] = True, meaning it always reports changed regardless of whether anything actually changed. A possible solution could be a counter that keeps track of the number of times the loop was continued, and then sets the property like so: params['changed'] = True if counter > 0 else False.
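
      A rough sketch of that counter idea (hypothetical, not the actual module code; the state == 'present' append path would still need its own handling):

      skipped = 0
      for line_number, line in enumerate(inf):
          if found_line == (line_number + 1) and (replace_or_add or state == 'absent'):
              skipped += 1  # a line for this host is actually being dropped
              continue
          outf.write(line)

      params['changed'] = skipped > 0  # report changed only if a line was removed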

      Something else I think is happening (regardless of changes) is that the module always performs a write operation. If so, it would make some sense that params['changed'] is set to True, but I would much rather see the module only write out to .ssh/known_hosts if something actually changed.

      UPDATE - 19-Aug-2022

      This portion is not related to the state always being changed, but rather to the hash_host option of the known_hosts module: either your hostname or your IP is lost and not written to the known_hosts file.

      Let's assume the following entry is in my known_hosts file.

      host.local,192.168.122.230 ssh-rsa 
      

      When hashing the known_hosts file with ssh-keygen -H it represents the line above like so.

      |1|| ssh-rsa 
      |1|| ssh-rsa 
      

      Where the first line is for host.local and the second line is for 192.168.122.230.

      When attempting the same thing with the known_hosts module in Ansible something different will happen.

      - name: add host key to known_hosts file
        known_hosts:
          name: host.local
          key: host.local,192.168.122.230 ssh-rsa 
          state: present
      

      Will result in the following entry in the known_hosts file.

      host.local,192.168.122.230 ssh-rsa 
      

      However, when you enable hashing, things change:

      - name: add host key to known_hosts file
        known_hosts:
          hash_host: yes
          name: host.local
          key: host.local,192.168.122.230 ssh-rsa 
          state: present
      
      This results in only the following entry in the known_hosts file:

      |1|| ssh-rsa 
      

      It will have hashed only the entry for host.local; the IP version of that line is just gone. This is something to keep in mind when you want to use both the hostname and the IP to access the target.

      UPDATE - 24-Aug-2022

      Opened an issue (https://github.com/ansible/ansible/issues/78598) over on the Ansible GitHub.

      If you find any information to be incorrect please let me know or edit this post.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: What is the best way to reverse port forward information from a Kubernetes cluster to localhost?

      Expose SSH on the pod and use SSH to do the reverse port forward

      or

      Use a service on the cluster pointing at your machine, but that assumes the networking is connected somehow

      Those two answers were taken from this duplicate question: https://stackoverflow.com/questions/66666273/forward-traffic-from-kubernetes-pod-to-local-server

      I think a third, better approach might be to use some kind of mesh VPN like https://tailscale.com/
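
      For the SSH option, a minimal sketch (assuming sshd runs in the pod, or in a sidecar, and is reachable at $POD_IP):

      # Publish local port 8080 into the pod via SSH reverse forwarding:
      ssh -R 8080:localhost:8080 user@"$POD_IP"
      # Processes in the pod can now reach the local service at localhost:8080.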

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Setting up gitlab phpstan pipeline

      The problem is that the official Docker image of phpstan uses ENTRYPOINT ["phpstan"], while the GitLab runner calls the image with an "sh -c" command. You can override the default entrypoint (https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#overriding-the-entrypoint-of-an-image):

      phpstan:
        stage: check
        image: 
          name: ghcr.io/phpstan/phpstan
          entrypoint: [""]
        script:
          - phpstan analyse
      

      I recommend pinning the phpstan Docker image to a concrete version (don't use the latest tag).
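
      For example (the tag below is a placeholder; pin whatever version you have tested against):

      image:
        name: ghcr.io/phpstan/phpstan:1.10   # pinned tag instead of the implicit :latest
        entrypoint: [""]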

      posted in Continuous Integration and Delivery (CI/CD)