    SOFTWARE TESTING

    Burma

    @Burma

    Reputation: 0
    Posts: 29909
    Profile views: 2
    Followers: 0
    Following: 0

    Best posts made by Burma

    This user hasn't posted anything yet.

    Latest posts made by Burma

    • Docker containers are being restarted after logging in via SSH

      Docker is installed on my Raspberry Pi (running Raspberry Pi OS) in rootless mode. I currently have three Docker containers on it, all managed with Docker Compose.

      The problem is that after I log in via SSH, all containers restart. This is also noticeable because the login takes about 5-10 seconds until the prompt is visible.

      What is also odd is that this does not happen consistently. I can reproduce it if there are about 10 seconds between login attempts.

      What I also notice is that when I exit the terminal with CTRL + D, all Docker containers stop. They start again after logging in via SSH on the Pi.

      Docker version: Docker-ce: 5:20.10.21~3-0~debian-bullseye

      All applications have a similar docker-compose file. One example is below:

      version: '3'
      services:
        pihole:
          image: pihole/pihole:latest
          ports:
            - "53:53/tcp"
            - "53:53/udp"
            - "67:67/udp"
            - "9080:80/tcp"
            - "9443:443/tcp"
          environment:
            TZ: 'Europe/Amsterdam'
            # WEBPASSWORD: 'set a secure password here or it will be random'
          # Volumes store your data between container upgrades
          volumes:
             - './etc-pihole/:/etc/pihole/'
             - './etc-dnsmasq.d/:/etc/dnsmasq.d/'
          dns:
            - 127.0.0.1
            - 1.1.1.1
          # Recommended but not required (DHCP needs NET_ADMIN)
          #   https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
          cap_add:
            - NET_ADMIN
          restart: on-failure:3
      

      After logging in:

      pi@raspberrypi:~ $ docker ps
      CONTAINER ID   IMAGE                          COMMAND                  CREATED          STATUS                            PORTS                                                                                                                                                                                                                                                                                                                                                                                                   NAMES
      bd2350eaea48   linuxserver/unifi-controller   "/init"                  25 minutes ago   Up 3 seconds                      0.0.0.0:1900->1900/udp, :::1900->1900/udp, 0.0.0.0:5514->5514/tcp, :::5514->5514/tcp, 0.0.0.0:6789->6789/tcp, :::6789->6789/tcp, 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp, 0.0.0.0:8443->8443/tcp, :::8443->8443/tcp, 0.0.0.0:8843->8843/tcp, :::8843->8843/tcp, 0.0.0.0:3478->3478/udp, :::3478->3478/udp, 0.0.0.0:10001->10001/udp, :::10001->10001/udp, 0.0.0.0:8880->8880/tcp, :::8880->8880/tcp   unifi-controller-unifi-controller-1
      b6b1733befc6   pihole/pihole:latest           "/s6-init"               25 minutes ago   Up 3 seconds (health: starting)   0.0.0.0:53->53/udp, :::53->53/udp, 0.0.0.0:53->53/tcp, 0.0.0.0:67->67/udp, :::53->53/tcp, :::67->67/udp, 0.0.0.0:9080->80/tcp, :::9080->80/tcp, 0.0.0.0:9443->443/tcp, :::9443->443/tcp                                                                                                                                                                                                                 pihole-pihole-1
      47c751c5912b   nginx                          "/docker-entrypoint.…"   30 minutes ago   Up 3 seconds                      0.0.0.0:80->80/tcp, :::80->80/tcp                                                                                                                                                                                                                                                                                                                                                                       nginx-web-1
      pi@raspberrypi:~ $ uptime
       20:15:56 up 1 day, 57 min,  1 user,  load average: 1.34, 1.13, 0.92
      pi@raspberrypi:~ $
      

      systemctl status docker

      pi@raspberrypi:~ $ systemctl status docker
      ● docker.service - Docker Application Container Engine
           Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
           Active: active (running) since Fri 2022-11-18 19:18:10 GMT; 24h ago
      TriggeredBy: ● docker.socket
             Docs: https://docs.docker.com
         Main PID: 667 (dockerd)
            Tasks: 10
              CPU: 39.118s
           CGroup: /system.slice/docker.service
                   └─667 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
      

      Nov 18 19:18:07 raspberrypi dockerd[667]: time="2022-11-18T19:18:07.718884305Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
      Nov 18 19:18:08 raspberrypi dockerd[667]: time="2022-11-18T19:18:08.414555267Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
      Nov 18 19:18:08 raspberrypi dockerd[667]: time="2022-11-18T19:18:08.443483656Z" level=warning msg="Unable to find memory controller"
      Nov 18 19:18:08 raspberrypi dockerd[667]: time="2022-11-18T19:18:08.444127749Z" level=info msg="Loading containers: start."
      Nov 18 19:18:09 raspberrypi dockerd[667]: time="2022-11-18T19:18:09.048346693Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a pre>
      Nov 18 19:18:09 raspberrypi dockerd[667]: time="2022-11-18T19:18:09.263348211Z" level=info msg="Loading containers: done."
      Nov 18 19:18:09 raspberrypi dockerd[667]: time="2022-11-18T19:18:09.951108267Z" level=info msg="Docker daemon" commit=3056208 graphdriver(s)=overlay2 version=20.10.21
      Nov 18 19:18:09 raspberrypi dockerd[667]: time="2022-11-18T19:18:09.952013081Z" level=info msg="Daemon has completed initialization"
      Nov 18 19:18:10 raspberrypi systemd[1]: Started Docker Application Container Engine.
      Nov 18 19:18:10 raspberrypi dockerd[667]: time="2022-11-18T19:18:10.045252137Z" level=info msg="API listen on /run/docker.sock"

      Why does this happen?

      posted in Continuous Integration and Delivery (CI/CD)
    • Azure KQL query to display a list of VMs that have not been patched in the last month

      Is there a KQL query to get a list of VMs that have not been patched since last month? Below is the sample I have:

      Update
      | where Classification in ("Security Updates", "Critical Updates")
      | where UpdateState == 'Needed' and Optional == false and Approved == true
      | summarize count() by Classification, Computer, _ResourceId
      // This query requires the Security or Update solutions
      

      How do I apply a filter to show only the updates that have been due for a month or more?
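One way to restrict the sample query above to updates that have been outstanding for more than a month is to filter on `TimeGenerated`, the standard Log Analytics ingestion timestamp. This is a sketch, not a verified answer: exactly when an update is "due" depends on how often the Update solution reports, so the 30-day cutoff is an assumption.

```
Update
| where Classification in ("Security Updates", "Critical Updates")
| where UpdateState == 'Needed' and Optional == false and Approved == true
// keep only updates first reported as Needed more than 30 days ago
| summarize FirstSeen = min(TimeGenerated) by Classification, Computer, _ResourceId
| where FirstSeen < ago(30d)
```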

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Ansible: How to run ad-hoc command with multiple environments?

      One option is to change the current directory. This simplifies the structure and lets you keep a common configuration and inventory for production and staging. For example, to run the commands:

      • either change the working directory for all commands
      shell> cd staging
      shell> ansible-playbook playbook.yml
      shell> ansible mhost1 -m ping
      
      • or change the working directory for each command
      shell> (cd staging; ansible-playbook playbook.yml)
      shell> (cd staging; ansible mhost1 -m ping)
      

      Make the following changes:

      • Use the inventory file from the current directory. Put this into ansible.cfg:
      inventory=$PWD/hosts
      
      • Link ansible.cfg to both production/ansible.cfg and staging/ansible.cfg

      • Link hosts to both production/hosts and staging/hosts

      • You can rename both production_playbook.yml and staging_playbook.yml to playbook.yml

      ├── ansible.cfg
      ├── hosts
      ├── production
      │   ├── group_vars
      │   │   ├── all.yml
      │   │   ├── mygroup.yml
      │   │   └── mygroup2.yml
      │   ├── host_vars
      │   │   ├── mhost1.yml
      │   │   └── mhost2.yml
      │   ├── ansible.cfg -> ../ansible.cfg
      │   ├── hosts -> ../hosts
      │   └── playbook.yml
      └── staging
          ├── group_vars
          │   ├── all.yml
          │   ├── mygroup.yml
          │   └── mygroup2.yml
          ├── host_vars
          │   ├── mhost1.yml
          │   └── mhost2.yml
          ├── ansible.cfg -> ../ansible.cfg
          ├── hosts -> ../hosts
          └── playbook.yml
      
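The linking steps above can be sketched as follows (directory and file names are taken from the tree; the `inventory` line is the one suggested above):

```shell
# Create the shared top-level files and the two environment directories
mkdir -p production staging
touch ansible.cfg hosts

# Inventory setting from the answer, written literally into ansible.cfg
echo 'inventory=$PWD/hosts' >> ansible.cfg

# Symlink the shared config and inventory into each environment
for env in production staging; do
  ln -sf ../ansible.cfg "$env/ansible.cfg"
  ln -sf ../hosts "$env/hosts"
done
```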

      WARNING

      In a critical environment, separate production and staging physically.

      posted in Continuous Integration and Delivery (CI/CD)
    • connect to kind cluster from inside and outside

      Consider this:

      There's a Linux machine running on Azure.

      It has Apache installed, and browsing to mymachine.com works; the webpage is displayed. All ports are opened in the Azure portal's networking settings.

      Now there's also a k8s kind cluster running on the machine, with a RabbitMQ deployment exposed as a NodePort service on port 32254.

      kubectl get svc

      NAME             TYPE        CLUSTER-IP     PORT(S)                                          AGE
      kubernetes       ClusterIP   10.96.0.1      443/TCP                                          3h17m
      ng               ClusterIP   10.96.104.61   80/TCP                                           68m
      rabbitmq         NodePort    10.96.8.15     5672:32740/TCP,15672:32254/TCP,15692:31802/TCP   143m
      rabbitmq-nodes   ClusterIP   None           4369/TCP,25672/TCP                               143m

      All good, but I'm unable to access the rabbitmq cluster from outside.

      The IP address of the control plane is 172.18.0.3, so this command works:

      curl http://172.18.0.3:32254
      

      But connecting from outside does not work:

      curl http://mymachine.com:32254 
      

      So how do we forward from the public IP of a VM in Azure to an internal IP inside the VM?
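One common approach, offered here only as an assumption and not something stated in the post, is a DNAT rule on the VM that rewrites traffic arriving on the public port to the kind node IP. The rules below are an illustrative configuration sketch: Docker/kind manage their own iptables chains which may interact with these, and the Azure NSG must also allow the port.

```shell
# Hypothetical sketch: forward TCP 32254 arriving at the VM to the kind
# control-plane container IP (172.18.0.3) inside the VM via iptables DNAT.
sudo iptables -t nat -A PREROUTING -p tcp --dport 32254 \
  -j DNAT --to-destination 172.18.0.3:32254
# Allow the forwarded traffic through the FORWARD chain
sudo iptables -A FORWARD -p tcp -d 172.18.0.3 --dport 32254 -j ACCEPT
# Make sure IP forwarding is enabled on the VM
sudo sysctl -w net.ipv4.ip_forward=1
```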

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Terraform: Why is null_resource's remote-exec not connecting to aws_instance via SSH?

      You need to change the remote-exec syntax a bit: first establish a connection to the remote server, then execute with the provisioner:

      resource "null_resource" "instance" {
        connection {
          type = "ssh"
          host = aws_instance.instance.public_ip
          # variable's default value: "ubuntu", the Ubuntu AMI's default system user account
          user = var.aws_instance_user_name
          # variable's default value: "~/.ssh/id_rsa"; the private key matching
          # the public key provided to aws_key_pair.key_pair
          private_key = file(var.aws_key_pair_private_path)
          timeout = "20s"
        }
        provisioner "remote-exec" {
          inline = ["echo 'remote-exec message'"]
        }
        provisioner "local-exec" {
          command = "echo 'local-exec message'"
        }
      }
      

      This should fix your problem. Thank you!

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Read and parse json file from workspace on slave node

      Following is the code that worked for me. If there is a better way to do this, suggestions are most welcome.

      Main method:

      def transformjsonfile(propertiesPath, jsonfilepath) {
          // Load properties file
          Properties props = new Properties()
          props.load(new StringReader(readFile(propertiesPath)))

          // Load JSON file
          def json = jsonParseFile(jsonfilepath)

          // Transform JSON
          json = transformValues(props, json)

          // Write the transformed config back to the JSON file
          formatAndWriteBackJson(json, jsonfilepath)
      }

      Helper methods:

      def jsonParseFile(def jsonfilepath) {
          def fileContent = readFile(file: jsonfilepath, encoding: "UTF-8")
          def jsonSlurper = new JsonSlurperClassic()
          def data = jsonSlurper.parseText(fileContent.replaceAll("\\s", "").replaceAll("\uFEFF", ""))
          return data
      }

      // jsonfilepath is passed in so the helper does not rely on an undefined variable
      def formatAndWriteBackJson(json, jsonfilepath) {
          def jsonOut = new JsonBuilder(json).toPrettyString()
          writeFile file: jsonfilepath, text: jsonOut
      }

      def transformValues(props, json) {
          for (String name : props.keySet()) {
              String givenValue = props.getProperty(name)
              String value = givenValue.trim()
              if (name.startsWith("json_${env.Environment}")) {
                  splitarray = name.split("json_${env.Environment}_")
                  jsonvalue = splitarray[1]
                  key = "${jsonvalue}"
                  transformvalue = "${value}"
                  def someString = "${key}"
                  def someChar = '.'
                  // Count the dots in the key to know how deep to walk the JSON path
                  int count = 0
                  for (int i = 0; i < someString.length(); i++) {
                      if (someString.charAt(i) == someChar) {
                          count++
                      }
                  }
                  keyvalue = "json"
                  splitarray = key.split('\\.')
                  lab1: for (int j = 0; j <= count; j++) {
                      keyvalue = keyvalue + "." + splitarray[j]
                      jsonvalue = Eval.me('json', json, "${keyvalue}")
                      if (jsonvalue == null) {
                          break lab1
                      }
                      if (jsonvalue != null) {
                          println 'condition met'
                          if (j == count) {
                              // Assign the property value at the resolved JSON path
                              Eval.me('json', json, "${keyvalue} = '$transformvalue'")
                          }
                      }
                  }
              }
          }
          return json
      }

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Best practice: put nginx + Django in an AWS subnet, private or public?

      You would ideally put it into a private subnet behind an AWS load balancer of some sort. The benefit is that any security issues in your nginx installation are offloaded to AWS and are therefore no longer your concern.

      Some drawbacks:

      • slightly more complex setup
      • minimum cost of ~25USD per month for load balancer (minimum cost of around 25USD per 25 client certificates if you terminate them on Application Load Balancer)

      People would access the nginx servers via a jump/bastion host. Access to this host would probably be limited by IP address and/or VPN.

      The AWS provided NAT gateway does not allow inbound traffic, unless it was initiated by outbound traffic.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Minecraft 1.19.2 Bug or a Misunderstanding of Command Blocks?

      When command blocks are set to "require redstone", they can only be activated when powered by redstone. However, being powered by redstone does not always mean 'run the command'!

      In the case of chain (aquamarine) command blocks, the setting only controls whether the command is allowed to run at all; a chain block runs its command when the command block facing it is activated. The orange and purple blocks (repeat and impulse) run their command whenever they are powered by redstone.

      In OP's situation, A1, A2, B1 and B2 are set to "Require redstone", but B2 is being indirectly powered and A2 isn't, which causes the weird behaviour of A1 "not powering" A2. If there were a B3 (chain and require redstone), it wouldn't be powered, just like A2.

      posted in Game Testing
    • RE: What is this RTS that seems to be based on the real world, with modern military units?

      Your screenshot also looks like https://store.steampowered.com/app/251060/Wargame_Red_Dragon/ .

      A similar video is seen here, with a visually identical map overview seen at the 2:15 mark.

      posted in Game Testing
    • RE: How to find Bretta?
      • Make sure you have the air dash, double jump and wall climb abilities. The double jump is optional but makes things much easier.

      • Go to the very bottom right room of the Fungal Wastes, where there is an entrance to the Royal Waterways. Do not enter the waterways.

      • Look at the map. There is an icon at the bottom of the room you are in. It shows a small statue. Go to the statue. This is where the Dashmaster Charm can be found if you haven't picked it up already.

      • Walk through the wall to the left of the statue. If you can't walk through it directly, try attacking it first.

      • Go as far left as you can in the room you just uncovered and climb the wall up as high as you can. There is a hole in the roof that you can get into with some tricky jumping and dashing. If you have the double jump, this is much easier.

      • Follow the linear path after this. There will be a lot of spikes to avoid.

      • Bretta is sitting at the end.

      posted in Game Testing