    rosemadder

    @rosemadder

    0 Reputation · 29775 Posts · 1 Profile views · 0 Followers · 0 Following

    Best posts made by rosemadder

    This user hasn't posted anything yet.

    Latest posts made by rosemadder

    • RE: Docker Compose: How do you build an image while running another container?

      You could try using the --profile option during the build.

      version: "3"
      services:
        strapi:
          profiles: ["strapi"]
          build:
            context: ./
            dockerfile: strapi/Dockerfile
          container_name: "strapi"
          restart: always
          ports:
            - "3000:3000"
      

        nuxt:
          profiles: ["nuxt"]
          build:
            context: ./
            dockerfile: nuxt/Dockerfile
          image: nuxt:latest
          container_name: "nuxt"
          restart: always
          ports:
            - "3000:3000"

      Then pass the profile switch during the build:

      docker compose --profile nuxt build

      Please refer to the official documentation for more information:

      https://docs.docker.com/compose/profiles/

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Minimum laptop requirements for devops learning

      It'll probably really depend on your budget. 8 GB of RAM seems like very little; as bahrep suggested, I would first try to see if your laptop can get a RAM upgrade.

      If you can afford it, you could also do your experiments in the cloud. It depends on what you want to learn (DevOps is a large field), but chances are you'll have to get acquainted with the cloud anyway. If your current hardware does not allow you to run multiple infrastructures at once, I would look into renting a VM/VPS: you'll get the resources you need and learn about the major cloud providers in the meantime.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How to hide/mask credentials stored at terraform state file

      The tfstate file can be thought of as your "executable". So no, you cannot hide/remove sensitive values from it.

      What you can do, however, is store it safely. Terraform documents how to configure a backend to store the tfstate (https://developer.hashicorp.com/terraform/language/settings/backends/s3), because this file must never make it into your Git repository. Such a backend usually consists of an S3 bucket (to store the state contents) and a DynamoDB table (for state locking).
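
      A minimal backend block might look like this (the bucket, key, region, and table names below are placeholders for illustration, not taken from the question):

      terraform {
        backend "s3" {
          bucket         = "my-tfstate-bucket"         # placeholder bucket that stores the state
          key            = "project/terraform.tfstate" # placeholder object key
          region         = "eu-west-1"                 # placeholder region
          dynamodb_table = "terraform-locks"           # placeholder table used for state locking
          encrypt        = true                        # encrypt the state object at rest
        }
      }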

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Newly installed k3s cluster on fresh OS install can not resolve external domains or connect to external resources?

      My problem was basically that I had multiple default routes, as seen from ip route,

      default via 172.16.42.1 dev ens5 proto dhcp src 172.16.42.135 metric 100 
      default via 172.16.42.1 dev ens3 proto dhcp src 172.16.42.95 metric 100 
      default via 10.2.64.1 dev ens4 proto dhcp src 10.2.67.51 metric 100 
      10.2.64.0/19 dev ens4 proto kernel scope link src 10.2.67.51 
      169.254.169.254 via 172.16.42.2 dev ens5 proto dhcp src 172.16.42.135 metric 100 
      169.254.169.254 via 172.16.42.2 dev ens3 proto dhcp src 172.16.42.95 metric 100 
      169.254.169.254 via 10.2.64.11 dev ens4 proto dhcp src 10.2.67.51 metric 100 
      172.16.42.0/24 dev ens5 proto kernel scope link src 172.16.42.135 
      172.16.42.0/24 dev ens3 proto kernel scope link src 172.16.42.95
      

      The cause of this was not using no_gateway = true in my Terraform stanza,

      resource "openstack_networking_subnet_v2" "subnet_project" {
        name       = "subnet_project"
        network_id = openstack_networking_network_v2.net_project.id
        cidr       = "172.16.42.0/24"
        ip_version = 4
      }
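
      For reference, adding the missing attribute would look something like this (a sketch based on the OpenStack provider's no_gateway argument):

      resource "openstack_networking_subnet_v2" "subnet_project" {
        name       = "subnet_project"
        network_id = openstack_networking_network_v2.net_project.id
        cidr       = "172.16.42.0/24"
        ip_version = 4
        no_gateway = true  # do not hand out a default gateway on this subnet
      }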
      

      Without changing the Terraform, I can also work around this by lowering the metric on the default route on the host,

      sudo ip route replace default via 10.2.64.1 dev ens4 metric 90
      

      Which adds a new, lower-metric default route,

      default via 10.2.64.1 dev ens4 metric 90
      

      Now running nslookup google.com as in the question works fine, and I can re-break it with

      sudo ip route del default via 10.2.64.1 dev ens4 metric 90
      

      Other diagnostics

      These are taken before bringing up k3s. My ip -o addr shows,

      1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
      1: lo    inet6 ::1/128 scope host \       valid_lft forever preferred_lft forever
      2: ens3    inet 172.16.42.95/24 brd 172.16.42.255 scope global dynamic ens3\       valid_lft 40534sec preferred_lft 40534sec
      2: ens3    inet6 fe80::f816:3eff:fecd:6722/64 scope link \       valid_lft forever preferred_lft forever
      3: ens4    inet 10.2.67.51/19 brd 10.2.95.255 scope global dynamic ens4\       valid_lft 24333sec preferred_lft 24333sec
      3: ens4    inet6 2620:0:28a4:4140:f816:3eff:fed2:72c7/64 scope global dynamic mngtmpaddr noprefixroute \       valid_lft 2591979sec preferred_lft 604779sec
      3: ens4    inet6 fe80::f816:3eff:fed2:72c7/64 scope link \       valid_lft forever preferred_lft forever
      4: ens5    inet 172.16.42.135/24 brd 172.16.42.255 scope global dynamic ens5\       valid_lft 40534sec preferred_lft 40534sec
      4: ens5    inet6 fe80::f816:3eff:fe3d:cde8/64 scope link \       valid_lft forever preferred_lft forever
      

      And ip link shows

      1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      2: ens3:  mtu 1492 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
          link/ether fa:16:3e:cd:67:22 brd ff:ff:ff:ff:ff:ff
      3: ens4:  mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
          link/ether fa:16:3e:d2:72:c7 brd ff:ff:ff:ff:ff:ff
      4: ens5:  mtu 1492 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
          link/ether fa:16:3e:3d:cd:e8 brd ff:ff:ff:ff:ff:ff
      
      posted in Continuous Integration and Delivery (CI/CD)
    • KubeApps: Invalid GetAvailablePackageSummaries response from the plugin helm.packages: ... Unable to fetch chart categories

      I followed the steps in https://devops.stackexchange.com/a/16109/18965. I'm getting this error when I open KubeApps:

      An error occurred while fetching the catalog: Invalid GetAvailablePackageSummaries response from the plugin helm.packages: rpc error: code = Internal desc = Unable to fetch chart categories: pq: relation "charts" does not exist.

      How can I resolve this error?

      [Screenshot of the error]

      When I pull the logs, I see,

      I0608 21:01:12.935502 1 root.go:32] asset-syncer has been configured with: server.Config{DatabaseURL:"kubeapps-postgresql:5432", DatabaseName:"assets", DatabaseUser:"postgres", DatabasePassword:"E0mn56sa5d", Debug:false, Namespace:"kubeapps", OciRepositories:[]string{}, TlsInsecureSkipVerify:false, FilterRules:"", PassCredentials:false, UserAgent:"asset-syncer/2.4.5 (kubeapps/2.4.5)", UserAgentComment:"kubeapps/2.4.5", GlobalReposNamespace:"kubeapps", KubeappsNamespace:"", AuthorizationHeader:"", DockerConfigJson:""}

      Followed by,

      Usage:
        asset-syncer sync [REPO NAME] [REPO URL] [REPO TYPE] [flags]
      

      Flags:
      -h, --help help for sync
      --oci-repositories strings List of OCI Repositories in case the type is OCI
      --version version for sync

      Global Flags:
      --add_dir_header If true, adds the file directory to the header of the log messages
      --alsologtostderr log to standard error as well as files
      --database-name string Name of the database to use (default "charts")
      --database-url string Database URL (default "localhost:5432")
      --database-user string Database user
      --debug verbose logging
      --filter-rules string JSON blob with the rules to filter assets
      --global-repos-namespace string Namespace for global repos (default "kubeapps")
      --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
      --log_dir string If non-empty, write log files in this directory
      --log_file string If non-empty, use this log file
      --log_file_max_size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --logtostderr log to standard error instead of files (default true)
      --namespace string Namespace of the repository being synced
      --one_output If true, only write logs to their native severity level (vs also writing to each lower severity level)
      --pass-credentials pass credentials to all domains
      --skip_headers If true, avoid header prefixes in the log messages
      --skip_log_headers If true, avoid headers when opening log files
      --stderrthreshold severity logs at or above this threshold go to stderr (default 2)
      --tls-insecure-skip-verify Skip TLS verification
      --user-agent-comment string UserAgent comment used during outbound requests
      -v, --v Level number for the log level verbosity (default 3)
      --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging

      As well as this error,

      Error: Error: Get https://charts.bitnami.com/bitnami/index.yaml: dial tcp: lookup charts.bitnami.com on 10.43.0.10:53: server misbehaving


      I filed a bug upstream, but I'm not sure if this is a bug or a misconfiguration:

      • https://github.com/vmware-tanzu/kubeapps/issues/4882#issue-1265184794
      posted in Continuous Integration and Delivery (CI/CD)
    • RE: In Ansible, how do I assign a hostvar to a playbook's host?

      To be evaluated, a variable must be enclosed in quotes (see https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#when-to-quote-variables-a-yaml-gotcha), e.g.

      - hosts: "{{ hostvars.localhost.aws_instance_host_group }}"
      

      The dot notation for referencing key/value dictionary variables (https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#referencing-key-value-dictionary-variables) simplifies the code.


      Minimal reproducible example (https://stackoverflow.com/help/minimal-reproducible-example):

      shell> cat pb.yml
      ---
      - hosts: localhost
        gather_facts: false
        tasks:
          - set_fact:
              aws_instance_host_group: aws_instance
          - add_host:
              name: 10.1.0.61
              groups: "{{ aws_instance_host_group }}"
      
      - hosts: "{{ hostvars.localhost.aws_instance_host_group }}"
        gather_facts: false
        tasks:
          - debug:
              var: groups
          - debug:
              var: inventory_hostname
      shell> cat hosts
      localhost
      
      shell> ansible-playbook -i hosts pb.yml 
      
      PLAY [localhost] *****************************************************************************
      
      TASK [set_fact] ******************************************************************************
      ok: [localhost]
      
      TASK [add_host] ******************************************************************************
      changed: [localhost]
      
      PLAY [aws_instance] **************************************************************************
      
      TASK [debug] *********************************************************************************
      ok: [10.1.0.61] => 
        groups:
          all:
          - 10.1.0.61
          - localhost
          aws_instance:
          - 10.1.0.61
          ungrouped:
          - localhost
      
      TASK [debug] *********************************************************************************
      ok: [10.1.0.61] => 
        inventory_hostname: 10.1.0.61
      
      PLAY RECAP ***********************************************************************************
      10.1.0.61: ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
      localhost: ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
      
      posted in Continuous Integration and Delivery (CI/CD)
    • Why pods are started for old ReplicaSet

      A new deployment has been created and the release was successfully deployed on our AKS cluster.

      We have noticed in the logs that pods for an old ReplicaSet (which still exists on the cluster) are regularly executed. This is actually happening only for one specific ReplicaSet. The reason we noticed it is that it tries to perform a database update against an old database version.

      Any idea why this may happen?

      UPDATE: it turned out that we were running the "old" pod on a system test cluster (unfortunately, the connection string was set incorrectly 😞). The misleading thing was that the ReplicaSets had the same name... because

      Notice that the name of the ReplicaSet is always 
      formatted as [DEPLOYMENT-NAME]-[RANDOM-STRING]. 
      The random string is randomly generated and uses the 
      pod-template-hash as a seed. 
      
      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Terraform Aws S3 - deny all users except for a specific user

      First, let's understand how roles and policies work on AWS. In order for a user to be able to access a bucket, we can allow it in 3 ways:

      1. Allow it using an IAM policy attached to the role the user is assuming;
      2. Allow it using a bucket policy;
      3. The group of the user has the policy attached to it or there is a policy directly attached to the user which allows access to the bucket.

      These are explicit Allow policies. The user will have access if there is at least one policy from above granting him/her access.

      What is important is that an explicit Deny takes precedence over an explicit Allow. So, if we want to deny access to everyone except a specific user, we would want to create a bucket policy with an explicit Deny. AWS has a guide on how to do this: https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/

      Bucket policy example:

      {
          "Id": "bucketPolicy",
          "Statement": [
              {
                  "Action": "s3:*",
                  "Effect": "Deny",
                  "NotPrincipal": {
                      "AWS": [
                          "arn:aws:iam::1234567890:user/alloweduser"
                      ]
                  },
                  "Resource": [
                      "arn:aws:s3:::examplebucket",
                      "arn:aws:s3:::examplebucket/*"
                  ]
              }
          ],
          "Version": "2012-10-17"
      }
      

      Terraform code for this policy:

      data "aws_iam_policy_document" "vulnerability-scans" {
        statement {
          not_principals {
            type = "AWS"
            identifiers = [
              aws_iam_user.circleci.arn
            ]
          }

          effect = "Deny"

          actions = [
            "s3:*"
          ]

          resources = [
            aws_s3_bucket.vulnerability-scans.arn,
            "${aws_s3_bucket.vulnerability-scans.arn}/*",
          ]
        }
      }

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How are damage resistances calculated in tf2 MVM?

      I did a lot of research, and here are the results. The damage type is in the top left corner of each chart, with a separate row for each number of blast resistance upgrades and a separate column for each number of crit resistance upgrades. I tested each case for a normal Heavy, a Demoman with the Chargin' Targe shield, and the Fists of Steel.

      [Charts: damage taken at each combination of blast and crit resistance upgrades]

      The damage is calculated by TF2 as follows. The damage of a crit pipe is separated into crit damage and normal damage, then each resistance is applied, then both values are added together, then weapon/Buff Banner/Vaccinator Medic resistances are applied.

      [Diagram: order in which resistances are applied to crit damage]

      From this we also learn that, for non-crit damage, the first resistance upgrade only reduces damage by 25% compared to no upgrade, two upgrades reduce damage by 33% compared to one upgrade, and three upgrades reduce damage by 50% compared to two upgrades.
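
      Putting those percentages together (my own arithmetic based on the figures above), the fraction of non-crit damage that still gets through is roughly:

      1 upgrade:  0.75                       (25% less than no upgrades)
      2 upgrades: 0.75 × 0.67 ≈ 0.50         (about 50% less)
      3 upgrades: 0.75 × 0.67 × 0.50 ≈ 0.25  (about 75% less)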

      So logically, whatever it is you're upgrading (like blast resistance), make sure to upgrade it all the way before upgrading something else (like bullet resistance). Upgrading a damage resistance only once is probably a waste of credits, as they will be better spent elsewhere (like speed, to dodge rockets, or health regen). If you have 1200 credits, it's much better to upgrade 3x blast and 1x bullet (or 1x blast and 3x bullet) than 2x blast and 2x bullet.

      If there are crit Soldier robots or crit Heavy robots approaching, always upgrade crit resistance 3x (not 2x or 1x). If you only have 1050 credits, it would be wiser to upgrade crit 3x and blast 2x instead of blast 3x and crit 1x. If non-crit robots are the bigger threat, obviously upgrade blast 3x.

      One more thing: I noticed a bug for Pyro. Bullet resistance reduces Pyro's self-inflicted fire damage the same way fire resistance does, and the effects even stack. Very useful to know if you like to jump around the map with the Detonator and the jump height upgrade and don't fancy taking 45 damage with every blast. For consistency, this chart is for the Scorch Shot:

      [Chart: Pyro self-damage with the Scorch Shot at each resistance level]

      posted in Game Testing
    • Placing structure on a placed block

      I'm playing around with command blocks, and would like to have a structure appear when a block (specifically a copper block) is placed. I want the structure to be placed relative to the block, and the block can be placed in about 20 different positions in total. Ideally, the structure should also be removed once the block is destroyed.

      The code itself only has to run once every 2 minutes or so, so it doesn't have to be terribly well optimised.

      Is there any easy-ish way for me to do this? As of now I have painstakingly placed execute run setblock commands for all the positions, but there is definitely a better solution.

      I guess the question can also be rephrased as "How to run a command for all blocks of a type in an area" (and then use clone).

      Any help is appreciated, although I am still very new to commands (I only just started with nbt data) so please explain if it's a strange concept.

      posted in Game Testing