    obi (@obi)

    Reputation: 0 · Posts: 29881 · Profile views: 3 · Followers: 0 · Following: 0


    Best posts made by obi

    This user hasn't posted anything yet.

    Latest posts made by obi

    • RE: What is the limit of runs that an Azure DevOps pipeline keeps?

      Project settings > Settings (screenshot omitted)

      posted in Continuous Integration and Delivery (CI/CD)
    • How to upgrade nodes in a Kubernetes cluster?

      To keep the nodes of a Kubernetes cluster (e.g. AWS EKS) up to date with kernel bugfixes etc., how should the existing nodes be replaced once an updated image becomes available (i.e. once the ASGs have been reconfigured so that new nodes spawn with a more recent AMI)?

      • For example, is there a command to cordon off all existing (or old) nodes from pod scheduling?
      • Is there a benefit to performing rolling restarts of deployments etc. before otherwise attempting to drain the nodes?
      • Will fresh nodes automatically spin up in proportion to pod-scheduling demand, even when the other remaining nodes (cordoned or partway drained) are nearly idle/unoccupied? Or is it better to disengage the autoscaler and perform manual scaling?
      • How soon after a node is drained would the instance be automatically terminated? Will manually deleting a node (in Kubernetes) cause the AWS cluster autoscaler to terminate that instance immediately? Or should termination be done with the AWS CLI?
      • Will persistent data be lost if a node is deleted (or terminated) before it has fully drained?
      • Can some pods be granted exemptions from eviction (e.g. stateful long-running interactive user sessions, such as JupyterHub), while still ensuring their host node does get refreshed as soon as those pods finish? And if so, can that be overridden when there is an urgent security patch?
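
      A minimal command sketch of the cordon/drain flow asked about above (the node label and node name are illustrative assumptions, not from the question):

        # Mark every node from the old node group unschedulable; assumes the
        # nodes carry a distinguishing label
        kubectl cordon -l node-group=old-ami

        # Evict pods from one node at a time; these flags are commonly needed
        # (on older kubectl versions --delete-emptydir-data was --delete-local-data)
        kubectl drain ip-10-0-42-7.ec2.internal --ignore-daemonsets --delete-emptydir-data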

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: known_hosts module reports changed when nothing has changed

      It seems that the issue you filed, Ansible issue https://github.com/ansible/ansible/issues/78598, was noticed. It could be reproduced, and it received a proposed fix (https://github.com/ansible/ansible/issues/78598#issuecomment-1225872041) and an easyfix label.

      Since further verification and testing are still outstanding, you might be able to apply that proposed fix (https://github.com/ansible/ansible/issues/78598#issuecomment-1225872041) to your own known_hosts.py and test it. After testing, you could post an update on the Ansible issue.
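
      A minimal sketch of one way to test such a patch locally, using Ansible's playbook-adjacent library/ override directory (the installed-module path below is illustrative and varies by install method):

        # Find where the installed Ansible package lives
        python3 -c "import ansible; print(ansible.__file__)"

        # Copy the module next to your playbook; Ansible prefers modules
        # found in a local ./library/ directory over installed ones
        mkdir -p library
        cp /usr/lib/python3/dist-packages/ansible/modules/known_hosts.py library/

        # Apply the proposed changes to library/known_hosts.py, then re-run the play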

      posted in Continuous Integration and Delivery (CI/CD)
    • Limit and request declaration

      Is it necessary to add "" or '' around the request/limit values? For example:

        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 1m
            memory: 50Mi
      

      vs.

        resources:
          limits:
            cpu: "100m"
            memory: "300Mi"
          requests:
            cpu: "1m"
            memory: "50Mi"
      
      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Filtering AWS SQS Tags using JQ

      Take heart: I've been struggling with JMESPath query expressions for the past couple of years, and it's taken me a long time to get used to them!

      In your example output, Tags is a JSON object rather than a list, so the [] in your jq expression is causing the error. Try this as your jq command:

      jq '.Tags.Name'
      

      You'll get:

      "bar-queue"
      

      You can use the -r option to jq to get the output without quotes.

      However, I think you can do better by having the aws command do this filtering for you. Try the --query option:

      aws sqs list-queue-tags --region sa-east-1 --queue-url  --query 'Tags.Name'
      

      Yes, the expression for the aws command doesn't have the leading dot. If you don't want the quotes around the output, add the option --output text.


      Edited to add these notes about query expressions for jq vs. aws:

      The leading dot isn't the only difference between expressions that jq accepts and the ones the aws command accepts. I don't have an exhaustive list, but I have needed different quoting around path names for one program than for the other.
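
      For example, with a hypothetical tag key containing a dash (Cost-Center is made up here, and the saved response file name is illustrative):

        # jq: quote the key segment inside the path expression
        jq '.Tags."Cost-Center"' response.json

        # aws --query (JMESPath): quote the key too, but with no leading dot
        aws sqs list-queue-tags --queue-url "$QUEUE_URL" --query 'Tags."Cost-Center"'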

      There are pretty good tutorials on query expressions for each of these programs:

      • jq : https://stedolan.github.io/jq/tutorial/
      • aws : https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-filter.html

      And a website that's useful for interactively trying out expressions, once you've figured out how the site works: https://jmespath.org. It includes a link to another, more general tutorial.

      posted in Continuous Integration and Delivery (CI/CD)
    • Ansible: How to run an ad-hoc command with multiple environments?

      Given the following architecture:

      ├── ansible.cfg
      ├── hosts
      │   ├── production
      │   └── staging
      ├── production
      │   ├── group_vars
      │   │   ├── all.yml
      │   │   ├── mygroup.yml
      │   │   └── mygroup2.yml
      │   ├── host_vars
      │   │   ├── mhost1.yml
      │   │   └── mhost2.yml
      │   └── production_playbook.yml
      └── staging
          ├── group_vars
          │   ├── all.yml
          │   ├── mygroup.yml
          │   └── mygroup2.yml
          ├── host_vars
          │   ├── mhost1.yml
          │   └── mhost2.yml
          └── staging_playbook.yml
      

      The content of ansible.cfg is:

      [defaults]
      inventory=hosts

      The content of the hosts/production and hosts/staging files is the same:

      [all]
      [mygroup]
      mhost1

      [mygroup2]
      mhost2

      staging/group_vars/all.yml, mygroup.yml, and mygroup2.yml all contain:

      ansible_user: root
      

      staging/host_vars/mhost1.yml and mhost2.yml both contain (each with its respective IP):

      ansible_host: xxx.xxx.xxx.xx
      

      staging_playbook.yml contains:

      ---
      - hosts: all
        tasks:
          - name: ping all in staging
            ping:

      - hosts: mhost1
        tasks:
          - name: ping mhost1 in staging
            ping:

      - hosts: mhost2
        tasks:
          - name: ping mhost2 in staging
            ping:

      In the production environment, the production_playbook.yml is similar:

      ---
      - hosts: all
        tasks:
          - name: ping all in production
            ping:

      - hosts: mhost1
        tasks:
          - name: ping mhost1 in production
            ping:

      - hosts: mhost2
        tasks:
          - name: ping mhost2 in production
            ping:

      The only differences are in production/host_vars where I have different IP addresses.

      If I run:

      ansible-playbook staging/staging_playbook.yml

      or

      ansible-playbook production/production_playbook.yml

      it all works fine so I guess the architecture is correct.

      Now, my question is: how can I target a specific host in a specific environment with an Ansible ad-hoc command?

      For example:

      ansible mhost1 -i hosts/staging -m ping
      

      which does not work and gives this output:

      mhost1 | UNREACHABLE! => {
          "changed": false,
          "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname mhost1: nodename nor servname provided, or not known",
          "unreachable": true
      }
      

      EDIT:

      I found out that if I move the inventories into their respective environments, like this:

      ├── ansible.cfg
      ├── production
      │   ├── group_vars
      │   │   ├── all.yml
      │   │   ├── mygroup.yml
      │   │   └── mygroup2.yml
      │   ├── host_vars
      │   │   ├── mhost1.yml
      │   │   └── mhost2.yml
      │   ├── hosts
      │   └── production_playbook.yml
      └── staging
          ├── group_vars
          │   ├── all.yml
          │   ├── mygroup.yml
          │   └── mygroup2.yml
          ├── host_vars
          │   ├── mhost1.yml
          │   └── mhost2.yml
          ├── hosts
          └── staging_playbook.yml
      

      and remove from ansible.cfg:

      inventory=hosts
      

      I can execute my ad-hoc commands; however, to run a playbook I then have to specify the inventory, like:

      ansible-playbook staging/staging_playbook.yml -i staging/hosts

      The architecture I found that lets me execute my playbooks per environment without specifying an inventory on the command line, while still allowing ad-hoc commands against a specific host in a specific environment, is this one:

      ├── ansible.cfg
      ├── hosts
      │   ├── production
      │   └── staging
      ├── production
      │   ├── group_vars
      │   │   ├── all.yml
      │   │   ├── mygroup.yml
      │   │   └── mygroup2.yml
      │   ├── host_vars
      │   │   ├── mhost1.yml
      │   │   └── mhost2.yml
      │   ├── hosts
      │   └── production_playbook.yml
      └── staging
          ├── group_vars
          │   ├── all.yml
          │   ├── mygroup.yml
          │   └── mygroup2.yml
          ├── host_vars
          │   ├── mhost1.yml
          │   └── mhost2.yml
          ├── hosts
          └── staging_playbook.yml
      

      This seems weird, since I have an inventory per environment plus an outer inventory containing the same content. What is the proper way to achieve the same thing?
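
      For reference, with the duplicated layout above both invocation styles from the post work; a minimal sketch:

        # Playbooks resolve the top-level hosts/ directory via ansible.cfg
        ansible-playbook staging/staging_playbook.yml

        # Ad-hoc commands point at a per-environment inventory explicitly,
        # which now sits next to that environment's group_vars/host_vars
        ansible mhost1 -i staging/hosts -m ping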

      posted in Continuous Integration and Delivery (CI/CD)
    • Cloud-config in Terraform does not work; nothing seems to happen and it fails silently

      I'm currently provisioning a machine with cloud-config in user_data (I've also tried user_data_base64). In my Terraform configuration I have:

      resource "aws_instance" "web" {
          user_data_base64 = filebase64("${path.module}/scripts/web-init-script.yml")
      

      However, nothing happens; it fails silently. What's the problem? The contents of web-init-script.yml are:

      ❯ cat scripts/db-init-script.sh
      #cloud-config
      package_update: true
      package_upgrade: true

      fqdn: db.acme.com
      prefer_fqdn_over_hostname: true
      hostname: db

      packages:
        - podman
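
      One way to see whether cloud-init ran at all (rather than failing silently) is to inspect its state on the launched instance; a minimal sketch, assuming SSH access:

        # Overall result of the last cloud-init run
        cloud-init status --long

        # Module-by-module output, including package installs
        sudo less /var/log/cloud-init-output.log
        sudo less /var/log/cloud-init.log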

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Cloud-Init Script Won't Run?

      There are a few issues with this cloud-init script.

      • #groups is commented out, so the bare list beneath it makes the file invalid YAML.
      • Even if we uncomment groups, - ubuntu: [root,sys] is invalid because the default user already has a group specification. If you look near the bottom of /etc/cloud/cloud.cfg on a launched instance, you should see something like:
      system_info:
         # This will affect which distro class gets used
         distro: ubuntu
         # Default user name + that default users groups (if added/used)
         default_user:
           name: ubuntu
           lock_passwd: True
           gecos: Ubuntu
           groups: [adm, audio, cdrom, dialout, dip, floppy, lxd, netdev, plugdev, sudo, video]
           sudo: ["ALL=(ALL) NOPASSWD:ALL"]
           shell: /bin/bash
      

      So you either need to remove the ubuntu line from your groups definition, or remove - default from your user definition and define the ubuntu user manually.

      • Your ssh key is also pasted as invalid YAML. Keep it as a single line.
      • In your runcmd, sudo su doesn't work that way in a script. Instead, use sudo -u to run an individual command as an individual user. Additionally, sudo isn't needed for root permissions, as cloud-init user scripts already run as root.

      Fixing all of these issues, we should get a resulting cloud-init script that looks something like this:

      #cloud-config

      # Add the empty group hashicorp.
      groups:
        - hashicorp

      # Add users to the system. Users are added after groups are added.
      users:
        - default
        - name: terraform
          gecos: terraform
          shell: /bin/bash
          primary_group: hashicorp
          sudo: ALL=(ALL) NOPASSWD:ALL
          groups: users, admin
          lock_passwd: false
          ssh_authorized_keys:
            - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCMT6uYfhWx8jOmiGR9ryIPKcWy2ceqvyZ4Q4+q5QTiZtlbWxbP37YZnT8uQhyjB4QRR1cjOyvGKC3Zu0Isy0eHIx2lGm/7B04bsoWWAUqhJmYWMZlivnHtJJJ4P5gnvXiRNmFg9iK07C7ClggNBAQZZHUeA5wcnvvHT/pDkGUjMUqgLvmWRJqJM9qLT717e229F1Fyh+sYtAj08qmcFF1JCs2D33R46RQ8YBMpQqmWLfjuJDUrjdvMu7Mv3aPpaeUWuYoC90iHR9XMeNonrtRlx21nY3CoMZ0AOpeNl999UzyMJrsvN4qm6byK2Pc6jrEyKr9jI8SvMEGdSWgqr/Hd

      # Downloads the golang package
      packages:
        - golang-go

      # Sets the GOPATH & downloads the demo payload
      runcmd:
        - mkdir /home/terraform/go
        - chown terraform:hashicorp /home/terraform/go
        - sudo GOPATH=/home/terraform/go -u terraform go install github.com/hashicorp/learn-go-webapp-demo@latest
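
      As a sanity check, cloud-init can validate a user-data file against the cloud-config schema before launch; a minimal sketch (the file name is illustrative, and on older cloud-init releases the command is cloud-init devel schema):

        cloud-init schema --config-file user-data.yml
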
      posted in Continuous Integration and Delivery (CI/CD)
    • RE: What is this iOS cover shooter called?

      It sounds like the games you describe could well be Epoch (released in November 2011) and Epoch 2 (released in November 2013), both developed by Uppercut Games.

      Many of the details you mention with regard to the story, mechanics, and weapon systems tie in closely with what can be seen in the game trailers and with what is written across the 39 pages of the Epoch wiki. For details of the post-apocalyptic plot and objective, in which, against the backdrop of a devastated cityscape filled with robotic foes, you play a humanoid robot tasked with finding and protecting a princess (Amelia), see https://epoch.fandom.com/wiki/General_story. And for the "sucker punch" weapon you mention, one of a myriad of selectable primary weapons, see https://epoch.fandom.com/wiki/Sucker-Punch.

      posted in Game Testing
    • Is this an accurate description of how armor works across the Dark Souls series in PvE?

      Introduction

      I've seen a number of posts arguing back and forth about the effectiveness of armor across the Dark Souls series, but I've never found anything that spells out the actual math. Do the descriptions below accurately capture how armor works in the different games?

      I'm looking to confirm (or correct) my understanding of the mechanics to help me evaluate different build options. For example, if I feel like I'm taking a lot of damage while wearing the Leather Set, knowing these calculations would help me figure out what kind of difference I might expect from using the Elite Knight Set instead.

      Dark Souls 1

      In DS1, armor is flat damage reduction. Suppose that a monster hits me with a Strike attack that deals 300 dmg. I'm an SL 4 Warrior wearing a full set of Elite Knight armor +10. That gives me Strike Def of 244, so I take 300 - 244 = 56 damage. Is that correct?

      Dark Souls 2

      In DS2, armor is still flat damage reduction, but the numbers are generally bigger. Suppose that a monster hits me with a Strike attack that deals 800 dmg. I'm an SL 12 Warrior wearing a full set of Elite Knight armor. That gives me Strike Def of 565, so I take 800 - 565 = 235 damage. Is that correct?

      Dark Souls 3

      In DS3, armor is a combination of flat damage reduction and % mitigation. Suppose that a monster hits me with a Strike attack that deals 300 dmg. I'm an SL 7 Warrior wearing a full set of Elite Knight armor. That gives me Strike Def of 87 and a Strike Mitigation % of 20.952, so I take (300 - 87) * (1 - 0.20952) = 168.37224 damage. Is that correct?

      Other Questions

      My understanding is that some attacks do mixed damage, such as a flaming sword dealing both slash and fire damage. How do those damage calculations work?

      posted in Game Testing