    SOFTWARE-TESTING.COM

    Kadir

    @Kadir

    Reputation: 0
    Posts: 29746
    Profile views: 3
    Followers: 0
    Following: 0

    Best posts made by Kadir

    This user hasn't posted anything yet.

    Latest posts made by Kadir

    • Creating a hostgroup from a super-set of hosts

      I have two inventory files, each containing different hosts, host groups, and a super-set group, like below.

      /etc/hostFiles/TestBoxes

      [TestBox:children]
      groupA
      groupB
      

      [groupA]
      ...
      ...

      [groupB]
      ...
      ...

      /etc/hostFiles/ProdBoxes

      [ProdBox:children]
      groupPA
      groupPB
      

      [groupPA]
      ...
      ...

      [groupPB]
      ...
      ...

      Now I would like to create a primary host group that contains both TestBox and ProdBox.

      Please let me know if this is possible.

      Note: this is just an example... I have 100+ files like this, and I want to create the primary group from only a few of those files. hosts=all is not my requirement.
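
      A sketch of one way this could work (the zz_PrimaryBoxes file name and the idea of pointing Ansible at the whole inventory directory are my own assumptions, not something given above):

      /etc/hostFiles/zz_PrimaryBoxes

      # Hypothetical extra file in the same inventory directory. Ansible loads the
      # files of an inventory directory in name order and merges them, so this file
      # needs a name that sorts after the files defining TestBox and ProdBox,
      # otherwise those child groups are not defined yet when it is parsed.
      [PrimaryBox:children]
      TestBox
      ProdBox

      A play could then target hosts: PrimaryBox. Alternatively, passing only the relevant files with repeated -i flags (with the parent-group file last) keeps the other 100+ files out of the run entirely.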

      posted in Continuous Integration and Delivery (CI/CD)
    • How to determine which files are ignored by a .helmignore file?

      I'm installing a local Helm chart; however, I keep getting the error Error: UPGRADE FAILED: create: failed to create: Request entity too large: limit is 3145728. From searching other SO/Stack Exchange questions, this is typically caused by unnecessary files mistakenly included in the chart. The way to resolve this is to add those entries to a .helmignore file ( https://helm.sh/docs/chart_template_guide/helm_ignore_file/ ).

      My chart has a .helmignore file, which should be excluding all unnecessary artifacts within my chart, but I'm still getting the error. So my thought is that my .helmignore entries aren't quite targeting the files that they should be.

      I've tried running the upgrade with the --debug flag (it didn't show anything more interesting):

      upgrade.go:139: [debug] preparing upgrade for chartname
      upgrade.go:520: [debug] copying values from chartname (v11) to new release.
      upgrade.go:147: [debug] performing update for chartname
      upgrade.go:319: [debug] creating upgraded release for chartname
      Error: UPGRADE FAILED: create: failed to create: Request entity too large: limit is 3145728
      helm.go:88: [debug] Request entity too large: limit is 3145728ffff
      create: failed to create
      helm.sh/helm/v3/pkg/storage/driver.(*Secrets).Create
              helm.sh/helm/v3/pkg/storage/driver/secrets.go:164
      helm.sh/helm/v3/pkg/storage.(*Storage).Create
              helm.sh/helm/v3/pkg/storage/storage.go:69
      helm.sh/helm/v3/pkg/action.(*Upgrade).performUpgrade
              helm.sh/helm/v3/pkg/action/upgrade.go:320
      helm.sh/helm/v3/pkg/action.(*Upgrade).RunWithContext
              helm.sh/helm/v3/pkg/action/upgrade.go:148
      main.newUpgradeCmd.func2
              helm.sh/helm/v3/cmd/helm/upgrade.go:200
      github.com/spf13/cobra.(*Command).execute
              github.com/spf13/cobra@v1.2.1/command.go:856
      github.com/spf13/cobra.(*Command).ExecuteC
              github.com/spf13/cobra@v1.2.1/command.go:974
      github.com/spf13/cobra.(*Command).Execute
              github.com/spf13/cobra@v1.2.1/command.go:902
      main.main
              helm.sh/helm/v3/cmd/helm/helm.go:87
      runtime.main
              runtime/proc.go:255
      runtime.goexit
              runtime/asm_arm64.s:1133
      UPGRADE FAILED
      main.newUpgradeCmd.func2
              helm.sh/helm/v3/cmd/helm/upgrade.go:202
      github.com/spf13/cobra.(*Command).execute
              github.com/spf13/cobra@v1.2.1/command.go:856
      github.com/spf13/cobra.(*Command).ExecuteC
              github.com/spf13/cobra@v1.2.1/command.go:974
      github.com/spf13/cobra.(*Command).Execute
              github.com/spf13/cobra@v1.2.1/command.go:902
      main.main
              helm.sh/helm/v3/cmd/helm/helm.go:87
      runtime.main
              runtime/proc.go:255
      runtime.goexit
              runtime/asm_arm64.s:1133
      

      I also tried it with the --dry-run flag and the chart succeeded. So at this point I'm not sure how to find what's bloating my chart.

      How can I tell which files are actually getting ignored (or which are included) when I run a helm install or helm upgrade?
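
      One idea I haven't verified yet: helm package is supposed to apply .helmignore when it builds the chart archive, so packaging the chart locally and listing the archive contents should show what actually gets included (the ./mychart path below is illustrative):

      # Build the chart archive locally; .helmignore is applied at this step.
      helm package ./mychart
      # List what ended up inside the packaged chart, to spot anything large that
      # the .helmignore entries failed to exclude.
      tar -tzvf mychart-*.tgz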

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Trunk Based Development Deployment Pipeline

      There are a few changes that I'd make.

      First, I'd get rid of the QA sign-off on dev before cutting a release branch. I'd look at methods to instill a culture of developer-led testing (especially if that means developing automated tests) on your trunk. Of course, this doesn't mean that your testers shouldn't be using the trunk deployed to the development environment - they can be practicing any manual test cases or doing exploratory testing and giving feedback.

      Second, if you haven't, I'd look at automating the end-to-end testing done in the UAT environment, at least from a regression standpoint. You may want to do some manual testing, especially of an exploratory nature, in UAT, but you want to reduce the burden of manual testing, especially as the system increases in complexity.

      It may not matter much, but I would recommend fixing defects in the release branch and merging back into the trunk. You could also cherry-pick, rebase, or use some other method, but I've found that going from the release branch back to the development branch is more intuitive.

      Once you have a sign off on the release, apply a tag. You can either get the state of the code back into trunk and deploy the tag in the trunk or you can deploy the head of the release branch. Both are the same thing.
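
      As a rough sketch of that flow (branch and tag names here are purely illustrative):

      # Fix the defect on the release branch, then merge it back into the trunk.
      git checkout release/1.4
      git commit -am "Fix defect found during UAT"
      git checkout main
      git merge release/1.4

      # Once the release is signed off, tag the head of the release branch and deploy that tag.
      git tag -a v1.4.0 -m "UAT signed off" release/1.4
      git push origin v1.4.0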

      If you have multiple UATs in progress at once, then you may need multiple environments. However, you're also introducing complexity around a defect found in UAT2 that also impacts UAT1, and around keeping those fixes synchronized. I'd want to understand what makes UAT take so long, and what can be done to get an accepted system into production faster so that parallel UATs become less necessary.

      posted in Continuous Integration and Delivery (CI/CD)
    • Is there aws-vault kind of tool for GCP?

      I would like to keep my tokens encrypted in my operating system’s keychain and use them easily with gcloud CLI.

      So, does something like https://github.com/99designs/aws-vault exist for GCP?

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Variable for Terraform Workspace name?

      The variable I ended up using was

      ${terraform.workspace}
      

      You can find more information about it under "Current Workspace Interpolation"

      • https://www.terraform.io/language/state/workspaces#current-workspace-interpolation
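
      For example (the resource and tag names below are just illustrative), it can be interpolated anywhere you need per-workspace values:

      resource "aws_s3_bucket" "artifacts" {
        # terraform.workspace evaluates to "default", "staging", "prod", etc.
        bucket = "myapp-artifacts-${terraform.workspace}"

        tags = {
          Environment = terraform.workspace
        }
      }
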
      posted in Continuous Integration and Delivery (CI/CD)
    • RE: integrate sonarqube with kubernetes

      Assuming you created your sonarqube Service in the default namespace, and are using SonarQube's default ports, your portal URL would be http://sonarqube.default.svc:9000.
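
      A quick way to sanity-check that name from inside the cluster (the busybox image and pod name here are just illustrative):

      # Run a throwaway pod and resolve the Service's cluster-internal DNS name.
      kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
        nslookup sonarqube.default.svc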

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How to figure out optimum location for server for least latency to a target service?

      I think the geolocation of an IP address is generally quite accurate - this is from 15 years of my own anecdotal experience. I can only recall having seen an IP address geolocated to the "wrong" location once, some time in the 2000s.

      In terms of which region is "nearest", it seems geography is secondary to geopolitics and to how fibre cables are laid.

      This GitHub project, https://github.com/turnkeylinux/aws-datacenters , maintains a list of fibre cables and AWS locations. I've extended it before to cover the latest AWS regions, and it can be used for other public cloud providers as well.

      Some (maybe) surprising latency results I found working with people:

      • Norway seems to have better latency to North America than to central Europe (feedback from an in-country technical person)
      • Israel has better latency to Frankfurt than to Bahrain or Mumbai

      Also, given this story and others that came out as part of the fallout from the Snowden releases ( https://www.dw.com/en/call-me-a-german-satellite-and-internet-company-wants-answers-from-the-nsa/a-17571811 ), I would assume that for low latency to most of Northern Africa and the western Middle East, you are best off choosing a server location in Germany or Italy.
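
      If you want to sanity-check the "nearest region" question empirically rather than from maps, one rough approach is to time TCP connections to each provider's regional endpoints from where your users actually are (the AWS endpoint pattern and region list below are only illustrative):

      # Approximate the RTT to a few AWS regions via TCP connect time.
      for region in eu-central-1 eu-north-1 eu-south-1 me-south-1; do
        t=$(curl -s -o /dev/null -w '%{time_connect}' "https://ec2.${region}.amazonaws.com/")
        echo "${region}: ${t}s"
      done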

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How can I efficiently scale a data lake?

      You're definitely not alone in this problem; it's well known. So much so that there are articles written about it, e.g. https://blog.eduonix.com/bigdata-and-hadoop/improve-dataops-with-dynamic-indexing-in-data-lake/ . I personally have had issues with partitioning, queries failing completely, and overall poor performance. The aforementioned article mentions Varada, which I think could be a good solution. How it works:

      They break a large dataset down into what they call nano blocks of 64k rows each. Their technology looks at each nano block (and thus the original dataset) and automatically chooses an index for each nano block.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Why can't I scan the batgirl suit as a solution to Riddler's riddle?

      I'm an idiot. I had to press and hold the scan key (X in my controls) to scan it, not just press X once.

      posted in Game Testing
    • RE: What are the odds for a Pokemon being shiny in Pokemon Scarlet Violet?

      Just like most modern games, Scarlet and Violet share base Shiny odds of 1 in 4096 ( https://i.stack.imgur.com/m0BM0.jpg ).

      Using certain Sandwich recipes, players can activate something called Sparkling Power, a boost that will passively increase Shiny odds, with the maximum reached at level 3.

      The Shiny Charm also makes a return in Scarlet and Violet, further boosting the odds (see the table below).

      The final method for encountering wild Shinies with individually boosted odds is Mass Outbreaks, which work similarly to how they did in Legends: Arceus. Clearing Pokémon during a Mass Outbreak is how you increase Shiny odds, with modifiers kicking in after you clear 30 Pokémon and 60 Pokémon.

      These are the combined odds:

      Bonus                                  Standard rate    Mass Outbreak rate
      Base                                   1 in 4096        1 in 2048 (30 pkmn), 1 in 1365.67 (60 pkmn)
      Shiny Charm                            1 in 1365.67     1 in 1024 (30 pkmn), 1 in 819 (60 pkmn)
      Sparkling Power lv. 3                  1 in 1024        1 in 819 (30 pkmn), 1 in 683 (60 pkmn)
      Sparkling Power lv. 3 & Shiny Charm    1 in 683         1 in 585 (30 pkmn), 1 in 512 (60 pkmn)
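
      These combined rates are consistent with treating each bonus as extra independent shiny rolls on top of the single base roll (Shiny Charm +2, Sparkling Power lv. 3 +3, Mass Outbreak +1 after 30 clears and +2 after 60): with n rolls the chance is 1 - (4095/4096)^n, so for example 3 rolls works out to roughly 1 in 1365.67 and 8 rolls to roughly 1 in 512, matching the table.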

      Source: https://dotesports.com/pokemon/news/what-are-the-shiny-odds-in-pokemon-scarlet-and-violet , pretty much confirmed by https://www.serebii.net/scarletviolet/shinypokemon.shtml .

      posted in Game Testing