    SOFTWARE TESTING
    shizuka

    @shizuka

    Reputation: 0
    Posts: 29558
    Profile views: 3
    Followers: 0
    Following: 0


    Best posts made by shizuka

    This user hasn't posted anything yet.

    Latest posts made by shizuka

    • Trunk Based Development Deployment Pipeline

      We are currently working on transitioning to Trunk Based Development and starting to look at our deployment pipeline and how we can improve.

      Our current workflow:

      1. All engineers work on trunk, committing frequently; every commit is automatically deployed to our dev environment

      2. When QA signs off on dev, we generate a release by cutting a release branch (releases/v1.0) from trunk. This is a manual approval step: once approved, the release branch is created, pushed to the repo, and deployed to our UAT environment.

      3. The test team (QE) then performs E2E testing in UAT and requests that no new code be merged, apart from cherry-picks from trunk for P1 defects identified. Devs continue working on trunk, fixing other P2/P3 defects and adding new features.

      4. Once QE signs off on UAT, that snapshot can be promoted to the next environments, if applicable, and eventually to PROD.

      Questions/Problems

      In this approach, UAT is updated from the release branch. How can we handle getting sign-off on future work (P2, P3, features) in UAT? Do we need a separate stage for deploying from DEV?

      Basically, I am trying to figure out the best approach to handling releases when there is a long gap between sign-off and the actual release. This will change in the future, as we plan to introduce feature flags and eventually aim for true CI/CD.

      We currently use Azure DevOps.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: cosmosdb_account virtual_network_rule for_each

      A dynamic block does not use the each keyword; it uses the value of the iterator argument instead. If that argument is omitted, the variable name defaults to the label of the dynamic block. Or, as https://developer.hashicorp.com/terraform/language/expressions/dynamic-blocks states it:

      The iterator argument (optional) sets the name of a temporary variable that represents the current element of the complex value. If omitted, the name of the variable defaults to the label of the dynamic block.

      It's generally fine to omit the iterator argument and just use the (ideally unique) label of the block. In your case, replace each.value with virtual_network_rule.value, so you get:

      dynamic "virtual_network_rule" {
        for_each = var.virtual_network_subnet_ids

        content {
          id = virtual_network_rule.value
        }
      }

      If you do want to use the iterator argument, it would look something like this:

      dynamic "virtual_network_rule" {
        for_each = var.virtual_network_subnet_ids
        iterator = rule

        content {
          id = rule.value
        }
      }

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Escape quotes and commas in Docker volume paths using bind-mount syntax

      If you have a directory named te,"st, it's trivial to work with using -v: just quote the path, keeping in mind that a double quote inside double quotes needs escaping. So this works:

      $ mkdir te,\"st
      $ touch te,\"st/file1
      $ docker run --rm -v "$PWD/te,\"st:/data" alpine ls /data
      file1
      

      For --mount it's a little trickier. From https://github.com/docker/cli/issues/1480 we know that the argument to --mount is parsed using CSV syntax, so we can use typical CSV escaping to take care of things: quote fields that contain commas, and double the quote character ("") to escape quotes:

      $ docker run --rm --mount "type=bind,\"src=$PWD/te,\"\"st\",target=/data" alpine ls /data
      file1
      
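      You can check that CSV reading outside of docker with a small Python sketch. This only illustrates the escaping rule; the /work path prefix below is made up to stand in for $PWD:

```python
import csv
import io

# docker parses the --mount argument as one CSV record, so standard CSV
# escaping applies: quote fields that contain commas, and double the quote
# character ("") to represent a literal quote inside a quoted field.
# Directory name from the post: te,"st  (the /work prefix is hypothetical)
mount_arg = 'type=bind,"src=/work/te,""st",target=/data'

# Parse the argument the way a CSV reader would split it into fields:
fields = next(csv.reader(io.StringIO(mount_arg)))
print(fields)  # ['type=bind', 'src=/work/te,"st', 'target=/data']
```

      The comma inside the directory name survives because it sits inside a quoted field, and the doubled "" collapses back to a single quote.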
      posted in Continuous Integration and Delivery (CI/CD)
    • Vscode/pytest gives me an error when importing

      I asked this question on the regular Stack Overflow, but I think you will be much more competent on this topic. My guess is that it is some sort of Python setup issue.

      When I write an import line like from mypackage import something and run it using the VS Code tools, it gives me an error; when I run it using the venv's python, it works. The same goes for pytest: I have to run it as python3 -m pytest tests for it to work. The main thing I'm trying to accomplish is being able to run/debug stuff from VS Code.

      The steps I took for it to work:

      • Create a venv & activate it
      • Turn my project into a package
        • Create setup.py
        • pip3 install -e .
      • Create a launch.json file

      The launch.json file:

      {
        "version": "0.2.0",
        "configurations": [
          {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "args": ["-q", "data"]
          }
        ]
      }
      

      And the setup file:

      from setuptools import setup

      setup(
          name='mypackage',
          version='0.1.0',
          packages=['mypackage'],
          scripts=['bin/script'],
          license='LICENSE.txt',
      )

      Expected behaviour:

      • pytest not to give an import error
      • vscode not to give an import error

      What I get:

      • It works only when I explicitly run it like this: python3 main.py

      I've been trying to solve it for a solid hour. Many, many thanks for helping!
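      A quick diagnostic (not a fix): run this both from the VS Code launcher and from the activated venv. If the two runs print different interpreter paths, VS Code has picked the wrong Python. The "json" name below is a stdlib stand-in for the package you'd actually check ("mypackage" in my case):

```python
import importlib.util
import sys

# Which interpreter is actually running? If this differs between
# "run from VS Code" and "run from the activated venv", the VS Code
# interpreter setting points at the wrong python.
print(sys.executable)

# Is the package importable from this interpreter's sys.path?
# Replace "json" (a stdlib stand-in) with your own package name.
spec = importlib.util.find_spec("json")
print(spec is not None)  # True when the package is importable
```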

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How do I run a CI build in a docker image matching the current `Dockerfile` while being resource-aware?

      Does this procedure make sense as a solution to my use case?

      VS Code devcontainers are one of the greatest inventions in the DevOps space, and I can't believe more people are not talking about them. Having the local dev environment and the pipeline in sync makes a lot of sense, though I think the procedure you outlined above could be simplified.

      I can't imagine being the first to realize this use case. Is there an existing mechanism (as part of the docker utilities, as part of the Azure DevOps Pipelines framework or from something completely different) that fulfills my needs?

      I'm currently doing this in some of my own projects, for example https://github.com/DontShaveTheYak/terraform-module-template/blob/main/.github/workflows/test.yml. It is also supported in Azure DevOps: https://github.com/devcontainers/ci/blob/main/docs/azure-devops-task.md.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How can I retrieve a lost login token for KubeApps?

      When you set up a KubeApps deployment you must create a Kubernetes secret; https://kubeapps.dev/ details it like this:

      
      kubectl create clusterrolebinding kubeapps-operator --clusterrole=cluster-admin --serviceaccount=default:kubeapps-operator

      cat <<EOF | kubectl apply -f -
      apiVersion: v1
      kind: Secret
      metadata:
        name: kubeapps-operator-token
        namespace: default
        annotations:
          kubernetes.io/service-account.name: kubeapps-operator
      type: kubernetes.io/service-account-token
      EOF

      kubectl get --namespace default secret kubeapps-operator-token -o go-template='{{.data.token | base64decode}}' && echo

      So basically it's a service-account token (https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) created in the default namespace. You can list these secrets like this:

       kubectl get --namespace default secrets
      

      You specifically want the kubeapps-operator-token, and you can get it with the last line:

      kubectl get --namespace default secret kubeapps-operator-token -o go-template='{{.data.token | base64decode}}'
      

      The -o go-template='{{.data.token | base64decode}}' part is basically the Go-template equivalent of piping the base64-encoded value to base64 -d:

      kubectl get --namespace default secret kubeapps-operator-token -o jsonpath='{.data.token}' | base64 -d
      
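      To see why the decode step is needed at all: Kubernetes stores Secret values base64-encoded, so .data.token holds the encoded form. A minimal Python sketch with a made-up token value:

```python
import base64

# Kubernetes returns Secret data base64-encoded; this is the form
# .data.token has before base64decode / `base64 -d` is applied.
stored = base64.b64encode(b"example-token-value").decode()  # hypothetical token

# Equivalent of go-template's base64decode or piping to `base64 -d`:
token = base64.b64decode(stored).decode()
print(token)  # example-token-value
```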
      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Does an AWS service automatically assume a needed IAM role?

      I think you may have this a little backwards.

      S3 is a service that you may want to access with a role. S3 would not be accessing anything; things access S3.

      So, if you made IAM role ABC, you could set up policy to allow it to list and write to a specific S3 bucket, for example.

      You could also make role ABC assume-able by another role. In that case, you may have a server with default IAM role (instance profile) XYZ and the assume role policy can state that XYZ can assume ABC, which would then let it access S3.

      An entity acts as just one role at a time. So, once your server has assumed ABC from XYZ, it effectively is just ABC.

      You can also assume roles from IAM users, but IAM users are generally bad practice, as their long-lived credentials are easy to lose.

      Assuming a role is a very explicit operation, you have to do it on purpose. Some programs may "seem" like they do it easily by configuration, but in reality if they are assuming a role, they are running very specific code for it, similarly to what you would do to assume the role from a command line.

      Example Tutorial

      Here is a decent AWS tutorial for assuming roles in a CLI to get familiar with it.

      https://aws.amazon.com/premiumsupport/knowledge-center/iam-assume-role-cli/

      The final command looks like this:

      aws sts assume-role --role-arn \
      "arn:aws:iam::123456789012:role/example-role" \
      --role-session-name AWSCLI-Session
      

      But that assumes role arn:aws:iam::123456789012:role/example-role is set up to allow your current role (whatever it is) to assume it. Note that role assumption within a single account only requires the target role to allow it, but role assumption between two AWS accounts requires both the source and target roles to be set up to allow it.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Unable to connect from my local system to ec2 instance created by terraform script

      The issue was that the default route was missing in the routing table.

      resource "aws_route" "update" {
          provider               = aws.docdb_peer
          route_table_id         = "${aws_vpc.docdb_peer.default_route_table_id}"
          destination_cidr_block = "0.0.0.0/0"
          gateway_id             = "${aws_internet_gateway.gw_connect.id}"
      }
      

      Adding this solved the issue.

      posted in Continuous Integration and Delivery (CI/CD)
    • Is there a way to get message in Teams and Azure DevOps when someone mentions me in a comment in a Work Item or Pull Request?

      Usually Azure DevOps sends the notifications as email, but I want to:

      • Get a message in a chat in https://www.microsoft.com/en-us/microsoft-teams/group-chat-software
      • Get a notification inside the Azure DevOps environment (for example in top bar)

      when someone mentions me in a comment in a Work Item or Pull Request.

      Is there an extension or a special way to get these features?

      posted in Continuous Integration and Delivery (CI/CD)
    • What regex engine does GitLab use?

      Let's say I have a rule like this,

      - if: '$CI_COMMIT_TITLE =~ /^(fix|feat|perf|docs|build|test|ci|refactor)\S*:/'
      

      It occurs to me that ^ does not match the start of every line of a multiline string, only the start of the first line. That raises the questions:

      • Does anything match the newline?
      • Is there a multi-line regex mode?

      Where is the GitLab Regex documented? What Regex implementation do they use?
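      For comparison, Python's re module shows the anchoring behavior I'm describing (GitLab's engine may well differ; this is only an illustration of single- vs multi-line ^, with a made-up two-line title):

```python
import re

commit_title = "feat: add login\nfix: typo"  # hypothetical two-line string

# Without a multiline flag, ^ anchors only at the very start of the string:
print(bool(re.search(r"^fix:", commit_title)))       # False

# With the inline (?m) flag (or re.MULTILINE), ^ also matches
# immediately after every newline:
print(bool(re.search(r"(?m)^fix:", commit_title)))   # True
```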

      posted in Continuous Integration and Delivery (CI/CD)