    Mystic

    @Mystic

    Reputation: 0
    Posts: 29999
    Profile views: 2
    Followers: 0
    Following: 0


    Best posts made by Mystic

    This user hasn't posted anything yet.

    Latest posts made by Mystic

    • Value of succeeded() in Azure DevOps pipeline before first stage is run

      In an Azure DevOps pipeline, what is the value of Agent.JobStatus before any stages have been run? Is it initialized to 'Succeeded' or is it initially undefined or set to null or something like that, only to be set to 'Succeeded' after the first successful execution?

      In other words:

      • If I include succeeded() (which is the same thing as in(variables['Agent.JobStatus'], 'Succeeded', 'SucceededWithIssues')) in a condition for running the first stage, is it going to return true even though there have been no previous stages to return a successful result?
      • If I include succeeded() in a condition for running the second stage, where the first stage had a condition that evaluated to false so the first stage was skipped, will it be true even though no previous stage has executed and returned a successful result?

      The way I'd like to think it works is that succeeded() will return true in both places where I use it in the following code, where I want the first stage to run if the original triggering repo branch was 'main' and the second stage to run if the original branch was 'release/*'. (This is a slightly simplified version of the actual logic.) Am I correct?

      stages:
        - template: my-template.yml
          condition: and(succeeded(), eq(variables['resources.pipeline.previousPipeline.SourceBranch'], 'refs/heads/main'))
          parameters:
            ...
        - template: my-template.yml
          condition: and(succeeded(), startsWith(variables['resources.pipeline.previousPipeline.SourceBranch'], 'refs/heads/release/'))
          parameters:
            ...
      

      Alternatively, if succeeded() doesn't work in this manner, would I get what I need if, instead, I were to use not(failed())?
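
      For clarity, the not(failed()) variant I have in mind would keep the same structure as above (a sketch only, mirroring my simplified example):

      stages:
        - template: my-template.yml
          condition: and(not(failed()), eq(variables['resources.pipeline.previousPipeline.SourceBranch'], 'refs/heads/main'))
          parameters:
            ...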

      posted in Continuous Integration and Delivery (CI/CD)
    • How to keep an overview of the entire lifecycle of backlog items

      I'm working as a product owner in a software house. For the project I'm taking care of, we're using Azure DevOps to manage our backlog. The team consists of 3 developers + me as product owner / part time developer.

      By default, Azure DevOps offers the following status values for Bugs and Product Backlog Items:

      • New
      • Approved
      • Committed
      • Done

      For me as product owner, these status values turn out not to be enough to keep an overview of the entire lifecycle of the bugs / product backlog items, because I'm missing an overview of features / backlog items / bugs

      • whose quote still has to be approved by the customer
      • that have been released to the integration environment, but must still be approved
      • that have been approved
      • that have been released to the production environment.

      Long story short, the status values in Azure DevOps cover only part of the lifecycle of the backlog items.

      To get a better overview, I'd need additional status values, like

      Statuses before development starts

      • Customer requires a feature
      • We sent the quote for the feature to the customer
      • Customer approved the quote

      Statuses after development is done

      • Feature / bugfix is released to the integration environment
      • Customer approved the feature / bugfix
      • Feature / bugfix is released to the production environment

      I've read about several possibilities to achieve this, e.g.

      • Introducing additional status values (some people dissuaded me from this option)
      • Working with Tags
      • Working with Area paths
      • Working with Custom Fields

      I strongly assume that I'm not the only one facing this issue. What would you recommend, considering your experience so far?

      posted in Continuous Integration and Delivery (CI/CD)
    • Azure DevOps solution for max execution time

      Azure DevOps has a 60-minute max execution time for pipelines, but sometimes my deployment takes much longer than that. The pipeline times out and, since it is a timeout, no further tasks are run in the pipeline, not even the ones that send an email/message about the error. I cannot change the timeout (budget issue).

      I am thinking of starting the deployment process (it is async), finishing the pipeline with a status other than success, and using a trigger (I can create a trigger on the system I am deploying to, to do something when the deploy finishes) to call the Azure DevOps API and change the status of the pipeline to success/failed and/or start another pipeline to run the post-deploy steps.

      Is this overengineering? Is there a better way to solve this problem?
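
      Roughly what I have in mind for the follow-up pipeline (a sketch only; the file, parameter and step names are made up, and it would be queued by the system I deploy to through the "Runs - Run Pipeline" REST API, POST .../_apis/pipelines/{pipelineId}/runs):

      # post-deploy.yml - never triggered by a push, only started on demand
      trigger: none

      parameters:
        - name: deploymentResult    # value reported by the deployed system
          type: string
          default: succeeded

      steps:
        - script: echo "Deployment finished with status ${{ parameters.deploymentResult }}"
          displayName: Post-deploy notifications / checks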

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Run docker-in-docker container alongside Jenkins agent

      I wrote about this at https://www.rokpoto.com/jenkins-docker-in-docker-agent/ which you may find useful.

      The article answers your questions in detail.

      I'll answer them briefly here as well:

      Why is it running as two containers?

      The first one is the container with the Docker client. It's basically the Jenkins agent, because it extends the jenkins/jnlp-agent-docker base image. The second one is the container with the Docker daemon inside.

      I install from the official Helm chart. I could not find a good way to add another container there yet, without overwriting the whole pod template.

      You need to put a reference to the docker-in-docker agent under the additionalAgents key in the Helm chart's values.yaml. See the example configuration at https://www.rokpoto.com/jenkins-docker-in-docker-agent/#use-docker-in-docker-agent .
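
      Very roughly, the values.yaml entry looks like this (a sketch only; the key names follow the chart's agent block and differ a bit between chart versions, and adding the dind sidecar through yamlTemplate is just one way to do it):

      additionalAgents:
        dind:
          podName: dind
          customJenkinsLabels: dind
          image: jenkins/jnlp-agent-docker     # the agent container with the Docker client
          tag: latest
          envVars:
            - name: DOCKER_HOST
              value: tcp://localhost:2375      # points at the sidecar daemon
          # merge the Docker daemon container into the agent pod
          yamlTemplate: |-
            spec:
              containers:
                - name: dind-daemon
                  image: docker:dind
                  securityContext:
                    privileged: true
                  env:
                    - name: DOCKER_TLS_CERTDIR
                      value: ""                # disable TLS so tcp://localhost:2375 works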

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Cannot start Kubernetes Dashboard

      The URL you're querying ( http://192.168.1.133:8001/api/v1/namespaces/default/services/https:kubernetes-dashboard:https/proxy ) is wrong.

      According to the last yaml file you applied ( https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml ), and as we can guess from your kubectl get pods -A: the kubernetes-dashboard Service is located in the kubernetes-dashboard namespace, not the default namespace.

      Although, if you just want to connect to the Kubernetes dashboard: instead of the kubectl proxy command you ran, I would go with kubectl port-forward -n kubernetes-dashboard deploy/kubernetes-dashboard 8443:8443, then open my browser at https://localhost:8443


      Then, there's the case of your SDN. In your kubectl get pods, we can see that the kube-flannel pod, in the kube-flannel namespace, is in Error.

      Look at the logs for this container, and try to figure out why it does not start (kubectl logs -n kube-flannel kube-flannel-ds-xxxx -p).

      It's been years since I last set up flannel, although I remember that in addition to applying their RBAC & DaemonSet yamls, I also had to patch nodes, allocating each one a CIDR. E.g.: kubectl patch node my-node-1 -p '{ "spec": { "podCIDR": "10.32.3.0/24" } }' --type merge (each podCIDR must be unique; each node gets its own range for hosting Pods. If I'm not mistaken, each podCIDR must be a subset of flannel's net-conf.json Network subnet -- look at the ConfigMap created while installing flannel).
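
      For reference, that ConfigMap looks roughly like this (the values below are illustrative; check the one actually deployed in your cluster):

      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: kube-flannel-cfg
        namespace: kube-flannel
      data:
        net-conf.json: |
          {
            "Network": "10.32.0.0/16",
            "Backend": { "Type": "vxlan" }
          }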

      And as for your last comment: the error tells us the following

      Failed to create SubnetManager: error retrieving pod spec for 'kube-flannel/kube-flannel-ds-f6bwx': pods "kube-flannel-ds-f6bwx" is forbidden: User "system:serviceaccount:kube-flannel:flannel" cannot get resource "pods" in API group "" in the namespace "kube-flannel"
      

      Looking back at the files you used to set up flannel ( https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml , then https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml ): to fix your SDN, you may want to create the following:

      kind: ClusterRoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: flannel-fix
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: flannel
      subjects:
        - kind: ServiceAccount
          name: flannel
          namespace: kube-flannel
      

      And for the record: the kube-flannel-rbac manifest is not necessary in your case. It would be, had you installed flannel from their legacy manifest ( https://github.com/flannel-io/flannel/blob/master/Documentation/k8s-manifests/kube-flannel-legacy.yml ). In your case, the ClusterRoleBinding we're fixing should have been created properly just by applying https://github.com/flannel-io/flannel/blob/master/Documentation/kube-flannel.yml

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Is it possible to create multiple tags out from docker-compose?

      Wouldn't this be better using docker build rather than docker compose? I'm pretty sure compose is used to spin things up together, which isn't your use case. In which case, yes, docker build supports many tags (you can pass -t more than once).

      posted in Continuous Integration and Delivery (CI/CD)
    • AKS Network Policy - cannot deny traffic to namespace

      I am trying to implement Network Policies in my test Azure Kubernetes cluster, but I cannot get them to work. I have two namespaces - default and nginx (and others as well, but they shouldn't be affecting the NP).

      I have an nginx deployment in each namespace that displays a webpage with some text on '/'. (I have modified the pages slightly so I can recognize which one I'm hitting.) I also have a ClusterIP service for each deployment. I deployed a Deny All Network Policy in the nginx namespace that targets all pods inside it. However, when I open a shell inside the nginx pod in the default namespace and run curl http://servicename.namespace.svc:serviceport (which resolves to the service inside the nginx namespace), I can still access the pod despite the Network Policy rule.

      Here are my manifests:

      • nginx in the nginx namespace:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: svet-nginx-deployment
            namespace: nginx
          spec:
            selector:
              matchLabels:
                app: nginx
            replicas: 1
            template:
              metadata:
                labels:
                  app: nginx
              spec:
                containers:
                - name: nginx
                  image: /samples/nginx
                  ports:
                  - containerPort: 80
                  volumeMounts:
                  - name: config-volume
                    mountPath: /usr/share/nginx/html
                volumes:
                - name: config-volume
                  configMap:
                    name: svet-config
      
      • ClusterIP service in the nginx namespace:
      apiVersion: v1
      kind: Service
      metadata:
        name: ingress2
        namespace: nginx
      spec:
        selector:
          app: nginx
        ports:
          - protocol: TCP
            port: 80
            targetPort: 80
      
      • Network Policy in the nginx namespace

      I got this one from GitHub, but I also tested with the "Default deny all ingress traffic" example from the official Kubernetes docs: https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-ingress-traffic

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: default-deny-ingress
        namespace: nginx
      spec:
        podSelector: {}
        policyTypes:
        - Ingress
        ingress: []
      
      • nginx in the default namespace:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: svet-nginx-deployment
        namespace: default
      spec:
        selector:
          matchLabels:
            app: nginx
        replicas: 1
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: .azurecr.io/samples/nginx
              ports:
              - containerPort: 80
              volumeMounts:
              - name: config-volume
                mountPath: /usr/share/nginx/html
            volumes:
            - name: config-volume
              configMap:
                name: svet-config
      

      • ClusterIP service in the default namespace:
      apiVersion: v1
      kind: Service
      metadata:
        name: ingress1
        namespace: default
      spec:
        selector:
          app: nginx
        ports:
          - protocol: TCP
            port: 80
            targetPort: 80
      

      Please ignore the bad naming - this is only a training environment

      I have tried many different iterations of the Network Policy, starting with more complex ones and moving to the simplest deny-all policy I pasted above, but nothing seems to work. I have enabled Azure CNI as required.

      Am I missing something?

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Does Terraform provide a mechanism to find the provider version?

      You can find the version of your providers with terraform version:

      ❯ terraform version
      Terraform v1.1.9
      on linux_amd64
      + provider registry.terraform.io/hashicorp/aws v3.75.1
      

      With https://github.com/stedolan/jq

      ❯ terraform version -json | jq .provider_selections
      {
        "registry.terraform.io/hashicorp/aws": "3.75.1"
      }
      


      I opened a GitHub issue about this: https://github.com/hashicorp/terraform/issues/31048#issue-1236104011

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Creating container image without docker

      An image is based on the OCI image spec ( https://github.com/opencontainers/image-spec/blob/main/spec.md ), which consists of filesystem layers packaged as tar files, a config JSON, and a manifest. All of these are referenced with a content-addressable digest. If you can create these tar and JSON files, then you can create an image without a runtime.

      There are various tools that do this for specific cases. I think most are shipped as containers and designed to run in CI pipelines, but you can probably get standalone binaries. However, I don't think any of these work with a Dockerfile, so if you need something that supports that syntax, you'll need a container runtime. There are rootless runtimes, but they typically have prerequisites that need to be performed on the host by root in advance, and there's a performance hit from using them.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Should a pipeline fail if early termination is desired?

      I'm going to take an opinionated approach based on the twelve-factor app ( https://12factor.net ).

      Typically, stages should execute a function, with predictable inputs and outputs.

      If stage 3 executes the function "compute next version" and the output of that function is "the next version is the same as the previous version", then even if there is no version change, the stage has successfully executed its function. Again, the inputs and outputs are predictable.

      Similarly for the fourth stage (Publish). If you have built the software, but the version hasn't changed, you still have a new artifact. It has a different build, but the same version. This could happen if, for example, you change the environment or configuration it's deployed into. As the twelve-factor app puts it:

      The twelve-factor app uses strict separation between the build, release, and run stages. https://12factor.net/build-release-run

      So your publish function should deliver a new artifact, with the same version but different build metadata (i.e. pipeline execution number, date, etc.).

      In either case: no, you do not need to fail the pipeline (nothing is broken, no changes are necessary), and yes, you should execute the last stage, which should implement the actual deployment strategy you use.
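
      As an illustration only (an Azure-Pipelines-flavored sketch; your CI system and the stage / variable names will differ), the "no version change" outcome can flow through as data instead of as a failure:

      stages:
        - stage: ComputeVersion
          jobs:
            - job: version
              steps:
                - script: |
                    # decide whether the version changed and expose it as an output
                    echo "##vso[task.setvariable variable=versionChanged;isOutput=true]false"
                  name: calc

        - stage: Publish
          dependsOn: ComputeVersion
          condition: succeeded()      # runs whether or not the version changed
          variables:
            versionChanged: $[ stageDependencies.ComputeVersion.version.outputs['calc.versionChanged'] ]
          jobs:
            - job: publish
              steps:
                - script: echo "Publishing (version changed $(versionChanged))"

        - stage: Deploy
          dependsOn: Publish
          condition: succeeded()      # still runs; nothing has failed
          jobs:
            - job: deploy
              steps:
                - script: echo "Deploying"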

      posted in Continuous Integration and Delivery (CI/CD)