    SOFTWARE-TESTING.COM

    terrea

    @terrea

    • Reputation: 0
    • Posts: 29433
    • Profile views: 2
    • Followers: 0
    • Following: 0


    Best posts made by terrea

    This user hasn't posted anything yet.

    Latest posts made by terrea

    • RE: Can I define a CodePipeline with Terraform that deploys my Terraform resources?

      Step 1: Sign into AWS Console

      Step 2: Navigate to CodePipeline service

      Step 3: Click on the “Create Pipeline” button

      Step 4: Enter the Pipeline name and click on the “Next” button

      Step 5: Select “GitHub” as Source provider and enter the following values

      Repository: https://github.com/cloudacademy/cloud-academy-terraform-pipeline.git

      Branch: master

      Click on the “Next” button

      Step 6: Select the following Build provider

      AWS CodeBuild

      Project name:

      Click on the “Next” button

      Step 7: Select the following Deploy provider

      Deploy provider: Amazon S3

      Click on the “Next” button

      Step 8: Review the values and click on the “Create Pipeline” button

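      The console steps above create the pipeline by hand; since the question asks about Terraform, the same pipeline can be sketched with an aws_codepipeline resource. Everything below is an assumption: the resource names, the IAM role, the artifact and deploy buckets, the CodeBuild project, and the github_token variable are all hypothetical and must exist elsewhere in your configuration.

```hcl
resource "aws_codepipeline" "this" {
  name     = "terraform-pipeline"
  role_arn = aws_iam_role.pipeline.arn # hypothetical IAM role

  artifact_store {
    type     = "S3"
    location = aws_s3_bucket.artifacts.bucket # hypothetical bucket
  }

  stage {
    name = "Source"
    action {
      name             = "GitHub"
      category         = "Source"
      owner            = "ThirdParty"
      provider         = "GitHub"
      version          = "1"
      output_artifacts = ["source"]
      configuration = {
        Owner      = "cloudacademy"
        Repo       = "cloud-academy-terraform-pipeline"
        Branch     = "master"
        OAuthToken = var.github_token # required by the v1 GitHub action
      }
    }
  }

  stage {
    name = "Build"
    action {
      name            = "CodeBuild"
      category        = "Build"
      owner           = "AWS"
      provider        = "CodeBuild"
      version         = "1"
      input_artifacts = ["source"]
      configuration = {
        ProjectName = aws_codebuild_project.build.name # hypothetical project
      }
    }
  }

  stage {
    name = "Deploy"
    action {
      name            = "S3"
      category        = "Deploy"
      owner           = "AWS"
      provider        = "S3"
      version         = "1"
      input_artifacts = ["source"]
      configuration = {
        BucketName = aws_s3_bucket.deploy.bucket # hypothetical bucket
        Extract    = "true"
      }
    }
  }
}
```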

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Is there aws-vault kind of tool for GCP?

      It sounds like you want https://cloud.google.com/secret-manager .

      From the documentation, it claims to:

      Store API keys, passwords, certificates, and other sensitive data.

      which sounds like it does what you want it to do.

      However, you state

      I would like to keep my tokens encrypted in my operating system’s keychain

      I don't see why you would want that, though, if you have Secret Manager. Secret Manager should be the source of truth, and the machine should be authorised via IAM to access specific secrets on demand, not persist them in the OS keychain. If the OS is compromised, secrets stored in the keychain are exposed, whereas an attacker would need to break Secret Manager or IAM to reach secrets that are consumed exclusively from Secret Manager.
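      For example, an IAM-authorised machine can fetch a secret at the moment it is needed; the secret name here is hypothetical:

```shell
gcloud secrets versions access latest --secret="my-api-token"
```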

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Ansible / Jinja2 Unexpected templating type error

      Changing list([]) to list resolves that specific error.
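      Without the original playbook, here is a minimal sketch of the difference in plain Jinja2 (the `items`/`upper` template is an illustration, not the asker's code): bare `list` is a filter, while `list([])` passes the filter an extra argument and fails at render time.

```python
import jinja2

env = jinja2.Environment()

# Bare `list` used as a filter converts an iterable to a list.
ok = env.from_string("{{ items | map('upper') | list }}")
print(ok.render(items=["a", "b"]))  # ['A', 'B']

# `list([])` hands the filter an extra positional argument, so rendering
# raises a TypeError -- the kind of templating type error Ansible surfaces.
broken = env.from_string("{{ items | list([]) }}")
try:
    broken.render(items=["a", "b"])
except TypeError as exc:
    print("render failed:", exc)
```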

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: What is manual, what is automatic in Continuous Delivery?

      The first thing to do is to define what is meant by "Continuous Integration", "Continuous Delivery", and "Continuous Deployment". They've come to mean different things to different people.

      Continuous Integration comes from Extreme Programming, developed by Kent Beck and others. In Extreme Programming Explained: Embrace Change, Beck says this about Continuous Integration:

      Integrate and build a complete product. If the goal is to burn a CD, burn a CD. If the goal is to deploy a web site, deploy a web site, even if it is to a test environment. Continuous integration should be complete enough that the first deployment of the system is no big deal.

      From an Extreme Programming perspective, Continuous Integration would result in a deliverable, such as a library or package or container in a registry. If the goal is to make the package, that would be sufficient. However, if the team handles building the system, true Continuous Integration would go further and deploy that package or container to an environment. The team would need to define what its goal and deliverable would be - is it a user-facing software system or a package?

      Having a well-controlled staging or test environment is almost certainly required when performing Extreme Programming's form of Continuous Integration or what is often referred to as Continuous Delivery today. These practices require that you have confidence in the ability to deploy software to the production environment on-demand. Without deploying it somewhere, preferably to an environment that is sufficiently like the production environment, how do you intend to have the necessary confidence? This only applies to teams building systems, though - teams building packages may be confident by creating a package somewhere and not publishing it.

      As far as branching, you don't need a branch-per-environment. You can continue to use trunk-based development ( https://trunkbaseddevelopment.com/committing-straight-to-the-trunk/ or https://trunkbaseddevelopment.com/short-lived-feature-branches/ ) with https://trunkbaseddevelopment.com/release-from-trunk/ or https://trunkbaseddevelopment.com/branch-for-release/ . If you are practicing XP's definition of CI or Continuous Delivery, each commit to trunk would result in a build created and deployed somewhere. When you decide to go to production, a person would make the decision to promote the artifacts associated with a specific build to production. If you are branching for release, this would be associated with creating a release branch from trunk. Otherwise, it would be promoting a particular commit hash or tag.

      posted in Continuous Integration and Delivery (CI/CD)
    • build pipeline with repository: is it advisable to build both on repo and end server

      I am doing a classic build with install (pip/python), lint, test, and format on my GitHub repository with GitHub Actions, then deploying over SSH (copying the repo to the server and deploying with docker/docker-compose).

      I'm wondering: is it advisable to re-run the lint, test, format, and install steps on the server?

      Note the install is not the app install, which occurs in Docker, but the install of yapf, pytest, and pylint.

      Thank you

      posted in Continuous Integration and Delivery (CI/CD)
    • Kubernetes Job Metrics in Prometheus

      I want to create a Kubernetes Job object like the example: https://kubernetes.io/docs/concepts/workloads/controllers/job/

      Now imagine I want to have the result (in this case the value of Pi: 3.14159...) available in Prometheus.

      Is this possible?

      In a more complex example, imagine the output of my pod was JSON:

      {
        "foo": 200,
        "bar": 10,
        "baz": 999
      }
      

      I somehow need to denote that I'm interested in foo and baz (but not bar) "being available" in Prom (note I'm not opinionated on whether the solution is push or pull).

      Option 1

      The first option I've thought of is somehow attaching the result 3.14159 to the job and Prom can scrape it as normal.

      Option 2

      Don't run the Pi-generator container at all, instead run something like a Python script that runs the Pi process. Then we can push from Python to Prom.

      Option 3

      Some other way?
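      For what it's worth, Option 2 is commonly done with the official Python client and a Pushgateway; the sketch below assumes a Pushgateway is reachable at the (hypothetical) in-cluster address shown, and the metric name is made up:

```python
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
pi = Gauge("job_result_pi", "Result of the pi computation", registry=registry)
pi.set(3.14159)

try:
    # Gateway address is a placeholder for a Pushgateway Service in-cluster.
    push_to_gateway("pushgateway.monitoring.svc:9091", job="pi-job",
                    registry=registry)
except OSError:
    pass  # no Pushgateway reachable from here; the push is illustrative
```

      For the JSON case, the script would parse the output and set one Gauge per key you care about (foo and baz), skipping bar.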

      posted in Continuous Integration and Delivery (CI/CD)
    • How do I run a CI build in a docker image matching the current `Dockerfile` while being resource-aware?

      Given a repository containing a Dockerfile that defines the build environment used by the CI pipeline as well as by the developer (e.g. as a https://code.visualstudio.com/docs/remote/create-dev-container ), the CI pipeline shall fulfill the following requirements:

      1. The CI pipeline shall build the repository's contents in the context of a docker image built from the very Dockerfile present in the working branch (or the result of its merge to the target branch where applicable in the context of a pull request).
      • Rationale: Builds shall be deterministic so that old repository versions shall be built with their original docker image, not with the newest one.
      • Rationale: Changes to the Dockerfile shall be considered automatically without the need for manual user intervention (i.e. local docker build followed by a manual push).
      2. The CI pipeline shall reuse an already built docker image if the applicable Dockerfile has already been "baked" into such an image pushed to the registry.
      • Rationale: Building a docker image on each commit is resource-consuming (time, energy).

      I couldn't find an existing best practice to implement that out of the box (my CI environment being Azure DevOps Pipelines, if it matters), so I came up with the following concept:

      • Calculate the Dockerfile's hash.
      • Load docker_image_name:$hash from the docker registry.
      • If the load fails, build docker_image_name:$hash from the Dockerfile and push it to the docker registry.
      • Use docker_image_name:$hash (from the registry / from the local cache) to run the CI pipeline (using Azure's https://docs.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=azure-devops#endpoints in my case).
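      The bullets above can be sketched in shell; the image name is a placeholder, and the short content hash stands in for $hash:

```shell
# Placeholder image name; adjust for your registry.
IMAGE="registry.example.com/ci-build-env"

dockerfile_tag() {
    # Tag derived from the Dockerfile's content hash (first 12 hex chars).
    printf '%s:%s' "$IMAGE" "$(sha256sum "$1" | cut -c1-12)"
}

ensure_build_image() {
    tag="$(dockerfile_tag Dockerfile)"
    # Reuse the already-baked image on a registry hit; build and push on a miss.
    if ! docker pull "$tag"; then
        docker build -t "$tag" .
        docker push "$tag"
    fi
}
```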

      Questions:

      1. Does this procedure make sense as a solution to my use case?
      2. I can't imagine being the first to realize this use case. Is there an existing mechanism (as part of the docker utilities, as part of the Azure DevOps Pipelines framework or from something completely different) that fulfills my needs?
      posted in Continuous Integration and Delivery (CI/CD)
    • Does /etc/rancher/k3s/registries.yaml affect `k3s ctr` and `k3s crictl`?

      k3s ctr launches the containerd CLI, and k3s crictl is the CRI CLI. I'm told you configure k3s authentication to image registries using https://rancher.com/docs/k3s/latest/en/installation/private-registry/ . Does this YAML file configure containerd and CRI such that these commands no longer require --creds?

      k3s crictl pull     --creds "evancarroll:$TOKEN" docker.io/alpine:3
      k3s ctr images pull --creds "evancarroll:$TOKEN" docker.io/library/alpine:3
      

      And can just be

      k3s crictl pull     docker.io/alpine:3
      k3s ctr images pull docker.io/library/alpine:3
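
      Per the linked Rancher docs, a registries.yaml carrying credentials for docker.io would look roughly like this (the password value is a placeholder):

```yaml
# /etc/rancher/k3s/registries.yaml
configs:
  "docker.io":
    auth:
      username: evancarroll
      password: <token>   # placeholder for the real token
```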
      
      posted in Continuous Integration and Delivery (CI/CD)
    • docker image push does not work

      I have created a new docker image with:

      # docker build -t my-phpapache:7.4 .
      

      Now, when I try to run

      # docker image push
      

      I'm logged in to Docker Hub in a browser. I see these errors:

      $ docker image push --all-tags docker.io/library/my-phpapache
      The push refers to repository [docker.io/library/my-phpapache]
      f04ed43187a6: Preparing 
      0ff1b9aaef0c: Preparing 
      c18803e039cd: Preparing 
      a735fdc00d49: Preparing 
      de7c912f2726: Preparing 
      e835c99ddfc4: Waiting 
      1f55d9e78afa: Waiting 
      5aefa6797d83: Waiting 
      70af74272d2e: Waiting 
      58e3131f3b01: Waiting 
      a1bae98a9430: Waiting 
      752cff7a1101: Waiting 
      be3607e92e69: Waiting 
      2ffebc0bdeea: Waiting 
      4c94b016478b: Waiting 
      a9019c838a13: Waiting 
      015643a98838: Waiting 
      f2c64a370cec: Waiting 
      d5b8874e6c41: Waiting 
      9eb82f04c782: Waiting 
      denied: requested access to the resource is denied
      
      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How can I set up Deployment to run at least one pod on each node?

      You can use topologySpreadConstraints to spread Deployment pods evenly across nodes, regardless of the number of pods. The Kubernetes documentation contains https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ , as well as the following article: https://medium.com/geekculture/kubernetes-distributing-pods-evenly-across-cluster-c6bdc9b49699 .

      The .spec.template.spec would look something like:

          spec:
            topologySpreadConstraints:
              - maxSkew: 1
                topologyKey: kubernetes.io/hostname
                whenUnsatisfiable: ScheduleAnyway
                labelSelector:
                  matchLabels:
                    type: dummy    
      
      posted in Continuous Integration and Delivery (CI/CD)