    Trenton (@Trenton)

    Reputation: 0 | Posts: 29859 | Profile views: 1 | Followers: 0 | Following: 0

    Best posts made by Trenton

    This user hasn't posted anything yet.

    Latest posts made by Trenton

    • Azure DevOps, get the triggering branch of the triggering pipeline

      I've got two (YAML) pipelines in Azure DevOps, call them B (application build) and D (application deployment). D deploys to Dev, QA, and Test, each in a separate stage. The QA and Test environments are configured in DevOps to require approval before their respective stages in D are executed. The YAML files behind B and D are in the same DevOps project.

      The code repository is Azure DevOps Git. B is triggered by completed merges to the main branch. D is triggered by successful completion of B. In case it matters, the means by which I've configured D to be triggered by successful completion of B is via

      (screenshot: the Triggers option on the pipeline's menu, with Settings, Validate, and Download full YAML as the other available commands)

      leading to

      (screenshot: the Triggers configuration tab for the YAML pipeline, between the Variables and History tabs)

      So far, this arrangement has worked well.

      Now I want B to be triggered not only by feature pushes to main but also by hotfix pushes to any branch named release/*. So,

        trigger:
          - main
          - release/*
      

      The difference is that the hotfixes should be deployed only to Test, not to Dev or QA.

      Therefore, in D, I want to make execution of the Dev and QA deployment stages conditional on the triggering branch of B having been main. Is there some way, in D, to access the value that B can reference as $(Build.SourceBranch)?

      UPDATE: I have now learned that the approach I described above for having D triggered by B is itself outmoded, and that I should be using something like

      resources:
        pipelines:
          - pipeline: sourcePipeline
            source: 'B'
            trigger:
              branches:
                - main
                - release/*
      

      Is that the correct way to set this up? Do I need to specify the branches here, and are they even relevant? (I saw one example that simply has trigger: true, which I'm guessing means that the second pipeline should always be run after the first completes. Perhaps branches are specified above only when B may be triggered by many branches but D should run after B only when B was triggered by a subset of those.)

      If this is the preferred approach and I switch to it, does the answer to my question become that I can now access B's triggering branch in D through $(resources.pipeline.sourcePipeline.SourceBranch)?
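
      For concreteness, the kind of condition I have in mind for the Dev and QA stages in D is something like this (a sketch only, not tested; sourcePipeline is the resource alias from the snippet above, and I'm not certain of the exact variable name or casing):

      stages:
        - stage: DeployDev
          # Skip Dev (and likewise QA) unless B was triggered from main
          condition: eq(variables['resources.pipeline.sourcePipeline.sourceBranch'], 'refs/heads/main')
          jobs:
            - job: Deploy
              pool:
                vmImage: ubuntu-latest
              steps:
                - script: echo "deploy to Dev here"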

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Exporting multi-arch Docker image from local registry to .tar file

      This is answered in the Docker blog post on multi-platform builds: https://www.docker.com/blog/multi-platform-docker-builds/. For your specific situation, you would perform a build and push for each platform you'd like, then create a manifest list and push that. Then you can perform the save. It would look something like this:

      $ docker buildx build --platform linux/amd64 --push -t localhost:5000/myimage:amd64 .
      $ docker buildx build --platform linux/arm64 --push -t localhost:5000/myimage:arm64 .
      $ docker manifest create localhost:5000/myimage:latest localhost:5000/myimage:amd64 localhost:5000/myimage:arm64
      $ docker manifest push localhost:5000/myimage:latest
      $ docker image save -o myimage.tar localhost:5000/myimage:latest
      
      posted in Continuous Integration and Delivery (CI/CD)
    • How can I get everything to use the same load balancer on DigitalOcean?

      I have a LoadBalancer service, and an Ingress that routes to various ClusterIP services. I've given both of them the following annotation:

      annotations:
        kubernetes.digitalocean.com/load-balancer-id: 'myproject-dev-lb'
      

      A new load balancer was created, although I see no reference to myproject-dev-lb on it. The LoadBalancer service (a Pulsar server) works fine, but my ingresses have stopped working. When I kubectl get my ingress, I get what looks like a previous IP address (might just be similar). When I describe it, it says it's not found. If I delete it and recreate it, it gets the same IP address. The address is pingable, but not listening on port 80. The created load balancer only has a rule for port 6650 (Pulsar).

      I'm not really familiar with the relationship between Ingress, load balancers and services.

      posted in Continuous Integration and Delivery (CI/CD)
    • gitlab-runner docker: command not found

      I'm trying to build a Docker container using a GitLab runner. (The lead dev left the company, and now I have no idea how to do it.)

      I'll share everything I can.

      (screenshots: build output and the resulting error)

      As I understand from the output, the runner is assigned correctly; the problem is with docker.

      The runner is running as a shell executor.

      Here is the .gitlab-ci.yml:

      stages:
        - build
        - deploy

      variables:
        STACK_NAME: isnta_api
        VERSION: ${CI_COMMIT_TAG}
        IMAGE_NAME: ${DOCKER_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_TAG}

      before_script:
        - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
        - echo $STACK_NAME
        - echo $CI_REGISTRY
        - echo $IMAGE_NAME

      build:
        stage: build
        only:
          - tags
        script:
          - docker build -f Dockerfile -t ${IMAGE_NAME} .
          - docker push ${IMAGE_NAME}

      deploy:
        stage: deploy
        only:
          - tags
        script:
          - docker stack deploy --with-registry-auth --compose-file docker-compose.yml ${STACK_NAME}

      docker-compose.yml

      version: '3.7'

      services:
        app:
          container_name: ${STACK_NAME}
          image: ${IMAGE_NAME}
          environment:
            PORT: 4042
            APP_ENV: production
            INSTAGRAM_API_KEY: de194cf559msh8d940d38fd1ca47p129005jsnc11b4fd77a36
            INSTAGRAM_API_HOST_NAME: instagram-bulk-profile-scrapper.p.rapidapi.com
            INSTAGRAM_API_BASE_URL: https://instagram-bulk-profile-scrapper.p.rapidapi.com/clients/api/ig/
          networks:
            - nxnet
          ports:
            - 4042:4042
          deploy:
            replicas: 1
            update_config:
              order: start-first

      networks:
        nxnet:
          external: true

      and the Dockerfile:

      FROM node:14-alpine
      

      COPY . /app

      WORKDIR /app
      RUN apk add --no-cache bash
      RUN npm install

      RUN npm run build

      CMD npm run start:prod

      Any suggestions or tips would be valuable. Thank you in advance.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Switching to multi-part cloud-init, getting: SyntaxError: invalid syntax

      The problem here was that I had

      content_type = "text/part-handler"
      

      I should have had,

      content_type = "text/cloud-config"
      

      For more information:

      • https://cloudinit.readthedocs.io/en/latest/topics/format.html
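
      For reference, a minimal sketch of the corrected multi-part definition, assuming Terraform's cloudinit_config data source (the data source name and file path are illustrative):

      data "cloudinit_config" "user_data" {
        gzip          = false
        base64_encode = false

        # A part that carries a #cloud-config document must be text/cloud-config;
        # text/part-handler is only for shipping custom part-handler code.
        part {
          content_type = "text/cloud-config"
          content      = file("${path.module}/cloud-config.yaml")
        }
      }
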
      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Docker Compose on AWS

      This question risks being flagged as "too broad", and invites opinion-based answers. With that caveat out of the way I'll try to answer in an unbiased way.

      In these cases, it's best to go back to first principles and architect the application from scratch using the relevant AWS services. If you're deploying an application that is really just a few services that can be composed, you're probably best off using Fargate.

      In any case, I would not suggest using EC2 instances.

      In terms of how to go about doing that:

      • Create the VPC if you need it
      • Define the security groups that will control what traffic is allowed
      • Set up the ECR repositories for the images in the application stack
      • Convert the services in the Docker Compose file to ECS task definitions
      • Decide how you will manage persistent data; if part of the application is Redis, you might consider using an ElastiCache component in the stack
      • Decide how you want to expose the services (ENI? Load balancer?)

      There is also the issue of IAM policies that would need to be attached to various entities, in order to read / write to ECR, etc.

      I would write this all as one or two Terraform modules. One for the network configuration (VPC, subnets) and one for the application itself (ECS cluster, Capacity providers, load balancers, ECR repositories, etc). This is a purely personal choice though, and you could just as well create a CloudFormation stack.
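
      As a rough sketch of that two-module split (module paths, names, and variables here are purely illustrative, not a definitive layout):

      # Network concerns in one module, the application stack in another.
      module "network" {
        source   = "./modules/network"
        vpc_cidr = "10.0.0.0/16"
      }

      module "application" {
        source             = "./modules/application"
        vpc_id             = module.network.vpc_id
        private_subnet_ids = module.network.private_subnet_ids
        # Inside: ECS cluster, Fargate capacity provider, ECR repositories,
        # task definitions, load balancer, ElastiCache, IAM policies, etc.
      }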

      Continuous deployment could indeed be done with GitHub Actions, if that's where you are storing the code that describes the app.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: What are the core differences between DevOps and Agile? And are they two different approaches to solving a similar problem?

      Agile is a methodology for organizing (or grouping) a set of changes to software components within a given period (a sprint), so think of it as a planning tool. Here the stakeholders are the scrum master, the PO, the actual developers, etc.

      DevOps is (mostly) about shipping modified components from one environment (e.g. unit testing) to another (e.g. acceptance testing), for which things like CI and CD can be used. Here the stakeholders are the developers, the testers, operations control people (or SREs if you prefer), etc.

      posted in Continuous Integration and Delivery (CI/CD)
    • Newly installed k3s cluster on fresh OS install cannot resolve external domains or connect to external resources?

      I'm following the Rancher DNS troubleshooting guide at https://rancher.com/docs/rancher/v2.5/en/troubleshooting/dns/. After step 2, "Add KUBECONFIG for user.", if I run this command,

      kubectl run -it --rm --restart=Never busybox --image=busybox:1.28 -- nslookup www.google.com
      

      I get this error,

      nslookup: can't resolve 'www.google.com'
      pod "busybox" deleted
      pod default/busybox terminated (Error)
      

      However, I'm running k3s. It's a single-node cluster, and on the same machine where k3s is installed I can run nslookup www.google.com and everything works. The tutorial doesn't say where to go from there. What could cause DNS failures for external resolution inside of k3s, but not outside of k3s?

      My CoreDNS logs show,

      [ERROR] plugin/errors: 2 google.com. AAAA: read udp 10.42.0.6:40115->1.1.1.1:53: i/o timeout
      [ERROR] plugin/errors: 2 google.com. A: read udp 10.42.0.6:54589->1.1.1.1:53: i/o timeout
      

      And when I run curl against an external server, I get

      command terminated with exit code 6

      (that is curl's exit code for "could not resolve host"). While this was the first symptom for me, it turns out that I also can't ping or curl/wget external websites by IP, so I think the problem is more complex and perhaps involves iptables.

      I uploaded my iptables log here: https://github.com/k3s-io/k3s/files/8921761/k3siptables.log

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How to keep packages updated and keep pace with security updates

      You could use pre-made roles such as https://galaxy.ansible.com/oefenweb/apt, https://galaxy.ansible.com/geerlingguy/security, or https://galaxy.ansible.com/weareinteractive/apt to manage apt for system-related security upgrades.
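
      If you prefer not to pull in a role, a minimal sketch using the built-in apt module directly (host group and values are illustrative):

      - hosts: all
        become: true
        tasks:
          # Refresh the package index and upgrade everything that is outdated.
          - name: Upgrade all packages
            ansible.builtin.apt:
              update_cache: true
              cache_valid_time: 3600
              upgrade: dist

          # Let unattended-upgrades pick up security updates between runs.
          - name: Install unattended-upgrades
            ansible.builtin.apt:
              name: unattended-upgrades
              state: present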

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: AKS Network Policy - cannot deny traffic to namespace

      If anyone else has the same issue, please double-check your AKS configuration in Azure and make sure that the Network policy field in the Networking settings does NOT display None. It should say either Azure or Calico.

      My cluster was created with Terraform, and even though I had added network_plugin = "azure", I had missed the network_policy = "azure" field, which meant that network policies were not applied.

      Also, this setting can only be enabled when creating a new cluster; you cannot enable it on an existing one.
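
      For reference, this is roughly the Terraform fragment that was missing in my case (a sketch only; the other azurerm_kubernetes_cluster arguments are omitted):

      resource "azurerm_kubernetes_cluster" "example" {
        # ... name, location, resource group, default_node_pool, identity, etc. ...

        network_profile {
          network_plugin = "azure"
          network_policy = "azure" # or "calico"; if left unset, NetworkPolicy objects are ignored
        }
      }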

      posted in Continuous Integration and Delivery (CI/CD)