SOFTWARE-TESTING.COM
    Posts made by Trenton

    • Azure DevOps, get the triggering branch of the triggering pipeline

      I've got two (YAML) pipelines in Azure DevOps, call them B (application build) and D (application deployment). D deploys to Dev, QA, and Test, each in a separate stage. The QA and Test environments are configured in DevOps to require approval before their respective stages in D are executed. The YAML files behind B and D are in the same DevOps project.

      The code repository is Azure DevOps Git. B is triggered by completed merges to the main branch. D is triggered by successful completion of B. In case it matters, the means by which I've configured D to be triggered by successful completion of B is via

      the Triggers option on this menu for the pipeline, with Settings, Validate, and Download full YAML as the other available commands,

      leading to

      the Triggers config tab for the YAML pipeline, between the Variables and History tabs

      So far, this arrangement has worked well.

      Now I want B to be triggered not only by feature pushes to main but also by hotfix pushes to any branch named release/*. So,

        trigger:
          - main
          - release/*
      

      The difference is that the hotfixes should be deployed only to Test, not to Dev or QA.

      Therefore, in D, I want to make execution of the Dev and QA deployment stages conditional on the triggering branch of B having been main. Is there some way in D to access from B the value that in B can be referenced as $(Build.SourceBranch)?

      UPDATE: I now learn that the manner I described above for having D triggered by B is itself outmoded, and I should be using something like

      resources:
        pipelines:
          - pipeline: sourcePipeline
            source: 'B'
            trigger:
              branches:
                - main
                - release/*
      

      Is that the correct way to set this up? Do I need to specify the branches here, or are they even relevant? (I saw one example that simply has trigger: true, which I'm guessing means that the second pipeline should always run after the first completes. Perhaps branches are specified above only when B may be triggered by lots of branches but D should run after B only when B was triggered by a subset of those.)

      If this is the preferred approach and I switch to it, does the answer to my question become that I can now access B's triggering branch in D through $(resources.pipeline.sourcePipeline.SourceBranch)?
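
      If so, I'd presumably gate the Dev and QA stages with something like this (an untested sketch, using the sourcePipeline alias from above; I'm not certain of the exact variable syntax):

      resources:
        pipelines:
          - pipeline: sourcePipeline
            source: 'B'
            trigger: true

      stages:
        - stage: DeployDev
          # Run Dev (and likewise QA) only when B was triggered from main
          condition: eq(variables['resources.pipeline.sourcePipeline.sourceBranch'], 'refs/heads/main')
          jobs: []  # jobs omitted for brevity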

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Exporting multi-arch Docker image from local registry to .tar file

      This is answered in the Docker blog post on multi-platform builds: https://www.docker.com/blog/multi-platform-docker-builds/ . For your specific situation, you would perform a build and push for each platform you'd like, then create a manifest list and push that. Then you can perform the save. It would look something like this:

      $ docker buildx build --platform linux/amd64 --push -t localhost:5000/myimage:amd64 .
      $ docker buildx build --platform linux/arm64 --push -t localhost:5000/myimage:arm64 .
      $ docker manifest create localhost:5000/myimage:latest localhost:5000/myimage:amd64 localhost:5000/myimage:arm64
      $ docker manifest push localhost:5000/myimage:latest
      $ docker image save -o myimage.tar localhost:5000/myimage:latest
      
      posted in Continuous Integration and Delivery (CI/CD)
    • How can I get everything to use the same load balancer on DigitalOcean?

      I have a LoadBalancer service, and an Ingress that routes to various ClusterIP services. I've given both of them the following annotation:

      annotations:
        kubernetes.digitalocean.com/load-balancer-id: 'myproject-dev-lb'
      

      A new load balancer was created, although I see no reference to myproject-dev-lb on it. The LoadBalancer service (a Pulsar server) works fine, but my ingresses have stopped working. When I kubectl get my ingress, I get what looks like a previous IP address (might just be similar). When I describe it, it says it's not found. If I delete it and recreate it, it gets the same IP address. The address is pingable, but not listening on port 80. The created load balancer only has a rule for port 6650 (Pulsar).

      I'm not really familiar with the relationship between Ingress, load balancers and services.

      posted in Continuous Integration and Delivery (CI/CD)
    • gitlab-runner docker: command not found

      I'm trying to build a Docker container using a GitLab runner. (The lead dev left the company, and now I have no idea how to do it.)

      I'll share everything I can.

      [screenshot: build output error]

      As I understand from the output, the runner is assigned correctly; the problem is with docker.

      The runner is running as a Shell executor.

      Here is the .gitlab-ci.yml:

      stages:
        - build
        - deploy

      variables:
        STACK_NAME: isnta_api
        VERSION: ${CI_COMMIT_TAG}
        IMAGE_NAME: ${DOCKER_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_TAG}

      before_script:
        - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
        - echo $STACK_NAME
        - echo $CI_REGISTRY
        - echo $IMAGE_NAME

      build:
        stage: build
        only:
          - tags
        script:
          - docker build -f Dockerfile -t ${IMAGE_NAME} .
          - docker push ${IMAGE_NAME}

      deploy:
        stage: deploy
        only:
          - tags
        script:
          - docker stack deploy --with-registry-auth --compose-file docker-compose.yml ${STACK_NAME}

      docker-compose.yml

      version: '3.7'

      services:
        app:
          container_name: ${STACK_NAME}
          image: ${IMAGE_NAME}
          environment:
            PORT: 4042
            APP_ENV: production
            INSTAGRAM_API_KEY: de194cf559msh8d940d38fd1ca47p129005jsnc11b4fd77a36
            INSTAGRAM_API_HOST_NAME: instagram-bulk-profile-scrapper.p.rapidapi.com
            INSTAGRAM_API_BASE_URL: https://instagram-bulk-profile-scrapper.p.rapidapi.com/clients/api/ig/
          networks:
            - nxnet
          ports:
            - 4042:4042
          deploy:
            replicas: 1
            update_config:
              order: start-first

      networks:
        nxnet:
          external: true

      and the Dockerfile:

      FROM node:14-alpine
      

      COPY . /app

      WORKDIR /app
      RUN apk add --no-cache bash
      RUN npm install

      RUN npm run build

      CMD npm run start:prod

      Any suggestions or tips would be valuable. Thank you in advance.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Switching to multi-part cloud-init, getting: SyntaxError: invalid syntax

      The problem here was I had

      content_type = "text/part-handler"
      

      I should have had:

      content_type = "text/cloud-config"
      

      For more information:

      • https://cloudinit.readthedocs.io/en/latest/topics/format.html
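
      For context, a minimal sketch of such a part block, assuming Terraform's cloudinit_config data source (which matches the content_type syntax above; the filename is illustrative):

      data "cloudinit_config" "user_data" {
        gzip          = true
        base64_encode = true

        part {
          content_type = "text/cloud-config"
          content      = file("${path.module}/cloud-config.yml")
        }
      }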
      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Docker Compose on AWS

      This question risks being flagged as "too broad", and invites opinion-based answers. With that caveat out of the way I'll try to answer in an unbiased way.

      In these cases, it's best to go back to first principles and architect the application from scratch using the relevant AWS services. If you're deploying an application that is really just a few services that can be composed, you're probably best off using Fargate.

      In any case, I would not suggest using EC2 instances.

      In terms of how to go about doing that:

      • Create the VPC if you need it
      • Define security groups that will control what traffic is allowed
      • Set up the ECR repositories for the images in the application stack
      • Convert the services in the docker compose file to ECS task definitions
      • Decide how you will manage persistent data. If part of the application is Redis, you might consider using an ElastiCache component in the stack.
      • Decide how you want to expose the services (ENI? Load balancer?)

      There is also the issue of IAM policies that would need to be attached to various entities, in order to read / write to ECR, etc.

      I would write this all as one or two Terraform modules. One for the network configuration (VPC, subnets) and one for the application itself (ECS cluster, Capacity providers, load balancers, ECR repositories, etc). This is a purely personal choice though, and you could just as well create a CloudFormation stack.
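
      To sketch what that module split might look like (module names and variables here are purely illustrative, not a working configuration):

      # Root module: one module for the network, one for the application
      module "network" {
        source   = "./modules/network"   # VPC, subnets, security groups
        vpc_cidr = "10.0.0.0/16"
      }

      module "app" {
        source          = "./modules/app"  # ECS cluster, ECR repos, load balancer
        vpc_id          = module.network.vpc_id
        private_subnets = module.network.private_subnets
      }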

      Continuous deployment could indeed be done with GitHub Actions, if that's where you are storing the code which describes the app.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: What are the core differences between DevOps and Agile ? And Is both two different approach to solve the similar problem?

      Agile is a methodology to organize (or group) a set of changes to software components in a given period (a sprint), so think of it as a planning tool. Here the stakeholders are the scrum master, the PO, the actual developers, etc.

      DevOps is (mostly) about shipping modified components from one environment (e.g. unit testing) to another one (e.g. acceptance testing), for which things like CI and CD can be used. Here the stakeholders are the developers, the testers, operations control people (or SREs if you prefer), etc.

      posted in Continuous Integration and Delivery (CI/CD)
    • Newly installed k3s cluster on fresh OS install can not resolve external domains or connect to external resources?

      I'm following the Rancher DNS troubleshooting guide at https://rancher.com/docs/rancher/v2.5/en/troubleshooting/dns/ . After step 2, "Add KUBECONFIG for user.", if I run this command,

      kubectl run -it --rm --restart=Never busybox --image=busybox:1.28 -- nslookup www.google.com
      

      I get this error,

      nslookup: can't resolve 'www.google.com'
      pod "busybox" deleted
      pod default/busybox terminated (Error)
      

      However, I'm running k3s. It's a single-node cluster, and on the same machine that k3s is installed on I can run nslookup www.google.com, and everything works. The tutorial doesn't say where to go from there. What could cause DNS failures for external resolution inside of k3s, but not outside of k3s?

      My CoreDNS logs show,

      [ERROR] plugin/errors: 2 google.com. AAAA: read udp 10.42.0.6:40115->1.1.1.1:53: i/o timeout
      [ERROR] plugin/errors: 2 google.com. A: read udp 10.42.0.6:54589->1.1.1.1:53: i/o timeout
      

      And when I run curl on an external server, I get

      command terminated with exit code 6

      While this was the first symptom for me, it turns out that I also can't ping or curl/wget external websites by IP. For these reasons I think the problem is even more complex, and perhaps involves iptables.

      I uploaded my iptables log: https://github.com/k3s-io/k3s/files/8921761/k3siptables.log

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How to keep packages updated and keep in pace with security updates

      You could use pre-made roles like oefenweb.apt (https://galaxy.ansible.com/oefenweb/apt), geerlingguy.security (https://galaxy.ansible.com/geerlingguy/security), or weareinteractive.apt (https://galaxy.ansible.com/weareinteractive/apt) to manage apt for system-related security upgrades.
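
      If you prefer a hand-rolled task over a pre-made role, a minimal sketch might look like this (assuming Debian/Ubuntu hosts and the ansible.builtin.apt module):

      - hosts: all
        become: true
        tasks:
          - name: Apply pending package upgrades
            ansible.builtin.apt:
              upgrade: dist
              update_cache: true
              cache_valid_time: 3600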

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: AKS Network Policy - cannot deny traffic to namespace

      If anyone else has the same issue, please double-check your AKS configuration in Azure and make sure that the Network policy field in the Networking settings does NOT display None. It should say either Azure or Calico.

      My cluster was created with Terraform, and even though I had added network_plugin = "azure", I had missed the network_policy = "azure" field, which meant that Network Policies would not be applied.

      Also, this setting can only be enabled when creating a new cluster. You cannot enable it on an existing one.
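
      For illustration, the relevant Terraform fragment might look like this (the resource name is a placeholder and other required attributes are omitted):

      resource "azurerm_kubernetes_cluster" "aks" {
        # ...

        network_profile {
          network_plugin = "azure"
          network_policy = "azure"  # without this, NetworkPolicy objects are silently ignored
        }
      }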

      posted in Continuous Integration and Delivery (CI/CD)
    • Understanding Jenkins plugins and agent

      I was trying to understand Jenkins agents. The documentation first asks you to create your Docker agent (https://www.jenkins.io/doc/book/using/using-agents/#creating-your-docker-agent), but it doesn't say where to execute these steps.

      Q1. Should we be executing these steps on the node or machine which we want to designate as an agent?

      https://www.jenkins.io/doc/book/using/using-agents/#setup-up-the-agent1-on-jenkins asks you to set up an agent through the Jenkins UI.

      Q2. That is nothing but the Jenkins controller UI, right?

      But that UI does not seem to accept the IP address of the agent node on which we started the docker agent.

      Q3. Does the Jenkins controller automatically discover running agents reachable on the network?

      Q4. What exactly are Jenkins plugins in relation to agents? The Jenkins glossary (https://www.jenkins.io/doc/book/glossary/#plugin) defines a plugin as "an extension to Jenkins functionality provided separately from Jenkins Core." But that does not explain much of its nature or functionality. https://www.jenkins.io/doc/book/managing/plugins/ also explains plugin installation and management on the controller, but doesn't explain the exact nature of their functionality.

      Q4.1. Do plugins run jobs on agent nodes? For example, does the https://plugins.jenkins.io/android-emulator/ plugin installed on the controller install and run the Android emulator on an available agent?

      Q4.2. If the answer to Q4.1 is yes, does every plugin need a corresponding process to be installed on the agent so that the agent can carry out the functionality specified in the plugin on the controller?

      PS: I'm a noob in Jenkins and overall DevOps stuff, just trying to wrap my head around Jenkins.

      posted in Continuous Integration and Delivery (CI/CD)
    • How to figure out optimum location for server for least latency to a target service?

      Given a service URL / IP, how can we find the optimum location to spin up a server? I suppose, generally, if we assume the service is on AWS, Google Cloud, or Azure, setting up the server in the same region as the target service should get us the least latency?

      So in the case of these public clouds, how can you find out which region the target service is hosted in? I have thought we could get the IP from DNS, then get a location with reverse IP geolocation, and use the region containing or nearest that location. But I am not sure how accurate the result would be from these random reverse IP geolocation services.

      And outside of the public clouds, how would you generally figure out which region to place your server in?

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How to find an AMI in the new AWS console version?

      Ubuntu AMIs

      Ubuntu publishes an EC2 AMI locator (https://cloud-images.ubuntu.com/locator/ec2/) for your convenience and reference.

      Rocky Linux

      Rocky Linux also maintains a list of cloud images: https://rockylinux.org/cloud-images/ .

      Amazon or other specific vendor AMIs

      The AWS console can be rather limiting, but you can query for the latest Amazon Linux AMI IDs using the AWS Systems Manager Parameter Store (https://aws.amazon.com/blogs/compute/query-for-the-latest-amazon-linux-ami-ids-using-aws-systems-manager-parameter-store/).

      Get a list of Amazon images:

      aws ec2 describe-images --owners amazon --filters "Name=name,Values=amzn*" --query 'sort_by(Images, &CreationDate)[].Name'
      

      Use the AWS Systems Manager Parameter Store to find the latest one with the name you want:

      aws ssm get-parameters --names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 --region us-east-1 
      

      Then make sure you always use the latest when launching an instance:

      aws ec2 run-instances --image-id $(aws ssm get-parameters --names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 --query 'Parameters[0].[Value]' --output text) --count 1 --instance-type m4.large
      
      posted in Continuous Integration and Delivery (CI/CD)
    • RE: My controller is strafing left when I hit both triggers

      Stick Drift.

      Simply put, your controller may be off by a small amount. This can happen for a variety of reasons.

      This video shows how to fix it: https://www.youtube.com/watch?v=HCEISv4SaUM . It should not be too hard with access to a PC.

      posted in Game Testing
    • RE: Can I still get Steam achievements if mods are enabled on GTA V?

      Yes. Modding GTA V won't affect the earning of Steam achievements. I was able to attain 100% completion and earn achievements, including Career Criminal (https://gta.fandom.com/wiki/Career_Criminal), in GTA V on Steam despite my game being modded.

      posted in Game Testing
    • Is there a true PS5 version of GTA 5?

      I purchased the remastered version of Grand Theft Auto 5 for the PS4 Pro. My question has two parts:

      1. Is there any difference between the
        a) remastered version of GTA 5 for the PS4/PS4 Pro
        and a
        b) "PS5" GTA 5
      2. If there is a difference, does my owning the PS4 remastered disc grant me access to this newer version (as opposed to my needing to make an additional purchase through the PlayStation Store)?

      I have read reports that seem to indicate that the PS5 version is, in fact, different, yet I seem unable to purchase any such version of the game through the PS Store, which leads me to believe that, if there is a difference, I have been automatically granted access to the newer version of the game.

      posted in Game Testing
    • Is it possible to play GTA V on Xbox 360 with the first disc only?

      I recently bought GTA V for my Xbox 360. I received disc 1, and I still want to play, but I don't want to pay for another disc. So my question is: is it possible to play without getting the second disc?

      posted in Game Testing
    • Are there any blocks an enderman can hold but can't place

      The enderman, the block-stealing/griefing mob in Minecraft, can pick up a variety of blocks. With commands, we can have it hold an even wider variety (all blocks?).

      Now my question is: are there any blocks which can be held but cannot be placed by an enderman in unmodded Minecraft?

      posted in Game Testing
    • Command blocks: Keep executing chain if block fails

      Is it possible to keep running a command block chain if one of the commands "fails"?

      Example: If I use setblock on a position that already contains the specified type of block the whole chain stops executing because of the Could not set the block error.

      I can use destroy at the end of setblock commands, but this results in dropping the destroyed item which causes other issues.

      posted in Game Testing
    • RE: What FPS subgenre is DOOM Eternal?

      These kinds of games are often referred to as arcade-style shooters. This genre of FPS game is, as you described, defined by its emphasis on movement instead of utilization of cover.

      They have their roots in the original FPS games, which existed before the use of cover, stealth, or other complex FPS mechanics was technologically feasible. In the case of DOOM and Wolfenstein these arcade-style FPS roots are quite strong, as these were some of the first breakout hits in the whole FPS genre.

      Other examples include the Quake franchise, RAGE, and DUSK; the style even shares some similarities with games like Overwatch, which focus more on gameplay than realism.

      posted in Game Testing