    baileigh (@baileigh)

    1 Reputation · 30018 Posts · 1 Profile views · 0 Followers · 0 Following
    Best posts made by baileigh

    • RE: JUnit + Maven. How to do something before and after all tests?

      In general, this solution is useful to anyone who wants to run their tests consistently and in a fixed order.

      It creates a suite; in the annotation we list the classes in the order they should execute:

      import org.junit.runner.RunWith;
      import org.junit.runners.Suite;

      @RunWith(Suite.class)
      @Suite.SuiteClasses({OpenConnection.class,
                           GetServerIdTest.class,
                           ModbusStatusCodesTest.class,
                           ModbusSerialTransactionTest.class,
                           CloseConnection.class})
      public class OrderedTestSuite {}
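
      The OpenConnection and CloseConnection classes at either end of the suite are ordinary JUnit 4 test classes. A minimal sketch of what they might look like (their bodies are only an assumption for illustration, and each class goes in its own file):

      import org.junit.Test;

      // Hedged sketch: runs first in the suite and does the shared setup.
      public class OpenConnection {
          @Test
          public void open() {
              // e.g. open the Modbus connection used by the other test classes
          }
      }

      // Hedged sketch: runs last in the suite and does the shared teardown.
      public class CloseConnection {
          @Test
          public void close() {
              // e.g. close the shared Modbus connection
          }
      }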
      

      Add the plugin to pom.xml:

      <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-surefire-plugin</artifactId>
          <configuration>
              <includes>
                  <include>**/OrderedTestSuite.class</include>
              </includes>
          </configuration>
      </plugin>
      
      posted in Automated Testing

    Latest posts made by baileigh

    • RE: Is there a way to exclusively manage multiple ssh keys with differing per-key options using ansible?

      One approach would be to use multiple authorized keys files, e.g. authorized_keys2 (https://serverfault.com/questions/116177/whats-the-difference-between-authorized-keys-and-authorized-keys2); there's no reason you can't use it, and you can specify multiple authorized keys files in sshd_config:

      AuthorizedKeysFile
      
       Specifies the file that contains the public keys used for
       user authentication.  The format is described in the
       AUTHORIZED_KEYS FILE FORMAT section of sshd(8).  Arguments
       to AuthorizedKeysFile accept the tokens described in the
       TOKENS section.  After expansion, AuthorizedKeysFile is
       taken to be an absolute path or one relative to the user's
       home directory.  Multiple files may be listed, separated by
       whitespace.  Alternately this option may be set to none to
       skip checking for user keys in files.  The default is
       ".ssh/authorized_keys .ssh/authorized_keys2".
      

      This might not be the optimal solution to your problem, but otherwise I think you're going to have to hack on Ansible yourself to get it to do what you want.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How do I list pods sorted by label version in Kubernetes?

      I've figured it out.

      Labels in K8s are for filtering, not sorting.

      I've just added a version field outside the labels and it worked.
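
      In other words, filter with a label selector and sort on a regular resource field; a rough example (the label and field here are illustrative, not from my actual manifests):

      # Hedged sketch: -l filters by label, --sort-by orders by a resource field.
      kubectl get pods -l app=myapp --sort-by=.metadata.creationTimestamp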

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How does Github Actions work with docker containers?

      Yes, your understanding is correct. The full documentation is here: https://docs.github.com/en/actions/using-jobs/running-jobs-in-a-container
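
      For quick reference, a minimal workflow where a job's steps run inside a container (the image and commands are just examples):

      name: container-demo
      on: push

      jobs:
        test:
          runs-on: ubuntu-latest
          container:
            image: node:18          # every step below runs inside this image
          steps:
            - uses: actions/checkout@v4
            - run: node --version   # executes in the container, not on the runner VM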

      posted in Continuous Integration and Delivery (CI/CD)
    • What is manual, what is automatic in Continuous Delivery?

      I've read lots of articles about the concept on the internet. I thought I got it, but some statements in other articles have confused me.

      To simplify and clarify things, I'll assume I use Git as the VCS, I have only a master branch (plus feature branches based on it) and only a production environment, and it is a Node.js project.

      Continuous Deployment is clear. Everything will be released automatically.

      Continuous Integration is about making sure that changes don't break the code base, i.e. master. Once a PR is created from a feature branch to master, my CI tool runs tests, linting, style checks, etc., and the PR can be merged once the pipeline passes.

      My first question: is it considered part of "Continuous Integration" to create a deliverable (a JS bundle, a Docker image, etc.) and push it to a registry once the PR is merged to master? My own answer is no, it should be part of "Continuous Delivery". Is that correct?
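
      To make that concrete, this is roughly the split I have in mind, sketched as a single pipeline (GitHub Actions is only an assumption here, since I haven't named a CI tool, and acme/app is a made-up image name):

      # Hedged sketch of the split in question, not an actual pipeline.
      name: ci-cd
      on:
        pull_request:            # CI: validate every PR targeting master
        push:
          branches: [master]     # delivery(?): build and publish after merge

      jobs:
        ci:
          runs-on: ubuntu-latest
          steps:
            - uses: actions/checkout@v4
            - run: npm ci
            - run: npm run lint && npm test

        delivery:
          if: github.event_name == 'push'
          needs: ci
          runs-on: ubuntu-latest
          steps:
            - uses: actions/checkout@v4
            - run: docker build -t acme/app:${{ github.sha }} .
            - run: docker push acme/app:${{ github.sha }}    # docker login omitted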

      My second question: do I have to have at least one staging/test environment (and another branch corresponding to that environment) to implement Continuous Delivery? I'm asking because I've read some articles implying that, e.g. https://aws.amazon.com/devops/continuous-integration/ and https://www.atlassian.com/continuous-delivery/principles/continuous-integration-vs-delivery-vs-deployment

      If I create a staging/test environment, will the meaning of Continuous Delivery change? The same article implies that deploying to a test environment is done automatically in Continuous Delivery.

      I'm sure there are different implementations of and approaches to CI/CD. To summarize: I'm confused about what exactly is done automatically and what is done manually in Continuous Delivery.

      Thanks in advance.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: TeamCity run step in docker

      Two things I would try:

      1. Add "Build" Step before testing:

        Runner Type: .NET.

        Command: build.

        It may also be necessary to add another build step (or configuration) that restores the project before building.

      2. Execute this build step in "verbose" mode to get more logging information (rough dotnet CLI equivalents of these steps are sketched below).
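
      For reference, rough dotnet CLI equivalents of those steps, which can help reproduce the behaviour outside TeamCity (the solution path is a placeholder):

      # Hedged sketch: local equivalents of the suggested restore/build/test steps.
      dotnet restore MySolution.sln
      dotnet build MySolution.sln --no-restore --verbosity detailed
      dotnet test MySolution.sln --no-build --verbosity detailed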

      posted in Continuous Integration and Delivery (CI/CD)
    • How to write bash or shell script in SSM run command and execute in linux ec2 instance?

      I want to run a bash or shell script on a Linux EC2 instance using the SSM Run Command, but I don't want to write the script on the Linux server first and then execute it remotely via SSM; I want to write the script in the Run Command itself.

      Is there any way to run such a script from the AWS console? And apart from Run Command, is there any other way to run the whole script?

      I am referring to a multi-line script, not just a single command.
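
      For context, this is the kind of thing I mean: a hedged sketch using the AWS CLI and the AWS-RunShellScript document (the instance ID and the commands are placeholders):

      # Hedged sketch: the whole multi-line script is passed inline to Run Command.
      aws ssm send-command \
        --document-name "AWS-RunShellScript" \
        --instance-ids "i-0123456789abcdef0" \
        --parameters 'commands=["set -e","echo starting","df -h","uptime"]'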

      posted in Continuous Integration and Delivery (CI/CD)
    • Server-side Gitlab URL rewriting?

      Our old Bitbucket server had URLs like this:

      • https://bitbucket.acme.net/scm/acmegroup/acme-project.git

      We want to move to GitLab with a DNS switch, but GitLab has a different URL convention, namely:

      • https://gitlab.acme.net/acmegroup/acme-project.git

      The /scm/ part is what I'd like to fix. Is there a way to rewrite URLs through configuration of the GitLab server such that

      • https://gitlab.acme.net/scm/acmegroup/acme-project.git

      redirects to

      • https://gitlab.acme.net/acmegroup/acme-project.git

      Note: I know about the "Custom Git clone URL for HTTP(S)" and "Replaces the clone URL root" settings under GitLab's admin/application_settings/general.
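
      For what it's worth, with an Omnibus install I imagine a custom NGINX rewrite injected through gitlab.rb might do it; an unverified sketch using the example paths above:

      # Unverified sketch for /etc/gitlab/gitlab.rb (Omnibus install assumed):
      # inject a rewrite into GitLab's NGINX server block so /scm/<group>/<repo>
      # redirects to /<group>/<repo>. Run `gitlab-ctl reconfigure` afterwards.
      nginx['custom_gitlab_server_config'] = "rewrite ^/scm/(.*)$ /$1 permanent;\n"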

      posted in Continuous Integration and Delivery (CI/CD)
    • bitbucket pipeline to push commits to another repo

      So this is the scenario I have right now: we have a repo (boilerplate) that contains our infrastructure code, Dockerfiles, and pipeline scripts, and each time we create a new project we copy this repo into a new repo.

      The issue is that when we make a change in the boilerplate repo, we have to make the same change manually in all our projects, and I'd like to automate this process.

      Is it possible to create a Git hook or a Bitbucket pipeline to automate this?
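
      Roughly, I'm imagining something like this in the boilerplate repo's bitbucket-pipelines.yml (the downstream repo URL and branch name are placeholders, and authentication/SSH keys are omitted):

      # Hedged sketch: on each push to master in the boilerplate repo, push the
      # change to a downstream project repo on a sync branch.
      pipelines:
        branches:
          master:
            - step:
                name: Propagate boilerplate changes
                script:
                  - git remote add downstream git@bitbucket.org:acme/project-a.git
                  - git push downstream HEAD:boilerplate-sync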

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: docker image push does not work

      To push, you need write access to the destination repository. With Docker Hub, those repositories must be under your userid, or in an organization where you have write access. The default path you see added, docker.io/library, is used by official images, which makes pulling official images easier. But since you aren't the author of docker official images, that won't work for pushing.

      That means you need to:

      docker build -t your-user/your-image:v1 .
      docker push your-user/your-image:v1
      

      Where your-user is the same user you used when running docker login.
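
      If the image was already built under a different name, you can retag it instead of rebuilding, for example:

      docker tag your-image:v1 your-user/your-image:v1
      docker push your-user/your-image:v1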

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Apex domain to point to an Openshift ROSA application

      AWS support finally helped; it was kind of blurry between RH and Route53. The trick is to find your endpoint in the Hosted Zone managed by OpenShift, in our case:

      *.example-com.test.plvo.p1.openshiftapps.com 
      

      Note the Elastic Load Balancer it points to (the "Value/Route traffic to" column). Then go back to the Hosted Zone in question and add an A record as an alias to the same ELB in its respective region (an equivalent AWS CLI call is sketched after the list):

      • record name: example.com
      • record type: A
      • route traffic to: [x] Alias
      • Alias to Application and Classic Load Balancer
      • [region of your ROSA cluster]
      • [dualstack.the-ID-of-your-ELB.elb.amazonaws.com.]
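
      The same record via the AWS CLI would look roughly like this (the hosted zone IDs are placeholders; note the alias target needs the ELB's own hosted zone ID, not your domain's):

      # Hedged sketch: create/update the apex ALIAS record from the CLI.
      aws route53 change-resource-record-sets \
        --hosted-zone-id YOUR_DOMAIN_ZONE_ID \
        --change-batch '{
          "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
              "Name": "example.com",
              "Type": "A",
              "AliasTarget": {
                "HostedZoneId": "ELB_HOSTED_ZONE_ID",
                "DNSName": "dualstack.the-ID-of-your-ELB.elb.amazonaws.com.",
                "EvaluateTargetHealth": false
              }
            }
          }]
        }'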

      After about 60 seconds it was already propagated.

      Thanks, AWS!

      posted in Continuous Integration and Delivery (CI/CD)