    SOFTWARE-TESTING.COM

    Pearlaqua

    @Pearlaqua

    Reputation: 1
    Posts: 30184
    Profile views: 4
    Followers: 0
    Following: 0

    Best posts made by Pearlaqua

    • RE: What test management tool to manage Cucumber tests?

      I came across Xray on Jira Cloud and found it really useful; I've started using it at our company as a replacement for Zephyr. Xray gives you the opportunity to maintain both manual and automated tests in a single interface. It also supports a REST API, so you can import results from Jenkins and update Jira issues as well.

      https://marketplace.atlassian.com/plugins/com.xpandit.plugins.xray/server/overview

      posted in Test Management

    Latest posts made by Pearlaqua

    • RE: Port forwarding rules with Traefik and Docker.Compose

      I can't tell exactly what you are trying to do; it looks like Traefik and your services might simply not be on the same Docker network.

      It's also helpful to have a domain that resolves to 127.0.0.1 on wildcard queries (localtest.me does this).

      This might be a valuable starting point as a working example.

      networks:
        traefik:
          name: demo_traefik

      services:
        proxy:
          image: traefik:latest
          command: >
            --accesslog=true
            --api.insecure=true
            --log.level=DEBUG
            --providers.docker=true
            --providers.docker.network=demo_traefik
          ports:
            - "80:80"
            - "8080:8080"
          networks:
            - traefik
          volumes:
            - /var/run/docker.sock:/var/run/docker.sock
          labels:
            traefik.http.routers.demo_traefik.rule: Host(`traefik.localtest.me`)
            traefik.http.services.demo_traefik.loadbalancer.server.port: 8080

        example:
          image: nginx
          networks:
            - traefik
          scale: 4
          labels:
            traefik.http.routers.demo_nginx.rule: Host(`example.localtest.me`)
            traefik.http.services.demo_nginx.loadbalancer.server.port: 80

        whoami:
          image: traefik/whoami
          networks:
            - traefik
          labels:
            traefik.http.middlewares.whoami_strip.stripprefix.prefixes: /whoami
            traefik.http.routers.whoami.rule: Host(`example.localtest.me`) && PathPrefix(`/whoami`)
            traefik.http.routers.whoami.middlewares: whoami_strip

      Run docker compose up, then visit http://example.localtest.me/whoami in your browser.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: which path has to be specified in httpGet handler in kubernetes?

      path declares the route on your service that the probe should call, i.e. the service's health-check endpoint.

      When livenessProbe.httpGet is used as a health checker, a health query is required: the kubelet calls a URI and judges the service's health from the returned status code.

      The URI is composed as scheme://podIP:port/path, e.g. http://10.244.171.183:8080/index.html
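      As a concrete illustration, a minimal httpGet liveness probe might be sketched like this (the pod name, image, port, and /index.html path are all illustrative, not from the original question):

      ```yaml
      # The kubelet periodically GETs scheme://podIP:port/path and restarts
      # the container when the response status code indicates failure.
      apiVersion: v1
      kind: Pod
      metadata:
        name: liveness-demo
      spec:
        containers:
        - name: web
          image: nginx
          ports:
          - containerPort: 80
          livenessProbe:
            httpGet:
              path: /index.html   # the "path" part of the probed URI
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
      ```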

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: how to automate helm deployments in github actions

      Got it working with this Helm GitHub Action:

      https://github.com/marketplace/actions/helm-3

      Add the contents of your .kube/config to GitHub as a secret,

      and this workflow step:

      steps:  
            - name: 'check it out'
              uses: actions/checkout@v3
            - name: helm-deploy
              uses: WyriHaximus/github-action-helm3@v2.0
              with:
                exec: helm upgrade logstash /github/workspace/elk/logstash/ --install --wait --atomic --namespace=default --set=app.name=logstash --values=/github/workspace/elk/logstash/values.yaml
                kubeconfig: '${{ secrets.KUBECONFIG }}'
      
      posted in Continuous Integration and Delivery (CI/CD)
    • What is a standard way to quickly deploy a LAMP installation with a virtual host and certbot?

      Say I rent an IaaS environment and want to quickly deploy a LAMP installation with a virtual host and an SSL certificate using Bash alone (no tools such as Ansible).

      Is there some standard way to do so?

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: The best practice to set up cpanel with mongoDB on a cloud server!

      At a minimum you need shell access. Without a shell, you will be working as if it were 20 years ago or more. If you host even one web application, you also need a public IP or, preferably, a domain with HTTPS.

      Using environment variables to avoid hardcoded values is also required.

      The following strategies assume that you have shell access and a public IP or domain.

      Let's not reinvent the wheel.

      Dokku

      Dokku is an open-source platform-as-a-service (PaaS). If you're familiar with Heroku, you can consider Dokku a private Heroku that you manage.

      Basically, after the installation you get a private Git URL; push code in a standard layout to it and the app is deployed.

      • https://blog.back4app.com/what-is-dokku/
      • https://vimeo.com/68631325
      • https://www.freecodecamp.org/news/how-to-build-your-on-heroku-with-dokku/

      I don't know of any other similar open-source platform.

      Buddy

      https://buddy.works/pricing

      On this platform you can automate some tasks; it is like a mini Jenkins.

      Minimal manual implementation

      • Create a GitHub repository
      • Add a Dockerfile to your source code
      • Use environment variables instead of hardcoding database-connection values in the source code
      • Install Docker and Git on the server
      • Start MongoDB using Docker with volumes
      • After testing on localhost, push to GitHub
      • Log in to the cloud server and clone the Git repository
      • Run docker build to generate the image
      • Run docker run -p 80:xyz, where xyz is the port your application listens on
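      The environment-variable step in the list above can be sketched in plain shell like this (the MONGO_URL name and the default connection string are illustrative):

      ```shell
      # Read the database connection string from the environment instead of
      # hardcoding it; fall back to a local default for development runs.
      MONGO_URL="${MONGO_URL:-mongodb://localhost:27017/app}"
      echo "Connecting to ${MONGO_URL}"
      ```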
      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Azure KQL query to display list of VMs which were not patch for since 1 months

      I was able to achieve that with:

      Update
      //|where OSType != "Linux" and UpdateState == "Needed" and Optional == "false"
      | where Classification in ("Security Updates", "Critical Updates") 
      | where PublishedDate 
      posted in Continuous Integration and Delivery (CI/CD)
    • Reuse block string without processing it on the fly

      I have a block of code that works fine at the moment, but reusing that code is a little ugly.

      pipeline {
          agent any
          stages {
              stage('Stage 1') {
                  steps {
                      script {
                          withCredentials([
                              gitUsernamePassword(credentialsId: 'jenkins-credentials', gitToolName: 'Default', usernameVariable: 'GIT_USERNAME', passwordVariable: 'GIT_PASSWORD')
                          ]) {
                              sh '''#!/bin/bash
                              export GIT_USERNAME=${GIT_USERNAME};
                              export GIT_PASSWORD=${GIT_PASSWORD};
                              export PROYECT_DIRECTORY=${PROYECT_DIRECTORY};
                              export CHECKOUT_POINT=${CHECKOUT_POINT};
                              export GIT_HTTPS_REPO_DEPLOY=${GIT_HTTPS_REPO_DEPLOY};
                              export MARIADB_HOSTNAME=${MARIADB_HOSTNAME};
      
                          ./scripts/awesome_script.sh
                          '''
                      }
                  }
              }
          }
          }
      }

      The problem comes when I use if statements to run different bash scripts inside the sh() block. The script ./scripts/awesome_script.sh needs those variables exported, so the code ends up looking like this.

      script {
                          withCredentials([
                              gitUsernamePassword(credentialsId: 'jenkins-credentials', gitToolName: 'Default', usernameVariable: 'GIT_USERNAME', passwordVariable: 'GIT_PASSWORD')
                          ]) {
                              sh '''#!/bin/bash
                              export GIT_USERNAME=${GIT_USERNAME};
                              export GIT_PASSWORD=${GIT_PASSWORD};
                              export PROYECT_DIRECTORY=${PROYECT_DIRECTORY};
                              export CHECKOUT_POINT=${CHECKOUT_POINT};
                              export GIT_HTTPS_REPO_DEPLOY=${GIT_HTTPS_REPO_DEPLOY};
                              export MARIADB_HOSTNAME=${MARIADB_HOSTNAME};
      
                          ./scripts/awesome_script.sh
                          '''
                          if(env.APP_ENV == 'testing'){
                              sh '''#!/bin/bash
                              export GIT_USERNAME=${GIT_USERNAME};
                              export GIT_PASSWORD=${GIT_PASSWORD};
                              export PROYECT_DIRECTORY=${PROYECT_DIRECTORY};
                              export CHECKOUT_POINT=${CHECKOUT_POINT};
                              export GIT_HTTPS_REPO_DEPLOY=${GIT_HTTPS_REPO_DEPLOY};
                              export MARIADB_HOSTNAME=${MARIADB_HOSTNAME};
      
                              ./scripts/awesome_script_2.sh
                              '''
                          }
                      }
                  }
      

      Very ugly, right ? 😕 😞

      What I am looking for is something like this, but with correct syntax:

      script {
                          env.STRING_BLOCK_WITHOUT_PROCESSING = '''
                              export GIT_USERNAME=${GIT_USERNAME};
                              export GIT_PASSWORD=${GIT_PASSWORD};
                              export PROYECT_DIRECTORY=${PROYECT_DIRECTORY};
                              export CHECKOUT_POINT=${CHECKOUT_POINT};
                              export GIT_HTTPS_REPO_DEPLOY=${GIT_HTTPS_REPO_DEPLOY};
                              export MARIADB_HOSTNAME=${MARIADB_HOSTNAME};
                          ''';
      
                      withCredentials([
                          gitUsernamePassword(credentialsId: 'jenkins-credentials', gitToolName: 'Default', usernameVariable: 'GIT_USERNAME', passwordVariable: 'GIT_PASSWORD')
                      ]) {
                          sh '''#!/bin/bash
                          ${STRING_BLOCK_WITHOUT_PROCESSING}
      
                          ./scripts/awesome_script.sh
                          '''
                          if(env.APP_ENV == 'testing'){
                              sh '''#!/bin/bash
                              ${STRING_BLOCK_WITHOUT_PROCESSING}
                              
                              ./scripts/awesome_script_2.sh
                              '''
                          }
                      }
                  }
      

      Thanks for any help you can give me.
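      Outside Jenkins, the underlying reuse idea can be sketched in plain shell: keep the shared exports in one file and source it from every script instead of pasting the block (the file path and values here are illustrative; this is not Jenkins-specific syntax):

      ```shell
      # Write the shared exports once...
      cat > /tmp/shared_env.sh <<'EOF'
      export CHECKOUT_POINT="main"
      export MARIADB_HOSTNAME="db.local"
      EOF

      # ...and source them from each script that needs them.
      . /tmp/shared_env.sh
      echo "${CHECKOUT_POINT} ${MARIADB_HOSTNAME}"
      ```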

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Setup Folder When Setting up Kubernetes Storage

      You can't control the name of the Persistent Volume nor the directory that it creates on disk (making it hard to connect to again if needed).

      I'm not entirely sure what you mean here. When you use a PersistentVolumeClaim (https://kubernetes.io/docs/concepts/storage/persistent-volumes/) to request a volume from Kubernetes, you specify the name of the claim. You don't directly specify the name of the PersistentVolume that will be created for the PersistentVolumeClaim, but that's okay: you mount the volume via the claim.

      That is, if I have:

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: pgsql-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 80Gi
      

      Then I can mount this in a pod like this:

      apiVersion: v1
      kind: Pod
      metadata:
        name: example-pod
      spec:
        containers:
        - name: postgres
          image: docker.io/postgres:14
          volumeMounts:
          - name: pgsql-data
            mountPath: /var/lib/postgresql/data
        volumes:
        - name: pgsql-data
          persistentVolumeClaim:
            claimName: pgsql-data
      

      Kubernetes doesn't care about the folder structure on the volume; once the volume is mounted, your application can create whatever folder structure it wants.
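      As a small illustration of that last point (the mount path is illustrative; in a real pod it would be the volumeMount's mountPath):

      ```shell
      # Pretend MOUNT is where the PVC-backed volume is mounted inside the
      # container; the application creates whatever layout it wants under it.
      MOUNT=/tmp/demo-volume
      mkdir -p "$MOUNT/data" "$MOUNT/logs"
      ls "$MOUNT"
      ```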

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: AWS-Terraform VPC difference between aws_route_table and aws_route

      These are two ways of accomplishing the same thing, with one limitation: you cannot use both methods on the same table. In other words, if you define routes in-line in aws_route_table, you cannot also associate routes created with aws_route. As the Terraform documentation puts it:

      Terraform currently provides both a standalone Route resource and a Route Table resource with routes defined in-line. At this time you cannot use a Route Table with in-line routes in conjunction with any Route resources. Doing so will cause a conflict of rule settings and will overwrite rules.

      Q1. The route block in aws_route_table creates a route inside the route table without needing a separate aws_route resource.

      Q2. Yes, they accomplish the same result.
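      The two styles can be sketched like this (the VPC, gateway, and CIDR values are illustrative); pick one style per table:

      ```terraform
      # Style 1: routes defined in-line in the table.
      # Do not also attach aws_route resources to this table.
      resource "aws_route_table" "inline" {
        vpc_id = aws_vpc.main.id

        route {
          cidr_block = "0.0.0.0/0"
          gateway_id = aws_internet_gateway.main.id
        }
      }

      # Style 2: a table with no in-line routes, plus standalone aws_route resources.
      resource "aws_route_table" "standalone" {
        vpc_id = aws_vpc.main.id
      }

      resource "aws_route" "default" {
        route_table_id         = aws_route_table.standalone.id
        destination_cidr_block = "0.0.0.0/0"
        gateway_id             = aws_internet_gateway.main.id
      }
      ```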

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Server-side Gitlab URL rewriting?

      GitLab Omnibus self-managed installs ship with NGINX, which is exposed and documented here:

      • https://docs.gitlab.com/omnibus/settings/nginx.html#inserting-custom-nginx-settings-into-the-gitlab-server-block

      You basically edit the gitlab.rb file, set nginx['custom_gitlab_server_config'], and then rerun gitlab-ctl reconfigure to regenerate the NGINX configs and restart NGINX.
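      A server-side rewrite could then be sketched like this in gitlab.rb (the rewrite rule itself is an illustrative example, not from the original question):

      ```ruby
      # /etc/gitlab/gitlab.rb — injected into the generated GitLab NGINX server block
      nginx['custom_gitlab_server_config'] = "rewrite ^/old-group/(.*)$ /new-group/$1 permanent;"
      ```

      Run sudo gitlab-ctl reconfigure afterwards to apply it.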

      posted in Continuous Integration and Delivery (CI/CD)