    SOFTWARE TESTING

    fbio

    @fbio

    Reputation: 1 · Posts: 29359 · Profile views: 3 · Followers: 0 · Following: 0

    Best posts made by fbio

    • RE: Mock test automation with Wiremock + Xray + Jenkins possible?

      TL;DR: Jenkins supports pipelines from code. Define your build, test, and deploy pipeline in code and you are not dependent on a specific CD tool. If you can do it from the command line and with APIs, you can combine and chain any tool or test step. Use Pipelines as Code!

      Is it possible to automate this complete process?

      Probably, but not out of the box with a few clicks in a UI. Worst case, you need to write some custom code or shell/bat scripts to tie everything together in Jenkins. This makes Jenkins very flexible: if you can do it on the command line, you can do it in Jenkins.

      • Example of running sh/bat scripts in Jenkins: https://www.jenkins.io/doc/pipeline/tour/running-multiple-steps/
      • Both XRay and Jira have an API, which you can call from your command line scripts, with curl for example. Check out the XRay API Import Execution Results REST documentation.
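As a hedged sketch of such a call: the endpoint below is the Xray Cloud JUnit import route from the Import Execution Results docs; the token variable and project key are placeholders, and Xray Server/DC uses a different base URL and authentication scheme, so verify against the documentation for your deployment.

```shell
# Sketch: push JUnit test results to Xray Cloud via its REST API.
# $XRAY_TOKEN and projectKey=PROJ are placeholders, not real values.
curl -f -X POST \
  -H "Authorization: Bearer $XRAY_TOKEN" \
  -H "Content-Type: text/xml" \
  --data @test-results.xml \
  "https://xray.cloud.getxray.app/api/v2/import/execution/junit?projectKey=PROJ"
```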

      Can this also be done via Python?

      Same as above: if you can do it on the command line, you can do it in Python with subprocess. Postman also has a command-line interface (Newman); check out the Newman examples: https://learning.postman.com/docs/running-collections/using-newman-cli/command-line-integration-with-newman/
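A minimal sketch of that idea in Python: wrap each command-line step in subprocess and chain them. The commented-out chain (Newman, curl to Xray) is illustrative only; those commands, flags, and the `xray_url` variable are assumptions to adapt to your setup.

```python
import subprocess

def run_step(cmd):
    """Run one pipeline step; fail loudly on a non-zero exit code."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"step {cmd[0]!r} failed: {result.stderr}")
    return result.stdout

# Illustrative chain (commands and flags are assumptions, adjust to your setup):
# run_step(["newman", "run", "collection.json", "--reporters", "junit"])
# run_step(["curl", "-f", "-X", "POST", "--data", "@results.xml", xray_url])
```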

      Is this only possible via JUnit?

      No, I don't see what JUnit has to do with it; the tools you mention don't need the JUnit test runner to run. You may, however, want to use the JUnit test-result output format with the standard Jenkins steps. XRay supports JUnit reports, as does Postman, so that can make importing results easier. You could also take the Postman JSON output and rewrite it into an XRay JSON import, or parse the Postman JSON with code and call the XRay API.
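Tying this together, a minimal declarative Jenkinsfile might look like the following sketch. The stage names, commands, results path, and the `$XRAY_TOKEN` / `$XRAY_IMPORT_URL` variables are assumptions for illustration, not a tested pipeline.

```groovy
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                // Run the API tests from the command line (illustrative Newman call).
                sh 'newman run collection.json --reporters junit --reporter-junit-export results.xml'
            }
        }
        stage('Report') {
            steps {
                // Push the JUnit results to XRay via its REST API (placeholder URL/token).
                sh 'curl -f -X POST -H "Authorization: Bearer $XRAY_TOKEN" --data @results.xml "$XRAY_IMPORT_URL"'
            }
        }
    }
    post {
        always {
            junit 'results.xml' // standard Jenkins JUnit results step
        }
    }
}
```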

      Other reads:

      • https://www.thoughtworks.com/radar/techniques/pipelines-as-code
      posted in Automated Testing

    Latest posts made by fbio

    • How to lock a user using ansible?

      Given a username: if that user exists, lock it; otherwise, leave it absent.

      The most likely module is ansible.builtin.user. Passing it password: "*" and shell: "/usr/sbin/nologin" mostly achieves the lock behavior, but it also creates the user if it is missing. The state property only has the possible values present and absent, neither of which describes the desired behavior.

      One can gather a fact about the user's presence using ansible.builtin.getent and then conditionally use ansible.builtin.user. Is there a better way?
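The getent-then-conditional approach described above might be sketched like this. The `target_user` variable is hypothetical, and the exact shape of the resulting `getent_passwd` fact (and whether a missing key maps to `None`) is worth verifying against the module docs.

```yaml
# Sketch: lock the account only if it already exists.
- name: Check whether the user exists
  ansible.builtin.getent:
    database: passwd
    key: "{{ target_user }}"   # hypothetical variable
    fail_key: false            # don't fail the task when the user is missing

- name: Lock the existing user
  ansible.builtin.user:
    name: "{{ target_user }}"
    password: "*"
    shell: /usr/sbin/nologin
  when: getent_passwd[target_user] is defined and getent_passwd[target_user] is not none
```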

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Share DNS name between two k8s services deployed in aws

      While I don't have extensive experience with it, I believe Istio and other service meshes are built for this (and more complex things).

      https://www.alibabacloud.com/blog/597011

      Looks like it has a "global" DNS name format for inter-cluster routing in addition to the local format.

      Services in the same Kubernetes cluster have the same DNS suffix, for example, svc.cluster.local. Kubernetes DNS provides the DNS resolution capability for these services. To deploy a similar configuration for services in the remote cluster, we name the services in the remote cluster in the .global format.

      The Istio installation package includes a CoreDNS server that provides the DNS resolution capability for these services. To use the DNS resolution capability, you need to direct the DNS service of Kubernetes to the CoreDNS service. Then, the CoreDNS service will act as the DNS server for the .global DNS domain.

      posted in Continuous Integration and Delivery (CI/CD)
    • How do I provide a config file (.env) when starting a container?

      I'm trying to install a GUI on the Bitcoin Lightning Network Testnet. I have two Lightning nodes running and connected together, both in containers.

      Now I would like to have a GUI interface to manage my nodes. I used this image: https://hub.docker.com/r/apotdevin/thunderhub

      I can start the container and access it on my local computer on port 3000.

      My question is: how do I configure Thunderhub to connect to my nodes?

      If I follow the instructions on the Thunderhub home page, they refer me to this page:

      https://docs.thunderhub.io/setup/#server-accounts

      But the instructions are:

      You can add accounts on the server by adding this parameter to the .env file:
      .....
      

      How should I proceed from there? Does it mean I must rebuild the image? How do I provide a .env config file when starting a container?
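For reference, Docker can supply such a file at container start without rebuilding the image. A hedged sketch of the two common options; the in-container path `/app/.env` is an assumption, so check the Thunderhub docs for where it actually reads its config.

```shell
# 1) Inject each KEY=VALUE line of ./.env as an environment variable:
docker run -d -p 3000:3000 --env-file ./.env apotdevin/thunderhub

# 2) Bind-mount the file so the app can read it from disk
#    (the in-container path here is an assumption):
docker run -d -p 3000:3000 -v "$PWD/.env:/app/.env" apotdevin/thunderhub
```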

      posted in Continuous Integration and Delivery (CI/CD)
    • How to understand and resolve Jenkin job failure - Angular 13 app?

      I'm new to the Jenkins environment.

      I was asked to upgrade an Angular 7.x app, built 3 to 5 years ago, to Angular 13. The upgrade itself is done; now I want to deploy my UI changes through Jenkins, but the build fails. I've been trying to resolve the issue for 4 days without success.

      Could someone help me pinpoint the exact issue I'm facing and what needs to be done to resolve the build failure?

      package.json

      {
        "name": "@xxx-some-theme/this-is-my-angular-app-ui",
        "version": "4.1.0",
        "description": "Angular Frontend UI for Fraud Center APP TCC Limits",
        "scripts": {
          "ng": "ng",
          "build": "ng build",
          "build:watch": "ng build --watch",
          "test:ci": "ng test --browsers=ChromeHeadless --watch=false",
          "lint": "eslint --ext .js,.ts src",
          "jenkins": "run-s lint test:jenkins build:jenkins",
          "prebuild": "npm run lint",
          "pretest": "npm run lint",
          "build:jenkins": "ng build --prod --aot --build-optimizer --optimization --progress false",
          "test:jenkins": "ng test --code-coverage --watch false --progress false",
          "postbuild": "save-build-version",
          "postjenkins": "save-build-version"
        },
        "files": [
          "/dist",
          "nginx.conf",
          "mime.types"
        ],
        "dependencies": {
          "@angular/animations": "13.3.11",
          "@angular/cdk": "7.3.5",
          "@angular/common": "13.3.11",
          "@angular/compiler": "13.3.11",
          "@angular/core": "13.3.11",
          "@angular/forms": "13.3.11",
          "@angular/http": "7.2.15",
          "@angular/platform-browser": "13.3.11",
          "@angular/platform-browser-dynamic": "13.3.11",
          "@angular/router": "13.3.11",
          "@ngx-translate/core": "11.0.1",
          "@ngx-translate/http-loader": "4.0.0",
          "cerialize": "0.1.18",
          "chart.js": "3.8.0",
          "core-js": "2.5.3",
          "font-awesome": "4.7.0",
          "lodash": "4.17.5",
          "moment": "2.24.0",
          "moment-timezone": "0.5.23",
          "ngx-bootstrap": "5.5.0",
          "ngx-cookie": "2.0.1",
          "ngx-toastr": "8.2.1",
          "primeicons": "1.0.0",
          "primeng": "9.1.3",
          "quill": "1.3.7",
          "rxjs": "6.5.2",
          "rxjs-compat": "6.6.7",
          "tslib": "^2.4.0",
          "zone.js": "~0.11.5"
        },
        "devDependencies": {
          "@angular-devkit/build-angular": "13.3.8",
          "@angular-eslint/builder": "13.5.0",
          "@angular-eslint/eslint-plugin": "13.5.0",
          "@angular-eslint/eslint-plugin-template": "13.5.0",
          "@angular-eslint/schematics": "13.5.0",
          "@angular-eslint/template-parser": "13.5.0",
          "@angular/cli": "13.3.8",
          "@angular/compiler-cli": "13.3.11",
          "@angular/language-service": "13.3.11",
          "@types/jasmine": "3.3.12",
          "@types/jasminewd2": "2.0.6",
          "@types/lodash": "4.14.123",
          "@types/moment-timezone": "0.5.12",
          "@types/node": "11.11.6",
          "@typescript-eslint/eslint-plugin": "5.27.1",
          "@typescript-eslint/parser": "5.27.1",
          "codelyzer": "4.5.0",
          "eslint": "^8.17.0",
          "jasmine-core": "3.3.0",
          "jasmine-spec-reporter": "4.2.1",
          "karma": "6.4.0",
          "karma-chrome-launcher": "2.2.0",
          "karma-coverage": "2.2.0",
          "karma-coverage-istanbul-reporter": "2.0.5",
          "karma-jasmine": "2.0.1",
          "karma-jasmine-html-reporter": "1.4.0",
          "karma-phantomjs-launcher": "1.0.4",
          "lodash-es": "4.17.21",
          "npm-run-all": "4.1.5",
          "sass": "1.22.9",
          "sonar-scanner": "3.1.0",
          "ts-node": "8.0.3",
          "tsconfig-paths": "4.0.0",
          "tslib": "2.4.0",
          "tslint": "6.1.3",
          "tslint-sonarts": "1.9.0",
          "typescript": "4.6.4"
        }
      }
      

      karma.config.js

      module.exports = function (config) {
        config.set({
          basePath: '',
          frameworks: ['jasmine', '@angular-devkit/build-angular'],
          plugins: [
            require('karma-jasmine'),
            require('karma-phantomjs-launcher'),
            require('karma-jasmine-html-reporter'),
            require('karma-coverage-istanbul-reporter'),
            require('@angular-devkit/build-angular/plugins/karma')
          ],
          client:{
            clearContext: false, // leave Jasmine Spec Runner output visible in browser
          },
          coverageIstanbulReporter: {
            dir: require('path').join(__dirname, '../coverage/ngx-this-is-my-app'),
            reports: ['html', 'lcovonly', 'text-summary'],
            fixWebpackSourcePaths: true,
            thresholds: {
              global: {
                statements: 80,
                lines: 80,
                branches: 80,
                functions: 80,
              },
            }
          },
          reporters: ['progress', 'kjhtml'],
          port: 9876,
          colors: true,
          logLevel: config.LOG_INFO,
          autoWatch: true,
          browsers: ['PhantomJS', 'ChromeHeadless'],
          singleRun: false,
          restartOnFileChange: true,
          customLaunchers: {
            ChromeHeadless: {
              base: "Chrome",
              flags: [
                "--headless",
                "--disable-gpu",
                "--no-sandbox",
                "--remote-debugging-port=9222"
              ],
            }
          },
        });
      };
      

      Error (part 1)

      Key: srcDir Value: null
      [Pipeline] echo
      Key: nodeVersion Value: 14
      [Pipeline] echo
      Key: npmCommand Value: npm run jenkins
      [Pipeline] echo
      Key: preScript Value: [npm ci]
      [Pipeline] echo
      Key: postScript Value: null
      [Pipeline] echo
      Key: phantomjs Value: true
      [Pipeline] deleteDir
      [Pipeline] unstash
      [Pipeline] echo
      NODE PATH: /SOME-LOCATION/javascript/node-v14.17.3-linux-x64
      [Pipeline] tool
      [Pipeline] sh
      + curl -f -o phantomjs-2.1.1.zip https://artifacts.TEST-COMPANY.int/artifactory/maven-external-release/phantomjs/phantomjs/2.1.1/phantomjs-2.1.1.zip
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
      

      0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
      43 24.9M 43 10.8M 0 0 9277k 0 0:00:02 0:00:01 0:00:01 9277k
      100 24.9M 100 24.9M 0 0 17.6M 0 0:00:01 0:00:01 --:--:-- 17.6M
      [Pipeline] sh

      + unzip -o phantomjs-2.1.1.zip
      Archive: phantomjs-2.1.1.zip
        inflating: phantomjs-static
      [Pipeline] sh
      + mkdir -p bin
      [Pipeline] sh
      + mv phantomjs-static bin/phantomjs
      [Pipeline] withEnv
      [Pipeline] {
      [Pipeline] sh
      + npm ci

      nice-napi@1.0.2 install /SOME-LOCATION/jenkins/workspace/we-dont-need-this/PLEASE/IGNORE/THIS/node_modules/nice-napi
      node-gyp-build

      gyp WARN install got an error, rolling back install
      gyp ERR! configure error
      gyp ERR! stack Error: connect ETIMEDOUT dummy-ip-address:443
      gyp ERR! stack at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1148:16)
      gyp ERR! System Linux 3.10.0-1160.59.1.el7.x86_64
      gyp ERR! command "/SOME-LOCATION/javascript/node-v14.17.3-linux-x64/bin/node" "/SOME-LOCATION/javascript/node-v14.17.3-linux-x64/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
      gyp ERR! cwd /SOME-LOCATION/jenkins/workspace/we-dont-need-this/PLEASE/IGNORE/THIS/node_modules/nice-napi
      gyp ERR! node -v v14.17.3
      gyp ERR! node-gyp -v v5.1.0
      gyp ERR! not ok

      Error (part 2)

      - primeng/spinner [es2015/esm2015] (https://github.com/primefaces/primeng)
      - primeng/splitbutton [es2015/esm2015] (https://github.com/primefaces/primeng)
      - primeng/steps [es2015/esm2015] (https://github.com/primefaces/primeng)
      - primeng/table [es2015/esm2015] (https://github.com/primefaces/primeng)
      - primeng/tabmenu [es2015/esm2015] (https://github.com/primefaces/primeng)
      - primeng/tabview [es2015/esm2015] (https://github.com/primefaces/primeng)
      - primeng/terminal [es2015/esm2015] (https://github.com/primefaces/primeng)
      - primeng/tieredmenu [es2015/esm2015] (https://github.com/primefaces/primeng)
      - primeng/toast [es2015/esm2015] (https://github.com/primefaces/primeng)
      - primeng/togglebutton [es2015/esm2015] (https://github.com/primefaces/primeng)
      - primeng/toolbar [es2015/esm2015] (https://github.com/primefaces/primeng)
      - primeng/treetable [es2015/esm2015] (https://github.com/primefaces/primeng)
      - primeng/tree [es2015/esm2015] (https://github.com/primefaces/primeng)
      - primeng/tristatecheckbox [es2015/esm2015] (https://github.com/primefaces/primeng)
      - primeng/virtualscroller [es2015/esm2015] (https://github.com/primefaces/primeng)
      - @another-custom/lib-common [es2015/esm2015] (git+https://globalrepository.company-local.int/stash/scm/app-name/ngx-demo.git)
      - @another-custom-2/lib-common [es2015/esm2015] (git+https://globalrepository.company-local.int/stash/scm/app-name/ngx-demo-utils.git)
      - @another-custom-3/lmodel-data [es2015/esm2015] (git+https://globalrepository.company-local.int/stash/scm/app-name/cc-demo-data-model.git)
      - primeng [es2015/esm2015] (https://github.com/primefaces/primeng)
      Encourage the library authors to publish an Ivy distribution.
      09 08 2022 01:13:36.349:INFO [karma-server]: Karma v6.4.0 server started at http://localhost:9876/
      09 08 2022 01:13:36.352:INFO [launcher]: Launching browsers Chrome with concurrency unlimited
      09 08 2022 01:13:36.357:INFO [launcher]: Starting browser Chrome
      09 08 2022 01:13:36.358:ERROR [launcher]: No binary for Chrome browser on your platform.
        Please, set "CHROME_BIN" env variable.
      npm ERR! code ELIFECYCLE
      npm ERR! errno 1
      npm ERR! CUSTOM/LIBRARY/WE/CAN/IGNORE/THIS-ui@4.1.0 test:jenkins: `ng test --code-coverage --watch false --progress false`
      npm ERR! Exit status 1
      npm ERR! 
      npm ERR! Failed at the CUSTOM/LIBRARY/WE/CAN/IGNORE/THIS-ui@4.1.0 test:jenkins script.
      npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
      

      npm ERR! A complete log of this run can be found in:
      npm ERR! SOME-LOCATION/jenkins/workspace/WE/CAN/IGNORE/THIS/nodejs_data/.npm/_logs/2022-08-09T06_13_36_812Z-debug.log
      ERROR: "test:jenkins" exited with 1.
      npm ERR! code ELIFECYCLE
      npm ERR! errno 1
      npm ERR! CUSTOM/LIBRARY/WE/CAN/IGNORE/THIS-ui@4.1.0 jenkins: run-s lint test:jenkins build:jenkins
      npm ERR! Exit status 1
      npm ERR!
      npm ERR! Failed at the CUSTOM/LIBRARY/WE/CAN/IGNORE/THIS-ui@4.1.0 jenkins script.
      npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

      npm ERR! A complete log of this run can be found in:
      npm ERR! SOME-LOCATION/jenkins/workspace/WE/CAN/IGNORE/THIS/nodejs_data/.npm/_logs/2022-08-09T06_13_36_957Z-debug.log
      [Pipeline] }
      [Pipeline] // withEnv
      [Pipeline] echo
      Error during processing of stage BuildNodeApplication caused by script returned exit code 1
      [Pipeline] }
      [Pipeline] // node
      [Pipeline] }
      [Pipeline] // stage
      [Pipeline] echo
      hudson.AbortException: script returned exit code 1

      What am I doing wrong? What needs to be done here?

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Limit and request declaration

      It is not, in the case you mention.

      The only reason you have to add quotes around values is when there's room for confusion when converting your YAML into JSON.

      In your sample, you probably noticed that both versions work; leaving out the quotes doesn't produce an error.

      However, the following would be invalid:

        resources:
          limits:
            cpu: 1
            memory: 300Mi
      

      If you try to create a workload with resource limits such as the above, you get an error: object is invalid. To fix it, you have to quote your cpu limit, such as:

        resources:
          limits:
            cpu: "1"
            memory: 300Mi
      

      This has to do with your client converting the YAML into JSON before posting it to the API. A JSON that says "resources": { "limits": { "cpu": 1 } } is valid on paper, although from the server's perspective you're sending an integer where a string is expected (per the podSpec resources schema).

      So: no, quotes around resource requests or limits are not needed most of the time; the exception is when your requested value could be parsed as an integer.
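To illustrate the cast, here is what the two YAML forms become once a client serializes them to JSON, sketched with Python's standard json module: the unquoted form yields a JSON number for cpu, the quoted form a JSON string.

```python
import json

# Unquoted YAML scalar: the API receives a JSON number for cpu.
unquoted = json.dumps({"resources": {"limits": {"cpu": 1, "memory": "300Mi"}}})

# Quoted YAML scalar: the API receives a JSON string, matching the schema.
quoted = json.dumps({"resources": {"limits": {"cpu": "1", "memory": "300Mi"}}})
```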

      posted in Continuous Integration and Delivery (CI/CD)
    • integrate sonarqube with kubernetes

      I am trying to integrate SonarQube with Kubernetes. My SonarQube service is up and running when I do "kubectl get services", but I am unable to fetch the portal access URL. For the deployment I used the SonarQube image from Docker Hub: https://hub.docker.com/_/sonarqube (screenshot: https://i.stack.imgur.com/f4S8V.png)

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How can I get an installation of k3s working with KubeApps?

      Installing k3s and kubeapps

      1. Install k3s, ( https://rancher.com/docs/k3s/latest/en/installation/ )

        curl -sfL https://get.k3s.io | sh -
        
      2. Add KUBECONFIG for user. ( https://devops.stackexchange.com/q/16043/18965 )

        export KUBECONFIG=~/.kube/config
        mkdir ~/.kube 2> /dev/null
        sudo k3s kubectl config view --raw > "$KUBECONFIG"
        chmod 600 "$KUBECONFIG"
        
      3. Install Helm, ( https://helm.sh/docs/intro/install/ )

        curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
        
      4. Setup Kubeapps, ( https://kubeapps.dev/ )

        1. Install Kubeapps

          helm repo add bitnami https://charts.bitnami.com/bitnami
          helm repo update
          helm install -n kubeapps --create-namespace kubeapps bitnami/kubeapps
          
        2. Configure Kubeapps

          kubectl create --namespace default serviceaccount kubeapps-operator
          

          kubectl create clusterrolebinding kubeapps-operator --clusterrole=cluster-admin --serviceaccount=default:kubeapps-operator

          cat <<EOF | kubectl apply -f -
          apiVersion: v1
          kind: Secret
          metadata:
            name: kubeapps-operator-token
            namespace: default
            annotations:
              kubernetes.io/service-account.name: kubeapps-operator
          type: kubernetes.io/service-account-token
          EOF

          kubectl get --namespace default secret kubeapps-operator-token -o jsonpath='{.data.token}' -o go-template='{{.data.token | base64decode}}' && echo

        3. Play (note we have a slight modification with https://devops.stackexchange.com/a/16106/18965 )

          kubectl port-forward -n kubeapps --address 0.0.0.0 svc/kubeapps 8080:80
          

      Removal

      sudo k3s-uninstall.sh
      

      Remove Helm and its caches

      sudo rm -rf /usr/local/bin/helm ~/.cache/helm /root/.cache/helm

      Remove logs

      sudo rm -rf /var/log/pods/

      Your own cluster config/keys/kubeconfig

      sudo rm -rf ~/.kube

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: bitbucket pipeline to push commits to another repo

      Don't know if you have found an answer already, but yes, it is possible.

      There are many possible solutions for the automation here, but one of the obvious ones is to create automation code that updates your repositories as needed (for example, using Terraform with a Git provider) and then run that Terraform code in a Bitbucket pipeline on your boilerplate repo.

      In this simple scenario, you need to get the list of all repositories you need to update, by reading it from a file, an API, or anything else.

      posted in Continuous Integration and Delivery (CI/CD)
    • Setup multiple Raspberry Pi different network configurations

      For multiple projects we are setting up 2 to 6 RPis per project. Each project runs on a different network; for some of these networks we can use DHCP, for others the RPis have to connect using a static IP address.

      Also, every RPi needs a different hostname in sequential order. For instance, for a project with three RPis, the hostnames need to be PROCESS1, PROCESS2, PROCESS3.

      All RPi's run the exact same software and every RPi has a single monitor connected.

      The current workflow is:

      1. Burn a prepared OS image for every RPi on a microSD
      2. Boot up the RPi using the microSD and manually set the hostname
      3. If needed, set the IP address to a static one
      4. Turn on the overlay file system so the system is read-only
      5. Repeat for the number of RPis needed

      This is a tedious, error-prone task that takes a lot of time. Step 1 cannot be automated, since we don't have the hardware (although the EtcherPro would eventually save a lot of time). Automating steps 3 and 4 would be great if that can be realized. I've read a bit about Ansible and also encountered Chef and Puppet, but I haven't dived into any of them yet.

      Is using Ansible with playbooks a good direction to develop further? Would it be possible to automate the setup using this tool? Is there a better or easier solution, or would it not be possible at all? If someone has tips or can point me in the right direction, that would be great.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Kubernetes Failing with Self Signed Docker Registry Certificate

      Finally got it working. I had to install the root certificate by copying it to /etc/ssl/tls/certs. This is the root certificate that was used when creating the certificate for the Docker registry; in the error message, it is the CA certificate referenced at the end: "192.168.100.174: hostname".

      posted in Continuous Integration and Delivery (CI/CD)