    SOFTWARE TESTING
    simrah (@simrah)

    Reputation: 0 · Posts: 30062 · Profile views: 2 · Followers: 0 · Following: 0


    Best posts made by simrah

    This user hasn't posted anything yet.

    Latest posts made by simrah

    • RE: Missing some subscriptions in Azure DevOps UI when using automatic service principal

      Is there a particular reason you can't just use the manual SP approach? I have also had issues with the automatic flow in the past, so I usually just add my SP creds and get on with it, rather than hope all my default subscriptions have been exposed for each tenant, etc.
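
      For anyone reading later: creating the credentials for a manual service connection is a one-liner with the Azure CLI. A minimal sketch, assuming a Contributor role is wanted (the connection name and subscription ID are placeholders):

      # create a service principal whose output (appId/password/tenant)
      # can be pasted into a manual Azure DevOps service connection
      az ad sp create-for-rbac \
        --name my-devops-connection \
        --role Contributor \
        --scopes /subscriptions/<subscription-id>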

      posted in Continuous Integration and Delivery (CI/CD)
    • How do I make my AMD GPU available within a docker image based on python:3.9.10

      I'd like to do some machine learning on my AMD 6800 XT GPU within a Python image based on python:3.9.10. I can confirm that the setup from https://superuser.com/questions/1737193/amd-radeon-rx-6800-xt-not-visible-within-debian-wsl2/1737384 works (in a WSL2 instance). However, if I do docker run -it python:3.9.10 /bin/bash and then complete the same tutorial ( https://docs.microsoft.com/en-us/windows/ai/directml/gpu-tensorflow-wsl#install-the-tensorflow-with-directml-package ), it doesn't work:

      (directml) root@8a8274e5337f:/# python
      Python 3.6.13 |Anaconda, Inc.| (default, Jun  4 2021, 14:25:59)
      [GCC 7.5.0] on linux
      Type "help", "copyright", "credits" or "license" for more information.
      >>> import tensorflow.compat.v1 as tf
      >>> tf.enable_eager_execution(tf.ConfigProto(log_device_placement=True))
      >>> print(tf.add([1.0, 2.0], [3.0, 4.0]))
      2022-08-18 12:29:39.540717: W tensorflow/stream_executor/platform/default/dso_loader.cc:108] Could not load dynamic library 'libdirectml.0de2b4431c6572ee74152a7ee0cd3fb1534e4a95.so'; dlerror: libd3d12.so: cannot open shared object file: No such file or directory
      2022-08-18 12:29:39.540760: W tensorflow/core/common_runtime/dml/dml_device_cache.cc:137] Could not load DirectML.
      2022-08-18 12:29:39.540793: I tensorflow/core/common_runtime/dml/dml_device_cache.cc:250] DirectML device enumeration: found 0 compatible adapters.
      2022-08-18 12:29:39.541010: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
      2022-08-18 12:29:39.545145: I tensorflow/core/common_runtime/eager/execute.cc:571] Executing op Add in device /job:localhost/replica:0/task:0/device:CPU:0
      tf.Tensor([4. 6.], shape=(2,), dtype=float32)
      

      This article has led me to think that perhaps Docker doesn't support AMD GPUs at all: https://docs.docker.com/desktop/windows/wsl/

      Can anyone suggest what I might be able to do to get this to work?

      Note that the reason I have picked this image is that my environment is based on a rather lengthy Dockerfile inheriting from python:3.9.10, and I'd like to keep using that image on the PC with the GPU as well as in other (NVIDIA) environments, so I'm after a portable solution as far as the image is concerned, though I'd be grateful for any solution at this point.
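
      For reference, the kind of invocation I would expect to need, assuming Docker can pass the WSL2 GPU through at all (the /dev/dxg device and the /usr/lib/wsl/lib path holding libd3d12.so are my assumptions from the WSL2 setup, not something I've confirmed for this image):

      # hedged sketch: expose the WSL2 GPU paravirtualization device and
      # the host's DirectX libraries to the container
      docker run -it \
        --device /dev/dxg \
        -v /usr/lib/wsl/lib:/usr/lib/wsl/lib \
        -e LD_LIBRARY_PATH=/usr/lib/wsl/lib \
        python:3.9.10 /bin/bash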

      posted in Continuous Integration and Delivery (CI/CD)
    • What is the best way to reverse port forward information from a Kubernetes cluster to localhost?

      I am attempting to reverse port forward information from a Kubernetes pod at 192.168.49.2:30085 to my localhost:8000, but I am having issues.

      The goal is to get the same response as curl 192.168.49.2:30085 on localhost:8000

      One thing I have tried is using https://github.com/omrikiei/ktunnel with the command ktunnel expose 30085:8000 to try to forward this info. Doing this, I get the output:

      INFO[0000] Exposed service's cluster ip is: 10.101.77.99
      .INFO[0000] waiting for deployment to be ready
      .....
      INFO[0001] port forwarding to https://192.168.49.2:8443/api/v1/namespaces/default/pods/thisisatest-59d5584c84-b5qrc/portforward
      INFO[0001] Waiting for port forward to finish
      INFO[0001] Forwarding from 127.0.0.1:28688 -> 28688
      INFO[2022-08-03 20:23:48.605] starting tcp tunnel from source 30085 to target 8000
      

      This seems normal, and I am able to get a response when using curl http://10.101.77.99:30085 on localhost:8000, but not the correct response.

      I have also attempted to run a TCP server following https://webapp.io/blog/container-tcp-tunnel/ with nc 127.0.0.1 8000 | kubectl exec -i tcpserver 127.0.0.1 30085 cat, but am having poor results as well. Am I using these tools incorrectly? Or is there a better way to do this altogether?
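
      For comparison, the plain kubectl equivalent of what I'm after would be something like this (the service name and service port are placeholders, and this assumes the service in front of the pod actually exposes that port):

      # forward localhost:8000 to a service port inside the cluster
      kubectl port-forward svc/thisisatest 8000:80
      # then, in another shell:
      curl http://localhost:8000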

      posted in Continuous Integration and Delivery (CI/CD)
    • Is it possible to create multiple tags out from docker-compose?

      While creating a CI/CD pipeline via Azure DevOps, my goal is to push two tags to Artifactory:

      1. latest
      2. Build version (for example, 1.0.X)

      My docker-compose looks like:

      version: '3.4'

      services:
        mycontainer:
          image: ${Image}:${Tag}
          build:
            context: .
            dockerfile: */Dockerfile
        rabbitmq:
          ...

      The related YAML steps are:

      variables:
      - name: Tag
        value: '1.0.1'
      - name: 'Image'
        value: 'myartifactory/myrepo/mycontainer'

      - task: DockerCompose@0
        displayName: 'Docker-Compose up data source'
        inputs:
          dockerComposeFile: 'docker-compose.yml'
          dockerComposeFileArgs: |
            Image=$(Image)
            Tag=$(Tag)
          action: 'Run a Docker Compose command'
          dockerComposeCommand: 'up --build -d'

      This step completes successfully with the result below, and I'm able to push the 1.0.1 tag to Artifactory:

      Successfully built ############
      Successfully tagged myartifactory/myrepo/myproject:1.0.1 
      
      - task: ArtifactoryDocker@1
        displayName: 'Push tagged image to artifactory'
        inputs:
          command: 'push'
          artifactoryService: '...'
          targetRepo: '...'
          imageName: '$(DockerImage):$(Tag)'

      Now, to push latest, I need to copy-paste these two steps again:

      - task: DockerCompose@0
        displayName: 'Docker-Compose up data source'
        inputs:
          dockerComposeFile: 'docker-compose.yml'
          dockerComposeFileArgs: |
            Image=$(Image)
            Tag=latest
          action: 'Run a Docker Compose command'
          dockerComposeCommand: 'up --build -d'
      
      - task: ArtifactoryDocker@1
        displayName: 'Push tagged image to artifactory'
        inputs:
          command: 'push'
          artifactoryService: '...'
          targetRepo: '...'
          imageName: '$(DockerImage):latest'

      Is it possible to tag the latest tag together with the 1.0.1 tag and push them together? Running docker-compose up for the project again just for latest takes a lot of time.

      I'm also trying to avoid using a script step to run docker tag ... to re-tag the image.
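
      (For context, the script-step workaround I'm trying to avoid is roughly the following one-liner; it would re-tag the already-built image without a second compose build:)

      # re-tag the built image locally, then push both tags
      docker tag myartifactory/myrepo/mycontainer:1.0.1 myartifactory/myrepo/mycontainer:latest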

      posted in Continuous Integration and Delivery (CI/CD)
    • Transferred 0 file(s) while transferring war file from Jenkins server to remote server

      There's a Jenkins server where my jenkinswar.war file is present at /var/lib/jenkins/workspace/project2/target/jenkinswar.war.

      I want this file to be transferred to another remote server, to the /opt/docker location.
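
      In shell terms, the transfer I'm after is roughly this (the remote user and host are placeholders):

      # what the job should end up doing, expressed as a plain scp
      scp /var/lib/jenkins/workspace/project2/target/jenkinswar.war user@remote-server:/opt/docker/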

      So I have configured it as below:

      [screenshot: the file-transfer configuration]

      Even though the build is successful, the number of files transferred is still 0.

      [screenshot: console output showing "Transferred 0 file(s)"]

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How can I unevenly distribute pods across Instance Groups

      You are going the right way with PodTopologySpreadConstraints (PTSC for short) to spread instances of the same pod in an uneven distribution across all your k8s nodes. On that note, I would remind you that you can pair multiple PTSC rules to achieve your desired goal, based on instance-type labels as well as per availability zone; a sketch follows below.
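
      A minimal sketch of such a pairing on a pod spec (the topology keys are the standard well-known node labels; the app label and maxSkew values are placeholders). Note that a maxSkew greater than 1 is what permits the uneven spread:

      topologySpreadConstraints:
      - maxSkew: 2
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: my-app          # placeholder label
      - maxSkew: 3
        topologyKey: node.kubernetes.io/instance-type
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: my-app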

      But what is missing, and what could be a solution to your problem, is to incrementally send a slice of your traffic to those pods in order to compare performance, or whatever else you want to compare.

      You should look into what your ingress controller can manage. For instance, the NGINX ingress-controller uses https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary and the Traefik ingress-controller uses https://traefik.io/glossary/kubernetes-deployment-strategies-blue-green-canary/ to send a certain percentage of traffic to those "test" pods, and once you have your metrics you can go on and complete the rollout.
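
      With NGINX, for instance, the canary mechanism from the link above boils down to a pair of annotations on a second ingress (the weight here is just an example value):

      # annotations on the canary ingress; ~10% of traffic goes to its backend
      nginx.ingress.kubernetes.io/canary: "true"
      nginx.ingress.kubernetes.io/canary-weight: "10"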

      posted in Continuous Integration and Delivery (CI/CD)
    • Is the forking git workflow used outside of open source projects?

      Here is the Atlassian write-up of the forking workflow, if you're not familiar with it: https://www.atlassian.com/git/tutorials/comparing-workflows/forking-workflow

      The company I work for is mostly made up of historically open-source developers. This has left them stuck on the forking workflow, and they're not particularly willing to move to the GitHub flow. Since there's no support for Jenkins to pick up forks of a repo on Stash, build and test automation is very awkward and doesn't really work. My question is: is there a standard way of supporting the forking workflow, or is it just not realistic for a team to develop this way?

      I would really like to move away from the forking workflow, but I have yet to see other companies utilize it. If there's a good way to support it then that would be fine; I just don't know what that would look like from a DevOps perspective. Separating into isolated forks doesn't seem to be as productive as working on a single repo together.

      posted in Continuous Integration and Delivery (CI/CD)
    • K8S + HELM. Create a persistence volume for mysql database

      I am using K8S with Helm 3.

      I am also using a MySQL 5.7 database.

      How can I create a MySQL pod such that the database persists (created the first time the pod comes up), so that even if the pod goes down and comes back up, the data won't be lost?

      I am using PersistentVolume and PersistentVolumeClaim.

      Here are the YAML files:

      Mysql pod:

      apiVersion: v1
      kind: Pod
      metadata:
        name: myproject-db
        namespace: {{ .Release.Namespace }}
        labels:
          name: myproject-db
          app: myproject-db
      spec:
        hostname: myproject-db
        subdomain: {{ include "k8s.db.subdomain" . }}
        containers:
          - name: myproject-db
            image: mysql:5.7
            imagePullPolicy: IfNotPresent
            env:
              - name: MYSQL_DATABASE
                value: test
              - name: MYSQL_ROOT_PASSWORD
                value: "12345"
            ports:
              - name: mysql
                protocol: TCP
                containerPort: 3306
            resources: 
              requests: 
                cpu: 200m
                memory: 500Mi
              limits:
                cpu: 500m
                memory: 600Mi
            volumeMounts:
              - name: mysql-persistence-storage
                mountPath: /var/lib/mysql
        volumes:
          - name: mysql-persistence-storage
            persistentVolumeClaim:
              claimName: mysql-pvc
      

      Persistent Volume:

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: mysql-pv
        labels:
          type: local
          name: mysql-pv
      spec:
        capacity:
          storage: 5Gi
        accessModes:
          - ReadWriteOnce
        hostPath:
          path: "/data/mysql"
      

      Persistent Volume Claim:

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: mysql-pvc
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        volumeMode: Filesystem
        volumeName: mysql-pv
      

      I also created a storage class. It is not used, but here it is:

      Storage Class:

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: mysql-storage
      provisioner: docker.io/hostpath
      volumeBindingMode: WaitForFirstConsumer
      

      after running:

      helm install myproject myproject/

      The database is created, and can be used.

      If I add records and then stop and remove the database pod, I would like the records to be kept: no loss of the DB.

      Instead, I see that the DB data is lost when the MySQL pod is restarted.

      What can I do so that, using helm install ... or helm upgrade (it is important that the pod is created by the helm command), the database is created only the first time, and on subsequent runs the data won't be lost?
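
      One direction I've been wondering about (a sketch on my part, not tested): annotating the PV so Helm leaves it in place across upgrade/uninstall, and retaining the volume's data when the claim is released:

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: mysql-pv
        annotations:
          helm.sh/resource-policy: keep   # Helm won't delete this object
      spec:
        persistentVolumeReclaimPolicy: Retain   # keep data after the claim goes away
        capacity:
          storage: 5Gi
        accessModes:
          - ReadWriteOnce
        hostPath:
          path: "/data/mysql"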

      Thanks.

      posted in Continuous Integration and Delivery (CI/CD)
    • Unable to use GCP Cloud Armor due to insufficient quota

      I'm trying to use "Cloud Armor".

      However, when trying to create some config (I believe it was the security policy) in the console, I get this error:

      Operation type [insert] failed with message "Quota 'SECURITY_POLICIES' exceeded. Limit: 0.0 globally."
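
      For reference, the equivalent CLI call (the policy name is a placeholder) trips over the same SECURITY_POLICIES quota:

      # create a Cloud Armor security policy from the CLI
      gcloud compute security-policies create my-policy --description "test policy"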

      When I try to increase the quota:

      Based on your service usage history, you are not eligible for quota increase at this time. Please make use of the current available quota, and if additional resources are needed, please contact our Sales Team ( https://cloud.google.com/contact/ ) to discuss further options for higher quota eligibility.

      Upon doing a request for a quota increase I got an email back stating:

      Dear Developer,

      We have reviewed your request for additional quota. Unfortunately we cannot grant your request.

      While evaluating quota requests, we consider several factors, including resources that most legitimate customers use, a customer’s previous usage and history with Google Cloud Platform. In addition to this, we may deny a request for additional quota when there are outstanding issues with a billing account for billing quota or paid services related quota requests.

      So now what do I do? Any ideas? Why is using this so restrictive, while Cloudflare is much less trouble to set up...

      posted in Continuous Integration and Delivery (CI/CD)
    • Should a pipeline fail if early termination is desired?

      Normally a pipeline breaks down into stages such that each stage has one or more jobs. In order to stop the pipeline from running conditionally, you simply fail that stage of the pipeline.

      For example let's assume I have a pipeline with four stages,

      [Build] -> [Test] -> [Version Bump] -> [Publish]
      

      Now, let's assume the Version Bump stage may or may not bump the version number, which means the Publish stage may or may not run. In order to model this in GitLab, we would normally fail the Version Bump stage at the point where we know Publish should not run. But one of my co-workers does not like the term "fail" here.

      Given the above circumstance, is it a normal flow to have a stage pass unconditionally, and to have a subsequent stage conditionally execute?

      Or, is the pipeline failing just the way in which a pipeline should conditionally terminate?
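
      For concreteness, here is a minimal sketch of the first option in GitLab CI (the job names and scripts are invented, and bump.sh is assumed to exit 0 only when it actually bumped): the Version Bump job always passes and exports a flag through a dotenv artifact, and the Publish job reads that flag and exits early when there is nothing to publish.

      stages: [build, test, version-bump, publish]

      version_bump:
        stage: version-bump
        script:
          # hypothetical contract: bump.sh exits 0 only if it bumped the version
          - (./bump.sh && echo "BUMPED=true" > bump.env) || echo "BUMPED=false" > bump.env
        artifacts:
          reports:
            dotenv: bump.env   # exposes BUMPED to later stages

      publish:
        stage: publish
        script:
          - if [ "$BUMPED" != "true" ]; then echo "No bump; nothing to publish."; exit 0; fi
          - ./publish.sh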

      posted in Continuous Integration and Delivery (CI/CD)