    Raziyah00 (@Raziyah00)

    Reputation: 2 | Posts: 29900 | Profile views: 7 | Followers: 0 | Following: 0

    Best posts made by Raziyah00

    • RE: Copy and paste using Python commands

      You can use the clipboard package together with PyAutoGUI.

      Install it with pip install clipboard (and pip install pyautogui if you don't already have it).
      It can be used like this:

      import time

      import clipboard
      import pyautogui

      # Double-click to select the text at screen coordinates (290, 150)
      pyautogui.doubleClick(290, 150)
      # Copy the selection to the system clipboard
      pyautogui.hotkey('ctrl', 'c')
      # Give the OS a moment to populate the clipboard before reading it
      time.sleep(0.1)
      text = clipboard.paste()
      print(text)

      posted in Software Programming
    • RE: Delete an object from within an array using JSONB in PostgreSQL

      To remove an element at a known index (warning: json/jsonb array indexes start at 0, unlike SQL arrays, which start at 1):

      UPDATE site_content
         SET content = content #- '{playersContainer,players,0}'::text[];
      

      To remove the array element with a specific id:

      UPDATE site_content
         SET content = content #- coalesce(('{playersContainer,players,' || (
                  SELECT i
                    FROM generate_series(0, jsonb_array_length(content->'playersContainer'->'players') - 1) AS i
                   WHERE (content->'playersContainer'->'players'->i->'id' = '"2"')
               ) || '}')::text[], '{}');
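
      Either way, you can preview what an expression will remove (a minimal sketch, assuming the JSON shape implied by the paths above) by running it in a SELECT before committing to the UPDATE:

      -- returns the content as it would look after removing element 0
      SELECT content #- '{playersContainer,players,0}'::text[]
        FROM site_content;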
      
      posted in SQL

    Latest posts made by Raziyah00

    • Deploy A War/Ear To Container Marked build As failure When Deploying To Tomcat 9 Server

      When I was deploying a sample application on Tomcat 9, I faced this issue. What could be the point I might be missing?

      This is the Tomcat 9 users file where I added credentials; do I need to add anything more? This is the Tomcat user XML file.

      Is there any configuration I have to do in Tomcat or in Jenkins? Can someone help me out with this?

      [Screenshot: https://i.stack.imgur.com/E78HM.png]

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Multistage docker build for Python distroless image

      You're trying to copy just the /venv directory, but that's not going to work. Take a look at the contents of that directory, for example:

      root@52bdcc57abd8:/# python3 -m venv /venv
      root@52bdcc57abd8:/# ls -l /venv/bin/
      total 48
      -rw-r--r-- 1 root root 8834 Oct 22 03:52 Activate.ps1
      -rw-r--r-- 1 root root 1880 Oct 22 03:52 activate
      -rw-r--r-- 1 root root  829 Oct 22 03:52 activate.csh
      -rw-r--r-- 1 root root 1969 Oct 22 03:52 activate.fish
      -rwxr-xr-x 1 root root  222 Oct 22 03:52 pip
      -rwxr-xr-x 1 root root  222 Oct 22 03:52 pip3
      -rwxr-xr-x 1 root root  222 Oct 22 03:52 pip3.9
      lrwxrwxrwx 1 root root    7 Oct 22 03:51 python -> python3
      lrwxrwxrwx 1 root root   22 Oct 22 03:51 python3 -> /usr/local/bin/python3
      lrwxrwxrwx 1 root root    7 Oct 22 03:51 python3.9 -> python3
      

      As you can see, /venv/bin/python3 isn't actually a file; it's just a symlink to whichever version of Python was used to create the virtual environment.

      You won't get away with just copying the binary, either, because you need the full set of runtime files from /usr/local/lib/python3.9, as well as any shared libraries that are required by Python.


      You mention that when replacing the base image with debian:11-slim, everything works as expected.

      In this case, you're installing a compatible version of Python...but if you're going to do that, you might as well just stick with the python:3.9-slim image.
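
      If you want to keep a distroless final image, one option (a minimal sketch; the file names are assumptions, and it relies on gcr.io/distroless/python3-debian11 shipping its own Python 3.9 runtime) is to skip the venv and copy only your code plus its dependencies:

      # Build stage: install dependencies into a plain directory instead of a venv
      FROM python:3.9-slim AS build
      WORKDIR /app
      COPY requirements.txt .
      RUN pip install --no-cache-dir --target=/app/deps -r requirements.txt
      COPY app.py .

      # Final stage: the distroless image brings its own interpreter and stdlib,
      # so only the application code and its dependencies are copied
      FROM gcr.io/distroless/python3-debian11
      WORKDIR /app
      COPY --from=build /app /app
      ENV PYTHONPATH=/app/deps
      CMD ["app.py"]

      Note this only works cleanly for pure-Python dependencies; packages with compiled extensions also need their shared libraries present in the final image.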

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Stage Parallelization in Jenkins declarative pipelines

      The Scripted version of parallel and the Declarative version of parallel are different functions. The Scripted version takes a Map as an argument, whereas the Declarative version expects a block with calls to the stage function within that block.
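
      For illustration (a minimal sketch; stage names and shell commands are made up), the Scripted form looks like this:

      // Scripted: parallel takes a Map of branch name -> closure
      node {
          parallel(
              'unit tests': { sh './run-unit-tests.sh' },
              'lint':       { sh './run-lint.sh' }
          )
      }

      whereas in Declarative, parallel is a section containing nested stage blocks:

      // Declarative: parallel wraps stage directives
      pipeline {
          agent any
          stages {
              stage('Checks') {
                  parallel {
                      stage('unit tests') { steps { sh './run-unit-tests.sh' } }
                      stage('lint')       { steps { sh './run-lint.sh' } }
                  }
              }
          }
      }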

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Sharing volumes between pods on different clusters

      Can you share a PV between clusters? In theory, yes. You can mirror one PV object into another cluster, given that you have similar storage classes and access to your storage backend.

      Can you share a PV between Pods, within one or more clusters? It depends. First, on your storage backend: there may be a locking mechanism that prevents you from attaching the same volume to more than one client at a time.

      On top of that, if your storage backend provides block devices, your volumes involve some file system, which may not support being mounted on two systems at once.

      To work around those two points, we could consider something like NFS, CephFS, EFS (AWS), a Samba share, and so on; see the sketch below. Next, it depends on your application: what are the risks of your two Pods writing the same file at once and corrupting your data?
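
      For example (a minimal sketch; the NFS server and export path are hypothetical), an NFS-backed PV can be declared with ReadWriteMany so that several Pods can mount it at once:

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: shared-nfs-pv
      spec:
        capacity:
          storage: 10Gi
        accessModes:
          - ReadWriteMany   # allows mounting from multiple nodes/Pods
        nfs:
          server: nfs.example.com
          path: /exports/shared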

      Short answer: in theory, yes, you could. Although, if possible, try to question this requirement: it sounds unusual, and there may be something wrong with the design/architecture.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: What can Terraform be used to configure for hosting a web application?

      I think it is better if you containerize your Flask application. So:

      1. Containerize your Flask application.
      2. Create and configure an EC2 server to deploy your application container on it.

      1. Containerize your Flask application.

      I'm not going to walk you through this, but basically you should create a Dockerfile, build an image from it, and then push it to any registry you like (e.g. https://hub.docker.com/ ).
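
      As a starting point, a minimal sketch of such a Dockerfile (app.py and requirements.txt are assumed names; port 8080 matches the security group below):

      FROM python:3.9-slim
      WORKDIR /app
      # Install dependencies first so Docker can cache this layer
      COPY requirements.txt .
      RUN pip install --no-cache-dir -r requirements.txt
      # Copy the application code
      COPY . .
      EXPOSE 8080
      CMD ["python", "app.py"]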


      2. Create and configure an EC2 server to deploy your application container on it.

      Here is an example that will create and configure an EC2 server, install Docker on it, and then run a container from an image (in this example, Nginx):

      provider "aws" {
        region = "eu-central-1"
      }
      

      variable vpc_cidr_block {}
      variable subnet_1_cidr_block {}
      variable avail_zone {}
      variable env_prefix {}
      variable instance_type {}
      variable ssh_key {}
      variable my_ip {}

      data "aws_ami" "amazon-linux-image" {
      most_recent = true
      owners = ["amazon"]

      filter {
      name = "name"
      values = ["amzn2-ami-hvm-*-x86_64-gp2"]
      }

      filter {
      name = "virtualization-type"
      values = ["hvm"]
      }
      }

      output "ami_id" {
      value = data.aws_ami.amazon-linux-image.id
      }

      resource "aws_vpc" "myapp-vpc" {
      cidr_block = var.vpc_cidr_block
      tags = {
      Name = "${var.env_prefix}-vpc"
      }
      }

      resource "aws_subnet" "myapp-subnet-1" {
      vpc_id = aws_vpc.myapp-vpc.id
      cidr_block = var.subnet_1_cidr_block
      availability_zone = var.avail_zone
      tags = {
      Name = "${var.env_prefix}-subnet-1"
      }
      }

      resource "aws_security_group" "myapp-sg" {
      name = "myapp-sg"
      vpc_id = aws_vpc.myapp-vpc.id

      ingress {
      from_port = 22
      to_port = 22
      protocol = "tcp"
      cidr_blocks = [var.my_ip]
      }

      ingress {
      from_port = 8080
      to_port = 8080
      protocol = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
      }

      egress {
      from_port = 0
      to_port = 0
      protocol = "-1"
      cidr_blocks = ["0.0.0.0/0"]
      prefix_list_ids = []
      }

      tags = {
      Name = "${var.env_prefix}-sg"
      }
      }

      resource "aws_internet_gateway" "myapp-igw" {
      vpc_id = aws_vpc.myapp-vpc.id

      tags = {
       Name = "${var.env_prefix}-internet-gateway"
      

      }
      }

      resource "aws_route_table" "myapp-route-table" {
      vpc_id = aws_vpc.myapp-vpc.id

      route {
      cidr_block = "0.0.0.0/0"
      gateway_id = aws_internet_gateway.myapp-igw.id
      }

      default route, mapping VPC CIDR block to "local", created implicitly and cannot be specified.

      tags = {
      Name = "${var.env_prefix}-route-table"
      }
      }

      Associate subnet with Route Table

      resource "aws_route_table_association" "a-rtb-subnet" {
      subnet_id = aws_subnet.myapp-subnet-1.id
      route_table_id = aws_route_table.myapp-route-table.id
      }

      resource "aws_key_pair" "ssh-key" {
      key_name = "myapp-key"
      public_key = file(var.ssh_key)
      }

      output "server-ip" {
      value = aws_instance.myapp-server.public_ip
      }

      resource "aws_instance" "myapp-server" {
      ami = data.aws_ami.amazon-linux-image.id
      instance_type = var.instance_type
      key_name = "myapp-key"
      associate_public_ip_address = true
      subnet_id = aws_subnet.myapp-subnet-1.id
      vpc_security_group_ids = [aws_security_group.myapp-sg.id]
      availability_zone = var.avail_zone

      tags = {
      Name = "${var.env_prefix}-server"
      }

      user_data = <<EOF
      #!/bin/bash
      apt-get update && apt-get install -y docker-ce
      systemctl start docker
      usermod -aG docker ec2-user
      docker run -p 8080:8080 nginx
      EOF
      }
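
      Since the variables above are declared without defaults, supply them at apply time, for example through a terraform.tfvars file (all values below are illustrative):

      vpc_cidr_block      = "10.0.0.0/16"
      subnet_1_cidr_block = "10.0.10.0/24"
      avail_zone          = "eu-central-1a"
      env_prefix          = "dev"
      instance_type       = "t2.micro"
      ssh_key             = "~/.ssh/id_rsa.pub"
      my_ip               = "203.0.113.7/32"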


      Sources:

      • https://registry.terraform.io/modules/terraform-aws-modules/ec2-instance/aws/latest
      • https://gitlab.com/nanuchi/terraform-learn/-/tree/feature/deploy-to-ec2
      posted in Continuous Integration and Delivery (CI/CD)
    • RE: gitlab-runner docker: command not found

      Do you know for sure that Docker is installed on the machine/VM that hosts the runner? Depending on your setup, you can SSH into the machines that host the GitLab runners and just run docker --version.
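
      For example (host name and user are hypothetical):

      ssh user@runner-host 'docker --version'
      # "docker: command not found" here confirms Docker is missing on the host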

      If it's not on there, then you will need to visit https://docs.docker.com/engine/install/ and add it.

      I ran into the same situation, coming into a project that used GitLab runners with no prior experience. At the end of the day, a GitLab runner is just a program that can run on any machine. Depending on how your CI/CD is configured, it may be on a development VM, a server, a workstation, etc. Good luck!

      posted in Continuous Integration and Delivery (CI/CD)
    • Cloudformation template with EC2 using docker compose

      I'm relatively new to CloudFormation and have a few questions about my template and best practices. I'm facing a few hurdles, and there is a lot of information out there; it's a bit overwhelming, so any feedback would be highly appreciated. I'm not looking for detailed code, just some good insights on how I can improve my steps.

      I'm trying to set up a basic Node/Express API:

      • On push to Git repo
        • Build Docker image and push to private AWS ECR repo
        • After successful push, deploy Cloudformation template that provisions
          • An EC2 + security group with Elastic IP assigned
          • Run docker compose in Userdata of EC2 to get app up and running

      This is my UserData (I do need some specific help here!)

      UserData: !Base64 |
        #!/bin/bash -ex
        yum update -y
        yum install docker -y
        service docker start
        usermod -a -G docker ec2-user
        echo "Start Docker service"
        apt-get update
        apt-get install docker-compose-plugin
        apt install amazon-ecr-credential-helper
        echo "APT-GET update complete"
        echo "{ \"credHelpers\": { \".dkr.ecr..amazonaws.com\": \"ecr-login\" } }" > ~/.docker/config.json
        systemctl restart docker
        echo "
        version: "3.9"
        services:
          my-app:
            image: .dkr.ecr..amazonaws.com/my-repo
            environment:
              STAGE: staging
              VIRTUAL_HOST: my-customdomain.com
              VIRTUAL_PORT: 3000
            ports:
              - "3000:3000"
            restart: always
            networks:
              - my-network

          https-portal:
            image: steveltn/https-portal:1
            ports:
              - '80:80'
              - '443:443'
            links:
              - my-app
            environment:
              STAGE: production
            volumes:
              - /var/run/docker.sock:/var/run/docker.sock:ro
            networks:
              - my-network

        volumes:
          https-portal-data:
        networks:
          my-network:
            driver: bridge
        " > docker-compose.yaml
        docker compose up -d

      Status: the CloudFormation template deploys successfully and all resources are set up, but the UserData doesn't run, so my EC2 instance never sets up my app.

      Issues / Questions:

      • The UserData never ran; I can't see any of the above echo statements in /var/log/cloud-init.log, and when I SSH into the instance I can't find any of these files. How do I debug this better?
      • Is there a better way to get the docker-compose data in there? Writing the whole file in the UserData script seems inefficient.
      • On code update the CloudFormation stack is updated; does this run UserData? (I know it only runs when an instance is first created, but I would like some confirmation that a CloudFormation update does not trigger it.)
      • What is the best practice if I want to re-run docker compose on my EC2 instance after every CloudFormation deploy? If the update does trigger UserData, what could be wrong here?
      • Is this an ideal flow? Are there any improvements I can make, considering I'm not an expert but am willing to spend some time learning where required?

      I appreciate anyone taking the time to answer these questions. Thanks!

      posted in Continuous Integration and Delivery (CI/CD)
    • Checkout specific ref in Azure Pipeline from private GitHub

      I'm facing a problem when trying to use a parameter in my resources.

      parameters:
      - name: MyVersion
        default: "0.0.0"

      resources:
        repositories:
        - repository: other
          type: github
          name: mycompany/project
          # Ref with parameter is not allowed
          ref: refs/tags/tag_${{parameters.MyVersion}}
          endpoint: mygithub

      While https://stackoverflow.com/questions/72400534/checkout-different-repository-as-per-input-in-azure-pipeline could potentially help, it doesn't work for me because I'm using GitHub. How do I do the same thing with GitHub?

      I tried

      - checkout: git://mycompany/project@refs/tags/tag_${{parameters.MyVersion}}
      

      But it tells me "mycompany" doesn't exist, which makes sense because mycompany is on GitHub and not in Azure DevOps. How do I help Azure DevOps find it? Also, this is a private project that requires authentication for checkout.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Jenkins trigger the 2nd job when the first job fails

      Use the Parameterized Trigger plugin ( https://plugins.jenkins.io/parameterized-trigger/ ) in your Post-build Actions, with the trigger condition set to fire when the build fails. In my example I did not pass any parameters to the new job.

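      If the jobs are pipelines, a minimal sketch of the same idea without the plugin (job and script names are hypothetical) is a post/failure block that triggers the second job only when the first one fails:

      pipeline {
          agent any
          stages {
              stage('Build') {
                  steps { sh './build.sh' }
              }
          }
          post {
              failure {
                  // runs only when the build above fails
                  build job: 'second-job', wait: false
              }
          }
      }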

      posted in Continuous Integration and Delivery (CI/CD)
    • Puppet Notice: This node did not match any of the listed definitions

      I have these definitions for my puppet nodes:

      Nginx node:

      node 'nginx.XXXXXXXX.org' {
        package {"epel-release":
          ensure => "installed",
        }
        package {"nginx":
          ensure => "installed",
        }
        file {lookup("web_dirs"):
          ensure => "directory",
        }
      }
      

      Mongodb node:

      node 'mongodb.XXXXXXXX.org' {
        package { 'mongodb':
          ensure => 'installed',
        }
        service {'mongodb':
          ensure => 'running',
          enable => true,
        }
        file { '/tmp/create_admin.js':
          content => epp('/etc/puppetlabs/code/environments/production/templates/create_admin.js.epp')
        }
        exec { 'Create the admin user':
          command => "/usr/bin/mongo < /tmp/create_admin.js && touch /home/vagrant/db_admin_created",
          creates  => "/home/vagrant/db_admin_created",
        }
        file { '/etc/mongodb.conf':
          source => "/etc/puppetlabs/code/environments/production/files/mongodb.conf",
          notify => Service["mongodb"],
        }
        file { '/tmp/create_app_user.js':
          content => epp("/etc/puppetlabs/code/environments/production/templates/create_app_user.js")
        }
        exec { "Create the application user":
          command => inline_epp("/usr/bin/mongo -u  -p  --authenticationDatabase admin < /tmp/create_app_user.js && touch /home/vagrant/db_user_created"),
          creates => "/home/vagrant/db_user_created",
        }
        exec { "Removes the JS script files":
          command => "/bin/rm -rf /tmp/create_admin.js /tmp/create_app_user.js",
        }
      }
      

      node default {
        notify { 'This node did not match any of the listed definitions': }
      }

      Node node:

      node 'node.XXXXXXXX.org' {
        package {"epel-release":
          ensure => "installed",
        }
        package {"nodejs":
          ensure => "installed",
        }
        package {"npm":
          ensure => "installed",
        }
        file { lookup('app_dir'):
          ensure   => "directory",
          source   => "/etc/puppetlabs/code/environments/production/files/appfiles/",
          recurse => true
        }
        exec { "Run npm against package.json":
          command => "/usr/bin/npm install",
          cwd     => lookup("app_dir"),
        }
        file {"${lookup('app_dir')}/server.js":
          content => epp("/etc/puppetlabs/code/environments/production/templates/server.js.epp"),
          mode    => "0755"
        }
        file {"/etc/systemd/system/node.service":
          content => epp("/etc/puppetlabs/code/environments/production/templates/node.service.epp"),
        }
        service {"node":
          ensure => "running",
          enable => true,
        }
      }
      

      I have set up a workflow where the master Puppet node, called puppet, hosts a git repo from which these three nodes pull manifest configs and apply them locally. Of the three, the Nginx node produces the following error when I apply a manifest change:

      Notice: Compiled catalog for nginx.mshome.net in environment production in 0.03 seconds
      Notice: This node did not match any of the listed definitions
      Notice: /Stage[main]/Main/Node[default]/Notify[This node did not match any of the listed definitions]/message: defined 'message' as 'This node did not match any of the listed definitions'
      Notice: Applied catalog in 0.02 seconds
      

      I have looked at my definitions and I can't seem to find what is off about the nginx node; any pointers would be appreciated. Also note that these machines were created using Vagrant.

      posted in Continuous Integration and Delivery (CI/CD)