    SOFTWARE TESTING
    Alberto (@Alberto)

    Reputation: 2 · Posts: 29783 · Profile views: 2 · Followers: 0 · Following: 0

    Best posts made by Alberto

    • Test for Django REST API framework

      Here is my functional test:

      from django.contrib.auth.models import User
      from rest_framework import status
      from rest_framework.test import APITestCase

      __all__ = ['ApiOAuthTestCase']

      ADDRESS_FOR_TEST = 'http://127.0.0.1:8000/api/v1/request/'

      USER_NAME = 'user_name'
      USER_PASSWORD = 'user_password'


      class ApiOAuthTestCase(APITestCase):

          def setUp(self):
              User.objects.create_user(username=USER_NAME, password=USER_PASSWORD)

              self.my_message = {
                  'grant_type': 'password',
                  'username': USER_NAME,
                  'password': USER_PASSWORD,
                  'client_id': 'client_id',
                  'client_secret': 'secret_key',
              }

          def test_token(self):
              response = self.client.post(ADDRESS_FOR_TEST, self.my_message)
              self.assertEqual(response.status_code, status.HTTP_201_CREATED)
      

      When I run this test, it fails with a 400 error. Can you please tell me how to access the response content from the response object?
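      (A minimal sketch of one way to inspect the failing response inside the test, assuming standard DRF test-client behavior: response.content holds the raw byte payload and response.data the parsed one.)

      def test_token(self):
          response = self.client.post(ADDRESS_FOR_TEST, self.my_message)
          if response.status_code != status.HTTP_201_CREATED:
              # Dump whatever body the API returned alongside the 400
              print(response.content)  # raw bytes
              print(response.data)     # parsed representation
          self.assertEqual(response.status_code, status.HTTP_201_CREATED)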

      posted in API Testing
    • RE: What tool do you use for Rest API testing?

      Postman and SoapUI are the best.

      posted in API Testing

    Latest posts made by Alberto

    • Unable to log in as `ubuntu` user on EC2 instance spawned from auto scaling group

      Utilizing the Ansible AWS modules, I'm creating an AMI from an existing EC2 instance on which I am able to ssh with both my own user and the default account (ubuntu). After the AMI is in a ready state, I create a launch template with the new AMI and an auto scaling group that leverages that launch template. Once the instance from the auto scaling group is stood up, I am only able to ssh with my user account, not the default ubuntu account.

      The key_name used for the first instance and the launch template are identical. The /etc/ssh/sshd_config file is also identical between the first instance and the autoscaled instance, and the two instances use the same security groups with port 22 accepting ssh traffic. I assume there might be some data lost during the AMI creation event, but I'm not sure. Any and all help would be appreciated, and I'd be happy to provide more information if needed. A key-pair sanity check I can run is sketched after the playbook below. Thank you!

      - name: Create a sandbox instance
        hosts: localhost
        become: False
        gather_facts: False
        tasks:
          - name: Launch instance
            ec2_instance:
              key_name: "{{ keypair }}"
              security_group: "{{ security_group }}"
              instance_type: "{{ instance_type }}"
              image_id: "{{ image }}"
              wait: true
              region: "{{ region }}"
              vpc_subnet_id: "{{ vpc_subnet_id }}"
              volumes:
                - device_name: /dev/sda1
                  ebs:
                    volume_size: 50
                    delete_on_termination: true
              network:
                assign_public_ip: true
              tags:
                tmp: instance
            register: ec2

          - name: Debug EC2 variable availability
            ansible.builtin.debug:
              msg: ec2 instance {{ ec2.instances[0].network_interfaces[0].private_ip_address }}

          - name: Add new instance to host group
            add_host:
              hostname: "{{ ec2.instances[0].network_interfaces[0].private_ip_address }}"
              groupname: launched

          - name: Wait for SSH to come up
            delegate_to: "{{ ec2.instances[0].network_interfaces[0].private_ip_address }}"
            remote_user: "{{ bootstrap_user }}"
            wait_for_connection:
              delay: 60
              timeout: 320

      - name: Configure instance
        hosts: launched
        become: True
        gather_facts: True
        remote_user: "{{ bootstrap_user }}"
        roles:
          - app_server
          - ruby
          - nginx
          - role: Datadog.datadog
            vars:
              datadog_checks:
                sidekiq:
                  logs:
                    - type: file
                      path: /var/log/sidekiq.log
                      source: sidekiq
                      service: sidekiq
                      tags:
                        - "env:{{ rails_env }}"

      # Need to set this hostname appropriately
      - hosts: launched
        become: yes
        gather_facts: no
        remote_user: "{{ user_name }}"
        become_user: "{{ user_name }}"
        pre_tasks:
          - name: set hostname
            set_fact: hostname="sidekiq"
        roles:
          - deploy_app

      - hosts: launched
        become: yes
        gather_facts: no
        remote_user: "{{ bootstrap_user }}"
        roles:
          - sidekiq

      - name: Generate AMI from newly generated EC2 instance
        hosts: localhost
        gather_facts: False
        pre_tasks:
          - set_fact: ami_date="{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}"
        tasks:
          - name: Debug EC2 instance variable availability
            ansible.builtin.debug:
              msg: EC2 Instances {{ ec2.instances }}

          - name: Create AMI
            ec2_ami:
              instance_id: "{{ ec2.instances[0].instance_id }}"
              name: "sidekiq_ami_{{ ami_date }}"
              device_mapping:
                - device_name: /dev/sda1
                  size: 200
                  delete_on_termination: true
                  volume_type: gp3
              wait: True
              tags:
                env: "{{ rails_env }}"
            register: ami

          - name: Terminate instances that were previously launched
            ec2:
              state: "absent"
              instance_ids: "{{ ec2.instances[0].instance_id }}"
              region:

          - name: Debug AMI variable availability
            ansible.builtin.debug:
              msg: AMI {{ ami }}

          - name: Create an ec2 launch template from new Sidekiq AMI
            ec2_launch_template:
              template_name: "sidekiq_launch_template_{{ rails_env }}"
              image_id: "{{ ami.image_id }}"
              key_name: "{{ keypair }}"
              instance_type: "{{ instance_type }}"
              disable_api_termination: true
              block_device_mappings:
                - device_name: /dev/sda1
                  ebs:
                    volume_size: 200
                    volume_type: gp3
                    delete_on_termination: true
              network_interfaces:
                - device_index: 0
                  associate_public_ip_address: yes
                  subnet_id: "{{ subnet_id }}"
                  groups: ["{{ security_group }}"]
              user_data: "{{ '#!/bin/bash\nsudo systemctl restart sidekiq.service' | b64encode }}"
            register: template

          # Rolling ASG update with new launch template
          - name: Rolling update of the existing EC2 instances
            ec2_asg:
              name: "sidekiq_autoscaling_group_{{ rails_env }}"
              availability_zones:
                - us-west-1a
              launch_template:
                launch_template_name: "sidekiq_launch_template_{{ rails_env }}"
              health_check_period: 60
              health_check_type: EC2
              replace_all_instances: yes
              min_size: "{{ min_size }}"
              max_size: "{{ max_size }}"
              desired_capacity: "{{ desired_capacity }}"
              region: "{{ region }}"
              tags:
                - env: "{{ rails_env }}"
                  Name: "{{ rails_env }}-sidekiq"
              vpc_zone_identifier: ["{{ subnet_id }}"]
            register: asg
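      As mentioned above, one quick sanity check to rule out a key-pair mismatch: both the source instance and the autoscaled one should report the same KeyName. A minimal sketch with boto3 (the instance IDs are placeholders):

      import boto3

      # Placeholder IDs: the source instance and one spawned by the ASG
      ec2 = boto3.client("ec2", region_name="us-west-1")
      resp = ec2.describe_instances(InstanceIds=["i-SOURCE", "i-AUTOSCALED"])

      for reservation in resp["Reservations"]:
          for instance in reservation["Instances"]:
              # Identical KeyName values mean the launch template kept the key pair
              print(instance["InstanceId"], instance.get("KeyName"))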
      posted in Continuous Integration and Delivery (CI/CD)
    • What is the limit of runs that an Azure DevOps pipeline keeps?

      What is the limit, and where do you increase the default setting?

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How to compile Latex with Github Actions

      Uploading an artifact lets it be accessed by a future GitHub Actions run. It doesn't commit and push the file to the repo.

      See the docs for how to manually retrieve artifacts: https://docs.github.com/en/actions/managing-workflow-runs/downloading-workflow-artifacts

      You might want to look into services like Read the Docs or GitHub Pages as potential places to automatically publish the output of your LaTeX build. E.g., https://docs.github.com/en/pages/getting-started-with-github-pages/configuring-a-publishing-source-for-your-github-pages-site .
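      If you'd rather fetch artifacts programmatically than through the UI, here is a rough sketch against the GitHub REST API (OWNER/REPO are placeholders, and it assumes a suitably scoped token in the GITHUB_TOKEN environment variable):

      import os
      import requests

      # List the artifacts produced by recent workflow runs (placeholder OWNER/REPO)
      resp = requests.get(
          "https://api.github.com/repos/OWNER/REPO/actions/artifacts",
          headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
          timeout=10,
      )
      resp.raise_for_status()

      for artifact in resp.json()["artifacts"]:
          # archive_download_url serves a zip of the artifact's files
          print(artifact["name"], artifact["archive_download_url"])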

      posted in Continuous Integration and Delivery (CI/CD)
    • Stage Parallelization in Jenkins declarative pipelines

      I am trying to get a Jenkins (2.204.2) declarative pipeline to run parallel stages, generated into a map, on different machines. I am aware it can be done by mixing in the script block, and I have done it that way in the past, but from the documentation and other stack____ questions I cannot figure out why this format does not work.

      I have stripped everything down as far as possible and I am trying to just create the map statically outside the pipeline to figure out the syntax I need.

      #!/usr/bin/env groovy

      def map = [:]
      map['Passed Stage'] = {
          stage("Passed Map Inner Stage") {
              agent { label "nodeLabel" }
              steps {
                  echo "PASSED MAP STEP"
              }
          }
      }

      pipeline {
          agent { label "nodeLabel" }
          stages {
              stage("Non map stage") {
                  agent { label "nodeLabel" }
                  steps {
                      echo "NON MAP STEP"
                  }
              }
              stage("Direct Map Outer Parallel Stage") {
                  parallel {
                      direct:
                      stage("Direct Map Inner Stage") {
                          agent { label "nodeLabel" }
                          steps {
                              echo "DIRECT MAP STEP"
                          }
                      }
                  }
              }
              stage("Passed Map Outer Parallel Stage") {
                  parallel { map }
              }
          }
      }

      The first two stage methods work if I comment out the mapped one, but "Passed Map Outer Parallel Stage" always fails with:

      Running in Durability level: MAX_SURVIVABILITY
      org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
      WorkflowScript: 33: Expected a stage @ line 33, column 23.
                     parallel{ map }
                               ^
      

      WorkflowScript: 33: No stages specified @ line 33, column 13.
      parallel{ map }
      ^

      WorkflowScript: 33: No stages specified @ line 33, column 13.
      parallel{ map }
      ^

      3 errors

      at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
      at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1085)
      at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:603)
      at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:581)
      at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:558)
      at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
      at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
      at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
      at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
      at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.doParse(CpsGroovyShell.java:142)
      at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:127)
      at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:561)
      at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:522)
      at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:327)
      at hudson.model.ResourceController.execute(ResourceController.java:97)
      at hudson.model.Executor.run(Executor.java:427)
      

      Finished: FAILURE

      The stage format seems fine from everything I've read all day, and passing the same stage directly to parallel outside the map works fine...

      What am I missing here? Why won't parallel accept my map? Does declarative parallel only accept stages passed statically? Is my Jenkins version too low?

      posted in Continuous Integration and Delivery (CI/CD)
    • Gather kubectl logs data to an external service

      When I run kubectl logs MyPodNameHere I get back the standard-out of that pod.

      My company has a central logging service for all our applications, and I would like to send the log output to that service (via an HTTPS endpoint).

      But I can't seem to find a way to hook into a Kubernetes API to get that working. Is there a Kubernetes API or other integration point to do this?

      NOTE: I am also using Istio. I had hoped that I could use it for this, but I can only find material about sending "access logs" in the Istio documentation. It seems to me that it should be possible via Istio as well.

      Does anyone know a way to get this log data sent to an HTTPS endpoint?
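      For what it's worth, the same API that backs kubectl logs is exposed by the official client libraries, so a small relay is possible. A rough sketch with the Python client (the pod name, namespace, and endpoint URL are placeholders; a production setup would more commonly run a node-level log-shipping agent such as a Fluentd or Fluent Bit DaemonSet):

      from kubernetes import client, config  # pip install kubernetes
      import requests

      # Authenticate the same way kubectl does (via ~/.kube/config)
      config.load_kube_config()
      v1 = client.CoreV1Api()

      # The same API call that backs `kubectl logs MyPodNameHere`
      logs = v1.read_namespaced_pod_log(name="MyPodNameHere", namespace="default")

      # Forward the captured stdout to the central logging service (placeholder URL)
      requests.post("https://logging.example.com/ingest",
                    data=logs.encode("utf-8"),
                    timeout=10)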

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: build pipeline with repository: is it advisable to build both on repo and end server

      Usually, your pipeline would produce a docker image as an artifact. For a feature branch, it might be tagged with the branch name.

      For changes to the default branch (main/master), you should tag the image with some kind of version string, typically a semantic version (https://semver.org/).

      You should not be copying the repo or git-pulling it onto the server, because the Docker build is not guaranteed to be reproducible: tests that passed in the pipeline might suddenly fail on the server.

      Instead, after you have tested the docker image you should push the docker image to a repository and then pull that image to the application server when you want to deploy it.

      You shouldn't install or run any development tooling on your production server.

      posted in Continuous Integration and Delivery (CI/CD)
    • OpenStack API Access over secondary ethernet connection or NIC

      I am trying to configure my server to utilize a secondary NIC. I want the server to publish API access on both NICs at the same time. Each server has two NICs: one configured with Internet access, and the other wired to another server through a network switch (the original post included a diagram of this configuration).

      How do I make OpenStack utilize both NICs at the same time?

      I've seen the multi-NIC feature in OpenStack, but that seems to link OpenStack networks together, not physical NICs. Another option I've looked into is using routing tables, but I don't know whether this is possible or best practice.

      Thanks,

      Taylor

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Azure Web Apps Serves Old Files

      I discovered two issues that prevented the pipeline from deploying properly.

      First, it appears that the Azure plugin for Eclipse creates an app.war file in the wwwroot folder of the app. Azure apparently prioritizes picking up this war over any that might be in wwwroot/webapps.

      Previously I had manually deleted app.war, but then the app would just 404. This leads to the second issue: I was attempting to deploy using the default package name "...SNAPSHOT"; however, Tomcat looks for ROOT.war. I added the following lines to my pom.xml:

      
      <build>
          <finalName>ROOT</finalName>
      </build>
      
      

      This delivered the appropriately named file, which the "Azure App Service deploy" task places in wwwroot/webapps (where Tomcat looks for it). I then deleted the app.war file and any leftover ...SNAPSHOT folders in webapps, and restarted the app. It booted up with the war from the pipeline, as intended.

      I will not be using the Azure plugin for Eclipse any longer, as it appears incompatible with Azure DevOps pipelines.

      posted in Continuous Integration and Delivery (CI/CD)
    • How to connect from a Docker container to the host?

      My problem is that I'm trying to connect to the Docker host (I'm running on Ubuntu) from a container. I've tried several solutions, including adding extra_hosts: host.docker.internal:host-gateway, but I'm still not able to establish the connection. I'm running Docker with docker-compose up.

      So, for example, when I run requests.get("http//:host.docker.internal:8000") in a script inside the container, I receive this error:

      requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
      

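      As an aside, the URL scheme in the call above is malformed ("http//:" rather than "http://"). With the extra_hosts entry in place, the request would normally be written like this (a sketch; the port must match whatever the host service actually listens on):

      import requests

      # Corrected scheme; host.docker.internal resolves to the host gateway
      # thanks to the extra_hosts entry in docker-compose.yml
      response = requests.get("http://host.docker.internal:8000")
      print(response.status_code)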

      This is my docker-compose.yml:

      version: "3.7"

      services:
        web:
          container_name: schedule_model_service-api
          build: .
          ports:
            - "8001:8001"
          command: startserver
          volumes:
            - ./:/app
          env_file:
            - .env
          depends_on:
            - postgres
            - redis
          extra_hosts:
            - "host.docker.internal:host-gateway"

        worker:
          container_name: schedule_model_service-worker
          build: .
          command: celeryworker
          volumes:
            - ./:/app
          env_file:
            - .env
          depends_on:
            - web
            - postgres
            - redis

        redis:
          container_name: schedule_model_service-redis
          image: redis:6.2.6
          volumes:
            - ./docker-data/redis:/var/lib/redis
          env_file:
            - .env
          restart: on-failure

        postgres:
          container_name: schedule_model_service-db
          image: postgres:14.1
          volumes:
            - ./docker-data/postgresql:/var/lib/postgresql/data
          env_file:
            - .env
          restart: on-failure

      posted in Continuous Integration and Delivery (CI/CD)
    • Container deployment sync problem with Ansible

      Imagine the following situation after you have switched from docker-compose to Ansible:

      • You start a DB container which needs time to boot up
      • You want to load a data dump into it

      (It could also be that you need to run tests after the service containers are up.)

      How do you tell Ansible that one task depends on another one finishing? (That is, Ansible also needs to support a condition that tells whether a task has completed, if that's not built into the specific module.)
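      For context, the missing piece is usually a bounded readiness poll between the two tasks; in Ansible itself this is typically expressed with the wait_for module or an until/retries loop. A minimal sketch of that polling logic in Python, assuming the DB container publishes Postgres on localhost:5432:

      import socket
      import time

      def wait_for_port(host: str, port: int, timeout: float = 60.0) -> bool:
          """Poll until a TCP connection succeeds or the timeout expires."""
          deadline = time.monotonic() + timeout
          while time.monotonic() < deadline:
              try:
                  with socket.create_connection((host, port), timeout=2):
                      return True  # the DB is accepting connections
              except OSError:
                  time.sleep(2)  # not up yet; try again
          return False

      # Assumption: the DB container maps Postgres to localhost:5432
      if wait_for_port("localhost", 5432):
          print("DB is up; safe to load the dump")
      else:
          raise SystemExit("DB never came up")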

      posted in Continuous Integration and Delivery (CI/CD)