    Analeea

    @Analeea

    Reputation: 1
    Posts: 30123
    Profile views: 4
    Followers: 0
    Following: 0

    Best posts made by Analeea

    • Types of Security Testing

      What Are the Types of Security Testing?

      posted in Security
      Analeea

    Latest posts made by Analeea

    • RE: Can you delete project binaries from an Azure Devops repo

      Do not store your generated binaries in git. Git stores every version of a binary as a complete new object, not just the changes. Your projects must be set up so that the binaries can be rebuilt anywhere.
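
      To keep them out in the first place, add the build output paths to .gitignore; the entries below are hypothetical and should be adjusted to your projects:

      # hypothetical build-output paths
      bin/
      obj/
      *.dll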

      Then, in the pipeline, your code is copied to a second directory structure and the projects are built there, generating the binaries for you. From there, unit tests run, along with whatever else is in the pipeline.

      Use:

      • git rm to remove them from the repo (optional)
      • git filter-branch to remove them from old commits
      • git gc --prune=now to garbage-collect them from the .git directory
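
      A minimal sketch of that sequence, assuming the binaries live under a hypothetical bin/ directory:

      # assumption: generated binaries live under bin/
      git rm -r --cached bin/                 # stop tracking them
      git filter-branch --index-filter \
          'git rm -r --cached --ignore-unmatch bin/' -- --all   # strip them from history
      git gc --prune=now                      # drop the unreferenced objects

      Note that filter-branch rewrites history, so everyone else will need to re-clone or hard-reset afterwards.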

      posted in Continuous Integration and Delivery (CI/CD)
      Analeea
    • Deploying environment secrets to services

      I know I can use CD pipelines to deploy an app to a given environment (dev/stage/prod).

      Given that each environment should have its own environment variables/secrets for each app, how can I streamline the process of securely setting those variables/secrets in each environment without having to ssh into the environment server and create a .env file for the specific app/environment that's being deployed?
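
      Concretely, the manual step today is ssh-ing in and writing something like this per app and environment (path and variable names hypothetical, values elided):

      # /srv/myapp/.env  (hypothetical path)
      DATABASE_URL=...
      API_KEY=...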

      I've heard of KeyVaults but I'm not sure if that's overkill for a single set of environments.

      posted in Continuous Integration and Delivery (CI/CD)
      Analeea
    • RE: How to keep the overview over the entire lifecycle of backlog items

      There is no single, perfect solution that I've found. The default work item statuses are pretty sparse and ill-defined, but they are meant to be generic enough to apply to the maximum number of use cases. Yes, people say you shouldn't add statuses. I think those people are partly right and partly wrong.

      Each work item broadly goes through these phases:

      1. New: someone just threw it in the backlog. Needs lots of definition.
      2. Design/Analysis: business and technical requirements get hashed out. Bugs get investigated. Mockups get created. Scrum teams are doing backlog grooming sessions for these items. Clients need to approve stuff.
      3. Estimated: the team has provided an estimate good enough to start work.
      4. Ready: the customer approves the scope and estimate, and the item can be started at the team's convenience.
      5. Active: the team is actively developing or testing an item.
      6. QA Complete: the QA team verified to the best of their ability that the work items meet the acceptance criteria.
      7. In User Acceptance Testing: a customer can start using this new thing.
      8. Approved for Release: the customer wants to proceed with a production deployment.
      9. Released: the team deployed and the customer ensured it made it out.

      Work item states are great for many of these points in the lifecycle, whether doing agile, scrum, kanban, or waterfall. Feel free to name them appropriately for your team's workflow. I have found the Design/Analysis phase has a lot of nuance that isn't easily captured in the "state" of an item.

      Client approvals seem to be a missing "thing" in Azure DevOps. Rational Team Concert by IBM has an Approvals feature for its work items, which is nice. For Azure DevOps, configure the Kanban board (https://learn.microsoft.com/en-us/azure/devops/boards/boards/kanban-basics) for your backlog to add columns for these approvals. Azure DevOps tracks column assignments in the work item history, so provide meaningful names and descriptions for these columns.

      Many times, columns will map directly to work item states. Other times, they won't. This is similar to how I set up our Kanban board; each Kanban column is listed below with the work item state it maps to:

      Kanban column       Work item state
      New                 New
      Design/Analysis     Design/Analysis
      Example Mapping     Design/Analysis
      Backlog Grooming    Design/Analysis
      Estimated           Estimated
      Ready               Ready
      Active              Active
      QA Complete         QA Complete
      UAT                 UAT
      UAT Complete        UAT
      Done                Done

      The Design/Analysis part has a number of columns to represent that back-and-forth process of defining and refining requirements. Finally, the user acceptance testing side of things needed a little more explicit naming, but from a process perspective we didn't need a separate status. This was arbitrary, and simply worked for us.

      When things like mock ups, UML diagrams, or other design documents must be produced, consider adding Task work items under the relevant story to encompass this work.

      If work item states, Kanban board columns, and Task work items are not sufficient to capture the nuance of your process, you can always resort to tagging work items and creating work item queries.
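
      If you go the tagging route, a work item query can surface everything carrying a given tag. A minimal sketch in WIQL, assuming a hypothetical "Needs Client Approval" tag:

      SELECT [System.Id], [System.Title], [System.State]
      FROM WorkItems
      WHERE [System.Tags] CONTAINS 'Needs Client Approval'
      ORDER BY [System.ChangedDate] DESC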

      To summarize:

      • Add work item states to capture the main points of your lifecycle.
        • Litmus test: switch to the backlog view, highlight items, right-click, and copy to clipboard. Paste into a spreadsheet. If the work item states are not sufficient to communicate to management where each item is in your process, add a work item state. Ignore people who recommend using a Kanban board for this.
      • Customize the Kanban board (https://learn.microsoft.com/en-us/azure/devops/boards/boards/kanban-basics) to add columns beyond what work item states can communicate. This can work well for customer approvals, as long as the customers are the ones to change the column.
      • Use Task work items when doing things like mock ups and other design documentation.
      • Any other non-standard or wonky edge cases can be captured by tagging work items and developing work item queries (https://learn.microsoft.com/en-us/azure/devops/boards/queries/using-queries).
      posted in Continuous Integration and Delivery (CI/CD)
      Analeea
    • How to access elements of a variable in ansible

      I'm trying to access a specific value in a list of data, and it fails: Ansible can't find the ipa variable.

      Here is a short bit of the boulder.yaml vars file (yes, dual-homed; the network configuration requires it):

      service:
        control:
        - name: "bldr0cuomdev01"
          ipa: 192.168.101.81
        - name: "bldr0cuomdev02"
          ipa: 192.168.101.82
      management:
        control:
        - name: "bldr0cuomdev11"
          ipa: 10.100.78.81
        - name: "bldr0cuomdev12"
          ipa: 10.100.78.82
      

      And I'm trying to use the following (much shorter) jinja2 template (nodes.j2) to create a file:

      {{ service.control.ipa[0] }},{{ management.control.ipa[0] }}
      

      I'm basically getting the error that ipa is not found.

      Since there are two sections, a loop isn't going to work, but if I test just the service section as a loop, it does work. So Ansible can find ipa; my method just isn't valid for some reason.

      {% for s in service.control %}
      {{ s.ipa }}
      {% endfor %}
      

      I saw another suggestion of using .0 instead of a bracketed index:

      {{ service.control.ipa.0 }},{{ management.control.ipa.0 }}
      

      And I also tried using a quoted zero:

      {{ service.control.ipa['0'] }},{{ management.control.ipa['0'] }}
      

      But neither works. The terms are common enough on the 'net that my searches aren't turning up anything relevant.
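
      The one pattern I haven't tried yet is selecting the list element before the attribute; since control is a list of mappings, something like this looks structurally consistent, though I haven't verified it:

      {{ service.control[0].ipa }},{{ management.control[0].ipa }}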

      posted in Continuous Integration and Delivery (CI/CD)
      Analeea
    • RE: How to tell helm not to deploy a resource or remove it if a value is set to "false"?

      You've specified:

      {{ if .Values.hpa }}
      ---
      ...
      

      To skip that code when enabled is false, you'd need:

      {{ if .Values.hpa.enabled }}
      ---
      ...
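
      With values along these lines (hypothetical, but the usual shape for this pattern), .Values.hpa is a non-empty map, which is truthy even when enabled is false; that is why the condition has to test .Values.hpa.enabled:

      hpa:
        enabled: false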
      
      posted in Continuous Integration and Delivery (CI/CD)
      Analeea
    • What is the usage of the cluster external IP address?

      I'm confused about the usage of the cluster external IP address.

      Is this an address that can be used for ingress to access pods running on the cluster?

      If so, should this be the same as the control plane machine's IP address (I only have a single control plane)? Or rather, should it be an unused IP address on the subnet that the cluster sits on? For example, if I have the setup below:

      Master:   192.168.86.50
      Worker 1: 192.168.86.101
      Worker 2: 192.168.86.102
      Worker 3: 192.168.86.103
      

      Should the external IP address of the master be set to 192.168.86.50 or could I set it to 192.168.86.20 for example?

      Also, I notice the workers can take an external IP address too. Should these be set to the same external IP address as the master? If not, say they were 192.168.86.21, 192.168.86.22, and 192.168.86.23: would that mean I could reach any pod (with ingress set up) on 192.168.86.20, 192.168.86.21, 192.168.86.22, and 192.168.86.23?

      I've done some reading around it, but I'm still struggling to grasp the concept of an external IP address.

      posted in Continuous Integration and Delivery (CI/CD)
      Analeea
    • RE: Kubernetes deployment with multiple containers

      I have two containers, worker and dispatcher. I want to be able to deploy N copies of worker and 1 copy of dispatcher to the cluster. Is there a deployment that will make this work?

      Either a Deployment or a StatefulSet will work.

      dispatcher needs to be able to know about and talk to all of the worker instances (worker is running a web server).

      This has to do with network plugins (https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) and will depend on which plugin your cluster was configured with. Most clusters I have used allow traffic across the entire cluster by default.

      However, the external world only needs to talk to dispatcher.

      The main ways to do this are a Service or an Ingress; both are explained at https://kubernetes.io/docs/concepts/services-networking/ .

      I'd like to be able to scale N up and down based on demand.

      That is what horizontal pod autoscaling is for: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/

      Should I be using a Deployment or a StatefulSet? or something else?

      Does your container have state? If so, use a StatefulSet. The Kubernetes docs explain when a StatefulSet is appropriate: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/

      Do I actually need multiple deployments? Can containers from multiple deployments talk to each other?

      These things are not really related. You would use multiple Deployments so you can configure multiple workloads differently. Pods from different Deployments can still talk to each other, subject to the network plugin notes above.
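
      A minimal sketch of the worker side, with hypothetical names and images; the dispatcher would be an analogous Deployment with replicas: 1 plus a regular (or LoadBalancer) Service for outside traffic. A headless Service is one way to let the dispatcher discover every worker pod via DNS:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: worker
      spec:
        replicas: 3                    # N copies; an HPA can scale this
        selector:
          matchLabels:
            app: worker
        template:
          metadata:
            labels:
              app: worker
          spec:
            containers:
            - name: worker
              image: example/worker:latest   # hypothetical image
              ports:
              - containerPort: 8080
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: worker
      spec:
        clusterIP: None                # headless: DNS returns each worker pod IP
        selector:
          app: worker
        ports:
        - port: 8080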

      posted in Continuous Integration and Delivery (CI/CD)
      Analeea
    • Can I change a docker container from a self-delete policy to auto-restart?

      I've got a container started (not my code, not my container, but I often manage the server and deal with the aftermath of it being patched), and I need to make a hot fix to the deployment strategy. I had gone through the steps to change it to auto-restart, but a coworker noticed that the container was started with --rm. My guess is that this is circumventing any attempt to start it up after reboots (because the container just isn't there any more).

      So is there a way to remove the --rm policy from a running container?
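
      For reference, the flag can at least be confirmed on the running container by inspecting its AutoRemove setting (container name is hypothetical):

      docker inspect --format '{{ .HostConfig.AutoRemove }}' my-container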

      posted in Continuous Integration and Delivery (CI/CD)
      Analeea
    • How to configure AWS Incident Manager to call from same number?

      I have an Android device with call screening and want all my incident escalation phone calls to ring my phone, but so far all calls from AWS arrive from different numbers.

      Is it possible to request AWS to use the same number for its calls?

      I'm using AWS Incident Manager with a contact with a configured phone number.

      posted in Continuous Integration and Delivery (CI/CD)
      Analeea
    • Rebooting node at end of Jenkins pipeline

      I have a pipeline that runs on a dedicated bare metal node for automated performance benchmarking. Using bare metal is necessary due to the nature of the project.

      Near the end of the pipeline, the results are sent to users (simply seeing a pass or fail status is not sufficient).

      To provide each instance of the pipeline with a clean slate, the node is rebooted at the end of the pipeline. This makes the pipeline appear stuck in Jenkins. Several minutes after reconnecting to the node, Jenkins realizes that the pipeline which was running ended abruptly and marks it failed:

      + sudo reboot
      Cannot contact baremetal: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel@35ca6b21: baremetal": Remote call on baremetal failed. The channel is closing down or has closed down
      wrapper script does not seem to be touching the log file in /home/jenkins/workspace/bare-metal-benchmarks@tmp/durable-e96e9611
      (JENKINS-48300: if on an extremely laggy filesystem, consider -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.HEARTBEAT_CHECK_INTERVAL=86400)
      [Pipeline] }
      [Pipeline] // node
      [Pipeline] End of Pipeline
      ERROR: script returned exit code -1
      Setting status of *** to FAILURE with url *** and message: ' '
      Using context: Bare Metal Benchmarks
      Finished: FAILURE
      

      The next instance of this pipeline can then start running on the rebooted bare metal node.

      Is there a way to safely reboot a node at the end of a pipeline so Jenkins does not mark the pipeline status as failed?

      The basic structure of the scripted pipeline is:

      node('baremetal') {
          checkout scm

          try {
              stage('Benchmarks') {
                  // run benchmarks
              }

              stage('Process data') {
                  // process data and send results to users
              }
          }
          finally {
              stage('Clean up') {
                  sh 'sudo reboot'
              }
          }
      }
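
      One workaround I'm considering is to schedule the reboot instead of issuing it synchronously, so the shell step returns and the pipeline can finish before the node goes down; something like this (untested):

      finally {
          stage('Clean up') {
              // shutdown -r +1 schedules the reboot one minute out and
              // returns immediately, so the pipeline can end cleanly first
              sh 'sudo shutdown -r +1'
          }
      }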

      posted in Continuous Integration and Delivery (CI/CD)
      Analeea