    SOFTWARE TESTING

    kalyn (@kalyn)

    Reputation: 1 · Posts: 29678 · Profile views: 3 · Followers: 0 · Following: 0

    Best posts made by kalyn

    • RE: Is it possible to configure Jira/X-Ray so that the X-Ray tests can be used as sub-tasks?

      What Xray provides is the ability to create Sub Test Executions as sub-tasks of a Story, for example. More info here: https://confluence.xpand-it.com/display/XRAY/Sub-Test+Execution

      Since a Test is essentially a reusable test case template, even across later versions, it may make more sense to have the execution-related task as a sub-task instead. With Sub Test Executions you can track them directly on the Agile boards, right below the related Story, as depicted here: https://confluence.xpand-it.com/display/XRAY/Agile+Enhancements#AgileEnhancements-QuickviewofExecutionsforRequirementsfromAgileBoard

      posted in Automated Testing

    Latest posts made by kalyn

    • Gathering timespan statistics from Git

      I would like to easily query git to answer a question like the following:

      How much time passed from when a developer made a git commit to when that commit was merged to the default branch from a feature branch (if that pattern is being used), or was tagged with a particular tag (for trunk-based development)?

      At a higher level, my goal is to see how much time it takes for code to go from a developer's fingers to actually reaching production.
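
      To illustrate the shape of what I'm after (just a rough sketch, not a working solution), assuming a merge-commit workflow and that "main" is the default branch; fast-forward merges and the tag-based case would need different handling:

      #!/usr/bin/env bash
      # Hypothetical sketch: seconds from a commit being made to it landing
      # on main via a merge commit.
      commit="$1"

      # Committer timestamp of the original commit.
      committed=$(git show -s --format=%ct "$commit")

      # The oldest merge commit on the ancestry path from the commit to main
      # is (usually) the merge that first brought it onto the default branch.
      merged=$(git log --merges --ancestry-path --format=%ct "$commit"..main | tail -n 1)

      echo "seconds from commit to merge: $((merged - committed))"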

      Thanks!

      posted in Continuous Integration and Delivery (CI/CD)
    • Can the status be running after applying the yaml file?

      This is an interview question. It asks you to apply the following YAML file to your k8s cluster:

      apiVersion: V1
      Kind: pod
      Metadata:
        Name: freebox
        Spec:
          Containers:
          - Name: busybox
            Image: busybox:latest
            Imagepullpolicy: IfNotPresent
      

      After applying it and running kubectl get pod freebox, could the status be Running? Why?

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Virtual Machine Monitoring KQL yielding empty results

      I added Azure Monitor Logs to the Performance counters, and now it is showing the Perf data.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How can I get everything to use the same load balancer on DigitalOcean?

      A Service is a way to route traffic to an internal IP; by internal I mean inside Kubernetes. You don't really point it at an IP, you point it at a Deployment (or whatever), and Kubernetes takes care of the IP and of routing traffic to the pods that make up that Deployment.

      Not all Services work the same. ClusterIP creates an internal hostname and then routes traffic to the pods' IPs. This makes the service reachable only from other pods inside the cluster.

      Then you have NodePort, which opens a port on all worker nodes and routes the traffic to the deployment's pods. This exposes the service externally: it's now reachable via any worker node's IP and whatever port was selected by the service. You can also hard-code a port, but I wouldn't recommend it.
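
      Just as a sketch, with a made-up deployment called myapp (names and ports are placeholders):

      # Expose the hypothetical myapp deployment on a NodePort
      kubectl expose deployment myapp --type=NodePort --port=80

      # The PORT(S) column shows the assigned node port, e.g. 80:31234/TCP
      kubectl get service myapp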

      But who wants to deal with IPs and port numbers? That's lame. We want hostnames, you know, like myapp.dev.coolcompany.com.

      That is where a LoadBalancer comes in. If you have a cluster running on a cloud provider, then your cluster is likely already set up to use a LoadBalancer service. This service does two things: first it creates a NodePort service, and then it configures a load balancer on your cloud provider to point traffic at all worker nodes on the NodePort for your service.

      When your LoadBalancer is created, it likely comes with a public DNS name to reach it by. Now you can just create a DNS record pointing myapp.dev.coolcompany.com -> my.lb.dns.cloudprovider.com, and your app is reachable externally without anyone needing to know about IPs and ports.
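
      A rough sketch of that, again with the made-up myapp deployment; on a managed cluster (DigitalOcean included) this provisions an external load balancer for you:

      # Create a LoadBalancer Service for the hypothetical myapp deployment
      kubectl expose deployment myapp --type=LoadBalancer --port=80

      # Wait for EXTERNAL-IP to be populated, then point your DNS record at it
      kubectl get service myapp --watch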

      Everything I just explained is covered even better and more accurately by the Kubernetes documentation: https://kubernetes.io/docs/concepts/services-networking/service/ . You should probably give it a read.

      So what's an Ingress? Well, an Ingress is like creating a "LoadBalancer", but inside the cluster. We call this the Ingress Controller, and there are a ton of options like Nginx, Apache and Traefik, to name a few. This likely does not come included by default on your cluster. You will need to create and configure an ingress controller before you can use the "Ingress" API object.

      An ingress controller works like any other deployment running on your cluster. You will need to expose this application externally, most often using a "LoadBalancer" Service. Confused yet? It will look like this: ExternalLB --> IngressLBService(NodePort) --> IngressController

      Once your Ingress Controller is configured, you can then expose your deployments externally without using a "LoadBalancer" service. You can instead use a "ClusterIP" service plus an "Ingress" object configured to point at the service. It will look like this: ExternalLB --> IngressLBService(NodePort) --> IngressController --> DeploymentService(ClusterIP) --> DeploymentPods
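
      As a rough sketch (assuming an nginx ingress controller is already installed, and again using the made-up myapp deployment and hostname):

      # ClusterIP Service (the default type) in front of the deployment
      kubectl expose deployment myapp --port=80

      # Ingress object routing the hostname to that Service
      kubectl create ingress myapp --class=nginx \
        --rule="myapp.dev.coolcompany.com/*=myapp:80"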

      As you can see the "LoadBalancer" service and IngressController work very similarly. So why do you need an IngressController?

      Well, if your application stack is simple then you probably don't. Some of the cloud providers' LoadBalancers are pretty feature-rich, like on AWS. But some of these Ingress Controllers offer some pretty nice features, like path-based routing, integration with authentication providers, automatic TLS certificate provisioning, tracing and monitoring, etc. Basically, if you need these features and your cloud provider's LoadBalancer doesn't have them, then an Ingress Controller is the best way to get them.

      Another tiny advantage is that an Ingress can help abstract away the cloud provider, meaning less vendor lock-in. If I have an application that is exposed externally via an Ingress controller, then I can re-deploy that application on any cluster on any cloud provider, as long as that cluster has an ingress controller.

      If I'm depending on a "LoadBalancer" service, then it's likely that service is configured with a bunch of annotations, and those are often not compatible with other clouds. Meaning I can't just redeploy that application to another cloud provider. This use case isn't very important to many people, and it wouldn't be a big deal to fix a service to use a different cloud LB, but it is something to consider.

      You can find a lot more in the Kubernetes documentation: https://kubernetes.io/docs/concepts/services-networking/ingress/ .

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How does one perform systems testing against multiple interdependent machines?

      pytest-testinfra (https://github.com/pytest-dev/pytest-testinfra) lets you use the pytest test framework (https://docs.pytest.org/) to interact with and run tests against remote hosts.

      A simple test suite might look like:

      def test_mariadb_query(host):
          '''Connect to a remote host, run a SQL query using the `mariadb`
          CLI, record the result, and verify it matches our expectations.
          '''
          expected = "count\n15\n"
          res = host.run(
              '''mariadb example -e "select count from widgets where name='doodad'"'''
          )
          assert res.rc == 0
          assert res.stdout == expected
      

      The above would run the test_mariadb_query test on any hosts specified on the command line, e.g.:

      pytest --hosts=ssh://root@node1
      

      You can specify a list of hosts in the test file instead of on the command line:

      import pytest
      import testinfra

      testinfra_hosts = ["ssh://root@node1"]


      def test_mariadb_query(host):
          expected = "count\n15\n"
          res = host.run(
              '''mariadb example -e "select count from widgets where name='doodad'"'''
          )
          assert res.rc == 0
          assert res.stdout == expected

      With this, you can just run pytest and it will do what you expect.

      If you have tests that interact with specific hosts, you can create pytest fixtures for those hosts:

      import pytest
      import testinfra


      @pytest.fixture
      def node1():
          return testinfra.get_host("ssh://root@node1")


      def test_kernel_version(host):
          res = host.run("uname -r")
          assert res.stdout.startswith("5.17")


      def test_mariadb_query(node1):
          expected = "count\n15\n"
          res = node1.run(
              '''mariadb example -e "select count from widgets where name='doodad'"'''
          )
          assert res.rc == 0
          assert res.stdout == expected

      The above will run the test_kernel_version test against all hosts specified on the command line, but test_mariadb_query will run only against host node1, regardless of the command line.

      The downside to this solution is that it's built on top of a unit testing framework; since tests are supposed to be independent, there's no built-in mechanism to run them in a specific order. However, there are plugins such as pytest-ordering (https://pytest-ordering.readthedocs.io/en/develop/) that can do this.

      If your ordering requirements can be written as setup/teardown tasks, pytest fixtures (https://docs.pytest.org/en/6.2.x/fixture.html) may be a viable solution.


      You can use Ansible (https://www.ansible.com) itself to run your tests, which gives you complete control over the order of execution, but you will generally "fail fast" rather than "fail last" (execution stops at the first failure, rather than continuing and showing you all test failures).

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How to don't start entrypoint command on "docker-compose up"?

      You can scale a service that you don't want to run to 0:

      docker-compose up --scale phpunit=0 -d

      This will not start a container for the phpunit service, as stated in https://docs.docker.com/compose/reference/up/

      You can also check https://docs.docker.com/compose/profiles/ for more options on excluding certain services in your docker-compose file.
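
      For example, if the phpunit service were assigned to a (hypothetical) "test" profile in your docker-compose.yml, it would be skipped by default and only started when you explicitly enable the profile:

      # phpunit is skipped because its profile is not enabled
      docker-compose up -d

      # enable the profile when you do want it
      docker-compose --profile test up -d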

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How to solve my error about saving the global configuration before it is loaded in Jenkins?

      After much investigation, I discovered that some plugins, specifically the Matrix Authorization Strategy Plugin and role-strategy-plugin, were in a deadlock state where neither would be marked as "compatible" to update, as each depended on the other being updated first.

      Being fairly new to Jenkins, and not wanting to break things, I only updated the compatible plugins; however, I missed this deadlock-type issue.

      Updating both of these plugins fixed the issue.

      posted in Continuous Integration and Delivery (CI/CD)
    • How come I need 0.7 electric mining drills to produce 18.75 iron plates a minute using a stone furnace?

      I was trying to ratio-out a small iron plate making factory. I used https://kirkmcdonald.github.io/calc.html#tab=graph&data=1-1-19&furnace=stone-furnace&items=iron-plate:r:75/4 which indicated that I need 0.7 electric mining drills in order to feed 1 stone furnace:

      calculator showing 0.7 drills needed

      This is equivalent to saying I need a mining drill running at 70 percent speed to feed 1 stone furnace (you can't reduce the speed of anything in Factorio, though). A stock electric mining drill can harvest 0.5 items a second (or 1 item every 2 seconds, or 30 items a minute). A stone furnace can smelt a single iron ore in 3.2 seconds, thus 18.75 iron ore a minute. But 0.7 electric mining drills means it should harvest 21 ore a minute, yet this calculator says 18.75. If I did my math right, I should only need 0.625 of an electric mining drill, since 18.75/30 = 0.625.

      Are belts being factored in somehow? The calculator also shows belts, but I don't understand how they are being factored in exactly (if at all). The yellow belts can move 900 items a minute (15 a second).

      calculator showing the items/belts/factories

      Note: I am ignoring the coal portion of this. It shouldn't matter anyway for my question.

      Why do I need 0.7 electric mining drills instead of 0.625 to feed 1 stone furnace in order to make iron plates?

      posted in Game Testing
    • What is this blue area for?

      I recently re-visited The Chasm to fight the Ruin Serpent, and for some reason I have a blue quest area marked on my map.

      Large blue semi-transparent circle

      I have no quests in my quest log, and I do not get any prompts when entering the area.

      What does this blue circle represent?

      posted in Game Testing
    • How to make a message show for everyone when the closest player is on the correct team

      I am making a custom map with my friend, and at one part, you have to get scanned. If the player is on team green it should say "Scan completed," but if they're on team red, it shouldn't say anything.

      execute if entity @p[team=green] run say "Scan completed"
      

      I've tried this command so far, but didn't get any result.

      What could be my issue here?

      posted in Game Testing