    jonetta (@jonetta)

    Reputation: 0 · Posts: 30001 · Profile views: 3 · Followers: 0 · Following: 0

    Best posts made by jonetta

    This user hasn't posted anything yet.

    Latest posts made by jonetta

    • port forward ssh from traefik reverse proxy on docker to a k3s container

      I'm running a k3s cluster (1 master + 2 workers) and a Traefik Docker container on the same host (the master). The Traefik container handles the reverse proxying and TLS termination, which already works on ports 80 and 443 for my different subdomains. I'm now trying to get SSH working too (for just one subdomain), but without success so far.

      • port 22 is open through ufw allow (on Ubuntu 22.04)
      • traefik rules are set as follows:
          tcp:
            routers:
              giti-ssh:
                entrypoints:
                  - "https" # tried also with a ssh entryoint
                rule: "HostSNI(`*`)"
                tls: {}
                service: giti-ssh
            services:
              giti-ssh:
                loadBalancer:
                  servers:
                    - address: "10.42.0.232:22"
      
      • k3s is running flannel and metallb where the externalIP-range is at 10.42.0.230-250
      • ip a shows (the interesting parts):
      2: ens192:  mtu 1500 qdisc mq state UP group default qlen 1000
          link/ether 00:50:56:19:ea:c3 brd ff:ff:ff:ff:ff:ff
          altname enp11s0
          inet "private"/32 metric 100 scope global dynamic ens192
             valid_lft 36147sec preferred_lft 36147sec
          inet 10.42.0.200/32 scope global ens192
             valid_lft forever preferred_lft forever
          inet6 "private"/64 scope link
             valid_lft forever preferred_lft forever
      3: br-5014eb2ffdf2:  mtu 1500 qdisc noqueue state UP group default
          link/ether 02:42:7e:ab:72:98 brd ff:ff:ff:ff:ff:ff
          inet 172.18.0.1/16 brd 172.18.255.255 scope global br-5014eb2ffdf2
             valid_lft forever preferred_lft forever
          inet6 fe80::42:7eff:feab:7298/64 scope link
             valid_lft forever preferred_lft forever
      4: docker0:  mtu 1500 qdisc noqueue state DOWN group default
          link/ether 02:42:a5:03:77:2c brd ff:ff:ff:ff:ff:ff
          inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
             valid_lft forever preferred_lft forever
      7: flannel.1:  mtu 1450 qdisc noqueue state UNKNOWN group default
          link/ether 42:1b:d3:49:d3:6b brd ff:ff:ff:ff:ff:ff
          inet 10.42.0.0/32 scope global flannel.1
             valid_lft forever preferred_lft forever
          inet6 fe80::401b:d3ff:fe49:d36b/64 scope link
             valid_lft forever preferred_lft forever
      8: cni0:  mtu 1450 qdisc noqueue state UP group default qlen 1000
          link/ether e2:27:27:96:96:7e brd ff:ff:ff:ff:ff:ff
          inet 10.42.0.1/24 brd 10.42.0.255 scope global cni0
             valid_lft forever preferred_lft forever
          inet6 fe80::e027:27ff:fe96:967e/64 scope link
             valid_lft forever preferred_lft forever
      
      • the containers are set up, and the service for the SSH container is listening on port 22 as type: LoadBalancer
      • I can reach that container through another service and IP on port 443 via the Traefik reverse proxy, but I'm missing something for port 22. I think it has to do with Traefik's HostSNI matching or maybe the iptables rules... (a sketch of the usual approach follows below)
      • versions: Traefik (Docker): latest (just for testing; I'm going to pin a tagged version), k3s: v1.24.6+k3s1

      I also can't connect through 1932/udp (Minecraft), so I suppose running Traefik for anything other than HTTP(S) is harder...
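
      For reference, here is a minimal sketch of the kind of setup that is usually suggested for this case (an illustration and an assumption, not my working config): SSH traffic is not TLS, so Traefik cannot read an SNI from it, and a plain TCP router on a dedicated non-TLS entrypoint is the usual approach. The port number and names below are illustrative.

          # static configuration (traefik.yml) - hypothetical dedicated entrypoint
          entryPoints:
            ssh:
              address: ":2222"   # a host port not already used by the host's own sshd

          # dynamic configuration - plain TCP router, no TLS termination
          tcp:
            routers:
              giti-ssh:
                entryPoints:
                  - "ssh"
                rule: "HostSNI(`*`)"   # only `*` is valid for non-TLS TCP traffic
                service: giti-ssh
            services:
              giti-ssh:
                loadBalancer:
                  servers:
                    - address: "10.42.0.232:22"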

      Can someone give me a hint on how to achieve this?

      Thanks in advance! jim

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Deploying environment secrets to services

      You can achieve this in multiple ways.

      You can set your credentials as secrets if you use a CI/CD system such as Jenkins or GitLab pipelines. Then reference them within your pipeline script as variables and inject the credentials into your application during the build. This will not expose the credentials in the CI/CD system's logs. Please refer to the following articles for more information:

      https://docs.gitlab.com/ee/ci/variables/

      https://www.jenkins.io/doc/book/using/using-credentials/

      https://devops.com/how-to-securely-manage-secrets-within-jenkins/
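
      As a minimal sketch of this first approach (illustrative job and variable names, assuming GitLab): API_KEY would be defined as a masked CI/CD variable under Settings > CI/CD > Variables, and the pipeline writes it into a .env file at build time, so the secret never lives in the repository or the job log.

        # .gitlab-ci.yml
        build:
          stage: build
          script:
            - echo "API_KEY=${API_KEY}" > .env   # API_KEY comes from a masked CI/CD variable
            - docker build -t myapp:latest .     # the build consumes the generated .env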

      Depending on your application's security requirements, you can set up an identity-based secret and encryption management system such as HashiCorp Vault. Vault provides encryption services that are gated by authentication and authorization methods. You can create a role for your CI/CD system within Vault and fetch the secrets during the build to auto-generate a .env file. Learning the Vault concepts may take a decent amount of time.

      https://docs.gitlab.com/ee/ci/secrets/
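
      A hedged sketch of what that GitLab-Vault integration can look like (it assumes a Vault server with a JWT/OIDC auth role is already configured and that VAULT_SERVER_URL and VAULT_AUTH_ROLE are set as CI/CD variables; all names below are illustrative):

        # .gitlab-ci.yml
        build:
          stage: build
          id_tokens:
            VAULT_ID_TOKEN:
              aud: https://vault.example.com        # hypothetical Vault address
          secrets:
            DATABASE_PASSWORD:
              vault: production/db/password@kv      # <path>/<field>@<secrets engine mount>
              file: false                           # expose the value directly, not as a file path
          script:
            - echo "DATABASE_PASSWORD=${DATABASE_PASSWORD}" > .env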

      posted in Continuous Integration and Delivery (CI/CD)
    • Minimum laptop requirements for devops learning

      I started learning DevOps technologies, but I ran into a problem with my laptop on the first labs: I can't provision more than 3 VMs in Vagrant, and I found out that my processor isn't sufficient, so I can't build labs of 6 or more servers...

      Is there a minimum requirement if I have to buy a new laptop? Will I need more (or fewer) resources when I move on to containers or newer technologies?

      My current configuration: CPU: i7-7500U (4 cores, 2.7 GHz), RAM: 8 GB

      posted in Continuous Integration and Delivery (CI/CD)
    • How do you securely deploy large number of Kubernetes components in isolation?

      Preface: I am not experienced in the design of large-scale infrastructure deployments for infrastructure applications.

      The assumptions for the questions:

      • I have read that it is a good practice to host Kubernetes components in isolation from each other on the network as the network provides a layer of security control.

      • In a large K8S deployment environment, you may have multiple instances of Kubernetes deployment. Each Kubernetes deployment has components including etcd, kube API server, scheduler, controller manager, etc.

      If we consider both points above, then the questions are:

      Q1a) How do you scale the Kubernetes administration/control plane? How do you scale from 1 etcd server to 10 etcd servers for example?

      Q1b) In a large organization where there are different business units, do you deploy one K8S instance (active/passive) for each business unit, or multiple K8S instances serving the entire organization?

      Q2) For each deployment method described in part (1b), how do you reconcile multiple instances of Kubernetes to get a master view in order to monitor all the instances of containers running on Kubernetes?
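
      To make Q1a concrete, here is a purely illustrative sketch (an assumption for illustration, not something I run): with kubeadm, scaling the control plane typically means running several etcd members and pointing every API server at all of them. The endpoints below are placeholders.

        # kubeadm ClusterConfiguration fragment (hypothetical external etcd cluster)
        apiVersion: kubeadm.k8s.io/v1beta3
        kind: ClusterConfiguration
        etcd:
          external:
            endpoints:
              - https://etcd-1.example.internal:2379
              - https://etcd-2.example.internal:2379
              - https://etcd-3.example.internal:2379
            caFile: /etc/kubernetes/pki/etcd/ca.crt
            certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
            keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key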

      posted in Continuous Integration and Delivery (CI/CD)
    • Jenkins JDK17 Docker still using JDK9?

      I installed the Jenkins JDK 17 Docker image:

      docker pull jenkins/jenkins:jdk17
      

      The reason is that I run a single node (I know it isn't best practice, but this is just for trying things out), and my target application is also a Java build.

      Now, the issue is that when I checked the version in a build, it is still Java 9, not 17! This results in build failures.

      + java -version
      java version "9.0.4"
      Java(TM) SE Runtime Environment (build 9.0.4+11)
      Java HotSpot(TM) 64-Bit Server VM (build 9.0.4+11, mixed mode)
      

      How can I get JDK 17 so that I can build my application using Jenkins Docker?
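
      For reference, a hedged sketch of one possible cause (an assumption, not confirmed): the jenkins/jenkins:jdk17 image only controls the JVM that runs the Jenkins controller, while the JDK used inside a build comes from the Global Tool Configuration or the agent. Registering the image's bundled JDK 17 as a tool, for example with Configuration as Code, is one option; the tool name is illustrative.

        # jenkins.yaml (Configuration as Code)
        tool:
          jdk:
            installations:
              - name: "jdk17"
                home: "/opt/java/openjdk"   # JAVA_HOME inside the Temurin-based jenkins/jenkins:jdk17 image

      The job (or a pipeline's tools section) then has to select that JDK by name.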

      posted in Continuous Integration and Delivery (CI/CD)
    • TeamCity run step in docker

      I'm trying to set up TeamCity to run tests on a .NET project. I have installed TeamCity and its agent (with access to Docker) using Docker Compose:

        teamcity:
          image: jetbrains/teamcity-server
          container_name: teamcity
          volumes:
           - /home/arsene/teamcity/data:/data/teamcity_server/datadir
           - /home/arsene/teamcity/logs:/opt/teamcity/logs
          environment:
            - TEAMCITY_HTTPS_PROXY_ENABLED=true
          labels:
            - traefik.http.routers.teamcity.rule=Host(`myhost`)
            - traefik.http.routers.teamcity.tls=true
            - traefik.http.routers.teamcity.tls.certresolver=le   
            - traefik.http.services.teamcity.loadbalancer.server.port=8111
      

        teamcityagent:
          image: jetbrains/teamcity-agent
          container_name: teamcityagent
          volumes:
            - /home/arsene/teamcity/agent:/data/teamcity_agent/conf
            - /var/run/docker.sock:/var/run/docker.sock
            - /opt/buildagent/work:/opt/buildagent/work
            - /opt/buildagent/temp:/opt/buildagent/temp
          environment:
            - AGENT_NAME=TeamCityRunner
            - SERVER_URL=myhost
            - DOCKER_IN_DOCKER=start
          privileged: true

      When I configure my project, I select the following config:

      [screenshot: TeamCity build step configuration]

      But my build fails with very little detail about what's wrong.

      Step 1/2: Test (.NET)
        Running step within Docker container mcr.microsoft.com/dotnet/sdk:5.0
        dotnet test
          Starting: .NET SDK 5.0.408 /usr/bin/dotnet test /opt/buildagent/work/e83eb8da5bf3868c/ImPresent.Tests/ImPresent.Tests.csproj @/opt/buildagent/temp/agentTmp/1.rsp
          in directory: /opt/buildagent/work/e83eb8da5bf3868c
          MSBUILD : error MSB1021: Cannot create an instance of the logger. The given assembly name or codebase was invalid. (0x80131047)
          Switch: TeamCity.MSBuild.Logger.TeamCityMSBuildLogger,/opt/buildagent/plugins/dotnet/tools/msbuild15/TeamCity.MSBuild.Logger.dll;TeamCity;plain
          Process exited with code 1
          Process exited with code 1 (Step: Test (.NET))
        Step Test (.NET) failed
      

      Do you have any idea what could be wrong, or how to check the logs to get more details? I ran my tests in the same container on my desktop, and they pass.

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: How to configure AWS Incident Manager to call from same number?

      With help from a Reddit comment ( https://www.reddit.com/r/aws/comments/vykc06/comment/ig34pxe/?utm_source=share&utm_medium=web2x&context=3 ), I found that there is an Amazon virtual contact card available that contains all the phone numbers Amazon uses to make these calls. Adding this contact to my phone allows these pages to skip my call screener.

      https://docs.aws.amazon.com/incident-manager/latest/userguide/contacts.html#contacts-details-file

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: AWS- How to estimate a server configuration for nginx load balancer?

      There are 2 ways I can think of to solve this problem:

      • You can use data points from your own experience or from others. One data point I can provide: ~70,000 sessions per month, roughly 50 connections per page request, ~1 MB page size and maybe 1.5-2 pages per session, served by nginx 1.14 on a t3.medium with ~200 vhosts and their TLS certificates. I don't recall ever being charged for CPU credits, so CPU usage can't have been above 20% over 24h; I do recall upgrading from a t3.small at some point, so the constraint must have been memory.

      • You can use Route53 weighted routing to distribute the traffic between, say, a c6gn.xlarge (to be super safe) and a t4g.medium, then slowly dial up the percentages and watch CPU, memory, network and disk throughput until you notice bottlenecks (see the sketch below).
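
      A minimal sketch of that weighted-routing idea, expressed as CloudFormation (the hosted zone, record names and IPs are placeholders, not from the answer):

        Resources:
          BigInstanceRecord:
            Type: AWS::Route53::RecordSet
            Properties:
              HostedZoneName: example.com.
              Name: www.example.com.
              Type: A
              SetIdentifier: c6gn-xlarge    # required whenever Weight is set
              Weight: 90                    # start by sending most traffic to the safe choice
              TTL: "60"
              ResourceRecords:
                - 203.0.113.10
          SmallInstanceRecord:
            Type: AWS::Route53::RecordSet
            Properties:
              HostedZoneName: example.com.
              Name: www.example.com.
              Type: A
              SetIdentifier: t4g-medium
              Weight: 10                    # dial this up while watching the instance metrics
              TTL: "60"
              ResourceRecords:
                - 203.0.113.11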

      posted in Continuous Integration and Delivery (CI/CD)
    • Does GitLab support assigning a reviewer based on the contributor?

      My company has the notion of a senior tier of developers. These developers are dispersed among the development teams, and we assign the senior developers from one team to review the contributions of a different team that they were assigned to.

      Let's say I have a project foo, and for foo I want:

      • SeniorBarGroup to review PRs to BazGroup, and
      • SeniorBazGroup to review PRs to QuzGroup

      Is this workflow possible in GitLab?

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Is there a way to get message in Teams and Azure DevOps when someone mentions me in a comment in a Work Item or Pull Request?

      Yes, you can do this. In Azure DevOps you can use webhooks that post directly to Teams; just look around for an implementation.

      In GitHub, you can use a webhook and send the message to an intermediate service, such as a Logic App, that forwards it to Teams, or you can build your own middleware; there are several options for communicating with Teams ( https://docs.microsoft.com/en-us/microsoftteams/platform/sbs-gs-csharp ).

      posted in Continuous Integration and Delivery (CI/CD)