    jeanid

    @jeanid

    Reputation: 0
    Posts: 29477
    Profile views: 1
    Followers: 0
    Following: 0

    Best posts made by jeanid

    This user hasn't posted anything yet.

    Latest posts made by jeanid

    • RE: calculating the size of objects in AWS S3 buckets

      You might have incomplete multipart uploads in your S3 bucket, which do not show up in the AWS console.

      If the complete multipart upload request isn’t sent successfully, Amazon S3 will not assemble the parts and will not create any object. The parts remain in your Amazon S3 account until the multipart upload completes or is aborted, and you pay for the parts that are stored in Amazon S3. These parts are charged according to the storage class specified when the parts were uploaded.

      You can follow the guide https://aws.amazon.com/blogs/aws-cloud-financial-management/discovering-and-deleting-incomplete-multipart-uploads-to-lower-amazon-s3-costs/ to check whether that's what's taking up the storage in your S3 bucket.

      If you do not have a lifecycle policy to abort incomplete multipart uploads, you should probably add one.
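
      For illustration, a rough sketch with the AWS CLI (the bucket name and retention window are placeholders, and the second call replaces any existing lifecycle configuration on the bucket):

      # List in-progress (incomplete) multipart uploads in the bucket
      aws s3api list-multipart-uploads --bucket my-bucket

      # Add a lifecycle rule that aborts incomplete multipart uploads after 7 days.
      # Note: this overwrites the bucket's current lifecycle configuration, so merge
      # any existing rules into the JSON first.
      aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
        --lifecycle-configuration '{
          "Rules": [
            {
              "ID": "abort-incomplete-multipart-uploads",
              "Status": "Enabled",
              "Filter": { "Prefix": "" },
              "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
            }
          ]
        }'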

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Validating kubernetes manifest with --dry-run and generateName

      After a lot of playing around, I came to a working solution that I briefly mentioned in a comment on the original question. The CI now creates a namespace on the cluster, runs the dry-run apply, and then deletes the namespace when finished. Not sure if this is the perfect solution, but it's working as I hoped.

      # Render the chart locally so the server-side dry run can validate the final manifests
      helm template . \
        --values common/values-common.yaml \
        --values variants/$VARIANT/values-$VARIANT.yaml \
        --name-template=github-actions-test \
        --set image.tag=github-actions-test \
        --namespace $NAMESPACE \
        --debug > dry-run.yaml

      # Create the temporary namespace (the client dry run piped into apply keeps this idempotent)
      kubectl create namespace $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -
      # Server-side dry run; only stderr (validation errors) is captured into the step output
      echo "errors=$(kubectl create -f dry-run.yaml -n $NAMESPACE --dry-run=server -o yaml 2>&1 > /dev/null)" >> $GITHUB_OUTPUT
      kubectl delete namespace $NAMESPACE

      posted in Continuous Integration and Delivery (CI/CD)
    • Can I define a CodePipeline with Terraform that deploys my Terraform resources?

      I'm trying to figure out the best way to automate the deployment of infrastructure resources defined in Terraform. Ideally, I'd like to deploy all my code — including resource definitions — in a CI/CD manner.

      So, if I define an AWS CodePipeline that reads my Terraform code from GitHub, can I have that CodePipeline...deploy itself (+ any other AWS resources defined in the repo)?

      Update:

      I built this and tried to push a change through the pipeline that updated the CodeBuild image. It seems like the CodeBuild step to run Terraform succeeded in updating the pipeline, but then the "Build" stage that did this ended up in a "Cancelled" state with the message: Pipeline definition was updated.

      So it seems like it sort of worked, but I may have to kick off a manual release of the most recent change after that. Any downsides/risks?

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: forward http request to pod running on worker node

      The kubelet usually binds to whichever interface holds your default gateway. I suspect your SDN pods are failing to start because of this. At that stage, I would check netstat, the IP routes, and the iptables rules (a few example commands are sketched below).

      You may want to remove that internal network altogether. When deploying Kubernetes, it is recommended that your nodes run in a single LAN, with load balancers exposing your ingresses.

      Otherwise, you need your default gateway to point to your internal network, and then some kind of router NAT-ing traffic while sending it back to your bridged network.
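
      For example (run on the affected node; purely illustrative, nothing specific to this cluster):

      ip route show default            # which interface carries the default gateway?
      netstat -tlnp | grep kubelet     # which addresses is the kubelet listening on?
      iptables-save | grep -c KUBE     # are the kube-proxy/CNI chains present at all?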

      posted in Continuous Integration and Delivery (CI/CD)
    • Set umask for an individual salt state

      Is there a way to set the umask for the pip state module, or is there a more generic way to set the umask before specific state modules run? In Ansible, the pip module takes a umask parameter, but I do not see an equivalent option in the Salt pip state module. We run salt-call with a umask of 007, but we need to install certain Python packages with more open permissions. Is there a way to do this?

      posted in Continuous Integration and Delivery (CI/CD)
    • NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized

      We have a cluster with 4 worker nodes and 1 master, with the flannel CNI installed; one kube-flannel-ds-xxxx pod is running on every node.

      They used to run fine, but one node suddenly entered the NotReady state and does not come out of it anymore.

      journalctl -u kubelet -f on the node constantly emits "cni plugin not initialized":

      Jul 25 14:44:05 ubdock09 kubelet[13076]: E0725 14:44:05.916280   13076 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
      

      Deleting the flannel pod makes a new one start up, but the plugin remains uninitialized. What can we do or check to fix this?

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Transferred 0 file(s) while transferring war file from Jenkins server to remote server

      You can try adding an archive stage before this; maybe it isn't picking up the file because the war isn't present yet when it needs to copy. You can also try using '*/target/**/jenkinswar.war' as your source file pattern.

      posted in Continuous Integration and Delivery (CI/CD)
    • Pass variables from current shell environment to the node app

      Could somebody please help me with passing variables to a Node app? Given: a Node app on Ubuntu whose package.json contains

      "scripts": {
        "start": "node index.js"
      }
      

      The app is started with the command npm run start.

      Problem: the app requires a ton of environment variables. The app runs on the server, so I am not using a .env file; the variables are exported in the shell, and I can verify they are available before I do npm run start: echo $MY_VAR works.

      Currently, in the shell that has all the variables exported, when I start the app, process.env.MY_VAR is undefined.

      PS: I'd always used a .env file before, but now that I want it to run on the server, I have no idea what magic is needed to pass variables from the current shell environment to the Node app.
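
      For context, a minimal bash illustration of the usual cause (not necessarily this exact setup): a variable that is set but not exported is expanded by the shell itself, yet is not inherited by child processes such as npm/node.

      MY_VAR=hello                 # set, but NOT exported
      echo $MY_VAR                 # prints "hello" (expanded by the shell itself)
      npm run start                # process.env.MY_VAR is undefined in the Node app

      export MY_VAR=hello          # exported: inherited by every child process
      npm run start                # process.env.MY_VAR === "hello"

      MY_VAR=hello npm run start   # one-off alternative: prefix the command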

      posted in Continuous Integration and Delivery (CI/CD)
    • RE: Where does k3s store its "/var/lib/kubelet/config.yaml" file?

      /etc/rancher/k3s/config.yaml

      Note: /etc/rancher/k3s/config.yaml, not /etc/rancher/k3s/k3s.yaml! You may have to create the file if it doesn't exist.

      As explained in https://github.com/k3s-io/k3s/issues/5213#issuecomment-1061135298, these kubelet settings are passed to k3s as customized flags for the Kubernetes processes (https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#customized-flags-for-kubernetes-processes). But you can also put them in the config file, as shown in https://github.com/k3s-io/k3s/issues/5488#issuecomment-1106227357, e.g.:

      kubelet-arg:
        - "kube-reserved=cpu=500m,memory=1Gi,ephemeral-storage=2Gi"
        - "system-reserved=cpu=500m, memory=1Gi,ephemeral-storage=2Gi"
        - "eviction-hard=memory.available

      Which would look like,

      kubelet-arg:
        - "eviction-hard=imagefs.available,memory.available,nodefs.available,nodefs.inodesFree"
      

      You can find more information at,

      • https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#customized-flags-for-kubernetes-processes
      • https://rancher.com/docs/k3s/latest/en/installation/install-options/#configuration-file
      posted in Continuous Integration and Delivery (CI/CD)
    • What does the ASCII art on the Amazon Linux 2 AMI MOTD mean?

      When logging in to an Amazon Linux EC2 instance, one is greeted by the following MOTD:

      [image: Amazon Linux MOTD]

      What does this ASCII art try to convey?

      posted in Continuous Integration and Delivery (CI/CD)