I have a QA Jenkins job and I am trying to set a cron trigger on it. Whenever the cron trigger runs the QA Jenkins job, it needs to take the lastSuccessfulBuild artifact of the Dev Jenkins job. Is there any way we could achieve this?
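One possible approach (a sketch only): pull the artifact over Jenkins' lastSuccessfulBuild permalink from a shell build step in the QA job. The host name, job name, artifact path and credential variables below are placeholders; the Copy Artifact plugin would be an alternative if both jobs run on the same controller.

# Shell build step in the QA job (cron-triggered), pulling the Dev job's last successful artifact.
# JENKINS_USER / JENKINS_TOKEN are assumed to come from a Jenkins credentials binding.
curl -fSL -u "$JENKINS_USER:$JENKINS_TOKEN" \
  "https://jenkins.example.com/job/Dev-Job/lastSuccessfulBuild/artifact/build/output.zip" \
  -o output.zip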
Nykeriab
@Nykeriab
Best posts made by Nykeriab
-
How to retrieve artifacts in Jenkins for a QA job?
Latest posts made by Nykeriab
-
Trouble when creating Replica Set
I want to convert a standalone instance to a Replica Set for my MongoDB deployment. I'm using Terraform and Helm, and I manage K8s with my provider (Scaleway). The Helm chart I'm using is: https://artifacthub.io/packages/helm/bitnami/mongodb
I can create 2 replicas and one arbiter, but they shut down after a short moment; in my terminal they seem to be stuck in a loop. I get error messages such as:
readiness probe failed.
I'm new to DevOps.
Here is my Terraform config:
resource "helm_release" "mongodb" { depends_on = [kubernetes_secret.tls_mongo] name = "mongodb" repository = "https://charts.bitnami.com/bitnami" chart = "mongodb" version = "11.1.0" namespace = "mongo" set { name = "image.tag" value = "4.2" } set { name = "auth.rootUser" value = var.mongo_username } set { name = "auth.rootPassword" value = var.mongo_password } set { name = "persistence.size" value = "10Gi" } set { name = "architecture" value = "replicaset" # standalone or replicaset } set { name = "auth.replicaSetKey" value = var.MONGODB_REPLICA_SET_KEY } }
Here is the status of my pods. The first screenshot shows them when they're running and everything is fine; the other shows the pod status when everything is driving me crazy.
Can someone help me?
@enceladus2022
Logs of the events after running this command:
kubectl describe pod mongodb-arbiter-0
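Related commands for digging further into the readiness-probe failures (a sketch; the mongo namespace and the mongodb-0 pod name are assumed from the Terraform config and the Bitnami chart's naming):

# Describe the arbiter, tail the previous logs of a failing member, and list recent events
kubectl describe pod mongodb-arbiter-0 -n mongo
kubectl logs mongodb-0 -n mongo --previous
kubectl get events -n mongo --sort-by=.lastTimestamp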
-
RE: How to update Docker Swarm services all at once?
To ensure a stack is in a consistent state after an upgrade, see the documentation on https://docs.docker.com/compose/compose-file/compose-file-v3/#update_config .
failure_action defaults to pause, so you need to set it to continue to ensure Docker updates the services regardless:

services:
  proxy:
    deploy:
      update_config:
        failure_action: continue
  backend:
    deploy:
      update_config:
        failure_action: continue
That said, Docker does not support blue/green deployments, but it is possible to use network aliases to ensure proxy:2 does not route to backend:1. Something like this, for example:
services:
  proxy:
    image: proxy:${VER}
    environment:
      BACKEND_HOST: backend-${VER}
  backend:
    image: backend:${VER}
    networks:
      default:
        aliases: ["backend-${VER}"]
To achieve your final requirement, you need to set the update_config parallelism to 1, and ensure that the order is set to start-first. A naive deployment would have a race condition if "proxy:2" becomes available before "backend:2", but as long as there is a healthcheck, "proxy:2" can prevent itself from receiving traffic until it has a successful connection test to "backend-2":
services:
  proxy:
    image: proxy:${VER}
    environment:
      BACKEND_HOST: backend-${VER}
    healthcheck:
      test: ["CMD", "/bin/sh", "-c", "test_backend.sh"]
Of course, you can run into problems if "proxy:2" finishes deploying but "backend:2" errors out: you could eventually have a situation where no proxy:1 is running, only proxy:2, while only backend:1 is running and no backend:2. So you want to ensure the update_config delays updating the next task until the current one is healthy, or pauses if that never happens.
PS. It seems potentially much less hassle to just design and code around the principle that backwards compatibility of ±1 version is expected.
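Putting those pieces together, the deploy section for the backend might look something like this (a sketch; the monitor value is illustrative and should match how long your service takes to become healthy):

services:
  backend:
    image: backend:${VER}
    deploy:
      update_config:
        parallelism: 1          # update one task at a time
        order: start-first      # start the new task before stopping the old one
        monitor: 30s            # how long to watch the new task before moving on
        failure_action: pause   # stop the rollout if the new task never becomes healthy
    networks:
      default:
        aliases: ["backend-${VER}"]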
-
RE: How to put production-like data into version control
It depends a bit on what kind of CMS you use; most CMSs offer a way to package data into packages that you can check in to Git and deploy to databases, almost like deploying code to web apps.
For example, we work with Sitecore and it lets you use TDS or Unicorn; other big CMSs use different tools, but mostly the same concept.
The tools let you pick items from your database and serialise them into code/text files that are packaged and checked in to Git. Say you have a copy of the database on your development workstation: you make some updates to the database, then serialise and package the change with TDS into a package that you check in to your development branch.
When you deploy the development branch to your Staging environment, the TDS package is applied and makes the same updates to the database of your Staging environment.
-
How best to delay startup of a kubernetes container until another container has done something?
I'm migrating a chunk of applications to k8s. Some of them have large amounts of config files which are best held in Git, as their size exceeds the maximum size for ConfigMaps. I have a simple git-sync image which I can configure to keep a persistent volume in sync with a git repository, and I had hoped to use it as a sidecar in some deployments.
Here's the crux. Some applications (like vendor apps that I can't control) require the configuration files to be there before the application starts. This means I can't just run the git-sync container as a sidecar as there's no guarantee it will have cloned the git repo before the main app starts. I've worked around this by having a separate deployment for the git sync and then having an initContainer for my main application which checks for the existence of the cloned git repo before starting.
This works but it feels a little messy. Any thoughts on a cleaner approach to this?
Here's a yaml snippet of my deployments:
#main-deployment
...
  initContainers:
  - name: wait-for-git-sync
    image: my-git-sync:1.0
    command: ["/bin/bash"]
    args: [ "-c", "until [ -d /myapp-config/stuff ]; do echo \"config not present yet\"; sleep 1; done; exit;" ]
    volumeMounts:
    - mountPath: /myapp-config
      name: myapp-config
  containers:
  - name: myapp
    image: myapp:1.0
    volumeMounts:
    - mountPath: /myapp-config
      name: myapp-config
  volumes:
  - name: myapp-config
    persistentVolumeClaim:
      claimName: myapp-config
...
#git-sync-deployment
...
  containers:
  - name: myapp-git-sync
    image: my-git-sync:1.0
    env:
    - name: GIT_REPO
      value: ssh://mygitrepo
    - name: SYNC_DIR
      value: /myapp-config/stuff
    volumeMounts:
    - mountPath: /myapp-config
      name: myapp-config
  volumes:
  - name: myapp-config
    persistentVolumeClaim:
      claimName: myapp-config
...
-
Kubernetes deployment with multiple containers
I have two containers, worker and dispatcher. I want to be able to deploy N copies of worker and 1 copy of dispatcher to the cluster. Is there a deployment that will make this work? dispatcher needs to be able to know about and talk to all of the worker instances (worker is running a web server). However, the external world only needs to talk to dispatcher.
As a stretch goal, I'd like to be able to scale N up and down based on demand.
Questions:
- Should I be using a Deployment or a StatefulSet? or something else?
- Do I actually need multiple deployments? Can containers from multiple deployments talk to each other?
- How do I dynamically scale this?
It seems like I can get partway there with a single Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      deploy: myapp
  template:
    metadata:
      labels:
        deploy: myapp
    spec:
      containers:
      - name: worker
        image: localhost/worker:latest
I expect this will give me 3 workers, but 0 dispatchers. However, if I add dispatcher to the containers list, I would expect to get 3 of those too, which is not what I want.
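If separate Deployments are the way to go, I imagine the dispatcher one would be pinned to a single replica, something like this (a sketch; names and image are placeholders, and a Service would still be needed so dispatcher can discover the workers):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dispatcher
spec:
  replicas: 1                    # exactly one dispatcher
  selector:
    matchLabels:
      deploy: dispatcher
  template:
    metadata:
      labels:
        deploy: dispatcher
    spec:
      containers:
      - name: dispatcher
        image: localhost/dispatcher:latest   # placeholder image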
-
Setup Folder When Setting up Kubernetes Storage
I am trying to setup automation around my Kubernetes storage and hitting some problems. I thought I would ask if there is a solution for this in the community.
The two Kubernetes storage options I am seeing each have a limitation:
Dynamic Storage: You can't control the name of the Persistent Volume or the directory that it creates on disk (making it hard to connect to again if needed).
Static Storage: You have to manually make the folder structure that the Persistent Volume expects.
Both of these can be overcome with more work. But I find it hard to believe that I am the first person with this issue, so I thought I would ask:
Is there a way using dynamic storage (aka Storage Classes) to choose the Persistent Volume name and folder structure that is created (so it can be re-connected to)?
OR
Is there a way to have a manually created Persistent Volume create the needed folder structure given in the YAML? (This is preferred.)
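For reference, this is the kind of statically created Persistent Volume I mean (a sketch; hostPath is used only for illustration, and I'm assuming its type: DirectoryOrCreate field behaves for a PV the same way it does for inline hostPath volumes, i.e. it creates the path if it doesn't exist):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-data              # name chosen by me, so it can be re-connected to later
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/myapp           # folder structure I want on disk
    type: DirectoryOrCreate     # assumed to create the path if missing, as with inline hostPath volumes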
-
RE: K8s cluster not deploying deployments across all the nodes
If you want to place a pod on each node you can use a DaemonSet. DaemonSets ensure that a copy of the application runs on every node (or on a selected set of nodes). For detailed information:
https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
You can find out if a DaemonSet is what you need by looking specifically at this section:
https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#deployments
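As a minimal illustration (names and image are placeholders), a DaemonSet manifest looks like this and schedules one pod per node:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest   # placeholder image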
-
RE: Does GitLab support assigning a reviewer based on the contributor?
I don't believe there is any native functionality to assist with this; this relationship will have to be modeled external to GitLab. I've opened this issue for clarity:
https://gitlab.com/gitlab-org/gitlab/-/issues/365887
-
How are containers secured with MACVLAN networks?
Containers cannot be connected both to an internal bridge and to the host network at the same time, according to https://devops.stackexchange.com/a/9884/35435 . However, this is what I need; I want a set of containers to be connected both to a Swarm overlay network for inter-container communication (or a different solution if necessary, like Weave), and also to the host's network, without NAT, to expose a service. Thus, for the container to expose its service, the only option I am aware of is to use a macvlan network.
Side note: using a Swarm service without the routing mesh (there will only be one container/service per host, and requests to that host should always be served by the container/service on that host) would work, except my understanding is that a published port will go through NAT. This is undesirable.
It appears that a container exposed through a macvlan network is completely exposed and cannot be firewalled off (I read that the macvlan network is independent of the host's network stack, somehow, and can't be firewalled by the host; if you have a source for this, please link it). It would be possible to limit the attack surface by using the smallest possible image, and by being very careful about setting bind addresses for services on the container, but this doesn't replace a firewall.
So, assuming the preceding three paragraphs aren't drastically incorrect, what is the recommended way to securely expose a container on a macvlan network?
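For concreteness, this is the kind of macvlan setup I mean (a sketch; the subnet, gateway and parent interface are placeholders for my environment):

# Create a macvlan network attached to the host's physical interface,
# then attach a container to it so it gets an address on the LAN without NAT.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  lan-macvlan

docker run -d --network lan-macvlan --name web nginx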
-
RE: When and who should set a feature to "done" in Azure DevOps?
Answers will be somewhat opinion-based, but where I work it's typically the responsibility of the person who raised the feature to close it down. Only they can decide if the development team has met all of the acceptance criteria.
QA may say "we've tested it, we're happy to ship it", but there may be some child items under the feature which are follow-up activities (documentation, customer comms, website updates, support might want to do deployment tests into a cloud platform...) which need to be completed before the feature is "done".