Why are pods started for an old ReplicaSet?
-
A new deployment has been created and the release was successfully deployed to our AKS cluster.
We have noticed in the logs that pods for an old ReplicaSet (which still exists on the cluster) are regularly executed. This happens for only one specific ReplicaSet. The reason we noticed it is that the pod tries to perform a database update against an old database version.
Any idea why this may happen?
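One way to check which ReplicaSet actually owns such a pod (a minimal sketch, assuming kubectl access to the cluster; NAMESPACE and POD_NAME are placeholders):

# List ReplicaSets with desired/current counts; after a rollout, an old
# ReplicaSet scaled to 0 is normal, one with running pods is not.
kubectl -n NAMESPACE get rs

# Show which ReplicaSet owns the pod via its ownerReferences.
kubectl -n NAMESPACE get pod POD_NAME -o jsonpath='{.metadata.ownerReferences[0].name}'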
UPDATE: it turned out that we were running the "old" pod on a system test cluster (unfortunately, its connection string was set incorrectly). The misleading part was that the ReplicaSets had the same name, because:
Notice that the name of the ReplicaSet is always formatted as [DEPLOYMENT-NAME]-[RANDOM-STRING]. The random string is randomly generated and uses the pod-template-hash as a seed.
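Since the same Deployment manifest produces the same pod-template-hash (and therefore the same ReplicaSet name) on every cluster, a quick sanity check is to confirm which cluster kubectl is pointed at and compare the hash labels (a sketch; NAMESPACE is a placeholder):

# Confirm which cluster kubectl is currently talking to.
kubectl config current-context

# The pod-template-hash label distinguishes ReplicaSets; identical
# deployments yield identical hashes across clusters.
kubectl -n NAMESPACE get rs --show-labels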
-
I need to clarify a few things first:
1- Did you create a completely new Deployment?
Example:
The old deployment is named Deployment_1
The new deployment is named Deployment_2
2- If yes: did you create the new deployment only because the old one was not updating its pods, due to the blocker you mentioned above?
3- If yes, and you updated the image name for that deployment, then you need to set the replicas to zero using this command:
kubectl -n NAMESPACE scale deploy DEPLOYMENT_NAME --replicas=0
Then make sure the replica count is zero with this command:
kubectl -n NAMESPACE get deploy/DEPLOYMENT_NAME
Then scale it up again:
kubectl -n NAMESPACE scale deploy DEPLOYMENT_NAME --replicas=1
Note: you can set whatever replica count you need; 1 is just an example. A zero-downtime alternative is sketched below.
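As a side note: on reasonably recent clusters (kubectl 1.15+), a rolling restart recreates all pods from the current template without the downtime of scaling to zero (a sketch using the same placeholders as above):

# Trigger a rolling restart of the deployment's pods.
kubectl -n NAMESPACE rollout restart deploy/DEPLOYMENT_NAME

# Watch until the rollout completes.
kubectl -n NAMESPACE rollout status deploy/DEPLOYMENT_NAME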