When trying to use operator version 2.10.0 or 2.11.0, pods are continuously cycled #1786
-
We are currently using operator version 2.9.0 and RabbitMQ 3.13.7. Our team is new to the operator project and has just started working through the upgrade process and the steps to follow. However, with either version 2.10.0 or 2.11.0 of the operator, the highest-numbered pod in a cluster continuously cycles, and we can't tell why. This happens whether we define one replica or several. Right now I am trying to deploy a single new cluster with 2 replicas running RabbitMQ 4.0.4, and the behavior remains consistent. If I disable the operator, both replicas start without issue. I don't see anything obvious in the operator logs, and I am unable to get any logs from the pod that is cycling because it never stays up long enough. I'm really just hoping for some additional guidance on what to look for, or where, as this is all new to us. The most I can see are the following events, where it looks like the setup-container starts and is then killed.
Kubernetes: v1.29.1
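
For reference, the new cluster I'm deploying is essentially the minimal manifest below; the name, namespace, and the `-management` image tag are placeholders/assumptions rather than our exact values:

```yaml
# Minimal RabbitmqCluster matching the description above:
# 2 replicas running RabbitMQ 4.0.4. Name, namespace, and the
# "-management" image tag are placeholders, not our real values.
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: example-rabbitmq
  namespace: example-namespace
spec:
  replicas: 2
  image: rabbitmq:4.0.4-management
```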
Replies: 1 comment
-
Sigh, my question was silly and caused by me not connecting all the dots. There was another operator running in a different namespace that was stomping on my clusters. I imagine the mismatched versions caused the two operators to continually modify the clusters, putting them in an endless reboot cycle. Scoping all operators correctly resolved the issue.
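
In case it helps anyone else: by "scoping correctly" I mean making sure each cluster operator installation only watches its own namespaces, so two operators never reconcile the same RabbitmqCluster resources. Below is a sketch of the relevant part of the operator Deployment after the change, assuming the operator's documented `OPERATOR_SCOPE_NAMESPACE` environment variable; the deployment, namespace, and container names are the defaults from the published manifest and may differ in your install:

```yaml
# Relevant fragment of the cluster operator Deployment, restricting it to one
# namespace so a second operator installation elsewhere no longer fights over
# the same clusters. Names are the defaults from cluster-operator.yml; adjust
# to match your installation.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq-cluster-operator
  namespace: rabbitmq-system
spec:
  template:
    spec:
      containers:
        - name: operator
          env:
            - name: OPERATOR_SCOPE_NAMESPACE
              value: "team-a"   # placeholder; a comma-separated list of namespaces also works
```

The same change can be made in place with `kubectl set env` on the operator Deployment rather than editing the manifest, if that's easier in your setup.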