-
I will convert this issue to a GitHub discussion. Currently GitHub will automatically close and lock the issue even though your question will be transferred and responded to elsewhere. This is to let you know that we do not intend to ignore this, but this is how the current GitHub conversion mechanism makes it seem to users :(
-
Hi @bdaoudtdc 👋 Can you provide more details on how you determine there has been message loss? How many messages are lost, on average? Do you "force" a Pod to terminate when it stays in the "Terminating" state for some time? Are you able to consistently reproduce this issue? Any metrics, especially from the Grafana dashboards, would be very helpful to determine what's going on, and the YAML manifest you use to deploy your cluster would be helpful too.
-
Hello, we don't have Grafana running. Here is the YAML file:
```yaml
apiVersion: rabbitmq.com/v1beta1
override:
```
-
Can you tell me whether this report comes from you or someone you work with: https://groups.google.com/g/rabbitmq-users/c/bU9Ne0qgw3g/m/Dmit2o5yAwAJ? If not, then we now have two reports of a similar issue, and the problem is most likely unrelated to the Operator but potentially a bug in RabbitMQ itself. Given that you are able to reproduce it almost every time, it would help tremendously if you could provide an executable test case: simple publisher and consumer apps that we can run to trigger the problem and see those missing IDs, along the lines of the sketch below. Any chance you could provide that?
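For illustration only, a minimal publisher sketch could look like this. It is not taken from the reporter's application; it assumes the Python pika client, a quorum queue named "repro-qq", and a broker reachable at "my-rabbit" with default credentials, all of which are placeholders.

```python
# Hypothetical reproduction publisher: sends monotonically increasing IDs
# to a quorum queue with publisher confirms enabled.
import pika

params = pika.ConnectionParameters(host="my-rabbit")
connection = pika.BlockingConnection(params)
channel = connection.channel()

# Declare the quorum queue (idempotent if it already exists with the same type).
channel.queue_declare(queue="repro-qq", durable=True,
                      arguments={"x-queue-type": "quorum"})

# Enable publisher confirms; basic_publish will then raise
# pika.exceptions.NackError / UnroutableError if the broker does not confirm.
channel.confirm_delivery()

for i in range(100_000):
    channel.basic_publish(
        exchange="",
        routing_key="repro-qq",
        body=str(i),                                       # monotonically increasing ID
        properties=pika.BasicProperties(delivery_mode=2),  # persistent message
        mandatory=True,
    )

connection.close()
```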
-
@bdaoudtdc Can you test whether terminating the consumer after the test returns the messages to the queue? I can think of a scenario where this may happen. If that is the case, then the messages aren't lost but rather "stuck": the queue has assigned them to the consumer, but the delivery hasn't completed and may never complete if the consumer's prefetch is maxed out or if the queue receives no further messages. The consumer sketch below shows one way to check this.
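A matching consumer sketch, under the same assumptions as the publisher above (pika, queue "repro-qq", host "my-rabbit"), uses a bounded prefetch, manual acks, and logs any gap in the numeric IDs:

```python
# Hypothetical consumer: acks every delivery and reports gaps in the IDs
# published by the sketch above.
import pika

params = pika.ConnectionParameters(host="my-rabbit")
connection = pika.BlockingConnection(params)
channel = connection.channel()

# Bounded prefetch: deliveries beyond this limit stay assigned to this
# consumer in the queue until earlier ones are acked.
channel.basic_qos(prefetch_count=50)

expected = 0

def on_message(ch, method, properties, body):
    global expected
    received = int(body)
    if received != expected:
        print(f"gap: expected {expected}, got {received}")
    expected = received + 1
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="repro-qq", on_message_callback=on_message)
channel.start_consuming()
```

If the queue depth grows back after this consumer is stopped without acking everything, the unacknowledged deliveries were returned to the queue, i.e. the messages were "stuck" rather than lost.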
-
Does anyone else have this problem? If not, could you please share which client libraries you are using?
-
I'm away at the moment so can't look at this issue, but as we asked before: if you can provide a simple runnable application with reproduction steps, we'll be able to look into it further.
Cheers,
Karl
Karl Nilsson
-
Describe the bug
We have a cluster with 3 nodes.
When the nodes restart because a config value changed in the YAML file, we see message loss.
We use quorum queues.
We are using publisher confirms & consumer acks.
We believe the messages are lost between RMQ & the consumer.
If the consumer is down, we don't see any message loss.
Expected behavior
We don't expect any loss of messages.
Version and environment information