Degraded performance for random stream queues on identical consumer #415
-
Thank you @christian-ehmig
-
Thanks! So far it seems to be heading in "the same" direction: fast after a restart and getting slower over time. I will pick up the latest main version and restart again. I am still not sure this is related to the Golang client/consumer; as far as I remember, we had similar issues with Java stream consumers. We'll try to provide isolated code samples. Our consumers are already stripped down: they just "consume" each message and discard it, with nothing else going on. In addition, we update a timestamp variable and a "read messages" counter; by comparing the timestamp, we know how far behind the given stream queue is.
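For reference, a minimal sketch of such a discard-only consumer with the rabbitmq-stream-go-client; the URI, stream name, and the counter/timestamp bookkeeping shown here are illustrative assumptions, not the actual code from this report:

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"

	"github.com/rabbitmq/rabbitmq-stream-go-client/pkg/amqp"
	"github.com/rabbitmq/rabbitmq-stream-go-client/pkg/stream"
)

func main() {
	// Connect to the broker (URI is a placeholder).
	env, err := stream.NewEnvironment(
		stream.NewEnvironmentOptions().SetUri("rabbitmq-stream://guest:guest@localhost:5552"))
	if err != nil {
		panic(err)
	}

	var readCount atomic.Uint64 // "read messages" counter
	var lastRead atomic.Int64   // unix-millis timestamp of the last consumed message

	// The handler only counts and timestamps; the payload is discarded.
	handler := func(consumerContext stream.ConsumerContext, message *amqp.Message) {
		readCount.Add(1)
		lastRead.Store(time.Now().UnixMilli())
	}

	_, err = env.NewConsumer("example-stream", handler,
		stream.NewConsumerOptions().SetOffset(stream.OffsetSpecification{}.First()))
	if err != nil {
		panic(err)
	}

	// Report progress periodically; lag is inferred from the last-read timestamp.
	for range time.Tick(10 * time.Second) {
		fmt.Printf("read=%d, last read %v ago\n",
			readCount.Load(), time.Since(time.UnixMilli(lastRead.Load())).Round(time.Second))
	}
}
```

No consumer name is set here, which matches the report's note that offset tracking is disabled.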
-
We investigated the issue, and the problem could be related to the number of messages per chunk: in our tests, the consumer rate drops when chunks contain fewer messages.

Tests

It would be helpful if you could run some tests with our perfTest (illustrative invocations are sketched after the list):
1. Populate a stream with perfTest at full rate, with only one producer.
2. After 1, consume without producers.
3. After 2, run five consumer instances (like 1).
4. Mix full-rate and limited-rate publishing: execute two perfTest instances.
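A hedged sketch of what these runs could look like with the RabbitMQ Stream PerfTest tool; the URIs, stream name, and rate value are placeholders, and the exact commands from the original comment were not preserved:

```
# 1. Populate the stream at full rate with a single producer, no consumers
java -jar stream-perf-test.jar --uris rabbitmq-stream://localhost:5552 \
  --producers 1 --consumers 0 --streams perf-stream

# 2. Consume the pre-populated stream from the beginning, no producers
java -jar stream-perf-test.jar --uris rabbitmq-stream://localhost:5552 \
  --producers 0 --consumers 1 --offset first --streams perf-stream

# 3. Same as 2, but with five consumer instances
java -jar stream-perf-test.jar --uris rabbitmq-stream://localhost:5552 \
  --producers 0 --consumers 5 --offset first --streams perf-stream

# 4. Mix full-rate and limited-rate publishing (two instances in parallel)
java -jar stream-perf-test.jar --uris rabbitmq-stream://localhost:5552 \
  --producers 1 --consumers 0 --streams perf-stream
java -jar stream-perf-test.jar --uris rabbitmq-stream://localhost:5552 \
  --producers 1 --consumers 0 --rate 100 --streams perf-stream
```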
The goal is to determine how these different scenarios impact the consumers' rates.

Notes

The RabbitMQ stream achieves optimal performance when the chunks contain a lot of messages. The server sends chunks to the client, so bigger chunks mean better performance. The clients' Send API aims to strike a balance between latency and throughput; at a low rate, it can create many chunks. The Golang client has another API, BatchSend. In your case, where latency is less important than throughput, you could use BatchSend to aggregate messages and flush them every X milliseconds to avoid creating too many chunks.
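A minimal sketch of that aggregate-and-flush pattern with the Go client's BatchSend; the batch size, flush interval, stream name, URI, and payload source are assumptions for illustration:

```go
package main

import (
	"time"

	"github.com/rabbitmq/rabbitmq-stream-go-client/pkg/amqp"
	"github.com/rabbitmq/rabbitmq-stream-go-client/pkg/message"
	"github.com/rabbitmq/rabbitmq-stream-go-client/pkg/stream"
)

// producePayloads stands in for the application's message source (hypothetical).
func producePayloads(ch chan<- []byte) {
	for {
		ch <- []byte("28-byte-ish example payload")
		time.Sleep(time.Millisecond)
	}
}

func main() {
	env, err := stream.NewEnvironment(
		stream.NewEnvironmentOptions().SetUri("rabbitmq-stream://guest:guest@localhost:5552"))
	if err != nil {
		panic(err)
	}
	producer, err := env.NewProducer("example-stream", stream.NewProducerOptions())
	if err != nil {
		panic(err)
	}

	const batchSize = 1000                        // flush when the batch reaches this size
	const flushInterval = 100 * time.Millisecond  // ...or at latest after this interval

	input := make(chan []byte)
	go producePayloads(input)

	batch := make([]message.StreamMessage, 0, batchSize)
	ticker := time.NewTicker(flushInterval)
	defer ticker.Stop()

	flush := func() {
		if len(batch) == 0 {
			return
		}
		// BatchSend publishes the whole slice at once, producing fewer, bigger chunks.
		if err := producer.BatchSend(batch); err != nil {
			panic(err)
		}
		batch = make([]message.StreamMessage, 0, batchSize)
	}

	for {
		select {
		case payload := <-input:
			batch = append(batch, amqp.NewMessage(payload))
			if len(batch) >= batchSize {
				flush()
			}
		case <-ticker.C:
			flush()
		}
	}
}
```

The timer-based flush bounds the added latency, while the size-based flush keeps a burst from growing the batch without limit.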
-
Describe the bug
This is a follow-up issue to the mailing list conversation at:
https://groups.google.com/d/msgid/rabbitmq-users/9ae4c2aa-429f-46a6-9a2e-10b8f052bf31n%40googlegroups.com
We experience a major performance drop (in the number of streamed messages per second) among several identical Golang RMQ stream clients.
Example stream size: 2,299,304,706 messages
Retention: 2 days (= 172800s)
Message size: 28 bytes, offset tracking disabled
Client A and B are streaming "near real-time" while client C is behind several hours.
Read speed is often ok after a "restart" of the RMQ clients.
While client C is slow in this scenario, it is fast on another stream of identical size served by the same RMQ cluster.
Trying this patch (2d47c02) and setting the initial credits to 10k did not help.
Initial speed was OK but dropped to a very low rate after some hours.
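For completeness, a hedged fragment showing how that larger initial-credit value might be applied when creating the consumer (extending the consumer sketch earlier in this thread). Whether a SetInitialCredits option is available depends on the client version / the referenced patch; the value and names here are assumptions:

```go
// Assumes a ConsumerOptions.SetInitialCredits setter, as exercised by the
// referenced patch; "example-stream", env, and handler come from the earlier sketch.
consumer, err := env.NewConsumer("example-stream", handler,
	stream.NewConsumerOptions().
		SetOffset(stream.OffsetSpecification{}.First()).
		SetInitialCredits(10000))
if err != nil {
	panic(err)
}
_ = consumer
```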
Reproduction steps
...
Expected behavior
Similar read performance among identical clients.
Additional context
No response