@@ -22,6 +22,7 @@ Go client for [RabbitMQ Stream Queues](https://github.com/rabbitmq/rabbitmq-serv
* [Load Balancer](#load-balancer)
* [TLS](#tls)
* [Streams](#streams)
+    * [Statistics](#streams-statistics)
* [Publish messages](#publish-messages)
* [`Send` vs `BatchSend`](#send-vs-batchsend)
* [Publish Confirmation](#publish-confirmation)
@@ -58,7 +59,7 @@ imports:

### Run server with Docker
---
You may need a server to test locally. Let's start the broker:
```shell
docker run -it --rm --name rabbitmq -p 5552:5552 -p 15672:15672 \
-e RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS='-rabbitmq_stream advertised_host localhost -rabbit loopback_users "none"' \
rabbitmq:3.9-management
@@ -182,6 +183,35 @@ The function `DeclareStream` doesn't return errors if a stream is already define
Note that it returns a precondition-failed error when the stream already exists with different parameters.
Use `StreamExists` to check whether a stream exists.
+### Streams Statistics
+
+To get stream statistics, use the `environment.StreamStats` method.
+
+```golang
+stats, err := environment.StreamStats(testStreamName)
+
+// FirstOffset - the first offset in the stream.
+// Returns the first offset in the stream, or
+// an error if there is no first offset yet.
+firstOffset, err := stats.FirstOffset()
+
+// LastOffset - the last offset in the stream.
+// Returns the last offset in the stream, or
+// an error if there is no last offset yet.
+lastOffset, err := stats.LastOffset()
+
+// CommittedChunkId - the ID (offset) of the committed chunk (block of messages) in the stream.
+//
+// It is the offset of the first message in the last chunk confirmed by a quorum of the stream
+// cluster members (leader and replicas).
+//
+// The committed chunk ID is a good indication of what the last offset of a stream can be at a
+// given time. The value can be stale as soon as the application reads it, though, as the
+// committed chunk ID for a stream that is being published to changes all the time.
+committedChunkId, err := stats.CommittedChunkId()
+```
### Publish messages
@@ -241,14 +271,14 @@ The `Send` interface works in most of the cases, In some condition is about 15/2

### Publish Confirmation
For each publish, the server sends back to the client a confirmation or an error.
The client provides an interface to receive the confirmation:

```golang
// optional publish confirmation channel
chPublishConfirm := producer.NotifyPublishConfirmation()
handlePublishConfirm(chPublishConfirm)

func handlePublishConfirm(confirms stream.ChannelPublishConfirm) {
	go func() {
		for confirmed := range confirms {
@@ -264,7 +294,7 @@ func handlePublishConfirm(confirms stream.ChannelPublishConfirm) {
}
```
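The pattern above (a channel of confirmation batches consumed in a dedicated goroutine) can be shown with plain channels. This is a minimal illustrative sketch: the `confirmation` struct and `tally` helper are simplified stand-ins, not the library's types.

```go
package main

import "fmt"

// confirmation is a simplified stand-in for the library's message status.
type confirmation struct {
	publishingID int64
	confirmed    bool
}

// tally mirrors the handler pattern above: confirmations arrive in batches
// on a channel and are counted by a dedicated goroutine.
func tally(batches [][]confirmation) (ok, failed int) {
	confirms := make(chan []confirmation)
	done := make(chan struct{})

	go func() {
		defer close(done)
		for batch := range confirms {
			for _, c := range batch {
				if c.confirmed {
					ok++
				} else {
					failed++
				}
			}
		}
	}()

	for _, b := range batches {
		confirms <- b
	}
	close(confirms)
	<-done
	return ok, failed
}

func main() {
	// Two messages confirmed, one failed.
	ok, failed := tally([][]confirmation{
		{{publishingID: 0, confirmed: true}, {publishingID: 1, confirmed: true}},
		{{publishingID: 2, confirmed: false}},
	})
	fmt.Println(ok, failed) // 2 1
}
```

Closing the channel ends the handler goroutine cleanly, just as the real handler's `range` loop ends when the producer is closed.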
In the MessageStatus struct you can find two `publishingId`:
```golang
// first one
messageStatus.GetMessage().GetPublishingId()
@@ -277,12 +307,12 @@ The second one is assigned automatically by the client.
In case the user specifies the `publishingId` with:
```golang
msg = amqp.NewMessage([]byte("mymessage"))
msg.SetPublishingId(18) // <---
```

The field `messageStatus.GetMessage().HasPublishingId()` is true and <br/>
the values `messageStatus.GetMessage().GetPublishingId()` and `messageStatus.GetPublishingId()` are the same.

See also "Getting started" example in the [examples](./examples/) directory
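User-assigned publishing IDs are what make broker-side deduplication possible: after a retry, a message whose ID was already stored can be dropped. As an illustration only (the `message` and `dedup` types below are hypothetical, not the broker's implementation), the rule can be sketched as "reject anything at or below the highest ID seen":

```go
package main

import "fmt"

// message is a hypothetical stand-in for an AMQP message carrying a
// user-assigned publishing id.
type message struct {
	publishingID int64
	body         string
}

// dedup keeps the highest publishing id seen so far and rejects anything at
// or below it, mirroring how duplicates can be dropped when a producer
// retries (an illustrative model).
type dedup struct {
	lastID int64
}

func (d *dedup) accept(m message) bool {
	if m.publishingID <= d.lastID {
		return false // duplicate of an already-stored message
	}
	d.lastID = m.publishingID
	return true
}

func main() {
	d := &dedup{lastID: -1}
	msgs := []message{{0, "a"}, {1, "b"}, {1, "b"}, {2, "c"}} // id 1 retried
	stored := 0
	for _, m := range msgs {
		if d.accept(m) {
			stored++
		}
	}
	fmt.Println("stored:", stored) // stored: 3 - the retry is dropped
}
```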
@@ -303,8 +333,8 @@ publishingId, err := producer.GetLastPublishingId()

### Sub Entries Batching
The number of messages to put in a sub-entry. A sub-entry is one "slot" in a publishing frame,
meaning outbound messages are not only batched in publishing frames, but in sub-entries as well.
Use this feature to increase throughput at the cost of increased latency. <br/>
You can find a "Sub Entries Batching" example in the [examples](./examples/) directory. <br/>
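The batching arithmetic can be sketched as follows. This is an illustrative model of the grouping, not the client's internals; the function name is hypothetical:

```go
package main

import "fmt"

// subEntriesFor models the batching arithmetic: n outbound messages fill
// sub-entries of subEntrySize messages each, and each sub-entry takes one
// "slot" in the publishing frame (an illustrative model, not the client's
// internals).
func subEntriesFor(n, subEntrySize int) int {
	if subEntrySize <= 0 {
		return n // no sub-entry batching: one slot per message
	}
	return (n + subEntrySize - 1) / subEntrySize // ceiling division
}

func main() {
	// 100 messages with a sub-entry size of 10 occupy 10 frame slots
	// instead of 100, trading a little latency for throughput.
	fmt.Println(subEntriesFor(100, 10)) // 10
	fmt.Println(subEntriesFor(101, 10)) // 11
}
```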
@@ -319,7 +349,7 @@ producer, err := env.NewProducer(streamName, stream.NewProducerOptions().

### Ha Producer Experimental
The ha producer is built on top of the standard producer. <br/>
Features:
- auto-reconnect in case of disconnection
- handle the unconfirmed messages automatically in case of failure
@@ -329,7 +359,7 @@ You can find a "HA producer" example in the [examples](./examples/) directory. <
haproducer := NewHAProducer(
	env *stream.Environment,                    // mandatory
	streamName string,                          // mandatory
	producerOptions *stream.ProducerOptions,    // optional
	confirmMessageHandler ConfirmMessageHandler // mandatory
)
```
@@ -352,7 +382,7 @@ With `ConsumerOptions` it is possible to customize the consumer behaviour.
```golang
stream.NewConsumerOptions().
	SetConsumerName("my_consumer").                  // set a consumer name
	SetCRCCheck(false).                              // enable/disable the CRC control
	SetOffset(stream.OffsetSpecification{}.First())) // start consuming from the beginning
```
Disabling the CRC control can improve performance.
@@ -374,7 +404,7 @@ handleMessages := func(consumerContext stream.ConsumerContext, message *amqp.Mes
consumer, err := env.NewConsumer(
	..
	stream.NewConsumerOptions().
		SetConsumerName("my_consumer"). // <-----
```
A consumer must have a name to be able to store offsets. <br>
Note: *AVOID storing the offset for each single message; it reduces performance.*
@@ -388,9 +418,9 @@ processMessageAsync := func(consumer stream.Consumer, message *amqp.Message, off
	err := consumer.StoreCustomOffset(offset) // commit all messages up to this offset
	....
```
This is useful in situations where we have to process messages asynchronously and we cannot block the original message
handler, which means we cannot store the current or latest delivered offset as we saw in the `handleMessages` function
above.
### Automatic Track Offset
@@ -422,9 +452,9 @@ stream.NewConsumerOptions().
	// set a consumer name
	SetConsumerName("my_consumer").
	SetAutoCommit(stream.NewAutoCommitStrategy().
		SetCountBeforeStorage(50).         // store after every 50 messages
		SetFlushInterval(10*time.Second)). // store every 10 seconds
	SetOffset(stream.OffsetSpecification{}.First()))
```
See also "Automatic Offset Tracking" example in the [examples](./examples/) directory
@@ -453,7 +483,7 @@ In this way it is possible to handle fail-over

### Performance test tool
The performance test tool is useful for running tests.
See also the [Java Performance](https://rabbitmq.github.io/rabbitmq-stream-java-client/stable/htmlsingle/#the-performance-tool) tool