
Commit 3f51f48

Re-organize documentation levels
After extracting performance tool.
1 parent a40ac9f commit 3f51f48

8 files changed: +64 -62 lines changed


src/docs/asciidoc/advanced-topics.adoc

Lines changed: 6 additions & 6 deletions
@@ -1,8 +1,8 @@
 :test-examples: ../../test/java/com/rabbitmq/stream/docs

-=== Advanced Topics
+== Advanced Topics

-==== Filtering
+=== Filtering

 WARNING: Filtering requires *RabbitMQ 3.13* or more.

@@ -21,7 +21,7 @@ Because the server-side filtering is probabilistic: messages that do not match t
 The server uses a https://en.wikipedia.org/wiki/Bloom_filter[Bloom filter], _a space-efficient probabilistic data structure_, where false positives are possible.
 Despite this, the filtering saves some bandwidth, which is its primary goal.

-===== Filtering on the Publishing Side
+==== Filtering on the Publishing Side

 Filtering on the publishing side consists in defining some logic to extract the filter value from a message.
 The following snippet shows how to extract the filter value from an application property:
@@ -36,7 +36,7 @@ include::{test-examples}/FilteringUsage.java[tag=producer-simple]
 Note the filter value can be null: the message is then published in a regular way.
 It is called in this context an _unfiltered_ message.

-===== Filtering on the Consuming Side
+==== Filtering on the Consuming Side

 A consumer needs to set up one or several filter values and some filtering logic to enable filtering.
 The filtering logic must be consistent with the filter values.
@@ -74,7 +74,7 @@ include::{test-examples}/FilteringUsage.java[tag=consumer-match-unfiltered]

 In the example above, the filtering logic has been adapted to let pass `california` messages _and_ messages without a state set as well.

-===== Considerations on Filtering
+==== Considerations on Filtering

 As stated previously, the server can send messages that do not match the filter value(s) set by consumers.
 This is why application developers must be very careful with the filtering logic they define to avoid processing unwanted messages.
@@ -86,7 +86,7 @@ A defined set of values shared across the messages is a good candidate: geograph
 Cardinality of filter values can be from a few to a few thousands.
 Extreme cardinality (a couple or dozens of thousands) can make filtering less efficient.

-==== Using Native `epoll`
+=== Using Native `epoll`

 The stream Java client uses the https://netty.io/[Netty] network framework and its Java NIO transport implementation by default.
 This should be a reasonable default for most applications.
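The filtering section in the diff above stresses that the broker's Bloom-filter match is probabilistic, so a consumer must re-check every delivered message client-side. A minimal, self-contained Java sketch of that re-check (the class and method names are illustrative, not part of the stream client API):

```java
import java.util.List;
import java.util.Set;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Hypothetical illustration: the broker may deliver non-matching messages
// (Bloom-filter false positives), so the consumer applies its own predicate.
class PostFilterSketch {

    // Keep only messages whose filter value is in the set the consumer wants.
    static List<String> postFilter(List<String> deliveredFilterValues, Set<String> wanted) {
        Predicate<String> matches = wanted::contains;
        return deliveredFilterValues.stream().filter(matches).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // The broker may deliver "nevada" even though the consumer subscribed
        // with "california" only: that is a false positive to discard.
        List<String> delivered = List.of("california", "nevada", "california");
        System.out.println(postFilter(delivered, Set.of("california"))); // [california, california]
    }
}
```

This mirrors the warning in the docs: the server-side filter only saves bandwidth; correctness comes from the consumer-side logic.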

src/docs/asciidoc/api.adoc

Lines changed: 31 additions & 31 deletions
@@ -1,9 +1,9 @@
 :test-examples: ../../test/java/com/rabbitmq/stream/docs

 [[rabbitmq-stream-java-api]]
-=== RabbitMQ Stream Java API
+== RabbitMQ Stream Java API

-==== Overview
+=== Overview

 This section describes the API to connect to the RabbitMQ Stream Plugin, publish messages, and
 consume messages. There are 3 main interfaces:
@@ -13,9 +13,9 @@ managing streams.
 * `com.rabbitmq.stream.Producer` to publish messages.
 * `com.rabbitmq.stream.Consumer` to consume messages.

-==== Environment
+=== Environment

-===== Creating the Environment
+==== Creating the Environment

 The environment is the main entry point to a node or a cluster of nodes. `Producer` and
 `Consumer` instances are created from an `Environment` instance. Here is the simplest
@@ -69,7 +69,7 @@ By specifying several URIs, the environment will try to connect to the first one
 will pick a new URI randomly in case of disconnection.

 [[understanding-connection-logic]]
-===== Understanding Connection Logic
+==== Understanding Connection Logic

 Creating the environment to connect to a cluster node usually works seamlessly.
 Creating publishers and consumers can cause problems as the client uses hints from the cluster to find the nodes where stream leaders and replicas are located to connect to the appropriate nodes.
@@ -82,7 +82,7 @@ This happens if the following conditions are met: the initial host to connect to
 Provide a pass-through `AddressResolver` to `EnvironmentBuilder#addressResolver(AddressResolver)` to avoid this behavior.
 It is unlikely this behavior applies to any real-world deployment, where `localhost` and/or the default `guest` user should not be used.

-===== Enabling TLS
+==== Enabling TLS

 TLS can be enabled by using the `rabbitmq-stream+tls` scheme in the URI.
 The default TLS port is 5551.
@@ -119,7 +119,7 @@ include::{test-examples}/EnvironmentUsage.java[tag=environment-creation-with-tls
 --------
 <1> Trust all server certificates

-===== Configuring the Environment
+==== Configuring the Environment

 The following table sums up the main settings to create an `Environment`:

@@ -266,7 +266,7 @@ It is the developer's responsibility to close the `EventLoopGroup` they provide.

 |===

-===== When a Load Balancer is in Use
+==== When a Load Balancer is in Use

 A load balancer can misguide the client when it tries to connect to nodes that host stream leaders and replicas.
 The https://blog.rabbitmq.com/posts/2021/07/connecting-to-streams/["Connecting to Streams"] blog post covers why client applications must connect to the appropriate nodes in a cluster and how a https://blog.rabbitmq.com/posts/2021/07/connecting-to-streams/#with-a-load-balancer[load balancer can make things complicated] for them.
@@ -285,7 +285,7 @@ include::{test-examples}/EnvironmentUsage.java[tag=address-resolver]

 The blog post covers the https://blog.rabbitmq.com/posts/2021/07/connecting-to-streams/#client-workaround-with-a-load-balancer[underlying details of this workaround].

-===== Managing Streams
+==== Managing Streams

 Streams are usually long-lived, centrally-managed entities, that is, applications
 are not supposed to create and delete them. It is nevertheless possible to create and
@@ -372,9 +372,9 @@ include::{test-examples}/EnvironmentUsage.java[tag=stream-creation-time-based-re
 <1> Set the maximum age to 6 hours
 <2> Set the segment size to 500 MB

-==== Producer
+=== Producer

-===== Creating a Producer
+==== Creating a Producer

 A `Producer` instance is created from the `Environment`. The only mandatory
 setting to specify is the stream to publish to:
@@ -461,7 +461,7 @@ Set the value to `Duration.ZERO` if there should be no timeout.
 |10 seconds.
 |===

-===== Sending Messages
+==== Sending Messages

 Once a `Producer` has been created, it is possible to send a message with
 the `Producer#send(Message, ConfirmationHandler)` method. The following
@@ -505,7 +505,7 @@ a separate thread (e.g. with an asynchronous `ExecutorService`).
 ====

 [[working-with-complex-messages]]
-===== Working with Complex Messages
+==== Working with Complex Messages

 The publishing example above showed that messages are made of
 a byte array payload, but it did not go much further. Messages in RabbitMQ Stream
@@ -556,7 +556,7 @@ to be accessed as AMQP 0-9-1 queues, without data loss.
 ====

 [[outbound-message-deduplication]]
-===== Message Deduplication
+==== Message Deduplication

 RabbitMQ Stream provides publisher confirms to avoid losing messages: once
 the broker has persisted a message it sends a confirmation for this message.
@@ -592,7 +592,7 @@ So you have to be very careful about the way your applications publish messages
 If you worry about performance, note it is possible to publish hundreds of thousands of messages in a single thread with RabbitMQ Stream.
 ====

-====== Setting the Name of a Producer
+===== Setting the Name of a Producer

 The producer name is set when creating the producer instance, which automatically
 enables deduplication:
@@ -630,7 +630,7 @@ changes when the producer application is restarted. Names like `online-shop-orde
 There should be only one living instance of a producer with a given name on a given
 stream at the same time.

-====== Understanding Publishing ID
+===== Understanding Publishing ID

 The producer name is only one part of the deduplication mechanism, the other part
 is the _message publishing ID_. If the producer has a name, the client automatically
@@ -668,7 +668,7 @@ It is then possible to let the producer assign a publishing ID to each
 message or assign custom publishing IDs. *Do one or the other, not both!*
 ====

-====== Restarting a Producer Where It Left Off
+===== Restarting a Producer Where It Left Off

 Using a custom publishing sequence is even more useful to restart a producer where it left
 off. Imagine a scenario whereby the producer is sending a message for each line in a file and
@@ -696,7 +696,7 @@ include::{test-examples}/ProducerUsage.java[tag=producer-queries-last-publishing
 <5> Set the message publishing

 [[sub-entry-batching-and-compression]]
-===== Sub-Entry Batching and Compression
+==== Sub-Entry Batching and Compression

 RabbitMQ Stream provides a special mode to publish, store, and dispatch messages: sub-entry batching.
 This mode increases throughput at the cost of increased latency and potential duplicated messages even when deduplication is enabled.
@@ -770,11 +770,11 @@ The broker dispatches messages to client libraries: they are supposed to figure
 So when you set up sub-entry batching and compression in your publishers, the consuming applications must use client libraries that support this mode, which is the case for the stream Java client.
 ====

-==== Consumer
+=== Consumer

 `Consumer` is the API to consume messages from a stream.

-===== Creating a Consumer
+==== Creating a Consumer

 A `Consumer` instance is created with `Environment#consumerBuilder()`. The main
 settings are the stream to consume from, the place in the stream to start
@@ -885,7 +885,7 @@ types of offset specification.
 ====

 [[specifying-an-offset]]
-===== Specifying an Offset
+==== Specifying an Offset

 The offset is the place in the stream where the consumer starts consuming from.
 The possible values for the offset parameter are the following:
@@ -939,7 +939,7 @@ This is the timestamp the broker uses to find the appropriate chunk to start fr
 The broker chooses the closest chunk _before_ the specified timestamp, that is why consumers may see messages published a bit before what they specified.

 [[consumer-offset-tracking]]
-===== Tracking the Offset for a Consumer
+==== Tracking the Offset for a Consumer

 RabbitMQ Stream provides server-side offset tracking.
 This means a consumer can track the offset it has reached in a stream.
@@ -959,7 +959,7 @@ offsets are stored depends on the tracking strategy: automatic or manual.
 Whatever tracking strategy you use, *a consumer must have a name to be able to store offsets*.

 [[consumer-automatic-offset-tracking]]
-====== Automatic Offset Tracking
+===== Automatic Offset Tracking

 The following snippet shows how to enable automatic tracking with the defaults:

@@ -1008,7 +1008,7 @@ possible to have more fine-grained control over offset tracking by using
 manual tracking.

 [[consumer-manual-offset-tracking]]
-====== Manual Offset Tracking
+===== Manual Offset Tracking

 The manual tracking strategy puts the developer in charge of storing offsets
 whenever they want, not only after a given number of messages has been received
@@ -1047,7 +1047,7 @@ offset of the current message, but it is possible to store anywhere
 in the stream with `MessageHandler.Context#consumer()#store(long)` or
 simply `Consumer#store(long)`.

-====== Considerations On Offset Tracking
+===== Considerations On Offset Tracking

 _When to store offsets?_ Avoid storing offsets too often or, worse, for each message.
 Even though offset tracking is a small and fast operation, it will
@@ -1076,7 +1076,7 @@ a modulo to perform an operation every X messages. As the message offsets have
 no guarantee to be contiguous, the operation may not happen exactly every X messages.

 [[consumer-subscription-listener]]
-====== Subscription Listener
+===== Subscription Listener

 The client provides a `SubscriptionListener` interface callback to add behavior before a subscription is created.
 This callback can be used to customize the offset the client library computed for the subscription.
@@ -1118,7 +1118,7 @@ When a glitch happens and triggers the re-subscription, the server-side stored o
 Using this server-side stored offset can lead to duplicates, whereas using the in-memory, application-specific offset tracking variable is more accurate.
 A custom `SubscriptionListener` lets the application developer use what's best for the application if the computed value is not optimal.

-===== Flow Control
+==== Flow Control

 This section covers how a consumer can tell the broker when to send more messages.

@@ -1161,7 +1161,7 @@ Whether the method is idempotent depends on the flow strategy implementation.
 Apart from the default one, the implementations the library provides do not make `processed()` idempotent.

 [[single-active-consumer]]
-===== Single Active Consumer
+==== Single Active Consumer

 WARNING: Single Active Consumer requires *RabbitMQ 3.11* or more.

@@ -1232,7 +1232,7 @@ We end up with 3 `app-1` consumers and 3 `app-2` consumers, 1 active consumer in

 Let's now see the API for single active consumer.

-====== Enabling Single Active Consumer
+===== Enabling Single Active Consumer

 Use the `ConsumerBuilder#singleActiveConsumer()` method to enable the feature:

@@ -1247,7 +1247,7 @@ include::{test-examples}/ConsumerUsage.java[tag=enabling-single-active-consumer]
 With the configuration above, the consumer will take part in the `application-1` group on the `my-stream` stream.
 If the consumer instance is the first in a group, it will get messages as soon as there are some available. If it is not the first in the group, it will remain idle until it is its turn to be active (likely when all the instances registered before it are gone).

-====== Offset Tracking
+===== Offset Tracking

 Single active consumer and offset tracking work together: when the active consumer goes away, another consumer takes over and resumes where the former active consumer left off.
 Well, this is how things should work and luckily this is what happens when using <<consumer-offset-tracking, server-side offset tracking>>.
@@ -1257,7 +1257,7 @@ The story is different if you are using an external store for offset tracking.
 In this case you need to tell the client library where to resume from and you can do this by implementing the `ConsumerUpdateListener` API.

 [[consumer-update-listener]]
-====== Reacting to Consumer State Change
+===== Reacting to Consumer State Change

 The broker notifies a consumer that becomes active before dispatching messages to it.
 The broker expects a response from the consumer and this response contains the offset the dispatching should start from.
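The deduplication rule described in this file's diff context (a named producer plus a monotonically increasing publishing ID) can be modeled with a short, self-contained Java sketch. This illustrates the rule only; it is not the broker's actual implementation, and the class name is made up:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the deduplication idea: remember the highest publishing ID seen
// per producer name and drop anything at or below it. This is why publishing
// IDs must be strictly increasing for a given named producer.
class DedupSketch {

    private final Map<String, Long> lastIdByProducer = new HashMap<>();

    // Returns true if the message should be stored, false if it is a duplicate.
    boolean accept(String producerName, long publishingId) {
        long last = lastIdByProducer.getOrDefault(producerName, -1L);
        if (publishingId <= last) {
            return false; // already seen, e.g. re-published after a restart
        }
        lastIdByProducer.put(producerName, publishingId);
        return true;
    }

    public static void main(String[] args) {
        DedupSketch broker = new DedupSketch();
        System.out.println(broker.accept("online-shop-order", 1)); // true
        System.out.println(broker.accept("online-shop-order", 2)); // true
        System.out.println(broker.accept("online-shop-order", 2)); // false: duplicate
        System.out.println(broker.accept("online-shop-order", 3)); // true
    }
}
```

The model also shows why a producer restarting should query its last publishing ID and resume from there, as the "Restarting a Producer Where It Left Off" section describes.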

src/docs/asciidoc/building.adoc

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-=== Building the Client
+== Building the Client

 You need JDK 1.8 or more installed.


src/docs/asciidoc/index.adoc

Lines changed: 2 additions & 3 deletions
@@ -11,11 +11,10 @@ the https://rabbitmq.com/stream.html[RabbitMQ Stream Plugin].
 It allows creating and deleting streams, as well as publishing to and consuming from
 these streams. Learn more in the <<overview.adoc#stream-client-overview,client overview>>.

-include::overview.adoc[]
+https://github.com/rabbitmq/rabbitmq-stream-perf-test[Stream PerfTest] is a performance testing tool based on this client library.

-== The Stream Java Client

-The library requires Java 8 or later. Java 11 is recommended (CRC calculation uses methods available as of Java 9.)
+include::overview.adoc[]

 include::setup.adoc[]

src/docs/asciidoc/overview.adoc

Lines changed: 3 additions & 0 deletions
@@ -97,3 +97,6 @@ The client contains 2 sets of programming interfaces whose stability are of inte
 * Application Programming Interfaces (API): those are the ones used to write application logic. They include the interfaces and classes in the `com.rabbitmq.stream` package (e.g. `Producer`, `Consumer`, `Message`). These API constitute the main programming model of the client and will be kept as stable as possible.
 * Service Provider Interfaces (SPI): those are interfaces to implement mainly technical behavior in the client. They are not meant to be used to implement application logic. Application developers may have to refer to them in the configuration phase and if they want to customize some internal behavior in the client. SPI include interfaces and classes in the `com.rabbitmq.stream.codec`, `com.rabbitmq.stream.compression`, `com.rabbitmq.stream.metrics` packages, among others. _These SPI are susceptible to change, but this should not impact the majority of applications_, as the changes would typically stay internal to the client.

+== Pre-requisites
+
+The library requires Java 8 or later. Java 11 is recommended (CRC calculation uses methods available as of Java 9.)
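The Java 11 recommendation above points at CRC methods added in Java 9; `java.util.zip.CRC32C` (the Castagnoli variant, hardware-accelerated on many JVMs) is one such addition. A short JDK-only sketch, purely illustrative and not code from the client library:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32C;

// Computes a CRC-32C checksum over a byte array using the class added in
// Java 9; on Java 8 a client library would need its own implementation.
class CrcSketch {

    static long crc32c(byte[] data) {
        CRC32C crc = new CRC32C();        // fresh checksum state
        crc.update(data, 0, data.length); // feed the payload
        return crc.getValue();            // 32-bit result as an unsigned long
    }

    public static void main(String[] args) {
        byte[] payload = "hello, stream".getBytes(StandardCharsets.UTF_8);
        System.out.println(Long.toHexString(crc32c(payload)));
    }
}
```

The standard check input `"123456789"` yields `0xE3069283` for CRC-32C, which is a convenient way to verify an implementation.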

src/docs/asciidoc/sample-application.adoc

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 :test-examples: ../../test/java/com/rabbitmq/stream/docs

-=== Sample Application
+== Sample Application

 This section covers the basics of the RabbitMQ Stream Java API by building
 a small publish/consume application. This is a good way to get
