Conversation

@murfel
Contributor

@murfel murfel commented Oct 16, 2025

No description provided.

@murfel
Contributor Author

murfel commented Oct 16, 2025

It performs unexpectedly badly; could you all take a look for any silly mistakes?

The simplest benchmark, sending X elements only (not receiving anything), is already pretty sad:

ChannelBenchmark.sendUnlimited                1000  avgt   10  0.008 ±  0.001  ms/op
ChannelBenchmark.sendUnlimited               10000  avgt   10  0.080 ±  0.001  ms/op
ChannelBenchmark.sendUnlimited              100000  avgt   10  0.800 ±  0.005  ms/op

Full output for the first three counts (4KB, 40KB, 400KB of Ints). Just FYI, no reason to look at this, since the snippet above is already bad enough.

Benchmark                                  (count)  Mode  Cnt  Score    Error  Units
ChannelBenchmark.manySendersManyReceivers     1000  avgt   10  0.119 ±  0.001  ms/op
ChannelBenchmark.manySendersManyReceivers    10000  avgt   10  0.900 ±  0.006  ms/op
ChannelBenchmark.manySendersManyReceivers   100000  avgt   10  9.244 ±  0.418  ms/op
ChannelBenchmark.manySendersOneReceiver       1000  avgt   10  0.108 ±  0.001  ms/op
ChannelBenchmark.manySendersOneReceiver      10000  avgt   10  0.713 ±  0.042  ms/op
ChannelBenchmark.manySendersOneReceiver     100000  avgt   10  7.010 ±  0.115  ms/op
ChannelBenchmark.oneSenderManyReceivers       1000  avgt   10  0.138 ±  0.001  ms/op
ChannelBenchmark.oneSenderManyReceivers      10000  avgt   10  0.923 ±  0.003  ms/op
ChannelBenchmark.oneSenderManyReceivers     100000  avgt   10  8.411 ±  0.035  ms/op
ChannelBenchmark.sendConflated                1000  avgt   10  0.020 ±  0.001  ms/op
ChannelBenchmark.sendConflated               10000  avgt   10  0.187 ±  0.007  ms/op
ChannelBenchmark.sendConflated              100000  avgt   10  1.834 ±  0.013  ms/op
ChannelBenchmark.sendReceiveConflated         1000  avgt   10  0.039 ±  0.001  ms/op
ChannelBenchmark.sendReceiveConflated        10000  avgt   10  0.236 ±  0.009  ms/op
ChannelBenchmark.sendReceiveConflated       100000  avgt   10  1.906 ±  0.019  ms/op
ChannelBenchmark.sendReceiveRendezvous        1000  avgt   10  0.103 ±  0.001  ms/op
ChannelBenchmark.sendReceiveRendezvous       10000  avgt   10  0.866 ±  0.021  ms/op
ChannelBenchmark.sendReceiveRendezvous      100000  avgt   10  8.270 ±  0.071  ms/op
ChannelBenchmark.sendReceiveUnlimited         1000  avgt   10  0.077 ±  0.002  ms/op
ChannelBenchmark.sendReceiveUnlimited        10000  avgt   10  0.419 ±  0.005  ms/op
ChannelBenchmark.sendReceiveUnlimited       100000  avgt   10  3.443 ±  0.061  ms/op
ChannelBenchmark.sendUnlimited                1000  avgt   10  0.008 ±  0.001  ms/op
ChannelBenchmark.sendUnlimited               10000  avgt   10  0.080 ±  0.001  ms/op
ChannelBenchmark.sendUnlimited              100000  avgt   10  0.800 ±  0.005  ms/op

@murfel murfel marked this pull request as draft October 16, 2025 15:59
@fzhinkin
Contributor

Just for the record, we discussed benchmarks with @murfel offline and she'll rework them.

@murfel
Contributor Author

murfel commented Nov 4, 2025

Ran the following (on a freshly restarted MacBook, with no apps open except the terminal and the system monitor):
java -jar benchmarks.jar ".ChannelBenchmark." -p count=1000,100000000 -p prefill=0,1000000,100000000

# Run complete. Total time: 00:37:21

Benchmark                                    (count)  (prefill)  Mode  Cnt     Score      Error  Units
ChannelBenchmark.manySendersManyReceivers       1000          0  avgt   10     0.113 ±    0.001  ms/op
ChannelBenchmark.manySendersManyReceivers       1000    1000000  avgt   10     0.118 ±    0.007  ms/op
ChannelBenchmark.manySendersManyReceivers       1000  100000000  avgt   10     0.357 ±    0.055  ms/op
ChannelBenchmark.manySendersManyReceivers  100000000          0  avgt   10  8796.152 ±  543.292  ms/op
ChannelBenchmark.manySendersManyReceivers  100000000    1000000  avgt   10  8683.527 ±  254.436  ms/op
ChannelBenchmark.manySendersManyReceivers  100000000  100000000  avgt   10  9434.746 ±  310.576  ms/op
ChannelBenchmark.manySendersOneReceiver         1000          0  avgt   10     0.084 ±    0.002  ms/op
ChannelBenchmark.manySendersOneReceiver         1000    1000000  avgt   10     0.068 ±    0.001  ms/op
ChannelBenchmark.manySendersOneReceiver         1000  100000000  avgt   10     0.327 ±    0.043  ms/op
ChannelBenchmark.manySendersOneReceiver    100000000          0  avgt   10  6759.587 ± 1126.828  ms/op
ChannelBenchmark.manySendersOneReceiver    100000000    1000000  avgt   10  6730.408 ±  112.128  ms/op
ChannelBenchmark.manySendersOneReceiver    100000000  100000000  avgt   10  6222.171 ±  256.355  ms/op
ChannelBenchmark.oneSenderManyReceivers         1000          0  avgt   10     0.119 ±    0.003  ms/op
ChannelBenchmark.oneSenderManyReceivers         1000    1000000  avgt   10     0.121 ±    0.003  ms/op
ChannelBenchmark.oneSenderManyReceivers         1000  100000000  avgt   10     0.353 ±    0.065  ms/op
ChannelBenchmark.oneSenderManyReceivers    100000000          0  avgt   10  8785.786 ±  567.569  ms/op
ChannelBenchmark.oneSenderManyReceivers    100000000    1000000  avgt   10  8698.243 ±  517.566  ms/op
ChannelBenchmark.oneSenderManyReceivers    100000000  100000000  avgt   10  8594.145 ±  416.015  ms/op
ChannelBenchmark.sendConflated                  1000        N/A  avgt   10     0.017 ±    0.001  ms/op
ChannelBenchmark.sendConflated             100000000        N/A  avgt   10  1504.701 ±   27.829  ms/op
ChannelBenchmark.sendReceiveConflated           1000        N/A  avgt   10     0.037 ±    0.001  ms/op
ChannelBenchmark.sendReceiveConflated      100000000        N/A  avgt   10  1722.869 ±   85.603  ms/op
ChannelBenchmark.sendReceiveRendezvous          1000        N/A  avgt   10     0.122 ±    0.018  ms/op
ChannelBenchmark.sendReceiveRendezvous     100000000        N/A  avgt   10  7300.491 ±  107.318  ms/op
ChannelBenchmark.sendReceiveUnlimited           1000          0  avgt   10     0.057 ±    0.002  ms/op
ChannelBenchmark.sendReceiveUnlimited           1000    1000000  avgt   10     0.056 ±    0.003  ms/op
ChannelBenchmark.sendReceiveUnlimited           1000  100000000  avgt   10     0.314 ±    0.038  ms/op
ChannelBenchmark.sendReceiveUnlimited      100000000          0  avgt   10  3645.250 ±  658.235  ms/op
ChannelBenchmark.sendReceiveUnlimited      100000000    1000000  avgt   10  3192.487 ±  372.223  ms/op
ChannelBenchmark.sendReceiveUnlimited      100000000  100000000  avgt   10  3965.029 ±  386.913  ms/op
ChannelBenchmark.sendUnlimited                  1000        N/A  avgt   10     0.006 ±    0.001  ms/op
ChannelBenchmark.sendUnlimited             100000000        N/A  avgt   10  1157.710 ±  248.811  ms/op

@murfel murfel marked this pull request as ready for review November 4, 2025 13:24
@murfel murfel requested a review from dkhalanskyjb November 4, 2025 13:24
@murfel murfel changed the title [Draft] Add channel benchmarks Add channel benchmarks Nov 4, 2025
@murfel
Contributor Author

murfel commented Nov 4, 2025

Quick normalisation with ChatGPT

Produce the same table but divide the Score column [and the Error column] by the count column and Change to ns/op/element (https://chatgpt.com/share/e/690a0281-e828-800b-8895-144ecc4e07f3)

(Will do a proper Notebook for a JSON benchmark output after we agree on the benchmark correctness. Forgot to save this one as JSON and it takes 40 min to re-run.)
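The normalisation itself is just a unit conversion. A hypothetical helper (not part of this PR) that reproduces it from JMH's ms/op scores:

// Hypothetical helper reproducing the normalisation above: JMH's avgt scores
// are in ms per benchmark invocation, so dividing by the element count and
// scaling ms -> ns gives ns/op/element.
fun perElementNanos(scoreMsPerOp: Double, count: Long): Double =
    scoreMsPerOp * 1_000_000 / count

fun main() {
    // e.g. sendUnlimited at count = 100_000_000: 1157.710 ms/op
    println(perElementNanos(1157.710, 100_000_000L))  // ~11.577 ns/op/element
}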

# Run complete. Total time: 00:37:21

Benchmark                                    (count)  (prefill)  Mode  Cnt     Score          Error        Units
ChannelBenchmark.manySendersManyReceivers       1000          0  avgt   10     113.000 ±     1.000  ns/op/element
ChannelBenchmark.manySendersManyReceivers       1000    1000000  avgt   10     118.000 ±     7.000  ns/op/element
ChannelBenchmark.manySendersManyReceivers       1000  100000000  avgt   10     357.000 ±    55.000  ns/op/element
ChannelBenchmark.manySendersManyReceivers  100000000          0  avgt   10      87.962 ±     5.433  ns/op/element
ChannelBenchmark.manySendersManyReceivers  100000000    1000000  avgt   10      86.835 ±     2.544  ns/op/element
ChannelBenchmark.manySendersManyReceivers  100000000  100000000  avgt   10      94.347 ±     3.106  ns/op/element
ChannelBenchmark.manySendersOneReceiver         1000          0  avgt   10      84.000 ±     2.000  ns/op/element
ChannelBenchmark.manySendersOneReceiver         1000    1000000  avgt   10      68.000 ±     1.000  ns/op/element
ChannelBenchmark.manySendersOneReceiver         1000  100000000  avgt   10     327.000 ±    43.000  ns/op/element
ChannelBenchmark.manySendersOneReceiver    100000000          0  avgt   10      67.596 ±    11.268  ns/op/element
ChannelBenchmark.manySendersOneReceiver    100000000    1000000  avgt   10      67.304 ±     1.121  ns/op/element
ChannelBenchmark.manySendersOneReceiver    100000000  100000000  avgt   10      62.222 ±     2.564  ns/op/element
ChannelBenchmark.oneSenderManyReceivers         1000          0  avgt   10     119.000 ±     3.000  ns/op/element
ChannelBenchmark.oneSenderManyReceivers         1000    1000000  avgt   10     121.000 ±     3.000  ns/op/element
ChannelBenchmark.oneSenderManyReceivers         1000  100000000  avgt   10     353.000 ±    65.000  ns/op/element
ChannelBenchmark.oneSenderManyReceivers    100000000          0  avgt   10      87.858 ±     5.676  ns/op/element
ChannelBenchmark.oneSenderManyReceivers    100000000    1000000  avgt   10      86.982 ±     5.176  ns/op/element
ChannelBenchmark.oneSenderManyReceivers    100000000  100000000  avgt   10      85.941 ±     4.160  ns/op/element
ChannelBenchmark.sendConflated                  1000        N/A  avgt   10      17.000 ±     1.000  ns/op/element
ChannelBenchmark.sendConflated             100000000        N/A  avgt   10      15.047 ±     0.278  ns/op/element
ChannelBenchmark.sendReceiveConflated           1000        N/A  avgt   10      37.000 ±     1.000  ns/op/element
ChannelBenchmark.sendReceiveConflated      100000000        N/A  avgt   10      17.229 ±     0.856  ns/op/element
ChannelBenchmark.sendReceiveRendezvous          1000        N/A  avgt   10     122.000 ±    18.000  ns/op/element
ChannelBenchmark.sendReceiveRendezvous     100000000        N/A  avgt   10      73.005 ±     1.073  ns/op/element
ChannelBenchmark.sendReceiveUnlimited           1000          0  avgt   10      57.000 ±     2.000  ns/op/element
ChannelBenchmark.sendReceiveUnlimited           1000    1000000  avgt   10      56.000 ±     3.000  ns/op/element
ChannelBenchmark.sendReceiveUnlimited           1000  100000000  avgt   10     314.000 ±    38.000  ns/op/element
ChannelBenchmark.sendReceiveUnlimited      100000000          0  avgt   10      36.453 ±     6.582  ns/op/element
ChannelBenchmark.sendReceiveUnlimited      100000000    1000000  avgt   10      31.925 ±     3.722  ns/op/element
ChannelBenchmark.sendReceiveUnlimited      100000000  100000000  avgt   10      39.651 ±     3.869  ns/op/element
ChannelBenchmark.sendUnlimited                  1000        N/A  avgt   10       6.000 ±     1.000  ns/op/element
ChannelBenchmark.sendUnlimited             100000000        N/A  avgt   10      11.577 ±     2.488  ns/op/element

repeat(maxCount) { add(it) }
}

@Setup(Level.Invocation)
Contributor

Why does it have to be done before every benchmark function invocation and not once per trial / iteration?

JFTR, https://github.com/openjdk/jmh/blob/2a316030b509aa9874dd6ab04e21962ac92cd634/jmh-core/src/main/java/org/openjdk/jmh/annotations/Level.java#L85

Contributor Author

It's a tradeoff. After each invocation, there could be a few extra items left in the channel, and these can accumulate across iterations. I could rewrite runSendReceive to leave the channel with the same number of elements it came in with, but then that would slightly affect the benchmark. Possibly negligibly, since it's only up to 4 items each time...
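For reference, the balanced variant mentioned above could look roughly like this (a sketch under assumptions; the actual runSendReceive in this PR may differ):

import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.launch
import org.openjdk.jmh.infra.Blackhole

// Receive exactly as many elements as were sent, so each invocation leaves
// the channel at the size it started with; a per-iteration setup would then
// suffice.
suspend fun runSendReceiveBalanced(channel: Channel<Int>, count: Int, blackhole: Blackhole) {
    coroutineScope {
        launch { repeat(count) { channel.send(it) } }
        launch { repeat(count) { blackhole.consume(channel.receive()) } }
    }
}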

@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
@Fork(1)
open class ChannelBenchmark {
Contributor

Could you please elaborate on what exactly you're trying to measure with these benchmarks?
Right now, it looks like "time required to create a new channel, send N messages into it (and, optionally, receive them), and then close the channel". However, I thought the initial idea was to measure the latency of sending (and receiving) a single message into the channel.

Contributor Author

measure the latency of sending (and receiving) a single message into the channel

I do measure that, indirectly. Are you suggesting we literally send/receive only one message per benchmark invocation? Is that reliable?

Contributor

Direct measurements are always better than indirect. If the goal is to measure send/recv timing, let's measure it.

What makes you think it will be unreliable?

Contributor Author

@murfel murfel Nov 6, 2025

Direct measurements are always better than indirect.

Not always. See below. Also depends on how you define "better".

If the goal is to measure send/recv timing, let's measure it.

Again, I am measuring that. My way of measuring is a valid way of measuring. Having to assert that makes me feel dismissed.

We can explore other ways to measure, for sure.

What makes you think it will be unreliable?

  1. Overhead of the measuring setup could be greater than the effect measured.
  2. Yet simplified setup might not capture a typical usage.
  3. Does not average over data structure amortization (e.g., our sent element could be the one that triggers the channel's internal data structure doubling / allocation); or, on the contrary, the constant cost from amortization could be noticeable, and we do in fact want to measure it.
  4. Does not average over GC

What setup did you have in mind, something like this?

@Benchmark
fun sendReceiveUnlimitedPrefilledSequential(wrapper: UnlimitedChannelWrapper, blackhole: Blackhole) =
    runBlocking {
        wrapper.channel.send(42)
        blackhole.consume(wrapper.channel.receive())
    }
Benchmark                                                (count)  (prefill)  Mode  Cnt   Score    Error  Units
ChannelBenchmark.sendReceiveUnlimitedPrefilledSequential       0          0  avgt   10  53.959 ±  0.168  ns/op
ChannelBenchmark.sendReceiveUnlimitedPrefilledSequential       0    1000000  avgt   10  60.069 ±  1.345  ns/op
ChannelBenchmark.sendReceiveUnlimitedPrefilledSequential       0  100000000  avgt   10  71.457 ± 13.101  ns/op

Or this? (no suspension, trySend/tryReceive)

@Benchmark
fun sendReceiveUnlimitedPrefilledSequentialNoSuspension(wrapper: UnlimitedChannelWrapper, blackhole: Blackhole) {
    wrapper.channel.trySend(42)
    blackhole.consume(wrapper.channel.tryReceive().getOrThrow())
}
Benchmark                                                  (count)  (prefill)  Mode  Cnt   Score   Error  Units
ChannelBenchmark.sendReceiveUnlimitedPrefilledNoSuspension        0          0  avgt   10  10.619 ± 0.270  ns/op
ChannelBenchmark.sendReceiveUnlimitedPrefilledNoSuspension        0    1000000  avgt   10  10.859 ± 0.330  ns/op
ChannelBenchmark.sendReceiveUnlimitedPrefilledNoSuspension        0  100000000  avgt   10  17.163 ± 1.523  ns/op

Collaborator

The internal structure of the channel may be worth taking into account. For a prefilled channel with 32+ elements (32 is the default channel segment size), we can expect send and receive not to interact with one another at all, that is, the duration of send followed by receive should be roughly the sum of durations of send and receive, invoked independently. I imagine wrapper.channel.send(42) and blackhole.consume(wrapper.channel.receive()) could stay in different benchmarks without affecting the results too much.
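To illustrate that separation (a sketch under assumptions, not code from this PR): receive could be measured on its own, with a per-invocation refill so the channel never drains and stays above one segment in size:

import java.util.concurrent.TimeUnit
import kotlinx.coroutines.channels.Channel
import org.openjdk.jmh.annotations.*
import org.openjdk.jmh.infra.Blackhole

@State(Scope.Benchmark)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
open class ReceiveOnlySketch {
    private val channel = Channel<Int>(Channel.UNLIMITED)

    // Keep 32+ buffered elements so the head (receive) and tail (send)
    // segments stay distinct.
    @Setup(Level.Iteration)
    fun prefill() {
        repeat(64) { channel.trySend(it) }
    }

    // Top up one element before every invocation so receive always succeeds;
    // this is exactly the Level.Invocation tradeoff discussed above.
    @Setup(Level.Invocation)
    fun refill() {
        channel.trySend(0)
    }

    @Benchmark
    fun receiveOnly(blackhole: Blackhole) {
        blackhole.consume(channel.tryReceive().getOrThrow())
    }
}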

For an empty channel, we could also try racing send and receive.

Using runBlocking in a benchmark that's only doing send doesn't seem optimal to me, I can imagine the run time getting dominated by the runBlocking machinery. I don't know what the proper way of doing this in JMH is, but I'd try a scheme like this:

import java.util.concurrent.atomic.AtomicInteger
import java.util.concurrent.atomic.AtomicReference
import java.util.concurrent.locks.LockSupport

internal class BenchmarkSynchronization {
    // 0 = not started, 1 = benchmark running, 2 = benchmark finished
    private val state = AtomicInteger(0)
    private val benchmarkThread = Thread.currentThread()
    private val threadDoingWork = AtomicReference<Thread?>()

    // Called by the benchmark thread: wait until the worker registers itself.
    fun awaitThreadAssignment(): Thread {
        assert(Thread.currentThread() === benchmarkThread)
        while (true) {
            val thread = threadDoingWork.get()
            if (thread != null) return thread
            LockSupport.parkNanos(Long.MAX_VALUE)
        }
    }

    // Called by the worker thread: register and park until the start signal.
    fun awaitStartSignal() {
        threadDoingWork.set(Thread.currentThread())
        LockSupport.unpark(benchmarkThread)
        while (state.get() == 0) {
            LockSupport.parkNanos(Long.MAX_VALUE)
        }
    }

    // Called by the worker thread once the measured code is done.
    fun signalFinish() {
        state.set(2)
        LockSupport.unpark(benchmarkThread)
    }

    // Called by the benchmark thread: start the worker and wait for it to finish.
    fun runBenchmark(thread: Thread) {
        state.set(1)
        LockSupport.unpark(thread)
        while (state.get() != 2) {
            LockSupport.parkNanos(Long.MAX_VALUE)
        }
    }
}

(haven't actually tested the code). Then, the scheme would be:

// preparation
val synchronization = BenchmarkSynchronization()
GlobalScope.launch {
    synchronization.awaitStartSignal()
    try {
       // actual benchmark code here
    } finally {
        synchronization.signalFinish()
    }
}
val threadDoingWork = synchronization.awaitThreadAssignment()

// the @Benchmark itself
wrapper.synchronization.runBenchmark(wrapper.threadDoingWork)

@fzhinkin, is there a standard mechanism that encapsulates this?

Contributor Author

For an empty channel, we could also try racing send and receive.

?

Collaborator

Running them in parallel.

Contributor

Direct measurements are always better than indirect.

Not always. See below. Also depends on how you define "better".

I mean, if our goal is to measure the latency of a certain operation and we have the facilities to do so, then it's better to do it directly (to an extent; benchmark results are averages anyway). By doing so, we can ensure that all the unnecessary setup and teardown code (like creating a channel) won't skew the results.

On the other hand, if the goal is to measure end-to-end latency, like "time to create a channel and send 100k messages over it", then sure, the current approach works for that (moreover, I don't see how to measure it otherwise).

If the goal is to measure send/recv timing, let's measure it.

Again, I am measuring that. My way of measuring is a valid way of measuring. Having to assert that makes me feel dismissed.

See the comment above. I was under the impression that the typical use case for a channel is indirect (within a flow, for example), so for channels as they are, we decided to measure the latency of a single operation to see how it would be affected by potential changes in the implementation.

I'm not saying that the way you're measuring it is invalid, but if there are facilities for measuring the latency of a single operation (well, of the send-receive pair of operations), I'm voting for using them (unless there is evidence that such a measurement is impossible or makes no sense).

We can explore other ways to measure, for sure.

What makes you think it will be unreliable?
Overhead of the measuring setup could be greater than the effect measured.

Setup (and teardown) actions performed before (after) the whole run (or an individual iteration) should not affect measurements, as they are performed outside of the measurement scope; they affect the measurements only when performed for each benchmark function invocation.

Yet simplified setup might not capture a typical usage.

I'm not sure if sending 400MB of data is a typical usage either. ;)

Does not average over data structure amortization (e.g. our sent element could be the element which triggers the channel's internal data structure doubling / allocation) (or, on the contrary, the constant from amortization could be noticeable and we do in fact want to measure it)

The benchmark function is continuously invoked over a configured period of time (you set it to 1 second).
If we reuse the same channel in each invocation, results will average over data structure amortization.

Does not average over GC

It's easier to focus on memory footprint, as it is something we control directly (how many bytes we're allocating when performing an operation), rather than on GC pauses (they are subject to various factors).
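For reference, JMH's built-in GC profiler reports a normalized per-operation allocation figure (gc.alloc.rate.norm, in B/op), which measures exactly that footprint, e.g.:

java -jar benchmarks.jar ".ChannelBenchmark.sendUnlimited" -prof gc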

What setup did you have in mind, something like this?

Both approaches look sane (assuming the wrapper is not recreated for every benchmark call) and we can do both.

Contributor

@dkhalanskyjb, it feels like I didn't get you, but nevertheless: JMH provides facilities for running benchmark methods concurrently and synchronizing their execution (sketched below):
https://github.com/openjdk/jmh/blob/master/jmh-samples/src/main/java/org/openjdk/jmh/samples/JMHSample_15_Asymmetric.java
https://github.com/openjdk/jmh/blob/master/jmh-samples/src/main/java/org/openjdk/jmh/samples/JMHSample_17_SyncIterations.java
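Loosely adapting the Asymmetric sample to the "racing send and receive" idea above might look like this (a sketch under assumptions: an unlimited channel so trySend always succeeds, and a spinning tryReceive; not code from this PR):

import kotlinx.coroutines.channels.Channel
import org.openjdk.jmh.annotations.*
import org.openjdk.jmh.infra.Blackhole

@State(Scope.Group)
open class SendReceiveRaceSketch {
    private val channel = Channel<Int>(Channel.UNLIMITED)

    @Benchmark
    @Group("race")
    @GroupThreads(1)
    fun send() {
        // Always succeeds on an unlimited channel; the queue can grow if this
        // thread outpaces the receiver.
        channel.trySend(42)
    }

    @Benchmark
    @Group("race")
    @GroupThreads(1)
    fun receive(blackhole: Blackhole) {
        // Spin until an element is available.
        while (true) {
            val result = channel.tryReceive()
            if (result.isSuccess) {
                blackhole.consume(result.getOrThrow())
                return
            }
        }
    }
}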

As for runBlocking, it would be nice to have a kx-benchmarks maintainer here who would solve the problem of benchmarking suspend APIs for us. Oh, wait... 😄

Collaborator

it feels like I didn't get you

Indeed, but that's because I had the wrong assumptions. I hadn't realized one important aspect:

Setup (and teardown) actions performed before (after) the whole run (or an individual iteration) should not affect measurements, as they are performed outside of the measurement scope; they affect the measurements only when performed for each benchmark function invocation.

My goal was to reduce the influence of runBlocking on the performance, and to that end, I wanted to spawn the actual computation in a separate thread beforehand. In the benchmark itself, instead of wasting time on scheduling a computation, initializing and deinitializing structured concurrency, etc., it would only send a signal "you can actually start now" and then wait for the computation to signal "okay, done".

This is all moot if the preparatory work in @Setup(Level.Invocation) objects is included in the time measurements. It's not viable to start as many threads as there are computations beforehand.

@murfel
Contributor Author

murfel commented Dec 5, 2025

This subset takes 10 min on my machine and looks representative enough.

java -jar benchmarks.jar "ChannelBenchmark.sendUnlimited|ChannelBenchmark.sendReceiveUnlimited|ChannelNanoBenchmark" -l
Benchmarks: 
benchmarks.ChannelBenchmark.sendReceiveUnlimited
benchmarks.ChannelBenchmark.sendUnlimited
benchmarks.ChannelNanoBenchmarkConflated.send
benchmarks.ChannelNanoBenchmarkConflated.sendReceive
benchmarks.ChannelNanoBenchmarkConflated.trySend
benchmarks.ChannelNanoBenchmarkConflated.trySendTryReceive
benchmarks.ChannelNanoBenchmarkUnlimited.receive
benchmarks.ChannelNanoBenchmarkUnlimited.send
benchmarks.ChannelNanoBenchmarkUnlimited.sendReceive
benchmarks.ChannelNanoBenchmarkUnlimited.tryReceive
benchmarks.ChannelNanoBenchmarkUnlimited.trySend
benchmarks.ChannelNanoBenchmarkUnlimited.trySendTryReceive
Original
Benchmark                                         (count)  (prefill)  Mode  Cnt          Score          Error  Units
ChannelBenchmark.sendReceiveUnlimited                1000          0  avgt   10      58173.799 ±     1801.259  ns/op
ChannelBenchmark.sendReceiveUnlimited                1000     100000  avgt   10      58994.101 ±     1408.022  ns/op
ChannelBenchmark.sendReceiveUnlimited                1000    1000000  avgt   10      67313.679 ±     2623.367  ns/op
ChannelBenchmark.sendReceiveUnlimited                1000   10000000  avgt   10     118476.607 ±    12735.308  ns/op
ChannelBenchmark.sendReceiveUnlimited               10000          0  avgt   10     408538.293 ±    14087.306  ns/op
ChannelBenchmark.sendReceiveUnlimited               10000     100000  avgt   10     425637.480 ±     4138.182  ns/op
ChannelBenchmark.sendReceiveUnlimited               10000    1000000  avgt   10     448546.443 ±     7344.801  ns/op
ChannelBenchmark.sendReceiveUnlimited               10000   10000000  avgt   10     449431.781 ±    14419.163  ns/op
ChannelBenchmark.sendReceiveUnlimited              100000          0  avgt   10    3189269.631 ±    33194.580  ns/op
ChannelBenchmark.sendReceiveUnlimited              100000     100000  avgt   10    3365346.818 ±    37848.262  ns/op
ChannelBenchmark.sendReceiveUnlimited              100000    1000000  avgt   10    3492582.200 ±    68148.662  ns/op
ChannelBenchmark.sendReceiveUnlimited              100000   10000000  avgt   10    3500620.800 ±   139896.121  ns/op
ChannelBenchmark.sendReceiveUnlimited             1000000          0  avgt   10   39621541.131 ±  1198443.678  ns/op
ChannelBenchmark.sendReceiveUnlimited             1000000     100000  avgt   10   33158771.128 ±  1698526.750  ns/op
ChannelBenchmark.sendReceiveUnlimited             1000000    1000000  avgt   10   33887116.950 ±  1275570.958  ns/op
ChannelBenchmark.sendReceiveUnlimited             1000000   10000000  avgt   10   34388392.572 ±  1081615.086  ns/op
ChannelBenchmark.sendReceiveUnlimited            10000000          0  avgt   10  355662722.483 ± 35604196.110  ns/op
ChannelBenchmark.sendReceiveUnlimited            10000000     100000  avgt   10  351462454.917 ± 31221444.388  ns/op
ChannelBenchmark.sendReceiveUnlimited            10000000    1000000  avgt   10  340615236.067 ±  9833014.353  ns/op
ChannelBenchmark.sendReceiveUnlimited            10000000   10000000  avgt   10  357096319.367 ± 34967998.174  ns/op
ChannelBenchmark.sendUnlimited                       1000        N/A  avgt   10       5671.946 ±       94.137  ns/op
ChannelBenchmark.sendUnlimited                      10000        N/A  avgt   10      56429.835 ±     1045.117  ns/op
ChannelBenchmark.sendUnlimited                     100000        N/A  avgt   10     560987.230 ±     7645.543  ns/op
ChannelBenchmark.sendUnlimited                    1000000        N/A  avgt   10    5619028.205 ±    86664.728  ns/op
ChannelBenchmark.sendUnlimited                   10000000        N/A  avgt   10   57430254.262 ±  2299083.433  ns/op
ChannelNanoBenchmarkConflated.send                    N/A        N/A  avgt    5         62.518 ±        0.594  ns/op
ChannelNanoBenchmarkConflated.sendReceive             N/A        N/A  avgt    5         57.467 ±        0.652  ns/op
ChannelNanoBenchmarkConflated.trySend                 N/A        N/A  avgt    5         14.900 ±        0.048  ns/op
ChannelNanoBenchmarkConflated.trySendTryReceive       N/A        N/A  avgt    5         11.731 ±        0.053  ns/op
ChannelNanoBenchmarkUnlimited.receive                 N/A        N/A  avgt    5         61.233 ±       28.604  ns/op
ChannelNanoBenchmarkUnlimited.send                    N/A        N/A  avgt    5         59.000 ±       14.917  ns/op
ChannelNanoBenchmarkUnlimited.sendReceive             N/A          0  avgt    5         55.976 ±        1.064  ns/op
ChannelNanoBenchmarkUnlimited.sendReceive             N/A     100000  avgt    5         56.853 ±        1.647  ns/op
ChannelNanoBenchmarkUnlimited.sendReceive             N/A    1000000  avgt    5         73.048 ±       35.201  ns/op
ChannelNanoBenchmarkUnlimited.sendReceive             N/A   10000000  avgt    5         68.299 ±       28.626  ns/op
ChannelNanoBenchmarkUnlimited.trySend                 N/A        N/A  avgt    5         12.343 ±       11.598  ns/op
ChannelNanoBenchmarkUnlimited.trySendTryReceive       N/A          0  avgt    5         10.389 ±        0.126  ns/op
ChannelNanoBenchmarkUnlimited.trySendTryReceive       N/A     100000  avgt    5         10.568 ±        0.087  ns/op
ChannelNanoBenchmarkUnlimited.trySendTryReceive       N/A    1000000  avgt    5         10.578 ±        0.111  ns/op
ChannelNanoBenchmarkUnlimited.trySendTryReceive       N/A   10000000  avgt    5         16.978 ±        7.431  ns/op
Modified
Benchmark                                         (count)  (prefill)  Mode  Cnt          Score          Error  Units
ChannelBenchmark.sendReceiveUnlimited                1000          0  avgt   10      53321.984 ±      140.383  ns/op
ChannelBenchmark.sendReceiveUnlimited                1000     100000  avgt   10      59135.932 ±     1033.931  ns/op
ChannelBenchmark.sendReceiveUnlimited                1000    1000000  avgt   10      65285.626 ±     2510.405  ns/op
ChannelBenchmark.sendReceiveUnlimited                1000   10000000  avgt   10     121157.920 ±    18406.638  ns/op
ChannelBenchmark.sendReceiveUnlimited               10000          0  avgt   10     403571.146 ±    11700.151  ns/op
ChannelBenchmark.sendReceiveUnlimited               10000     100000  avgt   10     423541.000 ±     3819.413  ns/op
ChannelBenchmark.sendReceiveUnlimited               10000    1000000  avgt   10     447198.871 ±     5746.136  ns/op
ChannelBenchmark.sendReceiveUnlimited               10000   10000000  avgt   10     460412.926 ±     9726.791  ns/op
ChannelBenchmark.sendReceiveUnlimited              100000          0  avgt   10    3802053.739 ±    19996.781  ns/op
ChannelBenchmark.sendReceiveUnlimited              100000     100000  avgt   10    3385744.030 ±    33426.061  ns/op
ChannelBenchmark.sendReceiveUnlimited              100000    1000000  avgt   10    3430387.143 ±    77018.566  ns/op
ChannelBenchmark.sendReceiveUnlimited              100000   10000000  avgt   10    3612051.408 ±   193677.385  ns/op
ChannelBenchmark.sendReceiveUnlimited             1000000          0  avgt   10   34527962.340 ±  1786113.237  ns/op
ChannelBenchmark.sendReceiveUnlimited             1000000     100000  avgt   10   30000473.043 ±  1537384.064  ns/op
ChannelBenchmark.sendReceiveUnlimited             1000000    1000000  avgt   10   33298402.088 ±  1326632.381  ns/op
ChannelBenchmark.sendReceiveUnlimited             1000000   10000000  avgt   10   33925819.058 ±  1730904.507  ns/op
ChannelBenchmark.sendReceiveUnlimited            10000000          0  avgt   10  372512667.225 ± 48270167.700  ns/op
ChannelBenchmark.sendReceiveUnlimited            10000000     100000  avgt   10  306997513.942 ± 27951665.715  ns/op
ChannelBenchmark.sendReceiveUnlimited            10000000    1000000  avgt   10  339808357.517 ± 22977025.184  ns/op
ChannelBenchmark.sendReceiveUnlimited            10000000   10000000  avgt   10  345566241.700 ± 32634026.509  ns/op
ChannelBenchmark.sendUnlimited                       1000        N/A  avgt   10       5699.912 ±      153.549  ns/op
ChannelBenchmark.sendUnlimited                      10000        N/A  avgt   10      56235.767 ±      657.999  ns/op
ChannelBenchmark.sendUnlimited                     100000        N/A  avgt   10     561911.311 ±     7543.686  ns/op
ChannelBenchmark.sendUnlimited                    1000000        N/A  avgt   10    5617362.702 ±    76027.469  ns/op
ChannelBenchmark.sendUnlimited                   10000000        N/A  avgt   10   57116031.213 ±  1870531.763  ns/op
ChannelNanoBenchmarkConflated.send                    N/A        N/A  avgt    5         61.584 ±        0.624  ns/op
ChannelNanoBenchmarkConflated.sendReceive             N/A        N/A  avgt    5         56.510 ±        1.548  ns/op
ChannelNanoBenchmarkConflated.trySend                 N/A        N/A  avgt    5         15.560 ±        0.192  ns/op
ChannelNanoBenchmarkConflated.trySendTryReceive       N/A        N/A  avgt    5         12.253 ±        0.043  ns/op
ChannelNanoBenchmarkUnlimited.receive                 N/A        N/A  avgt    5         63.785 ±       18.984  ns/op
ChannelNanoBenchmarkUnlimited.send                    N/A        N/A  avgt    5         58.246 ±       10.985  ns/op
ChannelNanoBenchmarkUnlimited.sendReceive             N/A          0  avgt    5         56.196 ±        1.948  ns/op
ChannelNanoBenchmarkUnlimited.sendReceive             N/A     100000  avgt    5         58.019 ±        1.454  ns/op
ChannelNanoBenchmarkUnlimited.sendReceive             N/A    1000000  avgt    5         75.528 ±       41.545  ns/op
ChannelNanoBenchmarkUnlimited.sendReceive             N/A   10000000  avgt    5         63.480 ±       23.588  ns/op
ChannelNanoBenchmarkUnlimited.trySend                 N/A        N/A  avgt    5         11.592 ±       10.266  ns/op
ChannelNanoBenchmarkUnlimited.trySendTryReceive       N/A          0  avgt    5         10.383 ±        0.081  ns/op
ChannelNanoBenchmarkUnlimited.trySendTryReceive       N/A     100000  avgt    5         10.780 ±        0.062  ns/op
ChannelNanoBenchmarkUnlimited.trySendTryReceive       N/A    1000000  avgt    5         10.568 ±        0.124  ns/op
ChannelNanoBenchmarkUnlimited.trySendTryReceive       N/A   10000000  avgt    5         17.378 ±        4.466  ns/op

@dkhalanskyjb
Collaborator

dkhalanskyjb commented Dec 8, 2025

So… with the change, our channels are a bit more efficient?

@murfel
Contributor Author

murfel commented Dec 9, 2025

Not really ready for review yet; I'm polishing bits up before making any statements. Sorry for posting here; I thought I had some reliable data and then got carried away.
