diff --git a/tests/results/dp-perf/2.3.0/2.3.0-oss.md b/tests/results/dp-perf/2.3.0/2.3.0-oss.md new file mode 100644 index 0000000000..9cae3303a0 --- /dev/null +++ b/tests/results/dp-perf/2.3.0/2.3.0-oss.md @@ -0,0 +1,90 @@ +# Results + +## Test environment + +NGINX Plus: false + +NGINX Gateway Fabric: + +- Commit: 89aee48bf6e660a828ffd32ca35fc7f52e358e00 +- Date: 2025-12-12T20:04:38Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.33.5-gke.1308000 +- vCPUs per node: 16 +- RAM per node: 65851520Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- Latency continues to grow slightly, per the trend of past releases. + +## Test1: Running latte path based routing + +```text +Requests [total, rate, throughput] 30000, 1000.03, 999.99 +Duration [total, attack, wait] 30s, 29.999s, 991.978µs +Latencies [min, mean, 50, 90, 95, 99, max] 816.445µs, 1.069ms, 1.045ms, 1.166ms, 1.217ms, 1.385ms, 23.061ms +Bytes In [total, mean] 4740000, 158.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test2: Running coffee header based routing + +```text +Requests [total, rate, throughput] 30000, 1000.04, 1000.00 +Duration [total, attack, wait] 30s, 29.999s, 1.132ms +Latencies [min, mean, 50, 90, 95, 99, max] 840.624µs, 1.096ms, 1.073ms, 1.204ms, 1.26ms, 1.44ms, 16.79ms +Bytes In [total, mean] 4770000, 159.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test3: Running coffee query based routing + +```text +Requests [total, rate, throughput] 30000, 1000.04, 1000.00 +Duration [total, attack, wait] 30s, 29.999s, 1.067ms +Latencies [min, mean, 50, 90, 95, 99, max] 825.3µs, 1.095ms, 1.071ms, 1.201ms, 1.256ms, 1.444ms, 16.845ms +Bytes In [total, mean] 5010000, 167.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test4: Running tea GET method based routing + +```text +Requests [total, rate, throughput] 30000, 1000.02, 999.99 +Duration [total, attack, wait] 30s, 29.999s, 954.141µs +Latencies [min, mean, 50, 90, 95, 99, max] 818.006µs, 1.079ms, 1.059ms, 1.187ms, 1.241ms, 1.411ms, 14.873ms +Bytes In [total, mean] 4680000, 156.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test5: Running tea POST method based routing + +```text +Requests [total, rate, throughput] 30000, 1000.03, 1000.00 +Duration [total, attack, wait] 30s, 29.999s, 992.607µs +Latencies [min, mean, 50, 90, 95, 99, max] 808.16µs, 1.086ms, 1.064ms, 1.196ms, 1.248ms, 1.42ms, 17.019ms +Bytes In [total, mean] 4680000, 156.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` diff --git a/tests/results/dp-perf/2.3.0/2.3.0-plus.md b/tests/results/dp-perf/2.3.0/2.3.0-plus.md new file mode 100644 index 0000000000..deaecd90b9 --- /dev/null +++ b/tests/results/dp-perf/2.3.0/2.3.0-plus.md @@ -0,0 +1,90 @@ +# Results + +## Test environment + +NGINX Plus: true + +NGINX Gateway Fabric: + +- Commit: 89aee48bf6e660a828ffd32ca35fc7f52e358e00 +- Date: 2025-12-12T20:04:38Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.33.5-gke.1308000 +- vCPUs per node: 16 +- RAM per node: 65851520Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- Latency looks to have improved slightly. 
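+
+The per-test blocks below are vegeta text reports from 30-second runs at 1000 req/s against the different routing rules. As a rough, hedged sketch only: a constant-rate run of this shape could be reproduced with the vegeta Go library as shown here; the URL, rate, and attack name are assumptions for illustration, not the actual test harness.
+
+```go
+package main
+
+import (
+    "fmt"
+    "time"
+
+    vegeta "github.com/tsenart/vegeta/v12/lib"
+)
+
+func main() {
+    // Constant 1000 req/s for 30s, matching the report parameters below.
+    rate := vegeta.Rate{Freq: 1000, Per: time.Second}
+    duration := 30 * time.Second
+
+    // Illustrative target; the real tests hit routes configured for
+    // path/header/query/method based matching.
+    targeter := vegeta.NewStaticTargeter(vegeta.Target{
+        Method: "GET",
+        URL:    "http://cafe.example.com/latte",
+    })
+
+    var metrics vegeta.Metrics
+    attacker := vegeta.NewAttacker()
+    for res := range attacker.Attack(targeter, rate, duration, "latte-path-routing") {
+        metrics.Add(res)
+    }
+    metrics.Close()
+
+    fmt.Printf("p99=%s success=%.2f%%\n", metrics.Latencies.P99, metrics.Success*100)
+}
+```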
+ +## Test1: Running latte path based routing + +```text +Requests [total, rate, throughput] 30000, 1000.04, 1000.01 +Duration [total, attack, wait] 30s, 29.999s, 880.439µs +Latencies [min, mean, 50, 90, 95, 99, max] 691.14µs, 886.932µs, 867.964µs, 976.348µs, 1.018ms, 1.153ms, 10.358ms +Bytes In [total, mean] 4830000, 161.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test2: Running coffee header based routing + +```text +Requests [total, rate, throughput] 30000, 1000.04, 1000.01 +Duration [total, attack, wait] 30s, 29.999s, 923.361µs +Latencies [min, mean, 50, 90, 95, 99, max] 726.599µs, 948.386µs, 919.848µs, 1.025ms, 1.07ms, 1.262ms, 22.38ms +Bytes In [total, mean] 4860000, 162.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test3: Running coffee query based routing + +```text +Requests [total, rate, throughput] 30000, 1000.04, 1000.01 +Duration [total, attack, wait] 30s, 29.999s, 980.118µs +Latencies [min, mean, 50, 90, 95, 99, max] 741.198µs, 949.099µs, 920.511µs, 1.025ms, 1.067ms, 1.241ms, 19.154ms +Bytes In [total, mean] 5100000, 170.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test4: Running tea GET method based routing + +```text +Requests [total, rate, throughput] 30000, 1000.01, 999.98 +Duration [total, attack, wait] 30.001s, 30s, 997.667µs +Latencies [min, mean, 50, 90, 95, 99, max] 716.164µs, 903.954µs, 881.394µs, 978.714µs, 1.019ms, 1.192ms, 21.825ms +Bytes In [total, mean] 4770000, 159.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test5: Running tea POST method based routing + +```text +Requests [total, rate, throughput] 30000, 1000.01, 999.97 +Duration [total, attack, wait] 30.001s, 30s, 919.688µs +Latencies [min, mean, 50, 90, 95, 99, max] 708.879µs, 925.517µs, 903.767µs, 1.012ms, 1.054ms, 1.21ms, 22.009ms +Bytes In [total, mean] 4770000, 159.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` diff --git a/tests/results/longevity/2.3.0/2.3.0-oss.md b/tests/results/longevity/2.3.0/2.3.0-oss.md new file mode 100644 index 0000000000..392d20b396 --- /dev/null +++ b/tests/results/longevity/2.3.0/2.3.0-oss.md @@ -0,0 +1,83 @@ +# Results + +## Test environment + +NGINX Plus: false + +NGINX Gateway Fabric: + +- Commit: 89aee48bf6e660a828ffd32ca35fc7f52e358e00 +- Date: 2025-12-12T20:04:38Z +- Dirty: false + +GKE Cluster: + +- Node count: 3 +- k8s version: v1.33.5-gke.1308000 +- vCPUs per node: 2 +- RAM per node: 4015672Ki +- Max pods per node: 110 +- Zone: us-west2-a +- Instance Type: e2-medium + +## Summary: + +- Still a lot of non-2xx or 3xx responses, many more than last time. Socket errors are all mostly read errors, with no write errors and fewer timeout errors. +- We observe a continual increase in NGINX memory usage over time which could indicate a memory leak. Will bring this up with the Agent team. +- CPU usage remained consistent with past results. +- Error contacting TokenReview API, but may be a one-off. 
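+
+The memory-leak observation above is based on the NGINX container memory graph under Key Metrics. As a hedged sketch of how that trend could be pulled programmatically, assuming a Prometheus server scraping cAdvisor metrics (the address, metric name, and label selector are assumptions, not the query behind the graph):
+
+```go
+package main
+
+import (
+    "context"
+    "fmt"
+    "log"
+    "time"
+
+    "github.com/prometheus/client_golang/api"
+    v1 "github.com/prometheus/client_golang/api/prometheus/v1"
+)
+
+func main() {
+    client, err := api.NewClient(api.Config{Address: "http://prometheus:9090"}) // assumed address
+    if err != nil {
+        log.Fatal(err)
+    }
+    promAPI := v1.NewAPI(client)
+
+    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+    defer cancel()
+
+    // Hourly samples of NGINX container memory over the 4-day (5760m) run window.
+    end := time.Now()
+    result, warnings, err := promAPI.QueryRange(ctx,
+        `container_memory_working_set_bytes{container="nginx"}`, // assumed metric/selector
+        v1.Range{Start: end.Add(-96 * time.Hour), End: end, Step: time.Hour})
+    if err != nil {
+        log.Fatal(err)
+    }
+    if len(warnings) > 0 {
+        fmt.Println("warnings:", warnings)
+    }
+    fmt.Println(result) // a steadily rising series would support the leak hypothesis
+}
+```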
+ +## Traffic + +HTTP: + +```text +Running 5760m test @ http://cafe.example.com/coffee + 2 threads and 100 connections + Thread Stats Avg Stdev Max +/- Stdev + Latency 190.35ms 141.74ms 2.00s 83.52% + Req/Sec 289.84 187.59 3.52k 63.68% + 195509968 requests in 5760.00m, 66.75GB read + Socket errors: connect 0, read 315485, write 0, timeout 6584 + Non-2xx or 3xx responses: 1763516 +Requests/sec: 565.71 +Transfer/sec: 202.53KB +``` + +HTTPS: + +```text +Running 5760m test @ https://cafe.example.com/tea + 2 threads and 100 connections + Thread Stats Avg Stdev Max +/- Stdev + Latency 180.03ms 106.92ms 1.94s 67.25% + Req/Sec 287.34 184.95 1.73k 63.36% + 193842103 requests in 5760.00m, 65.22GB read + Socket errors: connect 0, read 309621, write 0, timeout 1 +Requests/sec: 560.89 +Transfer/sec: 197.88KB +``` +## Key Metrics + +### Containers memory + +![oss-memory.png](oss-memory.png) + +### Containers CPU + +![oss-cpu.png](oss-cpu.png) + +## Error Logs + +### nginx-gateway + +error=rpc error: code = Internal desc = error creating TokenReview: context canceled;level=error;logger=agentGRPCServer;msg=error validating connection;stacktrace=github.com/nginx/nginx-gateway-fabric/v2/internal/controller/nginx/agent/grpc/interceptor.(*ContextSetter).Stream.ContextSetter.Stream.func1 + /opt/actions-runner/_work/nginx-gateway-fabric/nginx-gateway-fabric/internal/controller/nginx/agent/grpc/interceptor/interceptor.go:62 +google.golang.org/grpc.(*Server).processStreamingRPC + /opt/actions-runner/_work/nginx-gateway-fabric/nginx-gateway-fabric/.gocache/google.golang.org/grpc@v1.77.0/server.go:1721 +google.golang.org/grpc.(*Server).handleStream + /opt/actions-runner/_work/nginx-gateway-fabric/nginx-gateway-fabric/.gocache/google.golang.org/grpc@v1.77.0/server.go:1836 +google.golang.org/grpc.(*Server).serveStreams.func2.1 + /opt/actions-runner/_work/nginx-gateway-fabric/nginx-gateway-fabric/.gocache/google.golang.org/grpc@v1.77.0/server.go:1063;ts=2025-12-16T17:35:17Z + +### nginx diff --git a/tests/results/longevity/2.3.0/2.3.0-plus.md b/tests/results/longevity/2.3.0/2.3.0-plus.md new file mode 100644 index 0000000000..bd5acd522c --- /dev/null +++ b/tests/results/longevity/2.3.0/2.3.0-plus.md @@ -0,0 +1,83 @@ +# Results + +## Test environment + +NGINX Plus: false + +NGINX Gateway Fabric: + +- Commit: 89aee48bf6e660a828ffd32ca35fc7f52e358e00 +- Date: 2025-12-12T20:04:38Z +- Dirty: false + +GKE Cluster: + +- Node count: 3 +- k8s version: v1.33.5-gke.1308000 +- vCPUs per node: 2 +- RAM per node: 4015672Ki +- Max pods per node: 110 +- Zone: us-west2-a +- Instance Type: e2-medium + +## Summary: + +- Consistent traffic results from 2.2. +- We observe a continual increase in NGINX memory usage over time which could indicate a memory leak. Will bring this up with the Agent team. +- CPU usage remained consistent with past results. +- Still get some "no live upstreams" errors. 
+ +## Traffic + +HTTP: + +```text +Running 5760m test @ http://cafe.example.com/coffee + 2 threads and 100 connections + Thread Stats Avg Stdev Max +/- Stdev + Latency 184.82ms 102.91ms 1.45s 65.52% + Req/Sec 284.19 179.74 1.52k 63.62% + 192198367 requests in 5760.00m, 65.91GB read + Socket errors: connect 0, read 0, write 0, timeout 108 + Non-2xx or 3xx responses: 5 +Requests/sec: 556.13 +Transfer/sec: 199.96KB +``` + +HTTPS: + +```text +Running 5760m test @ https://cafe.example.com/tea + 2 threads and 100 connections + Thread Stats Avg Stdev Max +/- Stdev + Latency 185.02ms 102.92ms 1.50s 65.52% + Req/Sec 283.70 179.19 1.43k 63.75% + 191866398 requests in 5760.00m, 64.73GB read + Socket errors: connect 0, read 0, write 0, timeout 114 + Non-2xx or 3xx responses: 6 +Requests/sec: 555.17 +Transfer/sec: 196.40KB +``` +## Key Metrics + +### Containers memory + +![oss-memory.png](oss-memory.png) + +### Containers CPU + +![oss-cpu.png](oss-cpu.png) + +## Error Logs + +### nginx-gateway + +### nginx + + + + +10.168.0.90 - - [16/Dec/2025:15:47:08 +0000] "GET /tea HTTP/1.1" 502 150 "-" "-" +2025/12/16 15:47:08 [error] 26#26: *361983622 no live upstreams while connecting to upstream, client: 10.168.0.90, server: cafe.example.com, request: "GET /tea HTTP/1.1", upstream: "http://longevity_tea_80/tea", host: "cafe.example.com" +10.168.0.90 - - [16/Dec/2025:12:49:07 +0000] "GET /coffee HTTP/1.1" 502 150 "-" "-" +2025/12/16 12:49:07 [error] 25#25: *350621339 no live upstreams while connecting to upstream, client: 10.168.0.90, server: cafe.example.com, request: "GET /coffee HTTP/1.1", upstream: "http://longevity_coffee_80/coffee", host: "cafe.example.com" diff --git a/tests/results/longevity/2.3.0/oss-cpu.png b/tests/results/longevity/2.3.0/oss-cpu.png new file mode 100644 index 0000000000..a6acea764b Binary files /dev/null and b/tests/results/longevity/2.3.0/oss-cpu.png differ diff --git a/tests/results/longevity/2.3.0/oss-memory.png b/tests/results/longevity/2.3.0/oss-memory.png new file mode 100644 index 0000000000..98cdb61763 Binary files /dev/null and b/tests/results/longevity/2.3.0/oss-memory.png differ diff --git a/tests/results/longevity/2.3.0/plus-cpu.png b/tests/results/longevity/2.3.0/plus-cpu.png new file mode 100644 index 0000000000..fef9957557 Binary files /dev/null and b/tests/results/longevity/2.3.0/plus-cpu.png differ diff --git a/tests/results/longevity/2.3.0/plus-memory.png b/tests/results/longevity/2.3.0/plus-memory.png new file mode 100644 index 0000000000..11edc52a55 Binary files /dev/null and b/tests/results/longevity/2.3.0/plus-memory.png differ diff --git a/tests/results/ngf-upgrade/2.3.0/2.3.0-oss.md b/tests/results/ngf-upgrade/2.3.0/2.3.0-oss.md new file mode 100644 index 0000000000..89725b4094 --- /dev/null +++ b/tests/results/ngf-upgrade/2.3.0/2.3.0-oss.md @@ -0,0 +1,58 @@ +# Results + +## Test environment + +NGINX Plus: false + +NGINX Gateway Fabric: + +- Commit: 89aee48bf6e660a828ffd32ca35fc7f52e358e00 +- Date: 2025-12-12T20:04:38Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.33.5-gke.1308000 +- vCPUs per node: 16 +- RAM per node: 65851520Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- Similar results to 2.2, with a brief interruption in traffic. +- Latency numbers slightly improved. 
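+
+To put a rough number on the "brief interruption" noted above: each run below sends 6000 requests at ~100 req/s and 19 of them fail with status 0 (connection refused). Assuming the failures were consecutive, that works out to roughly 0.2s of unavailability during the upgrade; a trivial sketch of the arithmetic:
+
+```go
+package main
+
+import "fmt"
+
+func main() {
+    // Values taken from the vegeta reports below; treating the failures as
+    // consecutive is our assumption, the reports only give totals.
+    failed := 19.0 // requests that returned status code 0
+    rate := 100.0  // requests per second
+    fmt.Printf("approx. interruption: %.2fs\n", failed/rate) // ~0.19s
+}
+```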
+ +## Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 6000, 100.01, 99.69 +Duration [total, attack, wait] 59.993s, 59.992s, 1.424ms +Latencies [min, mean, 50, 90, 95, 99, max] 513.564µs, 281.334ms, 1.334ms, 147.299ms, 2.817s, 5.097s, 5.652s +Bytes In [total, mean] 915093, 152.52 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 99.68% +Status Codes [code:count] 0:19 200:5981 +Error Set: +Get "https://cafe.example.com/tea": dial tcp 0.0.0.0:0->10.138.0.126:443: connect: connection refused +``` + +![https-oss.png](https-oss.png) + +## Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 6000, 100.01, 99.69 +Duration [total, attack, wait] 59.993s, 59.992s, 1.33ms +Latencies [min, mean, 50, 90, 95, 99, max] 671.146µs, 273.786ms, 1.334ms, 47.986ms, 2.714s, 5.05s, 5.616s +Bytes In [total, mean] 952904, 158.82 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 99.68% +Status Codes [code:count] 0:19 200:5981 +Error Set: +Get "http://cafe.example.com/coffee": dial tcp 0.0.0.0:0->10.138.0.126:80: connect: connection refused +``` + +![http-oss.png](http-oss.png) diff --git a/tests/results/ngf-upgrade/2.3.0/2.3.0-plus.md b/tests/results/ngf-upgrade/2.3.0/2.3.0-plus.md new file mode 100644 index 0000000000..7ff8915f21 --- /dev/null +++ b/tests/results/ngf-upgrade/2.3.0/2.3.0-plus.md @@ -0,0 +1,64 @@ +# Results + +## Test environment + +NGINX Plus: true + +NGINX Gateway Fabric: + +- Commit: 89aee48bf6e660a828ffd32ca35fc7f52e358e00 +- Date: 2025-12-12T20:04:38Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.33.5-gke.1308000 +- vCPUs per node: 16 +- RAM per node: 65851520Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- Similar results to 2.2, with a brief interruption in traffic. +- Latency numbers slightly worse. 
+ +## Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 6000, 100.01, 99.72 +Duration [total, attack, wait] 59.996s, 59.993s, 2.907ms +Latencies [min, mean, 50, 90, 95, 99, max] 437.91µs, 772.34ms, 1.2ms, 3.677s, 6.591s, 8.884s, 9.443s +Bytes In [total, mean] 959287, 159.88 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 99.72% +Status Codes [code:count] 0:17 200:5983 +Error Set: +Get "http://cafe.example.com/coffee": read tcp 10.138.0.120:59295->10.138.0.123:80: read: connection reset by peer +Get "http://cafe.example.com/coffee": read tcp 10.138.0.120:37351->10.138.0.123:80: read: connection reset by peer +Get "http://cafe.example.com/coffee": read tcp 10.138.0.120:54213->10.138.0.123:80: read: connection reset by peer +Get "http://cafe.example.com/coffee": dial tcp 0.0.0.0:0->10.138.0.123:80: connect: connection refused +``` + +![http-plus.png](http-plus.png) + +## Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 6000, 100.01, 99.72 +Duration [total, attack, wait] 59.996s, 59.993s, 2.941ms +Latencies [min, mean, 50, 90, 95, 99, max] 486.152µs, 772.69ms, 1.261ms, 3.642s, 6.543s, 8.883s, 9.441s +Bytes In [total, mean] 921382, 153.56 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 99.72% +Status Codes [code:count] 0:17 200:5983 +Error Set: +Get "https://cafe.example.com/tea": read tcp 10.138.0.120:55317->10.138.0.123:443: read: connection reset by peer +Get "https://cafe.example.com/tea": read tcp 10.138.0.120:43625->10.138.0.123:443: read: connection reset by peer +Get "https://cafe.example.com/tea": write tcp 10.138.0.120:49103->10.138.0.123:443: write: connection reset by peer +Get "https://cafe.example.com/tea": dial tcp 0.0.0.0:0->10.138.0.123:443: connect: connection refused +``` + +![https-plus.png](https-plus.png) diff --git a/tests/results/ngf-upgrade/2.3.0/http-oss.png b/tests/results/ngf-upgrade/2.3.0/http-oss.png new file mode 100644 index 0000000000..8f262e33a1 Binary files /dev/null and b/tests/results/ngf-upgrade/2.3.0/http-oss.png differ diff --git a/tests/results/ngf-upgrade/2.3.0/http-plus.png b/tests/results/ngf-upgrade/2.3.0/http-plus.png new file mode 100644 index 0000000000..bac41309c7 Binary files /dev/null and b/tests/results/ngf-upgrade/2.3.0/http-plus.png differ diff --git a/tests/results/ngf-upgrade/2.3.0/https-oss.png b/tests/results/ngf-upgrade/2.3.0/https-oss.png new file mode 100644 index 0000000000..8f262e33a1 Binary files /dev/null and b/tests/results/ngf-upgrade/2.3.0/https-oss.png differ diff --git a/tests/results/ngf-upgrade/2.3.0/https-plus.png b/tests/results/ngf-upgrade/2.3.0/https-plus.png new file mode 100644 index 0000000000..bac41309c7 Binary files /dev/null and b/tests/results/ngf-upgrade/2.3.0/https-plus.png differ diff --git a/tests/results/reconfig/2.3.0/2.3.0-oss.md b/tests/results/reconfig/2.3.0/2.3.0-oss.md new file mode 100644 index 0000000000..a0c65a72e7 --- /dev/null +++ b/tests/results/reconfig/2.3.0/2.3.0-oss.md @@ -0,0 +1,110 @@ +# Results + +## Test environment + +NGINX Plus: false + +NGINX Gateway Fabric: + +- Commit: 89aee48bf6e660a828ffd32ca35fc7f52e358e00 +- Date: 2025-12-12T20:04:38Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.33.5-gke.1308000 +- vCPUs per node: 16 +- RAM per node: 65851520Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- nginx configuration errors are gone +- overall batch numbers have increased, implying even more reconciliation loops + +## Test 1: Resources exist before 
startup - NumResources 30 + +### Time to Ready + +Time To Ready Description: From when NGF starts to when the NGINX configuration is fully configured +- TimeToReadyTotal: 12s + +### Event Batch Processing + +- Event Batch Total: 43 +- Event Batch Processing Average Time: 0ms +- Event Batch Processing distribution: + - 500.0ms: 43 + - 1000.0ms: 43 + - 5000.0ms: 43 + - 10000.0ms: 43 + - 30000.0ms: 43 + - +Infms: 43 + +### NGINX Error Logs + +## Test 1: Resources exist before startup - NumResources 150 + +### Time to Ready + +Time To Ready Description: From when NGF starts to when the NGINX configuration is fully configured +- TimeToReadyTotal: 30s + +### Event Batch Processing + +- Event Batch Total: 56 +- Event Batch Processing Average Time: 1ms +- Event Batch Processing distribution: + - 500.0ms: 56 + - 1000.0ms: 56 + - 5000.0ms: 56 + - 10000.0ms: 56 + - 30000.0ms: 56 + - +Infms: 56 + +### NGINX Error Logs + +## Test 2: Start NGF, deploy Gateway, wait until NGINX agent instance connects to NGF, create many resources attached to GW - NumResources 30 + +### Time to Ready + +Time To Ready Description: From when NGINX receives the first configuration created by NGF to when the NGINX configuration is fully configured +- TimeToReadyTotal: 24s + +### Event Batch Processing + +- Event Batch Total: 416 +- Event Batch Processing Average Time: 13ms +- Event Batch Processing distribution: + - 500.0ms: 414 + - 1000.0ms: 416 + - 5000.0ms: 416 + - 10000.0ms: 416 + - 30000.0ms: 416 + - +Infms: 416 + +### NGINX Error Logs + +## Test 2: Start NGF, deploy Gateway, wait until NGINX agent instance connects to NGF, create many resources attached to GW - NumResources 150 + +### Time to Ready + +Time To Ready Description: From when NGINX receives the first configuration created by NGF to when the NGINX configuration is fully configured +- TimeToReadyTotal: 124s + +### Event Batch Processing + +- Event Batch Total: 1838 +- Event Batch Processing Average Time: 15ms +- Event Batch Processing distribution: + - 500.0ms: 1837 + - 1000.0ms: 1838 + - 5000.0ms: 1838 + - 10000.0ms: 1838 + - 30000.0ms: 1838 + - +Infms: 1838 + +### NGINX Error Logs diff --git a/tests/results/reconfig/2.3.0/2.3.0-plus.md b/tests/results/reconfig/2.3.0/2.3.0-plus.md new file mode 100644 index 0000000000..6ecbea1592 --- /dev/null +++ b/tests/results/reconfig/2.3.0/2.3.0-plus.md @@ -0,0 +1,109 @@ +# Results + +## Test environment + +NGINX Plus: true + +NGINX Gateway Fabric: + +- Commit: 89aee48bf6e660a828ffd32ca35fc7f52e358e00 +- Date: 2025-12-12T20:04:38Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.33.5-gke.1308000 +- vCPUs per node: 16 +- RAM per node: 65851520Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- overall batch numbers have increased, implying even more reconciliation loops + +## Test 1: Resources exist before startup - NumResources 30 + +### Time to Ready + +Time To Ready Description: From when NGF starts to when the NGINX configuration is fully configured +- TimeToReadyTotal: 14s + +### Event Batch Processing + +- Event Batch Total: 42 +- Event Batch Processing Average Time: 3ms +- Event Batch Processing distribution: + - 500.0ms: 42 + - 1000.0ms: 42 + - 5000.0ms: 42 + - 10000.0ms: 42 + - 30000.0ms: 42 + - +Infms: 42 + +### NGINX Error Logs + +## Test 1: Resources exist before startup - NumResources 150 + +### Time to Ready + +Time To Ready Description: From when NGF starts to when the NGINX configuration is fully configured +- TimeToReadyTotal: 21s + 
+### Event Batch Processing + +- Event Batch Total: 54 +- Event Batch Processing Average Time: 3ms +- Event Batch Processing distribution: + - 500.0ms: 54 + - 1000.0ms: 54 + - 5000.0ms: 54 + - 10000.0ms: 54 + - 30000.0ms: 54 + - +Infms: 54 + +### NGINX Error Logs + +## Test 2: Start NGF, deploy Gateway, wait until NGINX agent instance connects to NGF, create many resources attached to GW - NumResources 30 + +### Time to Ready + +Time To Ready Description: From when NGINX receives the first configuration created by NGF to when the NGINX configuration is fully configured +- TimeToReadyTotal: 23s + +### Event Batch Processing + +- Event Batch Total: 321 +- Event Batch Processing Average Time: 24ms +- Event Batch Processing distribution: + - 500.0ms: 313 + - 1000.0ms: 318 + - 5000.0ms: 321 + - 10000.0ms: 321 + - 30000.0ms: 321 + - +Infms: 321 + +### NGINX Error Logs + +## Test 2: Start NGF, deploy Gateway, wait until NGINX agent instance connects to NGF, create many resources attached to GW - NumResources 150 + +### Time to Ready + +Time To Ready Description: From when NGINX receives the first configuration created by NGF to when the NGINX configuration is fully configured +- TimeToReadyTotal: 121s + +### Event Batch Processing + +- Event Batch Total: 1502 +- Event Batch Processing Average Time: 22ms +- Event Batch Processing distribution: + - 500.0ms: 1477 + - 1000.0ms: 1490 + - 5000.0ms: 1502 + - 10000.0ms: 1502 + - 30000.0ms: 1502 + - +Infms: 1502 + +### NGINX Error Logs diff --git a/tests/results/scale/2.3.0/2.3.0-oss.md b/tests/results/scale/2.3.0/2.3.0-oss.md new file mode 100644 index 0000000000..62a313fbfc --- /dev/null +++ b/tests/results/scale/2.3.0/2.3.0-oss.md @@ -0,0 +1,153 @@ +# Results + +## Test environment + +NGINX Plus: false + +NGINX Gateway Fabric: + +- Commit: 89aee48bf6e660a828ffd32ca35fc7f52e358e00 +- Date: 2025-12-12T20:04:38Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.33.5-gke.1308000 +- vCPUs per node: 16 +- RAM per node: 65851520Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- fewer errors in the logs overall +- increase in batch events, implying more reconciliation loops + +## Test TestScale_Listeners + +### Event Batch Processing + +- Total: 298 +- Average Time: 10ms +- Event Batch Processing distribution: + - 500.0ms: 297 + - 1000.0ms: 298 + - 5000.0ms: 298 + - 10000.0ms: 298 + - 30000.0ms: 298 + - +Infms: 298 + +### Errors + +- NGF errors: 1 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_Listeners) for more details. +The logs are attached only if there are errors. + +## Test TestScale_HTTPSListeners + +### Event Batch Processing + +- Total: 356 +- Average Time: 10ms +- Event Batch Processing distribution: + - 500.0ms: 356 + - 1000.0ms: 356 + - 5000.0ms: 356 + - 10000.0ms: 356 + - 30000.0ms: 356 + - +Infms: 356 + +### Errors + +- NGF errors: 2 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_HTTPSListeners) for more details. +The logs are attached only if there are errors. 
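+
+The NGF errors counted in the listener scale tests above map to the status-update conflicts in the attached logs ("the object has been modified"). For context only, this is a generic client-go sketch of the conflict-retry pattern such messages usually point at; the names and the condition written here are illustrative, not NGF's actual status updater.
+
+```go
+package main
+
+import (
+    "context"
+    "log"
+
+    apimeta "k8s.io/apimachinery/pkg/api/meta"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+    "k8s.io/apimachinery/pkg/runtime"
+    "k8s.io/client-go/util/retry"
+    "sigs.k8s.io/controller-runtime/pkg/client"
+    "sigs.k8s.io/controller-runtime/pkg/client/config"
+    gatewayv1 "sigs.k8s.io/gateway-api/apis/v1"
+)
+
+func main() {
+    scheme := runtime.NewScheme()
+    if err := gatewayv1.AddToScheme(scheme); err != nil {
+        log.Fatal(err)
+    }
+    cfg, err := config.GetConfig() // kubeconfig / in-cluster lookup
+    if err != nil {
+        log.Fatal(err)
+    }
+    c, err := client.New(cfg, client.Options{Scheme: scheme})
+    if err != nil {
+        log.Fatal(err)
+    }
+
+    key := client.ObjectKey{Namespace: "scale", Name: "gateway"} // names taken from the attached logs
+    ctx := context.Background()
+
+    // Re-read and reapply the status change whenever the API server rejects the
+    // update with a resourceVersion conflict ("the object has been modified").
+    err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
+        var gw gatewayv1.Gateway
+        if err := c.Get(ctx, key, &gw); err != nil {
+            return err
+        }
+        apimeta.SetStatusCondition(&gw.Status.Conditions, metav1.Condition{
+            Type:               string(gatewayv1.GatewayConditionAccepted),
+            Status:             metav1.ConditionTrue,
+            ObservedGeneration: gw.Generation,
+            Reason:             string(gatewayv1.GatewayReasonAccepted),
+            Message:            "example status write",
+        })
+        return c.Status().Update(ctx, &gw)
+    })
+    if err != nil {
+        log.Fatal(err)
+    }
+}
+```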
+ +## Test TestScale_HTTPRoutes + +### Event Batch Processing + +- Total: 1252 +- Average Time: 129ms +- Event Batch Processing distribution: + - 500.0ms: 1179 + - 1000.0ms: 1252 + - 5000.0ms: 1252 + - 10000.0ms: 1252 + - 30000.0ms: 1252 + - +Infms: 1252 + +### Errors + +- NGF errors: 0 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_HTTPRoutes) for more details. +The logs are attached only if there are errors. + +## Test TestScale_UpstreamServers + +### Event Batch Processing + +- Total: 106 +- Average Time: 93ms +- Event Batch Processing distribution: + - 500.0ms: 96 + - 1000.0ms: 106 + - 5000.0ms: 106 + - 10000.0ms: 106 + - 30000.0ms: 106 + - +Infms: 106 + +### Errors + +- NGF errors: 1 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_UpstreamServers) for more details. +The logs are attached only if there are errors. + +## Test TestScale_HTTPMatches + +```text +Requests [total, rate, throughput] 30000, 1000.03, 1000.00 +Duration [total, attack, wait] 30s, 29.999s, 1.1ms +Latencies [min, mean, 50, 90, 95, 99, max] 835.852µs, 1.108ms, 1.078ms, 1.232ms, 1.292ms, 1.476ms, 18.915ms +Bytes In [total, mean] 4860000, 162.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` +```text +Requests [total, rate, throughput] 30000, 1000.03, 999.99 +Duration [total, attack, wait] 30s, 29.999s, 1.346ms +Latencies [min, mean, 50, 90, 95, 99, max] 956.837µs, 1.239ms, 1.207ms, 1.387ms, 1.458ms, 1.661ms, 21.565ms +Bytes In [total, mean] 4860000, 162.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` diff --git a/tests/results/scale/2.3.0/2.3.0-plus.md b/tests/results/scale/2.3.0/2.3.0-plus.md new file mode 100644 index 0000000000..80f928b912 --- /dev/null +++ b/tests/results/scale/2.3.0/2.3.0-plus.md @@ -0,0 +1,153 @@ +# Results + +## Test environment + +NGINX Plus: true + +NGINX Gateway Fabric: + +- Commit: 89aee48bf6e660a828ffd32ca35fc7f52e358e00 +- Date: 2025-12-12T20:04:38Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.33.5-gke.1308000 +- vCPUs per node: 16 +- RAM per node: 65851520Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- increase in batch events, implying more reconciliation loops +- similar results otherwise + +## Test TestScale_Listeners + +### Event Batch Processing + +- Total: 249 +- Average Time: 17ms +- Event Batch Processing distribution: + - 500.0ms: 243 + - 1000.0ms: 249 + - 5000.0ms: 249 + - 10000.0ms: 249 + - 30000.0ms: 249 + - +Infms: 249 + +### Errors + +- NGF errors: 2 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_Listeners) for more details. +The logs are attached only if there are errors. + +## Test TestScale_HTTPSListeners + +### Event Batch Processing + +- Total: 312 +- Average Time: 13ms +- Event Batch Processing distribution: + - 500.0ms: 307 + - 1000.0ms: 311 + - 5000.0ms: 312 + - 10000.0ms: 312 + - 30000.0ms: 312 + - +Infms: 312 + +### Errors + +- NGF errors: 1 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_HTTPSListeners) for more details. +The logs are attached only if there are errors. 
+ +## Test TestScale_HTTPRoutes + +### Event Batch Processing + +- Total: 1323 +- Average Time: 143ms +- Event Batch Processing distribution: + - 500.0ms: 1308 + - 1000.0ms: 1323 + - 5000.0ms: 1323 + - 10000.0ms: 1323 + - 30000.0ms: 1323 + - +Infms: 1323 + +### Errors + +- NGF errors: 0 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_HTTPRoutes) for more details. +The logs are attached only if there are errors. + +## Test TestScale_UpstreamServers + +### Event Batch Processing + +- Total: 96 +- Average Time: 262ms +- Event Batch Processing distribution: + - 500.0ms: 70 + - 1000.0ms: 93 + - 5000.0ms: 96 + - 10000.0ms: 96 + - 30000.0ms: 96 + - +Infms: 96 + +### Errors + +- NGF errors: 2 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_UpstreamServers) for more details. +The logs are attached only if there are errors. + +## Test TestScale_HTTPMatches + +```text +Requests [total, rate, throughput] 30000, 1000.04, 1000.01 +Duration [total, attack, wait] 30s, 29.999s, 925.026µs +Latencies [min, mean, 50, 90, 95, 99, max] 788.535µs, 1.011ms, 983.238µs, 1.108ms, 1.167ms, 1.356ms, 13.92ms +Bytes In [total, mean] 4830000, 161.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` +```text +Requests [total, rate, throughput] 30000, 1000.02, 999.98 +Duration [total, attack, wait] 30.001s, 29.999s, 1.06ms +Latencies [min, mean, 50, 90, 95, 99, max] 891.23µs, 1.086ms, 1.061ms, 1.18ms, 1.232ms, 1.407ms, 19.542ms +Bytes In [total, mean] 4830000, 161.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` diff --git a/tests/results/scale/2.3.0/TestScale_HTTPRoutes/cpu-oss.png b/tests/results/scale/2.3.0/TestScale_HTTPRoutes/cpu-oss.png new file mode 100644 index 0000000000..1f17f1ac9b Binary files /dev/null and b/tests/results/scale/2.3.0/TestScale_HTTPRoutes/cpu-oss.png differ diff --git a/tests/results/scale/2.3.0/TestScale_HTTPRoutes/cpu-plus.png b/tests/results/scale/2.3.0/TestScale_HTTPRoutes/cpu-plus.png new file mode 100644 index 0000000000..0d8ecfb630 Binary files /dev/null and b/tests/results/scale/2.3.0/TestScale_HTTPRoutes/cpu-plus.png differ diff --git a/tests/results/scale/2.3.0/TestScale_HTTPRoutes/memory-oss.png b/tests/results/scale/2.3.0/TestScale_HTTPRoutes/memory-oss.png new file mode 100644 index 0000000000..5b2ead3d13 Binary files /dev/null and b/tests/results/scale/2.3.0/TestScale_HTTPRoutes/memory-oss.png differ diff --git a/tests/results/scale/2.3.0/TestScale_HTTPRoutes/memory-plus.png b/tests/results/scale/2.3.0/TestScale_HTTPRoutes/memory-plus.png new file mode 100644 index 0000000000..34452705b1 Binary files /dev/null and b/tests/results/scale/2.3.0/TestScale_HTTPRoutes/memory-plus.png differ diff --git a/tests/results/scale/2.3.0/TestScale_HTTPRoutes/ttr-oss.png b/tests/results/scale/2.3.0/TestScale_HTTPRoutes/ttr-oss.png new file mode 100644 index 0000000000..1ada588445 Binary files /dev/null and b/tests/results/scale/2.3.0/TestScale_HTTPRoutes/ttr-oss.png differ diff --git a/tests/results/scale/2.3.0/TestScale_HTTPRoutes/ttr-plus.png b/tests/results/scale/2.3.0/TestScale_HTTPRoutes/ttr-plus.png new file mode 100644 index 0000000000..328f8fd672 Binary files /dev/null and b/tests/results/scale/2.3.0/TestScale_HTTPRoutes/ttr-plus.png differ diff --git 
a/tests/results/scale/2.3.0/TestScale_HTTPSListeners/cpu-oss.png b/tests/results/scale/2.3.0/TestScale_HTTPSListeners/cpu-oss.png new file mode 100644 index 0000000000..867f70dfde Binary files /dev/null and b/tests/results/scale/2.3.0/TestScale_HTTPSListeners/cpu-oss.png differ diff --git a/tests/results/scale/2.3.0/TestScale_HTTPSListeners/cpu-plus.png b/tests/results/scale/2.3.0/TestScale_HTTPSListeners/cpu-plus.png new file mode 100644 index 0000000000..e9fc43159b Binary files /dev/null and b/tests/results/scale/2.3.0/TestScale_HTTPSListeners/cpu-plus.png differ diff --git a/tests/results/scale/2.3.0/TestScale_HTTPSListeners/memory-oss.png b/tests/results/scale/2.3.0/TestScale_HTTPSListeners/memory-oss.png new file mode 100644 index 0000000000..ee3903b99e Binary files /dev/null and b/tests/results/scale/2.3.0/TestScale_HTTPSListeners/memory-oss.png differ diff --git a/tests/results/scale/2.3.0/TestScale_HTTPSListeners/memory-plus.png b/tests/results/scale/2.3.0/TestScale_HTTPSListeners/memory-plus.png new file mode 100644 index 0000000000..a3fc71bfe6 Binary files /dev/null and b/tests/results/scale/2.3.0/TestScale_HTTPSListeners/memory-plus.png differ diff --git a/tests/results/scale/2.3.0/TestScale_HTTPSListeners/ngf-oss.log b/tests/results/scale/2.3.0/TestScale_HTTPSListeners/ngf-oss.log new file mode 100644 index 0000000000..41c9b70d4f --- /dev/null +++ b/tests/results/scale/2.3.0/TestScale_HTTPSListeners/ngf-oss.log @@ -0,0 +1,2 @@ +{"level":"debug","ts":"2025-12-12T23:28:29Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gatewayclasses.gateway.networking.k8s.io \"nginx\": the object has been modified; please apply your changes to the latest version and try again","namespace":"","name":"nginx","kind":"GatewayClass"} +{"level":"debug","ts":"2025-12-12T23:29:36Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} diff --git a/tests/results/scale/2.3.0/TestScale_HTTPSListeners/ngf-plus.log b/tests/results/scale/2.3.0/TestScale_HTTPSListeners/ngf-plus.log new file mode 100644 index 0000000000..9ab8243063 --- /dev/null +++ b/tests/results/scale/2.3.0/TestScale_HTTPSListeners/ngf-plus.log @@ -0,0 +1 @@ +{"level":"debug","ts":"2025-12-12T23:02:01Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} diff --git a/tests/results/scale/2.3.0/TestScale_HTTPSListeners/ttr-oss.png b/tests/results/scale/2.3.0/TestScale_HTTPSListeners/ttr-oss.png new file mode 100644 index 0000000000..5983a32a1f Binary files /dev/null and b/tests/results/scale/2.3.0/TestScale_HTTPSListeners/ttr-oss.png differ diff --git a/tests/results/scale/2.3.0/TestScale_HTTPSListeners/ttr-plus.png b/tests/results/scale/2.3.0/TestScale_HTTPSListeners/ttr-plus.png new file mode 100644 index 0000000000..3ce844c26c Binary files /dev/null and b/tests/results/scale/2.3.0/TestScale_HTTPSListeners/ttr-plus.png differ diff --git a/tests/results/scale/2.3.0/TestScale_Listeners/cpu-oss.png b/tests/results/scale/2.3.0/TestScale_Listeners/cpu-oss.png new file mode 100644 index 
0000000000..9cb4b72bf6 Binary files /dev/null and b/tests/results/scale/2.3.0/TestScale_Listeners/cpu-oss.png differ diff --git a/tests/results/scale/2.3.0/TestScale_Listeners/cpu-plus.png b/tests/results/scale/2.3.0/TestScale_Listeners/cpu-plus.png new file mode 100644 index 0000000000..88bb3e3b2b Binary files /dev/null and b/tests/results/scale/2.3.0/TestScale_Listeners/cpu-plus.png differ diff --git a/tests/results/scale/2.3.0/TestScale_Listeners/memory-oss.png b/tests/results/scale/2.3.0/TestScale_Listeners/memory-oss.png new file mode 100644 index 0000000000..af1fa780ee Binary files /dev/null and b/tests/results/scale/2.3.0/TestScale_Listeners/memory-oss.png differ diff --git a/tests/results/scale/2.3.0/TestScale_Listeners/memory-plus.png b/tests/results/scale/2.3.0/TestScale_Listeners/memory-plus.png new file mode 100644 index 0000000000..1df174f5b6 Binary files /dev/null and b/tests/results/scale/2.3.0/TestScale_Listeners/memory-plus.png differ diff --git a/tests/results/scale/2.3.0/TestScale_Listeners/ngf-oss.log b/tests/results/scale/2.3.0/TestScale_Listeners/ngf-oss.log new file mode 100644 index 0000000000..5edf8622b5 --- /dev/null +++ b/tests/results/scale/2.3.0/TestScale_Listeners/ngf-oss.log @@ -0,0 +1 @@ +{"level":"debug","ts":"2025-12-12T23:26:36Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} diff --git a/tests/results/scale/2.3.0/TestScale_Listeners/ngf-plus.log b/tests/results/scale/2.3.0/TestScale_Listeners/ngf-plus.log new file mode 100644 index 0000000000..3faa46d150 --- /dev/null +++ b/tests/results/scale/2.3.0/TestScale_Listeners/ngf-plus.log @@ -0,0 +1,2 @@ +{"level":"debug","ts":"2025-12-12T22:59:12Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} +{"level":"debug","ts":"2025-12-12T22:59:32Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} diff --git a/tests/results/scale/2.3.0/TestScale_Listeners/ttr-oss.png b/tests/results/scale/2.3.0/TestScale_Listeners/ttr-oss.png new file mode 100644 index 0000000000..6396de6705 Binary files /dev/null and b/tests/results/scale/2.3.0/TestScale_Listeners/ttr-oss.png differ diff --git a/tests/results/scale/2.3.0/TestScale_Listeners/ttr-plus.png b/tests/results/scale/2.3.0/TestScale_Listeners/ttr-plus.png new file mode 100644 index 0000000000..e40649b32d Binary files /dev/null and b/tests/results/scale/2.3.0/TestScale_Listeners/ttr-plus.png differ diff --git a/tests/results/scale/2.3.0/TestScale_UpstreamServers/cpu-oss.png b/tests/results/scale/2.3.0/TestScale_UpstreamServers/cpu-oss.png new file mode 100644 index 0000000000..393e376803 Binary files /dev/null and b/tests/results/scale/2.3.0/TestScale_UpstreamServers/cpu-oss.png differ diff --git a/tests/results/scale/2.3.0/TestScale_UpstreamServers/cpu-plus.png 
b/tests/results/scale/2.3.0/TestScale_UpstreamServers/cpu-plus.png new file mode 100644 index 0000000000..8155de88d2 Binary files /dev/null and b/tests/results/scale/2.3.0/TestScale_UpstreamServers/cpu-plus.png differ diff --git a/tests/results/scale/2.3.0/TestScale_UpstreamServers/memory-oss.png b/tests/results/scale/2.3.0/TestScale_UpstreamServers/memory-oss.png new file mode 100644 index 0000000000..af0cfe6a1c Binary files /dev/null and b/tests/results/scale/2.3.0/TestScale_UpstreamServers/memory-oss.png differ diff --git a/tests/results/scale/2.3.0/TestScale_UpstreamServers/memory-plus.png b/tests/results/scale/2.3.0/TestScale_UpstreamServers/memory-plus.png new file mode 100644 index 0000000000..a5c8fcf349 Binary files /dev/null and b/tests/results/scale/2.3.0/TestScale_UpstreamServers/memory-plus.png differ diff --git a/tests/results/scale/2.3.0/TestScale_UpstreamServers/ngf-oss.log b/tests/results/scale/2.3.0/TestScale_UpstreamServers/ngf-oss.log new file mode 100644 index 0000000000..5ec125727c --- /dev/null +++ b/tests/results/scale/2.3.0/TestScale_UpstreamServers/ngf-oss.log @@ -0,0 +1 @@ +{"level":"debug","ts":"2025-12-12T23:41:17Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} diff --git a/tests/results/scale/2.3.0/TestScale_UpstreamServers/ngf-plus.log b/tests/results/scale/2.3.0/TestScale_UpstreamServers/ngf-plus.log new file mode 100644 index 0000000000..6a55cec212 --- /dev/null +++ b/tests/results/scale/2.3.0/TestScale_UpstreamServers/ngf-plus.log @@ -0,0 +1,2 @@ +{"level":"debug","ts":"2025-12-12T23:14:28Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gatewayclasses.gateway.networking.k8s.io \"nginx\": the object has been modified; please apply your changes to the latest version and try again","namespace":"","name":"nginx","kind":"GatewayClass"} +{"level":"debug","ts":"2025-12-12T23:14:38Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} diff --git a/tests/results/zero-downtime-scale/2.3.0/2.3.0-oss.md b/tests/results/zero-downtime-scale/2.3.0/2.3.0-oss.md new file mode 100644 index 0000000000..6306125b36 --- /dev/null +++ b/tests/results/zero-downtime-scale/2.3.0/2.3.0-oss.md @@ -0,0 +1,285 @@ +# Results + +## Test environment + +NGINX Plus: false + +NGINX Gateway Fabric: + +- Commit: 89aee48bf6e660a828ffd32ca35fc7f52e358e00 +- Date: 2025-12-12T20:04:38Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.33.5-gke.1308000 +- vCPUs per node: 16 +- RAM per node: 65851520Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- Latency increase overall + +## One NGINX Pod runs per node Test Results + +### Scale Up Gradually + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 1.464ms +Latencies [min, mean, 50, 90, 95, 99, max] 779.563µs, 1.456ms, 1.396ms, 1.698ms, 1.815ms, 2.282ms, 44.725ms +Bytes In [total, mean] 4595990, 153.20 +Bytes Out 
[total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-affinity-https-oss.png](gradual-scale-up-affinity-https-oss.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 1.298ms +Latencies [min, mean, 50, 90, 95, 99, max] 720.788µs, 1.387ms, 1.339ms, 1.639ms, 1.763ms, 2.216ms, 27.813ms +Bytes In [total, mean] 4776056, 159.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-affinity-http-oss.png](gradual-scale-up-affinity-http-oss.png) + +### Scale Down Gradually + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 48000, 100.00, 100.00 +Duration [total, attack, wait] 8m0s, 8m0s, 955.665µs +Latencies [min, mean, 50, 90, 95, 99, max] 747.455µs, 1.321ms, 1.321ms, 1.5ms, 1.566ms, 1.823ms, 41.076ms +Bytes In [total, mean] 7641697, 159.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:48000 +Error Set: +``` + +![gradual-scale-down-affinity-http-oss.png](gradual-scale-down-affinity-http-oss.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 48000, 100.00, 100.00 +Duration [total, attack, wait] 8m0s, 8m0s, 1.426ms +Latencies [min, mean, 50, 90, 95, 99, max] 787.618µs, 1.39ms, 1.376ms, 1.556ms, 1.623ms, 1.955ms, 40.524ms +Bytes In [total, mean] 7353670, 153.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:48000 +Error Set: +``` + +![gradual-scale-down-affinity-https-oss.png](gradual-scale-down-affinity-https-oss.png) + +### Scale Up Abruptly + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.348ms +Latencies [min, mean, 50, 90, 95, 99, max] 741.955µs, 1.33ms, 1.321ms, 1.521ms, 1.591ms, 1.851ms, 59.155ms +Bytes In [total, mean] 1910356, 159.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-affinity-http-oss.png](abrupt-scale-up-affinity-http-oss.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.518ms +Latencies [min, mean, 50, 90, 95, 99, max] 819.53µs, 1.415ms, 1.384ms, 1.6ms, 1.687ms, 1.98ms, 59.923ms +Bytes In [total, mean] 1838355, 153.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-affinity-https-oss.png](abrupt-scale-up-affinity-https-oss.png) + +### Scale Down Abruptly + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.512ms +Latencies [min, mean, 50, 90, 95, 99, max] 809.533µs, 1.362ms, 1.349ms, 1.52ms, 1.58ms, 1.761ms, 12.114ms +Bytes In [total, mean] 1838386, 153.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-affinity-https-oss.png](abrupt-scale-down-affinity-https-oss.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.339ms +Latencies [min, mean, 50, 90, 95, 99, max] 783.615µs, 1.301ms, 1.307ms, 1.483ms, 
1.543ms, 1.703ms, 7.6ms +Bytes In [total, mean] 1910421, 159.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-affinity-http-oss.png](abrupt-scale-down-affinity-http-oss.png) + +## Multiple NGINX Pods run per node Test Results + +### Scale Up Gradually + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 1.537ms +Latencies [min, mean, 50, 90, 95, 99, max] 744.494µs, 1.356ms, 1.326ms, 1.534ms, 1.622ms, 2.045ms, 36.468ms +Bytes In [total, mean] 4626044, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-https-oss.png](gradual-scale-up-https-oss.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 1.319ms +Latencies [min, mean, 50, 90, 95, 99, max] 703.359µs, 1.298ms, 1.284ms, 1.477ms, 1.555ms, 1.991ms, 32.196ms +Bytes In [total, mean] 4796995, 159.90 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-http-oss.png](gradual-scale-up-http-oss.png) + +### Scale Down Gradually + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 96000, 100.00, 100.00 +Duration [total, attack, wait] 16m0s, 16m0s, 1.55ms +Latencies [min, mean, 50, 90, 95, 99, max] 693.199µs, 1.306ms, 1.293ms, 1.494ms, 1.571ms, 1.875ms, 45.651ms +Bytes In [total, mean] 15350348, 159.90 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:96000 +Error Set: +``` + +![gradual-scale-down-http-oss.png](gradual-scale-down-http-oss.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 96000, 100.00, 100.00 +Duration [total, attack, wait] 16m0s, 16m0s, 1.528ms +Latencies [min, mean, 50, 90, 95, 99, max] 757.281µs, 1.362ms, 1.339ms, 1.534ms, 1.616ms, 1.946ms, 47.495ms +Bytes In [total, mean] 14803358, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:96000 +Error Set: +``` + +![gradual-scale-down-https-oss.png](gradual-scale-down-https-oss.png) + +### Scale Up Abruptly + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.309ms +Latencies [min, mean, 50, 90, 95, 99, max] 811.146µs, 1.421ms, 1.359ms, 1.566ms, 1.64ms, 1.938ms, 127.273ms +Bytes In [total, mean] 1850483, 154.21 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-https-oss.png](abrupt-scale-up-https-oss.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.152ms +Latencies [min, mean, 50, 90, 95, 99, max] 733.236µs, 1.328ms, 1.294ms, 1.47ms, 1.528ms, 1.77ms, 127.086ms +Bytes In [total, mean] 1918812, 159.90 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-http-oss.png](abrupt-scale-up-http-oss.png) + +### Scale Down Abruptly + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.285ms +Latencies [min, mean, 50, 90, 
95, 99, max] 789.904µs, 1.417ms, 1.362ms, 1.551ms, 1.622ms, 1.859ms, 217.854ms +Bytes In [total, mean] 1850381, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-https-oss.png](abrupt-scale-down-https-oss.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.336ms +Latencies [min, mean, 50, 90, 95, 99, max] 762.121µs, 1.374ms, 1.332ms, 1.52ms, 1.59ms, 1.818ms, 217.464ms +Bytes In [total, mean] 1918767, 159.90 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-http-oss.png](abrupt-scale-down-http-oss.png) diff --git a/tests/results/zero-downtime-scale/2.3.0/2.3.0-plus.md b/tests/results/zero-downtime-scale/2.3.0/2.3.0-plus.md new file mode 100644 index 0000000000..02811a542a --- /dev/null +++ b/tests/results/zero-downtime-scale/2.3.0/2.3.0-plus.md @@ -0,0 +1,289 @@ +# Results + +## Test environment + +NGINX Plus: true + +NGINX Gateway Fabric: + +- Commit: 89aee48bf6e660a828ffd32ca35fc7f52e358e00 +- Date: 2025-12-12T20:04:38Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.33.5-gke.1308000 +- vCPUs per node: 16 +- RAM per node: 65851520Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- Sections of dropped traffic when abruptly scaling with NGINX Plus. These weren't present in the most recent `edge` run, so could be intermittent one-off. +- Latency increase overall + +## One NGINX Pod runs per node Test Results + +### Scale Up Gradually + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 991.258µs +Latencies [min, mean, 50, 90, 95, 99, max] 680.726µs, 1.238ms, 1.228ms, 1.438ms, 1.517ms, 1.798ms, 22.376ms +Bytes In [total, mean] 4626086, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-affinity-https-plus.png](gradual-scale-up-affinity-https-plus.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 1.197ms +Latencies [min, mean, 50, 90, 95, 99, max] 659.439µs, 1.179ms, 1.174ms, 1.378ms, 1.445ms, 1.678ms, 21.74ms +Bytes In [total, mean] 4802954, 160.10 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-affinity-http-plus.png](gradual-scale-up-affinity-http-plus.png) + +### Scale Down Gradually + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 48000, 100.00, 100.00 +Duration [total, attack, wait] 8m0s, 8m0s, 1.573ms +Latencies [min, mean, 50, 90, 95, 99, max] 631.557µs, 1.237ms, 1.223ms, 1.455ms, 1.534ms, 1.77ms, 36.655ms +Bytes In [total, mean] 7684890, 160.10 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:48000 +Error Set: +``` + +![gradual-scale-down-affinity-http-plus.png](gradual-scale-down-affinity-http-plus.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 48000, 100.00, 100.00 +Duration [total, attack, wait] 8m0s, 8m0s, 1.786ms +Latencies [min, mean, 50, 90, 95, 99, max] 698.854µs, 1.291ms, 1.271ms, 1.512ms, 1.593ms, 1.823ms, 39.156ms +Bytes In 
[total, mean] 7401761, 154.20
+Bytes Out [total, mean] 0, 0.00
+Success [ratio] 100.00%
+Status Codes [code:count] 200:48000
+Error Set:
+```
+
+![gradual-scale-down-affinity-https-plus.png](gradual-scale-down-affinity-https-plus.png)
+
+### Scale Up Abruptly
+
+#### Test: Send http /coffee traffic
+
+```text
+Requests [total, rate, throughput] 12000, 100.01, 83.34
+Duration [total, attack, wait] 2m0s, 2m0s, 1.401ms
+Latencies [min, mean, 50, 90, 95, 99, max] 482.708µs, 1.184ms, 1.233ms, 1.504ms, 1.583ms, 1.809ms, 11.639ms
+Bytes In [total, mean] 1901033, 158.42
+Bytes Out [total, mean] 0, 0.00
+Success [ratio] 83.33%
+Status Codes [code:count] 200:10000 502:2000
+Error Set:
+502 Bad Gateway
+```
+
+![abrupt-scale-up-affinity-http-plus.png](abrupt-scale-up-affinity-http-plus.png)
+
+#### Test: Send https /tea traffic
+
+```text
+Requests [total, rate, throughput] 12000, 100.01, 100.01
+Duration [total, attack, wait] 2m0s, 2m0s, 1.151ms
+Latencies [min, mean, 50, 90, 95, 99, max] 730.737µs, 1.306ms, 1.283ms, 1.547ms, 1.635ms, 1.846ms, 11.673ms
+Bytes In [total, mean] 1850415, 154.20
+Bytes Out [total, mean] 0, 0.00
+Success [ratio] 100.00%
+Status Codes [code:count] 200:12000
+Error Set:
+```
+
+![abrupt-scale-up-affinity-https-plus.png](abrupt-scale-up-affinity-https-plus.png)
+
+### Scale Down Abruptly
+
+#### Test: Send https /tea traffic
+
+```text
+Requests [total, rate, throughput] 12000, 100.01, 100.01
+Duration [total, attack, wait] 2m0s, 2m0s, 1.598ms
+Latencies [min, mean, 50, 90, 95, 99, max] 787.786µs, 1.412ms, 1.386ms, 1.64ms, 1.711ms, 1.91ms, 59.855ms
+Bytes In [total, mean] 1850435, 154.20
+Bytes Out [total, mean] 0, 0.00
+Success [ratio] 100.00%
+Status Codes [code:count] 200:12000
+Error Set:
+```
+
+![abrupt-scale-down-affinity-https-plus.png](abrupt-scale-down-affinity-https-plus.png)
+
+#### Test: Send http /coffee traffic
+
+```text
+Requests [total, rate, throughput] 12000, 100.01, 92.18
+Duration [total, attack, wait] 2m0s, 2m0s, 1.36ms
+Latencies [min, mean, 50, 90, 95, 99, max] 507.819µs, 1.281ms, 1.294ms, 1.57ms, 1.652ms, 1.833ms, 60.126ms
+Bytes In [total, mean] 1911756, 159.31
+Bytes Out [total, mean] 0, 0.00
+Success [ratio] 92.17%
+Status Codes [code:count] 200:11061 502:939
+Error Set:
+502 Bad Gateway
+```
+
+![abrupt-scale-down-affinity-http-plus.png](abrupt-scale-down-affinity-http-plus.png)
+
+## Multiple NGINX Pods run per node Test Results
+
+### Scale Up Gradually
+
+#### Test: Send http /coffee traffic
+
+```text
+Requests [total, rate, throughput] 30000, 100.00, 100.00
+Duration [total, attack, wait] 5m0s, 5m0s, 1.43ms
+Latencies [min, mean, 50, 90, 95, 99, max] 647.201µs, 1.171ms, 1.161ms, 1.359ms, 1.43ms, 1.767ms, 26.335ms
+Bytes In [total, mean] 4818099, 160.60
+Bytes Out [total, mean] 0, 0.00
+Success [ratio] 100.00%
+Status Codes [code:count] 200:30000
+Error Set:
+```
+
+![gradual-scale-up-http-plus.png](gradual-scale-up-http-plus.png)
+
+#### Test: Send https /tea traffic
+
+```text
+Requests [total, rate, throughput] 30000, 100.00, 100.00
+Duration [total, attack, wait] 5m0s, 5m0s, 1.251ms
+Latencies [min, mean, 50, 90, 95, 99, max] 695.593µs, 1.224ms, 1.211ms, 1.413ms, 1.491ms, 1.776ms, 26.121ms
+Bytes In [total, mean] 4646968, 154.90
+Bytes Out [total, mean] 0, 0.00
+Success [ratio] 100.00%
+Status Codes [code:count] 200:30000
+Error Set:
+```
+
+![gradual-scale-up-https-plus.png](gradual-scale-up-https-plus.png)
+
+### Scale Down Gradually
+
+#### Test: Send http /coffee traffic
+
+```text
+Requests [total, rate, throughput] 96000, 100.00, 100.00
+Duration [total, attack, wait] 16m0s, 16m0s, 1.168ms
+Latencies [min, mean, 50, 90, 95, 99, max] 638.326µs, 1.196ms, 1.185ms, 1.395ms, 1.464ms, 1.716ms, 51.332ms
+Bytes In [total, mean] 15417583, 160.60
+Bytes Out [total, mean] 0, 0.00
+Success [ratio] 100.00%
+Status Codes [code:count] 200:96000
+Error Set:
+```
+
+![gradual-scale-down-http-plus.png](gradual-scale-down-http-plus.png)
+
+#### Test: Send https /tea traffic
+
+```text
+Requests [total, rate, throughput] 96000, 100.00, 100.00
+Duration [total, attack, wait] 16m0s, 16m0s, 958.252µs
+Latencies [min, mean, 50, 90, 95, 99, max] 688.331µs, 1.27ms, 1.258ms, 1.463ms, 1.535ms, 1.786ms, 38.16ms
+Bytes In [total, mean] 14870456, 154.90
+Bytes Out [total, mean] 0, 0.00
+Success [ratio] 100.00%
+Status Codes [code:count] 200:96000
+Error Set:
+```
+
+![gradual-scale-down-https-plus.png](gradual-scale-down-https-plus.png)
+
+### Scale Up Abruptly
+
+#### Test: Send https /tea traffic
+
+```text
+Requests [total, rate, throughput] 12000, 100.01, 100.01
+Duration [total, attack, wait] 2m0s, 2m0s, 1.684ms
+Latencies [min, mean, 50, 90, 95, 99, max] 707.159µs, 1.345ms, 1.3ms, 1.619ms, 1.72ms, 1.983ms, 24.916ms
+Bytes In [total, mean] 1858834, 154.90
+Bytes Out [total, mean] 0, 0.00
+Success [ratio] 100.00%
+Status Codes [code:count] 200:12000
+Error Set:
+```
+
+![abrupt-scale-up-https-plus.png](abrupt-scale-up-https-plus.png)
+
+#### Test: Send http /coffee traffic
+
+```text
+Requests [total, rate, throughput] 12000, 100.01, 100.01
+Duration [total, attack, wait] 2m0s, 2m0s, 1.34ms
+Latencies [min, mean, 50, 90, 95, 99, max] 713.635µs, 1.307ms, 1.267ms, 1.59ms, 1.692ms, 1.932ms, 20.645ms
+Bytes In [total, mean] 1927152, 160.60
+Bytes Out [total, mean] 0, 0.00
+Success [ratio] 100.00%
+Status Codes [code:count] 200:12000
+Error Set:
+```
+
+![abrupt-scale-up-http-plus.png](abrupt-scale-up-http-plus.png)
+
+### Scale Down Abruptly
+
+#### Test: Send https /tea traffic
+
+```text
+Requests [total, rate, throughput] 12000, 100.01, 100.01
+Duration [total, attack, wait] 2m0s, 2m0s, 1.444ms
+Latencies [min, mean, 50, 90, 95, 99, max] 775.553µs, 1.714ms, 1.445ms, 1.773ms, 1.871ms, 2.264ms, 252.329ms
+Bytes In [total, mean] 1858855, 154.90
+Bytes Out [total, mean] 0, 0.00
+Success [ratio] 100.00%
+Status Codes [code:count] 200:12000
+Error Set:
+```
+
+![abrupt-scale-down-https-plus.png](abrupt-scale-down-https-plus.png)
+
+#### Test: Send http /coffee traffic
+
+```text
+Requests [total, rate, throughput] 12000, 100.01, 100.00
+Duration [total, attack, wait] 2m0s, 2m0s, 1.331ms
+Latencies [min, mean, 50, 90, 95, 99, max] 705.454µs, 1.649ms, 1.379ms, 1.725ms, 1.832ms, 2.241ms, 252.445ms
+Bytes In [total, mean] 1927165, 160.60
+Bytes Out [total, mean] 0, 0.00
+Success [ratio] 99.99%
+Status Codes [code:count] 200:11999 502:1
+Error Set:
+502 Bad Gateway
+```
+
+![abrupt-scale-down-http-plus.png](abrupt-scale-down-http-plus.png)
diff --git a/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-affinity-http-oss.png b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-affinity-http-oss.png
new file mode 100644
index 0000000000..383c9453aa
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-affinity-http-oss.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-affinity-http-plus.png b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-affinity-http-plus.png
new file mode 100644
index 0000000000..917e9e70d6
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-affinity-http-plus.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-affinity-https-oss.png b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-affinity-https-oss.png
new file mode 100644
index 0000000000..383c9453aa
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-affinity-https-oss.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-affinity-https-plus.png b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-affinity-https-plus.png
new file mode 100644
index 0000000000..afca2ba10c
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-affinity-https-plus.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-http-oss.png b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-http-oss.png
new file mode 100644
index 0000000000..c2e8d5dfde
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-http-oss.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-http-plus.png b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-http-plus.png
new file mode 100644
index 0000000000..c2723628a8
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-http-plus.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-https-oss.png b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-https-oss.png
new file mode 100644
index 0000000000..c2e8d5dfde
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-https-oss.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-https-plus.png b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-https-plus.png
new file mode 100644
index 0000000000..5e50741b7a
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-down-https-plus.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-affinity-http-oss.png b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-affinity-http-oss.png
new file mode 100644
index 0000000000..a932b0e84a
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-affinity-http-oss.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-affinity-http-plus.png b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-affinity-http-plus.png
new file mode 100644
index 0000000000..a86d9501f6
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-affinity-http-plus.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-affinity-https-oss.png b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-affinity-https-oss.png
new file mode 100644
index 0000000000..a932b0e84a
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-affinity-https-oss.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-affinity-https-plus.png b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-affinity-https-plus.png
new file mode 100644
index 0000000000..a2c0b7832c
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-affinity-https-plus.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-http-oss.png b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-http-oss.png
new file mode 100644
index 0000000000..24d77930cf
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-http-oss.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-http-plus.png b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-http-plus.png
new file mode 100644
index 0000000000..0a3a08a480
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-http-plus.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-https-oss.png b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-https-oss.png
new file mode 100644
index 0000000000..24d77930cf
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-https-oss.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-https-plus.png b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-https-plus.png
new file mode 100644
index 0000000000..0a3a08a480
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/abrupt-scale-up-https-plus.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-affinity-http-oss.png b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-affinity-http-oss.png
new file mode 100644
index 0000000000..ed39a761cb
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-affinity-http-oss.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-affinity-http-plus.png b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-affinity-http-plus.png
new file mode 100644
index 0000000000..29ff8d815b
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-affinity-http-plus.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-affinity-https-oss.png b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-affinity-https-oss.png
new file mode 100644
index 0000000000..ed39a761cb
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-affinity-https-oss.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-affinity-https-plus.png b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-affinity-https-plus.png
new file mode 100644
index 0000000000..29ff8d815b
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-affinity-https-plus.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-http-oss.png b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-http-oss.png
new file mode 100644
index 0000000000..ce80c74c51
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-http-oss.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-http-plus.png b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-http-plus.png
new file mode 100644
index 0000000000..b28b417525
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-http-plus.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-https-oss.png b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-https-oss.png
new file mode 100644
index 0000000000..ce80c74c51
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-https-oss.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-https-plus.png b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-https-plus.png
new file mode 100644
index 0000000000..b28b417525
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-down-https-plus.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-affinity-http-oss.png b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-affinity-http-oss.png
new file mode 100644
index 0000000000..1f2856f7a6
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-affinity-http-oss.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-affinity-http-plus.png b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-affinity-http-plus.png
new file mode 100644
index 0000000000..71cd839792
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-affinity-http-plus.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-affinity-https-oss.png b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-affinity-https-oss.png
new file mode 100644
index 0000000000..1f2856f7a6
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-affinity-https-oss.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-affinity-https-plus.png b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-affinity-https-plus.png
new file mode 100644
index 0000000000..71cd839792
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-affinity-https-plus.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-http-oss.png b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-http-oss.png
new file mode 100644
index 0000000000..0d11508d68
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-http-oss.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-http-plus.png b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-http-plus.png
new file mode 100644
index 0000000000..ddf70cd226
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-http-plus.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-https-oss.png b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-https-oss.png
new file mode 100644
index 0000000000..0d11508d68
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-https-oss.png differ
diff --git a/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-https-plus.png b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-https-plus.png
new file mode 100644
index 0000000000..ddf70cd226
Binary files /dev/null and b/tests/results/zero-downtime-scale/2.3.0/gradual-scale-up-https-plus.png differ