
Commit fca2e3f

alexeyr and claude committed
Fix benchmark workflow non-fast-forward error
The benchmark action was being called multiple times (once for Core, once for Pro), causing Git conflicts when trying to create multiple commits to the benchmark-data branch.

Changes:
- Updated conversion script to support --append mode for merging results
- Merge all metrics (RPS, latencies, failure rate) into a single JSON file
- Negate RPS values so all metrics use the customSmallerIsBetter tool (higher RPS = lower negative value = better performance)
- Consolidated to ONE benchmark storage step at the end (was 2 separate steps)
- Enable auto-push conditionally (only on push to master)

This fixes the "non-fast-forward" Git error by ensuring only one commit to the benchmark-data branch per workflow run.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
1 parent aab8d26 commit fca2e3f
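The RPS-negation trick described in the commit message can be sketched in isolation. This is a minimal illustration, not the repository's code: the input hash and its values are made up, while the real script parses them from summary files in bench_results.

```ruby
require "json"

# Illustrative input: one parsed benchmark result (values are made up).
results = [{ name: "Core: /ssr", rps: 1250.0, p50: 12.4, p99: 48.1, failed: 0.0 }]

# With customSmallerIsBetter every metric must improve as it decreases,
# so RPS is stored negated while latencies and failure rate pass through.
metrics = results.flat_map do |r|
  [
    { name: "#{r[:name]} - RPS", unit: "requests/sec (negated)", value: -r[:rps] },
    { name: "#{r[:name]} - p50 latency", unit: "ms", value: r[:p50] },
    { name: "#{r[:name]} - p99 latency", unit: "ms", value: r[:p99] },
    { name: "#{r[:name]} - failed requests", unit: "%", value: r[:failed] }
  ]
end

puts JSON.pretty_generate(metrics)
```

Because a regression in any metric now means its value went up, a single tool configuration can alert on RPS drops and latency spikes alike.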

File tree

2 files changed: +58 −99 lines changed


.github/workflows/benchmark.yml

Lines changed: 18 additions & 74 deletions
@@ -302,40 +302,6 @@ jobs:
         run: |
           ruby benchmarks/convert_to_benchmark_json.rb "Core: "
 
-      - name: Store Core RPS benchmark results
-        if: env.RUN_CORE
-        uses: benchmark-action/github-action-benchmark@v1
-        with:
-          name: Core Benchmark - RPS
-          tool: customBiggerIsBetter
-          output-file-path: bench_results/benchmark_rps.json
-          gh-pages-branch: benchmark-data
-          benchmark-data-dir-path: docs/benchmarks
-          alert-threshold: '150%'
-          github-token: ${{ secrets.GITHUB_TOKEN }}
-          comment-on-alert: true
-          alert-comment-cc-users: '@alexeyr-ci2'
-          fail-on-alert: true
-          summary-always: true
-          auto-push: false
-
-      - name: Store Core latency benchmark results
-        if: env.RUN_CORE
-        uses: benchmark-action/github-action-benchmark@v1
-        with:
-          name: Core Benchmark - Latency
-          tool: customSmallerIsBetter
-          output-file-path: bench_results/benchmark_latency.json
-          gh-pages-branch: benchmark-data
-          benchmark-data-dir-path: docs/benchmarks
-          alert-threshold: '150%'
-          github-token: ${{ secrets.GITHUB_TOKEN }}
-          comment-on-alert: true
-          alert-comment-cc-users: '@alexeyr-ci2'
-          fail-on-alert: true
-          summary-always: true
-          auto-push: false
-
       - name: Upload Core benchmark results
         uses: actions/upload-artifact@v4
         if: env.RUN_CORE && always()
@@ -512,41 +478,7 @@ jobs:
       - name: Convert Pro benchmark results to JSON
         if: env.RUN_PRO
         run: |
-          ruby benchmarks/convert_to_benchmark_json.rb "Pro: "
-
-      - name: Store Pro RPS benchmark results
-        if: env.RUN_PRO
-        uses: benchmark-action/github-action-benchmark@v1
-        with:
-          name: Pro Benchmark - RPS
-          tool: customBiggerIsBetter
-          output-file-path: bench_results/benchmark_rps.json
-          gh-pages-branch: benchmark-data
-          benchmark-data-dir-path: docs/benchmarks
-          alert-threshold: '150%'
-          github-token: ${{ secrets.GITHUB_TOKEN }}
-          comment-on-alert: true
-          alert-comment-cc-users: '@alexeyr-ci2'
-          fail-on-alert: true
-          summary-always: true
-          auto-push: false
-
-      - name: Store Pro latency benchmark results
-        if: env.RUN_PRO
-        uses: benchmark-action/github-action-benchmark@v1
-        with:
-          name: Pro Benchmark - Latency
-          tool: customSmallerIsBetter
-          output-file-path: bench_results/benchmark_latency.json
-          gh-pages-branch: benchmark-data
-          benchmark-data-dir-path: docs/benchmarks
-          alert-threshold: '150%'
-          github-token: ${{ secrets.GITHUB_TOKEN }}
-          comment-on-alert: true
-          alert-comment-cc-users: '@alexeyr-ci2'
-          fail-on-alert: true
-          summary-always: true
-          auto-push: false
+          ruby benchmarks/convert_to_benchmark_json.rb "Pro: " --append
 
       - name: Upload Pro benchmark results
         uses: actions/upload-artifact@v4
@@ -566,12 +498,24 @@ jobs:
           echo "✅ Server stopped"
 
       # ============================================
-      # STEP 7: PUSH BENCHMARK DATA
+      # STEP 7: STORE BENCHMARK DATA
       # ============================================
-      - name: Push benchmark data
-        if: github.event_name == 'push' && github.ref == 'refs/heads/master'
-        run: |
-          git push 'https://github-actions:${{ secrets.GITHUB_TOKEN }}@github.com/${{ github.repository }}.git' benchmark-data:benchmark-data
+      - name: Store all benchmark results
+        if: env.RUN_CORE || env.RUN_PRO
+        uses: benchmark-action/github-action-benchmark@v1
+        with:
+          name: React on Rails Benchmarks
+          tool: customSmallerIsBetter
+          output-file-path: bench_results/benchmark.json
+          gh-pages-branch: benchmark-data
+          benchmark-data-dir-path: docs/benchmarks
+          alert-threshold: '150%'
+          github-token: ${{ secrets.GITHUB_TOKEN }}
+          comment-on-alert: true
+          alert-comment-cc-users: '@alexeyr-ci2'
+          fail-on-alert: true
+          summary-always: true
+          auto-push: ${{ github.event_name == 'push' && github.ref == 'refs/heads/master' }}
 
       # ============================================
       # STEP 8: WORKFLOW COMPLETION
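The consolidated step above hands one bench_results/benchmark.json to the customSmallerIsBetter tool, which expects a flat JSON array of name/unit/value entries. As a sketch of that contract, a hypothetical sanity check (not part of the workflow) could validate the merged file before the storage step runs:

```ruby
require "json"

# Hypothetical helper: true if every entry in the JSON array has a String
# "name", a String "unit", and a numeric "value" — the shape that
# github-action-benchmark's custom tools consume.
def valid_entry?(entry)
  entry.is_a?(Hash) &&
    entry["name"].is_a?(String) &&
    entry["unit"].is_a?(String) &&
    entry["value"].is_a?(Numeric)
end

def valid_benchmark_file?(json_text)
  entries = JSON.parse(json_text)
  entries.is_a?(Array) && entries.all? { |e| valid_entry?(e) }
rescue JSON::ParserError
  false
end
```

Such a guard would fail fast in CI if the --append merge ever produced malformed output, instead of pushing a broken file to the benchmark-data branch.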

benchmarks/convert_to_benchmark_json.rb

Lines changed: 40 additions & 25 deletions
@@ -2,17 +2,21 @@
 # frozen_string_literal: true
 
 # Converts benchmark summary files to JSON format for github-action-benchmark
-# Outputs two files:
-# - benchmark_rps.json (customBiggerIsBetter)
-# - benchmark_latency.json (customSmallerIsBetter)
+# Outputs a single file with all metrics using customSmallerIsBetter:
+# - benchmark.json (customSmallerIsBetter)
+# - RPS values are negated (so higher RPS = lower negative value = better)
+# - Latencies are kept as-is (lower is better)
+# - Failed percentage is kept as-is (lower is better)
 #
-# Usage: ruby convert_to_benchmark_json.rb [prefix]
+# Usage: ruby convert_to_benchmark_json.rb [prefix] [--append]
 #   prefix: Optional prefix for benchmark names (e.g., "Core: " or "Pro: ")
+#   --append: Append to existing benchmark.json instead of overwriting
 
 require "json"
 
 BENCH_RESULTS_DIR = "bench_results"
 PREFIX = ARGV[0] || ""
+APPEND_MODE = ARGV.include?("--append")
 
 # rubocop:disable Metrics/AbcSize, Metrics/CyclomaticComplexity, Metrics/PerceivedComplexity

@@ -88,22 +92,21 @@ def calculate_failed_percentage(status_str)
   (failed.to_f / total * 100).round(2)
 end
 
-# Convert results to customBiggerIsBetter format (for RPS)
-def to_rps_json(results)
-  results.map do |r|
-    {
-      name: "#{r[:name]} - RPS",
-      unit: "requests/sec",
-      value: r[:rps]
-    }
-  end
-end
-
-# Convert results to customSmallerIsBetter format (for latencies and failure rate)
-def to_latency_json(results)
+# Convert all results to customSmallerIsBetter format
+# RPS is negated (higher RPS = lower negative value = better)
+# Latencies and failure rates are kept as-is (lower is better)
+def to_unified_json(results)
   output = []
 
   results.each do |r|
+    # Add negated RPS (higher RPS becomes lower negative value, which is better)
+    output << {
+      name: "#{r[:name]} - RPS",
+      unit: "requests/sec (negated)",
+      value: -r[:rps]
+    }
+
+    # Add latencies (lower is better)
     output << {
       name: "#{r[:name]} - p50 latency",
       unit: "ms",
@@ -119,6 +122,8 @@ def to_latency_json(results)
       unit: "ms",
       value: r[:p99]
     }
+
+    # Add failure percentage (lower is better)
     output << {
       name: "#{r[:name]} - failed requests",
       unit: "%",
@@ -147,12 +152,22 @@ def to_latency_json(results)
   exit 0
 end
 
-# Write RPS JSON (bigger is better)
-rps_json = to_rps_json(all_results)
-File.write(File.join(BENCH_RESULTS_DIR, "benchmark_rps.json"), JSON.pretty_generate(rps_json))
-puts "Wrote #{rps_json.length} RPS metrics to benchmark_rps.json"
+# Convert current results to JSON
+new_metrics = to_unified_json(all_results)
+output_path = File.join(BENCH_RESULTS_DIR, "benchmark.json")
+
+# In append mode, merge with existing metrics
+if APPEND_MODE && File.exist?(output_path)
+  existing_metrics = JSON.parse(File.read(output_path))
+  unified_json = existing_metrics + new_metrics
+  puts "Appended #{new_metrics.length} metrics to existing #{existing_metrics.length} metrics"
+else
+  unified_json = new_metrics
+  puts "Created #{unified_json.length} new metrics"
+end
 
-# Write latency/failure JSON (smaller is better)
-latency_json = to_latency_json(all_results)
-File.write(File.join(BENCH_RESULTS_DIR, "benchmark_latency.json"), JSON.pretty_generate(latency_json))
-puts "Wrote #{latency_json.length} latency/failure metrics to benchmark_latency.json"
+# Write unified JSON (all metrics using customSmallerIsBetter with negated RPS)
+File.write(output_path, JSON.pretty_generate(unified_json))
+puts "Wrote #{unified_json.length} total metrics to benchmark.json (from #{all_results.length} benchmark results)"
+puts "  - RPS values are negated (higher RPS = lower negative value = better)"
+puts "  - Latencies and failure rates use original values (lower is better)"
