# Add performance tests #1868 (base: master)

## Conversation
### Walkthrough

Adds a CI benchmarking system: new GitHub Actions workflow, k6/Vegeta benchmark drivers (TypeScript + Ruby), shared helpers and converter, dummy-app production/start scripts and configs, tooling/ignore updates, and docs/contributing updates to run and publish Core and Pro performance tests in CI and locally.
### Sequence Diagram

```mermaid
sequenceDiagram
    participant GH as GitHub Actions
    participant Runner as CI Runner
    participant Tools as Tools (k6, Vegeta, Node, Ruby, pnpm)
    participant App as Dummy App (Rails + Node Renderer)
    participant Store as Artifacts/Registry
    GH->>Runner: trigger (workflow_dispatch / push / labeled PR)
    Runner->>Tools: install & restore caches
    Runner->>App: prepare apps (build assets / generate packs)
    Runner->>App: start prod servers (rails, node-renderer)
    Runner->>App: wait_for_server / health checks
    Runner->>Tools: run Core benchmarks (k6 per route)
    Runner->>Tools: run Pro benchmarks (Vegeta per bundle)
    Tools->>App: send HTTP requests → collect metrics
    Runner->>Runner: parse & convert summaries to benchmark JSON
    Runner->>Store: upload artifacts & publish results
    Runner->>GH: post workflow summary / status
```
**Estimated code review effort:** 🎯 4 (Complex) | ⏱️ ~50 minutes
**Pre-merge checks and finishing touches:** ✅ Passed checks (5 passed)
*Force-pushed: 980a424 → 4a01be4; ec26e73 → a616cd1*
### Code Review: PR #1868, Add Performance Tests

Thanks for this contribution! This is a solid implementation of performance testing infrastructure.

#### ✅ Strengths

#### 🐛 Critical Issues
*Force-pushed: 040c0a5 → ad56e14*
### PR Review: Add Performance Tests (#1868)

**Summary:** This PR adds comprehensive performance testing infrastructure using three industry-standard benchmarking tools (Fortio, Vegeta, and k6). Well-structured implementation addressing issue #1777.

#### Critical Issues

1. **Missing trailing newlines.** CRITICAL per CLAUDE.md: files MUST end with a newline character or CI will fail.
2. **RuboCop required.** CRITICAL per CLAUDE.md: `bundle exec rubocop` must be run and all violations fixed before pushing.

#### Potential Bugs

- **k6 JSON parsing (bench.sh:187-192)**
- **Server startup race (benchmark.yml:254)**
- **npm vs yarn (benchmark.yml:200)**

#### Code Quality Strengths

#### Suggestions

#### Test Coverage

Good: infrastructure is in place, runs on PR/push, and results are preserved as artifacts.

#### Verdict

Solid PR with high code quality. The performance testing infrastructure will be valuable.

**Required before merge:**

**Recommended:**

Great work overall!
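The trailing-newline requirement called out above is easy to check mechanically before CI runs. A minimal, hypothetical Ruby sketch (not part of this PR; `rake autofix` is the repo's actual fix):

```ruby
# Hypothetical helper, not part of this PR: report files whose last
# byte is not a newline, so they can be fixed before CI fails.
def files_missing_trailing_newline(paths)
  paths.select do |path|
    bytes = File.binread(path)
    !bytes.empty? && !bytes.end_with?("\n")
  end
end
```

Running it over something like `Dir.glob("benchmarks/**/*.{rb,sh}")` before pushing would surface any offenders.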
*Force-pushed: e350a34 → 7b9d621*
### PR Review: Add Performance Tests

Thank you for this comprehensive performance testing implementation! This is a well-structured addition to the repository. Here's my detailed feedback.

#### ✅ Strengths

#### 🐛 Potential Issues

1. **Script portability: `bc` dependency (bench.sh:21,33).** The script uses `bc` for floating-point comparisons, but `bc` is not installed in the workflow. Fix: add `bc` to the apt package installation step in the workflow, or use bash-native arithmetic for integer comparisons.

2. **Race condition in server startup (benchmark.yml:248).** The server process is backgrounded, but there is no guarantee it has started before the polling loop begins. If the server fails to start immediately, the loop might miss error messages. Suggestion: add a brief `sleep 2` before the polling loop, or capture the PID and check whether the process is still running during the timeout loop.

3. **k6 status reporting edge case (bench.sh:199-204).** If `.root_group.checks` is empty, `add` returns `null`, causing the `K6_REQS_OTHER` calculation to fail. Fix: add null handling by changing the jq expression to use `| add // 0`.

4. **Missing newline at end of files.** Per CLAUDE.md requirements, ALWAYS ensure files end with a newline character. Fix: run `rake autofix` or add trailing newlines manually.

5. **Benchmark workflow runs on every PR (benchmark.yml:57-58).** Running 30-second benchmarks with 3 tools on every PR will consume significant CI minutes and slow down PR feedback.

#### 🔍 Code Quality & Best Practices

- Security
- Performance
- Maintainability

#### 📊 Test Coverage

#### 🔧 Configuration Issues

**Puma configuration (spec/dummy/config/puma.rb:38).** The workflow sets the default to 4 workers (`WEB_CONCURRENCY: 4`), but Puma defaults to 2 if the variable is not set; the two defaults should match. Consistency check needed: verify that `WEB_CONCURRENCY=4` is intentional for CI runners with 2 CPU cores.

#### 📝 Documentation & Changelog

Per CLAUDE.md: update CHANGELOG.md for user-visible changes only (features, bug fixes, breaking changes, deprecations, performance improvements). This PR qualifies, as it adds a performance testing feature.

#### 🎯 Recommendations

#### 🎓 Overall Assessment

This is a solid, production-ready implementation of performance testing. The code quality is high, with excellent error handling and configurability.

Recommendation: request changes for the critical items (newlines, changelog, PR trigger), then approve. Score: 8/10. Great implementation, minor issues to address.
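The `add // 0` fix in item 3 guards against `null` when summing an empty list. The same defensive pattern, sketched in Ruby against an assumed k6 summary shape (this is not the PR's actual parser):

```ruby
# Sketch of null-safe aggregation over k6's summary JSON (assumed
# shape): an empty root_group.checks must sum to 0 instead of
# crashing, mirroring jq's `add // 0`.
def known_status_requests(summary)
  checks = summary.dig("root_group", "checks") || {}
  checks.values.sum { |check| check.fetch("passes", 0) }
end
```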
*Force-pushed: 7b9d621 → f2b9ef8*
### Pull Request Review: Add Performance Tests

Thank you for this comprehensive performance testing implementation! This is a valuable addition to the project. Below is my detailed review.

#### ✅ Strengths

#### 🔍 Issues and Concerns

- Critical issues
- High-priority issues
- Medium-priority issues

#### 🔒 Security Considerations

#### 🧪 Test Coverage

#### 📝 Suggested Changes

- Must fix (blocks merge)
- Should fix (high priority)
- Nice to have

#### ✅ Changelog

Per CLAUDE.md, this should be added to CHANGELOG.md, as it's a new feature.

#### 🎯 Overall Assessment

This is a solid foundation for performance testing, with good multi-tool coverage and configurability. The main blockers are CLAUDE.md compliance issues (trailing newlines, RuboCop). Once those are fixed and the workflow triggers are adjusted to avoid running on every PR, this will be ready to merge. Estimated effort to address the critical issues: ~30 minutes.

Great work overall! 🚀
*Force-pushed: c58f480 → 43f8858*
### Code Review: Add Performance Tests

Thank you for this comprehensive performance testing addition! This is a well-structured PR that addresses issue #1777.

#### ✅ Strengths

1. Excellent tool coverage
2. Robust error handling
3. Production-ready configuration

#### 🔒 Security Concerns (CRITICAL)

**SSH access feature (.github/workflows/benchmark.yml, lines 96-119).** This poses significant security risks.

#### 🐛 Potential Bugs

1. **Division-by-zero risk (spec/performance/bench.sh:195-197).** If `K6_REQS_KNOWN_STATUS` is null or empty, this could fail. Add validation.

2. **Race condition in server startup (spec/dummy/bin/prod).** The `rails s` command starts, but there is no guarantee assets are fully loaded. Consider adding a readiness check that validates asset serving.

3. **Missing error context.** When benchmarks fail, the error message does not indicate which tool failed. Consider wrapping each tool execution with proper error context.

4. **Hardcoded paths (spec/performance/bench.sh:87).** If run from the wrong directory, this could create directories in unexpected locations. Consider using `SCRIPT_DIR` to make paths relative to the script location.

#### ⚡ Performance Considerations

#### 🧪 Test Coverage (Missing)

#### 📝 Documentation (REQUIRED)

Missing per CLAUDE.md requirements.

#### 🎨 Code Quality Suggestions

#### 📋 Action Items Summary

- 🔴 High priority (before merge)
- 🟡 Medium priority (before merge)
- 🟢 Low priority (future enhancement)

#### ✨ Overall Assessment

This is a solid foundation for performance testing! The multi-tool approach and comprehensive workflow setup show good engineering practices.

Recommendation: request changes for the security and documentation issues, then approve after those are addressed. Great work on this comprehensive testing infrastructure!
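The startup-race and readiness concerns above can both be addressed by polling with a process-liveness check. A hedged sketch, assuming the server PID is captured when it is backgrounded (this is not the PR's `wait_for_server` helper):

```ruby
require "net/http"
require "uri"

# Sketch only: poll until the server answers 2xx, but bail out early
# if the backgrounded server process has already died, so a crash is
# reported immediately instead of after the full timeout.
def wait_for_server(url, pid:, timeout: 60)
  uri = URI.parse(url)
  deadline = Time.now + timeout
  while Time.now < deadline
    begin
      Process.kill(0, pid) # raises Errno::ESRCH if the process is gone
    rescue Errno::ESRCH
      return false # server crashed; no point polling further
    end
    begin
      return true if Net::HTTP.get_response(uri).is_a?(Net::HTTPSuccess)
    rescue Errno::ECONNREFUSED, Errno::ECONNRESET
      # not accepting connections yet; fall through to sleep
    end
    sleep 1
  end
  false
end
```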
*Force-pushed: be4af9e → 564b8b6*
- Add convert_to_benchmark_json.rb script to convert benchmark results to JSON format compatible with github-action-benchmark
- Track RPS (customBiggerIsBetter) and latency/failure % (customSmallerIsBetter)
- Exclude max latencies (not stable enough for regression detection)
- Alert threshold set to 150% (50% regression)
- Store benchmark data in docs/benchmarks on the master branch
- Enable job summary for all runs (PRs and pushes)

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---

The benchmark action was being called multiple times (once for Core, once for Pro), causing Git conflicts when trying to create multiple commits to the benchmark-data branch. Changes:

- Updated the conversion script to support `--append` mode for merging results
- Merge all metrics (RPS, latencies, failure rate) into a single JSON file
- Negate RPS values so all metrics use the customSmallerIsBetter tool (higher RPS = lower negative value = better performance)
- Consolidated to ONE benchmark storage step at the end (was 2 separate steps)
- Enable auto-push conditionally (only on push to master)

This fixes the "non-fast-forward" Git error by ensuring only one commit to the benchmark-data branch per workflow run.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
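The RPS-negation trick described above can be illustrated with a small sketch (the field names here are illustrative, not the exact schema of convert_to_benchmark_json.rb):

```ruby
# Sketch of the negation described above: github-action-benchmark's
# customSmallerIsBetter treats lower values as better, so RPS (where
# higher is better) is stored negated; latencies pass through as-is.
def to_benchmark_entries(results)
  results.map do |r|
    value = r[:unit] == "req/s" ? -r[:value] : r[:value]
    { "name" => r[:name], "unit" => r[:unit], "value" => value }
  end
end
```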
*Force-pushed: fca2e3f → 3b6c790*
### Code Review: Performance Testing Infrastructure

I've reviewed PR #1868 implementing the performance testing framework. This is substantial and well-structured. Here's my feedback.

#### ✅ Strengths

- Architecture & design
- Code quality

#### 🔍 Issues & Concerns

1. **Security: password reading (HIGH)** (benchmarks/bench-node-renderer.rb:28)
2. **Path hardcoding:** multiple locations assume repo-root execution
3. **Port check race** (.github/workflows/benchmark.yml:323-334)
4. **Missing tests:** benchmark scripts lack unit tests
5. **Puma config difference:** Core has a conditional production config; Pro always enables workers

#### 🎯 Recommendations

- Performance
- Best practices

#### 📊 Overall Assessment

Excellent work! The infrastructure is well-architected, documented, and production-ready. The issues are mostly minor improvements, except for password handling and path assumptions; none are blockers.

Recommendation: ✅ Approve with suggested improvements. Great job! Very valuable for regression detection. 🎉
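The path-hardcoding concern above is typically fixed by anchoring paths to the script's own directory rather than the current working directory. A minimal sketch (the `results` directory name is illustrative, not the PR's layout):

```ruby
# Sketch: resolve output paths relative to this script's directory
# (__dir__), so running the benchmark from any working directory
# writes to the same place.
RESULTS_DIR = File.expand_path("results", __dir__)

def results_path(filename)
  File.join(RESULTS_DIR, filename)
end
```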
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
react_on_rails_pro/spec/dummy/config/puma.rb (1)
**1-55:** Fix spacing violation on line 14. Line 14 has excessive spaces between `port` and `ENV.fetch`: `port  ENV.fetch(...)` should have standard single spacing like the other similar lines (16, 27). Run `bundle exec rubocop --auto-correct-all` to fix this and any other violations before committing.
♻️ Duplicate comments (2)
benchmarks/bench-node-renderer.rb (2)
**197-214:** Consider using safer command execution patterns for Vegeta. While the test case names are currently hardcoded and safe, the `system` calls use string interpolation with shell redirection (e.g., `> #{vegeta_bin}`). For improved safety and future-proofing, consider using the array form of `system()` with explicit file redirection. This concern was already raised in a previous review comment for lines 197-214.
**261:** Consider more robust BASE_URL parsing. The current parsing using `BASE_URL.split(":").first` and `.last.to_i` could fail with IPv6 addresses or unexpected formats. Consider using URI parsing for better robustness. This concern was already raised in a previous review comment for line 261.
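A sketch of the URI-based parsing suggested here, replacing the colon-splitting (the helper name is illustrative):

```ruby
require "uri"

# Sketch: URI handles schemes, default ports, and IPv6 literals that
# naive colon-splitting gets wrong (e.g. "http://[::1]:3001").
def host_and_port(base_url)
  uri = URI.parse(base_url)
  [uri.host, uri.port]
end
```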
🧹 Nitpick comments (6)
react_on_rails_pro/spec/dummy/bin/prod-assets (1)

**1-9:** Consider adding error handling flags for production scripts. The script lacks bash error-handling flags that would catch failures early. For production asset builds, consider adding these standard safety flags:

```diff
 #!/usr/bin/env bash
+set -euo pipefail
 export NODE_ENV=production
 export RAILS_ENV=production
```

This ensures:

- `set -e`: exit immediately if any command fails
- `set -u`: treat undefined variables as errors
- `set -o pipefail`: catch failures in piped commands

react_on_rails/spec/dummy/bin/prod-assets (1)

**1-2:** Consider adding error handling flags for production scripts. Similar to the Pro version, this script lacks bash error-handling flags. Consider adding:

```diff
 #!/usr/bin/env bash
+set -euo pipefail
 export NODE_ENV=production
```

This ensures failures are caught early rather than silently continuing with incomplete builds.
react_on_rails_pro/spec/dummy/config/environments/production.rb (1)

**72-73:** Consider adding a comment explaining why deprecations are silenced. A previous review raised concerns about silencing deprecations. While those concerns are valid for general production use, performance benchmarks have different requirements:

- Deprecation warnings can skew performance metrics
- Benchmark environments should focus purely on performance
- Development/test environments are better suited for catching deprecations

Consider adding a comment to clarify the intent:

```diff
-  # Don't log any deprecations.
+  # Don't log deprecations in benchmark/performance test environment.
+  # Deprecation warnings can affect performance metrics and should be caught in dev/test environments.
   config.active_support.report_deprecations = false
```

This documents the rationale for future maintainers.
react_on_rails/spec/dummy/bin/prod (2)

**19-22:** Handle the case where the `find` command fails or produces no output. The current implementation redirects stderr to `/dev/null` and uses `grep -q .` to check for matches. However, if the `find` command fails (e.g., a directory doesn't exist), the error is silently ignored. Consider adding explicit directory existence checks:

```diff
-if find client config -type f \( -name "*.[jt]s" -o -name "*.[jt]sx" \) -newer "$MANIFEST" 2>/dev/null | grep -q .; then
+if [ -d "client" ] || [ -d "config" ]; then
+  if find client config -type f \( -name "*.[jt]s" -o -name "*.[jt]sx" \) -newer "$MANIFEST" 2>/dev/null | grep -q .; then
+    echo "WARNING: client or config has changes newer than compiled assets"
+    echo "Consider running ./bin/prod-assets to rebuild"
+  fi
+else
   echo "WARNING: client or config has changes newer than compiled assets"
-  echo "Consider running ./bin/prod-assets to rebuild"
+  echo "Note: client or config directories not found"
 fi
```

**29:** Consider making the port configurable via an environment variable. The port `3001` is hardcoded. Consider using `${PORT:-3001}` to allow configuration while keeping the default:

```diff
-NODE_ENV=production RAILS_ENV=production bundle exec rails server -p 3001
+NODE_ENV=production RAILS_ENV=production bundle exec rails server -p ${PORT:-3001}
```
benchmarks/bench.rb (1)

**50-55:** Consider documenting the sanitization logic for maintainability. The `sanitize_route_name` function removes many characters, but the regex pattern is complex. Consider adding a comment explaining which characters are being removed and why (GitHub Actions artifact restrictions, shell metacharacters, etc.):

```diff
 # Sanitize route name for use in filenames
-# Removes characters that GitHub Actions disallows in artifacts and shell metacharacters
+# Removes characters that:
+# - GitHub Actions disallows in artifact names: " : < > | * ? \r \n
+# - Are shell metacharacters: $ ` ; & # ! ( ) [ ] { }
 def sanitize_route_name(route)
   name = strip_optional_params(route).gsub(%r{^/}, "").tr("/", "_")
   name = "root" if name.empty?
-  # Replace invalid characters: " : < > | * ? \r \n $ ` ; & ( ) [ ] { } !
   name.gsub(/[":.<>|*?\r\n$`;&#!()\[\]{}]+/, "_").squeeze("_").gsub(/^_|_$/, "")
 end
```
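To make the behavior concrete, here is a self-contained sketch of the sanitization (the `strip_optional_params` step from the real helper is omitted), with example inputs:

```ruby
# Self-contained sketch of sanitize_route_name (the real bench.rb
# first strips optional Rails route params; omitted here). Removes
# characters GitHub Actions forbids in artifact names plus shell
# metacharacters, then collapses runs of "_".
def sanitize_route_name(route)
  name = route.sub(%r{\A/}, "").tr("/", "_")
  name = "root" if name.empty?
  name.gsub(/[":.<>|*?\r\n$`;&#!()\[\]{}]+/, "_").squeeze("_").gsub(/^_|_$/, "")
end
```

For example, `/` becomes `root`, and `/api/v1/render?cache=true` becomes `api_v1_render_cache=true`.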
#### 📜 Review details

- Configuration used: CodeRabbit UI
- Review profile: CHILL
- Plan: Pro

**⛔ Files ignored due to path filters (4):**

- `pnpm-lock.yaml` (excluded by `!**/pnpm-lock.yaml`)
- `react_on_rails/spec/dummy/Gemfile.lock` (excluded by `!**/*.lock`)
- `react_on_rails_pro/Gemfile.lock` (excluded by `!**/*.lock`)
- `react_on_rails_pro/spec/dummy/Gemfile.lock` (excluded by `!**/*.lock`)

**📒 Files selected for processing (28):**

- `.github/workflows/benchmark.yml` (1 hunk)
- `.gitignore` (1 hunk)
- `.prettierignore` (1 hunk)
- `CHANGELOG.md` (1 hunk)
- `CLAUDE.md` (1 hunk)
- `CONTRIBUTING.md` (1 hunk)
- `benchmarks/bench-node-renderer.rb` (1 hunk)
- `benchmarks/bench.rb` (1 hunk)
- `benchmarks/convert_to_benchmark_json.rb` (1 hunk)
- `benchmarks/k6.ts` (1 hunk)
- `benchmarks/lib/benchmark_helpers.rb` (1 hunk)
- `docs/planning/library-benchmarking.md` (1 hunk)
- `knip.ts` (1 hunk)
- `package.json` (1 hunk)
- `react_on_rails/spec/dummy/bin/prod` (1 hunk)
- `react_on_rails/spec/dummy/bin/prod-assets` (1 hunk)
- `react_on_rails/spec/dummy/config/puma.rb` (1 hunk)
- `react_on_rails/spec/dummy/config/shakapacker.yml` (1 hunk)
- `react_on_rails_pro/.prettierignore` (1 hunk)
- `react_on_rails_pro/Gemfile.development_dependencies` (0 hunks)
- `react_on_rails_pro/spec/dummy/Procfile.prod` (1 hunk)
- `react_on_rails_pro/spec/dummy/bin/prod` (1 hunk)
- `react_on_rails_pro/spec/dummy/bin/prod-assets` (1 hunk)
- `react_on_rails_pro/spec/dummy/config/database.yml` (1 hunk)
- `react_on_rails_pro/spec/dummy/config/environments/production.rb` (4 hunks)
- `react_on_rails_pro/spec/dummy/config/puma.rb` (2 hunks)
- `react_on_rails_pro/spec/dummy/config/shakapacker.yml` (1 hunk)
- `react_on_rails_pro/spec/dummy/lib/tasks/assets.rake` (1 hunk)
💤 Files with no reviewable changes (1)
- react_on_rails_pro/Gemfile.development_dependencies
🚧 Files skipped from review as they are similar to previous changes (11)
- react_on_rails_pro/spec/dummy/bin/prod
- react_on_rails_pro/spec/dummy/config/database.yml
- .prettierignore
- benchmarks/lib/benchmark_helpers.rb
- knip.ts
- react_on_rails_pro/.prettierignore
- docs/planning/library-benchmarking.md
- react_on_rails_pro/spec/dummy/Procfile.prod
- react_on_rails/spec/dummy/config/shakapacker.yml
- CLAUDE.md
- react_on_rails_pro/spec/dummy/lib/tasks/assets.rake
🧰 Additional context used
📓 Path-based instructions (6)
**/*.rb
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.rb: Always runbundle exec rubocopand fix ALL violations before every commit/push
Use RuboCop as the sole authority for formatting Ruby files
Runbundle exec rubocopwith auto-fix enabled (--auto-corrector--auto-correct-all) before every commit
Use snake_case for Ruby variable names and method names
Files:
react_on_rails/spec/dummy/config/puma.rbreact_on_rails_pro/spec/dummy/config/environments/production.rbreact_on_rails_pro/spec/dummy/config/puma.rbbenchmarks/convert_to_benchmark_json.rbbenchmarks/bench.rbbenchmarks/bench-node-renderer.rb
**/*.{js,jsx,ts,tsx,json,css,scss,md}
📄 CodeRabbit inference engine (CLAUDE.md)
Use Prettier as the sole authority for formatting non-Ruby files
Files:
benchmarks/k6.tsCHANGELOG.mdCONTRIBUTING.mdpackage.json
**/*.{js,ts,jsx,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{js,ts,jsx,tsx}: Use ESLint for JavaScript/TypeScript linting and fix all violations before pushing
Use camelCase for JavaScript/TypeScript variable names and import names
Always use async/await for promises in JavaScript/TypeScript code instead of .then() chains
Files:
benchmarks/k6.ts
CHANGELOG.md
📄 CodeRabbit inference engine (CLAUDE.md)
Update
/CHANGELOG.mdfor open-source features, bug fixes, breaking changes, and performance improvements
Files:
CHANGELOG.md
package.json
📄 CodeRabbit inference engine (CLAUDE.md)
Use
pnpmas the JavaScript package manager, never usenpmoryarn
Files:
package.json
{package.json,Rakefile,**/webpack.config.js,config/**/*.js}
📄 CodeRabbit inference engine (CLAUDE.md)
Validate that no hardcoded paths need updating after resolving merge conflicts in build configuration files
Files:
package.json
🧠 Learnings (25)
📚 Learning: 2025-12-16T03:39:07.980Z
Learnt from: CR
Repo: shakacode/react_on_rails PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-16T03:39:07.980Z
Learning: Applies to react_on_rails_pro/**/*.{js,ts,jsx,tsx,json,css,scss} : The `react_on_rails_pro/` directory has its own Prettier/ESLint configuration and will be linted separately in CI
Applied to files:
react_on_rails/spec/dummy/config/puma.rbreact_on_rails_pro/spec/dummy/config/environments/production.rbreact_on_rails_pro/spec/dummy/bin/prod-assetsreact_on_rails/spec/dummy/bin/prodCHANGELOG.mdCONTRIBUTING.mdreact_on_rails_pro/spec/dummy/config/shakapacker.ymlreact_on_rails/spec/dummy/bin/prod-assets.gitignore
📚 Learning: 2025-04-26T21:55:55.874Z
Learnt from: alexeyr-ci2
Repo: shakacode/react_on_rails PR: 1732
File: spec/dummy/client/app-react16/startup/ReduxSharedStoreApp.client.jsx:40-44
Timestamp: 2025-04-26T21:55:55.874Z
Learning: In the react_on_rails project, files under `app-react16` directories are copied/moved to corresponding `/app` directories during the conversion process (removing the `-react16` suffix), which affects their relative import paths at runtime.
Applied to files:
react_on_rails_pro/spec/dummy/config/environments/production.rbreact_on_rails_pro/spec/dummy/bin/prod-assetsreact_on_rails/spec/dummy/bin/prodCHANGELOG.mdreact_on_rails_pro/spec/dummy/config/shakapacker.ymlreact_on_rails/spec/dummy/bin/prod-assets
📚 Learning: 2025-09-16T08:01:11.146Z
Learnt from: justin808
Repo: shakacode/react_on_rails PR: 1770
File: lib/generators/react_on_rails/templates/base/base/app/javascript/src/HelloWorld/ror_components/HelloWorld.client.jsx:2-2
Timestamp: 2025-09-16T08:01:11.146Z
Learning: React on Rails uses webpack CSS Modules configuration with namedExports: true, which requires the import syntax `import * as style from './file.module.css'` rather than the default export pattern. This configuration enables better tree shaking and bundle size optimization for CSS modules.
Applied to files:
react_on_rails_pro/spec/dummy/config/environments/production.rbreact_on_rails_pro/spec/dummy/bin/prod-assetsreact_on_rails_pro/spec/dummy/config/shakapacker.ymlreact_on_rails/spec/dummy/bin/prod-assets
📚 Learning: 2025-12-16T03:39:07.980Z
Learnt from: CR
Repo: shakacode/react_on_rails PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-16T03:39:07.980Z
Learning: Create webpack debugging scripts in the dummy app root to inspect actual webpack configuration when encountering webpack/Shakapacker issues, then remove debug scripts before committing
Applied to files:
react_on_rails_pro/spec/dummy/config/environments/production.rbreact_on_rails_pro/spec/dummy/bin/prod-assetsreact_on_rails/spec/dummy/bin/prodreact_on_rails_pro/spec/dummy/config/shakapacker.ymlreact_on_rails/spec/dummy/bin/prod-assets
📚 Learning: 2025-12-16T03:39:07.980Z
Learnt from: CR
Repo: shakacode/react_on_rails PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-16T03:39:07.980Z
Learning: Exclude directories `/coverage`, `/tmp`, `/gen-examples`, `/packages/react-on-rails/lib`, `/node_modules`, and generated directories from IDE indexing to prevent slowdowns
Applied to files:
react_on_rails_pro/spec/dummy/config/environments/production.rbreact_on_rails_pro/spec/dummy/bin/prod-assetsreact_on_rails/spec/dummy/bin/prodreact_on_rails_pro/spec/dummy/config/shakapacker.ymlreact_on_rails/spec/dummy/bin/prod-assets.gitignore
📚 Learning: 2025-12-16T03:39:07.980Z
Learnt from: CR
Repo: shakacode/react_on_rails PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-16T03:39:07.980Z
Learning: Applies to lib/react_on_rails/**/*.rb : Validate RBS signatures with `bundle exec rake rbs:validate` and run Steep type checker with `bundle exec rake rbs:steep` for Ruby files
Applied to files:
react_on_rails_pro/spec/dummy/config/environments/production.rbreact_on_rails_pro/spec/dummy/bin/prod-assetsreact_on_rails/spec/dummy/bin/prodCHANGELOG.mdCONTRIBUTING.mdreact_on_rails_pro/spec/dummy/config/shakapacker.ymlreact_on_rails/spec/dummy/bin/prod-assets.gitignorebenchmarks/bench.rbbenchmarks/bench-node-renderer.rb
📚 Learning: 2025-02-13T14:29:49.267Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1644
File: spec/react_on_rails/utils_spec.rb:218-218
Timestamp: 2025-02-13T14:29:49.267Z
Learning: In RSpec tests, prefer using local variables over constants within test blocks to avoid constant redefinition warnings and maintain better test isolation.
Applied to files:
react_on_rails_pro/spec/dummy/config/environments/production.rb
📚 Learning: 2025-10-23T17:22:01.074Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1875
File: lib/react_on_rails/utils.rb:112-124
Timestamp: 2025-10-23T17:22:01.074Z
Learning: In React on Rails, when Pro is installed but not licensed, the intended behavior is to raise an error on boot. The `react_on_rails_pro?` method validates licenses and should raise errors early (including during path resolution in methods like `server_bundle?`) to enforce licensing requirements rather than failing later with obscure errors.
Applied to files:
react_on_rails_pro/spec/dummy/config/environments/production.rbreact_on_rails/spec/dummy/bin/prodCHANGELOG.md
📚 Learning: 2025-12-16T03:39:07.980Z
Learnt from: CR
Repo: shakacode/react_on_rails PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-16T03:39:07.980Z
Learning: Applies to **/webpack.config.js : For CSS Modules in Shakapacker 9.0+, override `namedExport: true` default to `false` in webpack config if using `import styles from './file.module.css'` syntax
Applied to files:
react_on_rails_pro/spec/dummy/config/environments/production.rbreact_on_rails_pro/spec/dummy/config/shakapacker.yml
📚 Learning: 2025-12-16T03:39:07.980Z
Learnt from: CR
Repo: shakacode/react_on_rails PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-16T03:39:07.980Z
Learning: Applies to {package.json,Rakefile,**/webpack.config.js,config/**/*.js} : Validate that no hardcoded paths need updating after resolving merge conflicts in build configuration files
Applied to files:
react_on_rails_pro/spec/dummy/config/environments/production.rbreact_on_rails/spec/dummy/bin/prodreact_on_rails_pro/spec/dummy/config/shakapacker.ymlreact_on_rails/spec/dummy/bin/prod-assets
📚 Learning: 2025-02-12T16:38:06.537Z
Learnt from: Romex91
Repo: shakacode/react_on_rails PR: 1697
File: package-scripts.yml:28-28
Timestamp: 2025-02-12T16:38:06.537Z
Learning: The file `node_package/lib/ReactOnRails.full.js` is autogenerated during the build process and should not be present in the repository.
Applied to files:
react_on_rails_pro/spec/dummy/bin/prod-assetsreact_on_rails/spec/dummy/bin/prodreact_on_rails_pro/spec/dummy/config/shakapacker.ymlreact_on_rails/spec/dummy/bin/prod-assets.gitignore
📚 Learning: 2025-12-16T03:39:07.980Z
Learnt from: CR
Repo: shakacode/react_on_rails PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-16T03:39:07.980Z
Learning: Applies to lib/react_on_rails/**/*.rb : Add RBS type signatures for new Ruby files in `lib/react_on_rails/` by creating corresponding `.rbs` files in `sig/react_on_rails/`
Applied to files:
react_on_rails_pro/spec/dummy/bin/prod-assetsreact_on_rails/spec/dummy/bin/prodreact_on_rails/spec/dummy/bin/prod-assets.gitignore
📚 Learning: 2025-12-16T03:39:07.980Z
Learnt from: CR
Repo: shakacode/react_on_rails PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-16T03:39:07.980Z
Learning: Applies to react_on_rails/spec/dummy/e2e/playwright/app_commands/**/*.rb : When creating app commands for Playwright tests, place them in `e2e/playwright/app_commands/` with Ruby files that use `CypressOnRails::SmartFactoryWrapper` configuration
Applied to files:
react_on_rails_pro/spec/dummy/bin/prod-assetsreact_on_rails/spec/dummy/bin/prodreact_on_rails/spec/dummy/bin/prod-assets.gitignore
📚 Learning: 2025-07-08T05:57:29.630Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1745
File: node_package/src/RSCRequestTracker.ts:8-14
Timestamp: 2025-07-08T05:57:29.630Z
Learning: The global `generateRSCPayload` function in React on Rails Pro (RORP) is provided by the framework during rendering requests, not implemented in application code. The `declare global` statements are used to document the expected interface that RORP will inject at runtime.
Applied to files:
react_on_rails_pro/spec/dummy/bin/prod-assets
📚 Learning: 2025-12-16T03:39:07.980Z
Learnt from: CR
Repo: shakacode/react_on_rails PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-16T03:39:07.980Z
Learning: Applies to **/*.rb : Always run `bundle exec rubocop` and fix ALL violations before every commit/push
Applied to files:
react_on_rails_pro/spec/dummy/bin/prod-assetsreact_on_rails/spec/dummy/bin/prod-assets
📚 Learning: 2025-12-16T03:39:07.980Z
Learnt from: CR
Repo: shakacode/react_on_rails PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-16T03:39:07.980Z
Learning: Applies to **/*.rb : Run `bundle exec rubocop` with auto-fix enabled (`--auto-correct` or `--auto-correct-all`) before every commit
Applied to files:
react_on_rails_pro/spec/dummy/bin/prod-assets
📚 Learning: 2025-12-16T03:39:07.980Z
Learnt from: CR
Repo: shakacode/react_on_rails PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-16T03:39:07.980Z
Learning: Applies to CHANGELOG.md : Update `/CHANGELOG.md` for open-source features, bug fixes, breaking changes, and performance improvements
Applied to files:
CHANGELOG.mdCONTRIBUTING.md
📚 Learning: 2024-12-12T13:07:09.929Z
Learnt from: alexeyr-ci
Repo: shakacode/react_on_rails PR: 1644
File: node_package/src/ReactOnRailsRSC.ts:87-87
Timestamp: 2024-12-12T13:07:09.929Z
Learning: When handling errors in 'node_package/src/ReactOnRailsRSC.ts', include the error stack in error messages in development and test environments to aid debugging.
Applied to files:
CHANGELOG.md
📚 Learning: 2025-02-18T13:08:01.477Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1644
File: lib/react_on_rails/helper.rb:190-197
Timestamp: 2025-02-18T13:08:01.477Z
Learning: RSC support validation in React on Rails Pro is handled through a chain of validations:
1. Pro version check in `run_stream_inside_fiber`
2. RSC support check during pack generation via `ReactOnRailsPro.configuration.enable_rsc_support`
3. RSC support validation during component registration
This makes additional validation in the helper methods unnecessary.
Applied to files:
CHANGELOG.md
📚 Learning: 2025-12-16T03:39:07.980Z
Learnt from: CR
Repo: shakacode/react_on_rails PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-16T03:39:07.980Z
Learning: Applies to CHANGELOG_PRO.md : Update `/CHANGELOG_PRO.md` for Pro-only features, bug fixes, and performance improvements
Applied to files:
CONTRIBUTING.md
📚 Learning: 2025-12-16T03:39:07.980Z
Learnt from: CR
Repo: shakacode/react_on_rails PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-16T03:39:07.980Z
Learning: Ensure all files end with a newline character before pushing
Applied to files:
.github/workflows/benchmark.ymlbenchmarks/bench.rbbenchmarks/bench-node-renderer.rb
📚 Learning: 2025-01-23T18:20:45.824Z
Learnt from: alexeyr-ci
Repo: shakacode/react_on_rails PR: 1687
File: spec/dummy/package.json:0-0
Timestamp: 2025-01-23T18:20:45.824Z
Learning: When adding or updating dependencies in spec/dummy/package.json, maintain version consistency with other package.json files in the codebase to avoid potential version conflicts.
Applied to files:
react_on_rails_pro/spec/dummy/config/shakapacker.ymlpackage.json
📚 Learning: 2025-12-16T03:39:07.980Z
Learnt from: CR
Repo: shakacode/react_on_rails PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-16T03:39:07.980Z
Learning: Applies to react_on_rails/spec/dummy/e2e/playwright/**/*.spec.js : In Playwright E2E tests, use `app('clean')` in `beforeEach` to ensure clean database state, and leverage Rails helpers (appFactories, appEval) instead of UI setup
Applied to files:
.gitignore
📚 Learning: 2025-04-09T12:56:10.756Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1696
File: node_package/src/RSCPayloadContainer.ts:0-0
Timestamp: 2025-04-09T12:56:10.756Z
Learning: In the react_on_rails codebase, RSC payloads are already stringified using `JSON.stringify()` before being processed by the `escapeScript` function, which handles escaping of special characters. The function only needs to handle specific HTML markers like comments and closing script tags.
Applied to files:
benchmarks/bench.rb, benchmarks/bench-node-renderer.rb
📚 Learning: 2025-09-29T14:45:42.687Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1833
File: lib/react_on_rails/dev/process_manager.rb:72-83
Timestamp: 2025-09-29T14:45:42.687Z
Learning: In Ruby bundler contexts, when bundler intercepts system commands for executables not in the Gemfile, both version checks (like `system("foreman", "--version")`) and execution commands (like `system("foreman", "start", ...)`) fail equally, both returning false. This means availability checks using version flags accurately reflect whether execution commands will work in the current bundler context.
Applied to files:
benchmarks/bench-node-renderer.rb
🧬 Code graph analysis (2)
benchmarks/bench.rb (1)
benchmarks/lib/benchmark_helpers.rb (10)
env_or_default (12-15), add_summary_line (54-58), validate_rate (18-23), validate_positive_integer (25-29), validate_duration (31-35), check_required_tools (101-105), wait_for_server (73-98), parse_json_file (38-46), failure_metrics (49-51), server_responding? (61-70)
benchmarks/bench-node-renderer.rb (2)
benchmarks/lib/benchmark_helpers.rb (11)
env_or_default (12-15), add_summary_line (54-58), parse_json_file (38-46), failure_metrics (49-51), validate_rate (18-23), validate_positive_integer (25-29), validate_duration (31-35), check_required_tools (101-105), print_params (113-118), print_separator (108-110), display_summary (121-124)
benchmarks/bench.rb (1)
add_to_summary (28-30)
🪛 GitHub Actions: Assets Precompile Check
react_on_rails/spec/dummy/config/puma.rb
[error] 1-1: Precompile command failed with exit code $PRECOMPILE_EXIT. Command: RAILS_ENV=production bin/rake assets:precompile 2>&1 | tee precompile_output.txt
react_on_rails/spec/dummy/bin/prod
[error] 1-1: Precompile command failed with exit code $PRECOMPILE_EXIT. Command: RAILS_ENV=production bin/rake assets:precompile 2>&1 | tee precompile_output.txt
react_on_rails/spec/dummy/bin/prod-assets
[error] 1-1: Precompile command failed with exit code $PRECOMPILE_EXIT. Command: RAILS_ENV=production bin/rake assets:precompile 2>&1 | tee precompile_output.txt
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (14)
- GitHub Check: dummy-app-integration-tests (3.4, 22, latest)
- GitHub Check: dummy-app-integration-tests (3.2, 20, minimum)
- GitHub Check: examples (3.2, minimum)
- GitHub Check: examples (3.4, latest)
- GitHub Check: rspec-package-tests (3.2, minimum)
- GitHub Check: rspec-package-tests (3.4, latest)
- GitHub Check: build-dummy-app-webpack-test-bundles
- GitHub Check: pro-lint-js-and-ruby
- GitHub Check: build
- GitHub Check: build-dummy-app-webpack-test-bundles
- GitHub Check: check-bundle-size
- GitHub Check: benchmark
- GitHub Check: claude-review
- GitHub Check: markdown-link-check
🔇 Additional comments (14)
package.json (1)
33-33: LGTM! Appropriate dependency for k6 benchmarks.

The `@types/k6` package provides TypeScript type definitions for the k6 performance testing tool, which aligns with the benchmark infrastructure added in this PR.

react_on_rails_pro/spec/dummy/config/shakapacker.yml (1)

11-14: LGTM! Configuration updates align with Shakapacker standards.

The changes update the cache path to use Shakapacker-specific naming and enable consistent versioning checks, which helps prevent version mismatches between the gem and node package.
.gitignore (1)
79-80: LGTM! Appropriate ignore entry for benchmark artifacts.

Adding `/bench_results` to gitignore prevents benchmark output from being committed, which is standard practice for generated test artifacts.

CONTRIBUTING.md (1)
593-660: LGTM! Comprehensive benchmarking documentation.

The new Benchmarking section provides clear instructions for:

- When benchmarks run automatically (master, labeled PRs)
- Manual triggering with `gh workflow run`
- Regression detection behavior
- Local execution for both Core and Pro
The instructions include proper prerequisites, setup steps, and configuration options. The note about running the server in a separate terminal (line 629) is a good practical alternative to the backgrounding approach.
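The configuration options mentioned above are read through a small env helper. A minimal sketch of the pattern — the helper name matches the one in `benchmarks/lib/benchmark_helpers.rb`, but this body and the defaults are illustrative, not the actual implementation:

```ruby
# Illustrative re-implementation of the env_or_default pattern used by the
# benchmark scripts: fall back to a default when the variable is unset or blank.
def env_or_default(name, default)
  value = ENV.fetch(name, nil)
  value.nil? || value.empty? ? default : value
end

# Benchmark knobs then become plain constants, overridable per run, e.g.:
#   RATE=50 DURATION=60s ruby benchmarks/bench.rb
RATE = env_or_default("RATE", "max")
DURATION = env_or_default("DURATION", "30s")
```

This keeps every knob overridable from the workflow or a local shell without code changes.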
react_on_rails_pro/spec/dummy/config/environments/production.rb (3)
6-8: LGTM! Appropriate secret handling for test/dummy app.

The hardcoded secret with a clear "testing only" warning is appropriate for the dummy application used in benchmarks and tests. The ENV fetch allows overriding when needed.

25-27: LGTM! Correct asset handling approach.

Disabling Sprockets-based compression is appropriate since Webpack/Shakapacker handles JS/CSS compression in this setup.

49-51: LGTM! Appropriate production log level.

Setting log level to `:info` with a comment about avoiding PII exposure is a sensible production default.

react_on_rails/spec/dummy/config/puma.rb (2)
13-13: LGTM! Good refactoring for DRY principle.

Introducing the `rails_env` variable eliminates repetition and provides a single source of truth for environment checks throughout the configuration.

31-62: LGTM! Proper production configuration for benchmarks.

The production-specific block implements standard Puma best practices:

- Worker processes for parallel request handling
- `preload_app!` for copy-on-write memory optimization
- `on_worker_boot` reconnects ActiveRecord (essential when using `preload_app!`)
- Graceful shutdown timeout
This configuration is appropriate for the production benchmark environment introduced in this PR.
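The pattern described above looks roughly like this in a `puma.rb` — a sketch using the standard Puma configuration DSL; the worker count and env variable names are illustrative, not copied from the dummy app:

```ruby
rails_env = ENV.fetch("RAILS_ENV", "development")

if rails_env == "production"
  # Fork worker processes for parallel request handling (count is illustrative)
  workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))

  # Load the app before forking so workers share memory via copy-on-write
  preload_app!

  # Re-establish database connections in each worker; required with preload_app!
  on_worker_boot do
    ActiveRecord::Base.establish_connection
  end

  # Allow in-flight requests to finish before a worker is shut down
  worker_shutdown_timeout 30
end
```

This is a config fragment evaluated by Puma's DSL, not a standalone script.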
react_on_rails/spec/dummy/bin/prod-assets (1)
1-10: The `prod-assets` script is correctly configured, and both the precompile-check and benchmark CI workflows have comprehensive error handling and logging in place. No evidence of an actual asset precompilation failure exists in the current codebase. The CI workflows explicitly validate precompilation success with timeouts and artifact uploads for debugging if needed.

Likely an incorrect or invalid review comment.
benchmarks/k6.ts (1)
1-80: LGTM! k6 benchmark script is well-structured.

The k6 benchmark script has good environment-driven configuration, proper input validation, and comprehensive status code checks. The previous concern about NaN handling has been addressed with the fallback to 50.
.github/workflows/benchmark.yml (1)
1-540: LGTM! Comprehensive benchmark workflow with proper structure.

The GitHub Actions workflow is well-organized with:
- Proper conditional execution based on triggers and labels
- Good caching strategies for tools and dependencies
- Server readiness checks before benchmarking
- Proper artifact upload and retention
- Clear workflow summary and error handling
The previous line ending issue (CRLF vs LF) has been addressed.
CHANGELOG.md (1)
30-30: Fix username link to include GitHub URL.

The username link should include the full GitHub URL. Per coding guidelines, the format should be `[username](https://github.com/username)`, e.g. `[alexeyr-ci2](https://github.com/alexeyr-ci2)` in the "Benchmarking in CI" CHANGELOG entry for [PR 1868](https://github.com/shakacode/react_on_rails/pull/1868).

⛔ Skipped due to learnings
Learnt from: CR
Repo: shakacode/react_on_rails PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-16T03:39:07.980Z
Learning: Applies to CHANGELOG.md : Update `/CHANGELOG.md` for open-source features, bug fixes, breaking changes, and performance improvements

Learnt from: CR
Repo: shakacode/react_on_rails PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-16T03:39:07.980Z
Learning: Applies to lib/react_on_rails/**/*.rb : Validate RBS signatures with `bundle exec rake rbs:validate` and run Steep type checker with `bundle exec rake rbs:steep` for Ruby files

benchmarks/bench.rb (1)
1-224: Verify RuboCop compliance for Ruby files.

Per coding guidelines, all Ruby files must run `bundle exec rubocop` and fix all violations before committing. The script appears to have addressed previous shell injection concerns with proper use of `Shellwords.escape`.

Run the following to verify:
#!/bin/bash bundle exec rubocop benchmarks/bench.rb
benchmarks/bench-node-renderer.rb:

```ruby
#!/usr/bin/env ruby
# frozen_string_literal: true

# Benchmark script for React on Rails Pro Node Renderer
# Uses Vegeta with HTTP/2 Cleartext (h2c) support

require "English"
require "open3"
require "socket"
require_relative "lib/benchmark_helpers"

# Read configuration from source files
def read_protocol_version
  package_json_path = File.expand_path(
    "../packages/react-on-rails-pro-node-renderer/package.json",
    __dir__
  )
  package_json = JSON.parse(File.read(package_json_path))
  package_json["protocolVersion"] || raise("protocolVersion not found in #{package_json_path}")
end

def read_password_from_config
  config_path = File.expand_path(
    "../react_on_rails_pro/spec/dummy/client/node-renderer.js",
    __dir__
  )
  config_content = File.read(config_path)
  match = config_content.match(/password:\s*['"]([^'"]+)['"]/)
  match ? match[1] : raise("password not found in #{config_path}")
end

# Benchmark parameters
PASSWORD = read_password_from_config
BASE_URL = env_or_default("BASE_URL", "localhost:3800")
PROTOCOL_VERSION = read_protocol_version

# Test cases: JavaScript expressions to evaluate
# Format: { name: "test_name", request: "javascript_code", rsc: true/false }
# rsc: true means the test requires an RSC bundle, false means non-RSC bundle
TEST_CASES = [
  { name: "simple_eval", rsc: false, request: "2+2" },
  {
    name: "react_ssr",
    rsc: false,
    request: "ReactOnRails.serverRenderReactComponent(" \
             '{name:"HelloWorld",props:{helloWorldData:{name:"Benchmark"}},domNodeId:"app"})'
  }
].freeze

# Benchmark configuration
RATE = env_or_default("RATE", "max")
CONNECTIONS = env_or_default("CONNECTIONS", 10).to_i
MAX_CONNECTIONS = env_or_default("MAX_CONNECTIONS", CONNECTIONS).to_i
DURATION = env_or_default("DURATION", "30s")
REQUEST_TIMEOUT = env_or_default("REQUEST_TIMEOUT", "60s")

OUTDIR = "bench_results"
SUMMARY_TXT = "#{OUTDIR}/node_renderer_summary.txt".freeze

# Local wrapper for add_summary_line to use local constant
def add_to_summary(*parts)
  add_summary_line(SUMMARY_TXT, *parts)
end

# Find all production bundles in the node-renderer bundles directory
def find_all_production_bundles
  bundles_dir = File.expand_path(
    "../react_on_rails_pro/spec/dummy/.node-renderer-bundles",
    __dir__
  )

  unless Dir.exist?(bundles_dir)
    raise "Node renderer bundles directory not found: #{bundles_dir}\n" \
          "Make sure the Pro dummy app has been compiled with NODE_ENV=production"
  end

  # Bundle directories have format: <hash>-<environment> (e.g., 623229694671afc1ac9137f2715bb654-production)
  # Filter to only include production bundles with hash-like names
  bundles = Dir.children(bundles_dir).select do |entry|
    File.directory?(File.join(bundles_dir, entry)) &&
      entry.match?(/^[a-f0-9]+-production$/)
  end

  raise "No production bundles found in #{bundles_dir}" if bundles.empty?

  bundles
end

# Check if a bundle is an RSC bundle by evaluating ReactOnRails.isRSCBundle
# Returns true/false/nil (nil means couldn't determine)
# rubocop:disable Style/ReturnNilInPredicateMethodDefinition
def rsc_bundle?(bundle_timestamp)
  url = render_url(bundle_timestamp, "rsc_check")
  body = render_body("ReactOnRails.isRSCBundle")

  # Use curl with h2c since Net::HTTP doesn't support HTTP/2
  result, status = Open3.capture2(
    "curl", "-s", "--http2-prior-knowledge", "-X", "POST",
    "-H", "Content-Type: application/x-www-form-urlencoded",
    "-d", body,
    url
  )
  return nil unless status.success?

  # The response should be "true" or "false"
  result.strip == "true"
rescue StandardError => e
  puts "  Warning: Could not determine RSC status for #{bundle_timestamp}: #{e.message}"
  nil
end
# rubocop:enable Style/ReturnNilInPredicateMethodDefinition

# Categorize bundles into RSC and non-RSC
# Stops early once we find one of each type
def categorize_bundles(bundles)
  rsc_bundle = nil
  non_rsc_bundle = nil

  bundles.each do |bundle|
    # Stop if we already have both types
    break if rsc_bundle && non_rsc_bundle

    puts "  Checking bundle #{bundle}..."
    is_rsc = rsc_bundle?(bundle)
    if is_rsc.nil?
      puts "    Could not determine bundle type, skipping"
    elsif is_rsc
      puts "    RSC bundle"
      rsc_bundle ||= bundle
    else
      puts "    Non-RSC bundle"
      non_rsc_bundle ||= bundle
    end
  end

  [rsc_bundle, non_rsc_bundle]
end

# URL-encode special characters for form body
def url_encode(str)
  URI.encode_www_form_component(str)
end

# Build render URL for a bundle and render name
def render_url(bundle_timestamp, render_name)
  "http://#{BASE_URL}/bundles/#{bundle_timestamp}/render/#{render_name}"
end

# Build request body for a rendering request
def render_body(rendering_request)
  [
    "protocolVersion=#{url_encode(PROTOCOL_VERSION)}",
    "password=#{url_encode(PASSWORD)}",
    "renderingRequest=#{url_encode(rendering_request)}"
  ].join("&")
end

# rubocop:disable Metrics/AbcSize, Metrics/CyclomaticComplexity, Metrics/MethodLength, Metrics/PerceivedComplexity

# Run Vegeta benchmark for a single test case
def run_vegeta_benchmark(test_case, bundle_timestamp)
  name = test_case[:name]
  request = test_case[:request]

  puts "\n===> Vegeta h2c: #{name}"

  target_url = render_url(bundle_timestamp, name)
  body = render_body(request)

  # Create temp files for Vegeta
  targets_file = "#{OUTDIR}/#{name}_vegeta_targets.txt"
  body_file = "#{OUTDIR}/#{name}_vegeta_body.txt"
  vegeta_bin = "#{OUTDIR}/#{name}_vegeta.bin"
  vegeta_json = "#{OUTDIR}/#{name}_vegeta.json"
  vegeta_txt = "#{OUTDIR}/#{name}_vegeta.txt"

  # Write body file
  File.write(body_file, body)

  # Write targets file (Vegeta format with @body reference)
  File.write(targets_file, <<~TARGETS)
    POST #{target_url}
    Content-Type: application/x-www-form-urlencoded
    @#{body_file}
  TARGETS

  # Configure Vegeta arguments for max rate
  is_max_rate = RATE == "max"
  vegeta_args =
    if is_max_rate
      ["-rate=0", "-workers=#{CONNECTIONS}", "-max-workers=#{CONNECTIONS}"]
    else
      ["-rate=#{RATE}", "-workers=#{CONNECTIONS}", "-max-workers=#{MAX_CONNECTIONS}"]
    end

  # Run Vegeta attack with h2c
  vegeta_cmd = [
    "vegeta", "attack",
    "-targets=#{targets_file}",
    *vegeta_args,
    "-duration=#{DURATION}",
    "-timeout=#{REQUEST_TIMEOUT}",
    "-h2c", # HTTP/2 Cleartext (required for node renderer)
    "-max-body=0",
    "> #{vegeta_bin}"
  ].join(" ")

  raise "Vegeta attack failed for #{name}" unless system(vegeta_cmd)

  # Generate text report (display and save)
  raise "Vegeta text report failed" unless system("vegeta report #{vegeta_bin} | tee #{vegeta_txt}")

  # Generate JSON report
  raise "Vegeta JSON report failed" unless system("vegeta report -type=json #{vegeta_bin} > #{vegeta_json}")

  # Delete the large binary file to save disk space
  FileUtils.rm_f(vegeta_bin)

  # Parse results
  vegeta_data = parse_json_file(vegeta_json, "Vegeta")
  vegeta_rps = vegeta_data["throughput"]&.round(2) || "missing"
  vegeta_p50 = vegeta_data.dig("latencies", "50th")&./(1_000_000.0)&.round(2) || "missing"
  vegeta_p90 = vegeta_data.dig("latencies", "90th")&./(1_000_000.0)&.round(2) || "missing"
  vegeta_p99 = vegeta_data.dig("latencies", "99th")&./(1_000_000.0)&.round(2) || "missing"
  vegeta_max = vegeta_data.dig("latencies", "max")&./(1_000_000.0)&.round(2) || "missing"
  vegeta_status = vegeta_data["status_codes"]&.map { |k, v| "#{k}=#{v}" }&.join(",") || "missing"

  [vegeta_rps, vegeta_p50, vegeta_p90, vegeta_p99, vegeta_max, vegeta_status]
rescue StandardError => e
  puts "Error: #{e.message}"
  failure_metrics(e)
end

# rubocop:enable Metrics/AbcSize, Metrics/CyclomaticComplexity, Metrics/MethodLength, Metrics/PerceivedComplexity

# Main execution

# Validate parameters
validate_rate(RATE)
validate_positive_integer(CONNECTIONS, "CONNECTIONS")
validate_positive_integer(MAX_CONNECTIONS, "MAX_CONNECTIONS")
validate_duration(DURATION, "DURATION")
validate_duration(REQUEST_TIMEOUT, "REQUEST_TIMEOUT")

if RATE == "max" && CONNECTIONS != MAX_CONNECTIONS
  raise "For RATE=max, CONNECTIONS must equal MAX_CONNECTIONS (got #{CONNECTIONS} and #{MAX_CONNECTIONS})"
end

# Check required tools
check_required_tools(%w[vegeta curl column tee])

# Wait for node renderer to be ready
# Note: Node renderer only speaks HTTP/2, but we can still check with a simple GET
# that will fail - we just check it doesn't refuse connection
puts "\nWaiting for node renderer at #{BASE_URL}..."
start_time = Time.now
timeout_sec = 60
loop do
  # Try a simple TCP connection to check if server is up
  Socket.tcp(BASE_URL.split(":").first, BASE_URL.split(":").last.to_i, connect_timeout: 5, &:close)
  puts "  Node renderer is accepting connections"
  break
rescue StandardError => e
  elapsed = Time.now - start_time
  puts "  Attempt at #{elapsed.round(2)}s: #{e.message}"
  raise "Node renderer at #{BASE_URL} not responding within #{timeout_sec}s" if elapsed > timeout_sec

  sleep 1
end

# Find and categorize bundles
puts "\nDiscovering and categorizing bundles..."
all_bundles = find_all_production_bundles
puts "Found #{all_bundles.length} production bundle(s)"
rsc_bundle, non_rsc_bundle = categorize_bundles(all_bundles)

rsc_tests = TEST_CASES.select { |tc| tc[:rsc] }
non_rsc_tests = TEST_CASES.reject { |tc| tc[:rsc] }

if rsc_tests.any? && rsc_bundle.nil?
  puts "Warning: RSC tests requested but no RSC bundle found, skipping: #{rsc_tests.map { |tc| tc[:name] }.join(', ')}"
  rsc_tests = []
end

if non_rsc_tests.any? && non_rsc_bundle.nil?
  skipped = non_rsc_tests.map { |tc| tc[:name] }.join(", ")
  puts "Warning: Non-RSC tests requested but no non-RSC bundle found, skipping: #{skipped}"
  non_rsc_tests = []
end

# Print parameters
print_params(
  "BASE_URL" => BASE_URL,
  "RSC_BUNDLE" => rsc_bundle || "none",
  "NON_RSC_BUNDLE" => non_rsc_bundle || "none",
  "RATE" => RATE,
  "DURATION" => DURATION,
  "REQUEST_TIMEOUT" => REQUEST_TIMEOUT,
  "CONNECTIONS" => CONNECTIONS,
  "MAX_CONNECTIONS" => MAX_CONNECTIONS,
  "RSC_TESTS" => rsc_tests.map { |tc| tc[:name] }.join(", ").then { |s| s.empty? ? "none" : s },
  "NON_RSC_TESTS" => non_rsc_tests.map { |tc| tc[:name] }.join(", ").then { |s| s.empty? ? "none" : s }
)

# Create output directory
FileUtils.mkdir_p(OUTDIR)

# Initialize summary file
File.write(SUMMARY_TXT, "")
add_to_summary("Test", "Bundle", "RPS", "p50(ms)", "p90(ms)", "p99(ms)", "max(ms)", "Status")

# Run non-RSC benchmarks
non_rsc_tests.each do |test_case|
  print_separator
  puts "Benchmarking (non-RSC): #{test_case[:name]}"
  puts "  Bundle: #{non_rsc_bundle}"
  puts "  Request: #{test_case[:request]}"
  print_separator

  metrics = run_vegeta_benchmark(test_case, non_rsc_bundle)
  add_to_summary(test_case[:name], "non-RSC", *metrics)
end

# Run RSC benchmarks
rsc_tests.each do |test_case|
  print_separator
  puts "Benchmarking (RSC): #{test_case[:name]}"
  puts "  Bundle: #{rsc_bundle}"
  puts "  Request: #{test_case[:request]}"
  print_separator

  metrics = run_vegeta_benchmark(test_case, rsc_bundle)
  add_to_summary(test_case[:name], "RSC", *metrics)
end

# Display summary
display_summary(SUMMARY_TXT)
```
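For reference, the targets-file shape the script writes can be reproduced standalone. The URL, bundle hash, protocol version, and password below are illustrative placeholders, not values from the repository:

```shell
mkdir -p bench_results

# Body file: form-encoded rendering request (values are placeholders)
printf 'protocolVersion=2.0.0&password=secret&renderingRequest=2%%2B2' \
  > bench_results/simple_eval_vegeta_body.txt

# Targets file: Vegeta's POST-with-@body format, one target per attack
cat > bench_results/simple_eval_vegeta_targets.txt <<'TARGETS'
POST http://localhost:3800/bundles/abc123-production/render/simple_eval
Content-Type: application/x-www-form-urlencoded
@bench_results/simple_eval_vegeta_body.txt
TARGETS

# With vegeta installed, the attack itself would then be:
#   vegeta attack -targets=bench_results/simple_eval_vegeta_targets.txt \
#     -rate=0 -workers=10 -max-workers=10 -duration=30s -h2c > attack.bin
cat bench_results/simple_eval_vegeta_targets.txt
```

The `-h2c` flag matters here because the node renderer only speaks HTTP/2 cleartext, as the script's comments note.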
🧩 Analysis chain
🏁 Script executed:
cd /repo && bundle exec rubocop benchmarks/bench-node-renderer.rb --format compact
Repository: shakacode/react_on_rails
Length of output: 123
🏁 Script executed:
#!/bin/bash
# First find the repository root
pwd
ls -la
Repository: shakacode/react_on_rails
Length of output: 4271
🏁 Script executed:
#!/bin/bash
bundle exec rubocop benchmarks/bench-node-renderer.rb --format compact
Repository: shakacode/react_on_rails
Length of output: 114
🏁 Script executed:
#!/bin/bash
# Check if Gemfile exists and try to find Ruby/bundle
ls -la Gemfile
which ruby
ruby --version
cat Gemfile
Repository: shakacode/react_on_rails
Length of output: 478
🏁 Script executed:
#!/bin/bash
# Check ruby gems and try to run rubocop directly
gem list rubocop
cat react_on_rails/Gemfile | head -20
Repository: shakacode/react_on_rails
Length of output: 760
🏁 Script executed:
#!/bin/bash
ruby -S rubocop benchmarks/bench-node-renderer.rb --format compact
Repository: shakacode/react_on_rails
Length of output: 2538
🏁 Script executed:
#!/bin/bash
ruby -S rubocop benchmarks/bench-node-renderer.rb
Repository: shakacode/react_on_rails
Length of output: 807
Fix RuboCop Naming/PredicateMethod violation on line 92.
Method rsc_bundle? returns nil in some cases, but the predicate method naming convention (ending with ?) suggests it should only return true/false. Either rename the method to remove the ? suffix (e.g., check_rsc_bundle) or add Naming/PredicateMethod to the rubocop:disable comment alongside the existing Style/ReturnNilInPredicateMethodDefinition suppression.
🤖 Prompt for AI Agents
In benchmarks/bench-node-renderer.rb (lines 1-338): the predicate method
rsc_bundle? can return nil which violates RuboCop Naming/PredicateMethod; to
fix, keep the method name but update the rubocop disable directive above its
definition to also disable Naming/PredicateMethod (i.e., add
Naming/PredicateMethod alongside Style/ReturnNilInPredicateMethodDefinition) and
likewise include it in the corresponding rubocop:enable comment after the method
so the lint suppression is balanced.
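A minimal sketch of the balanced disable/enable pair the prompt describes — the method body here is simplified from the script (no network call); only the directive placement is the point:

```ruby
# Tri-state check: true/false when determinable, nil when the probe fails.
# Both cops are disabled because a trailing "?" plus a possible nil return
# trips Naming/PredicateMethod as well as Style/ReturnNilInPredicateMethodDefinition.
# rubocop:disable Naming/PredicateMethod, Style/ReturnNilInPredicateMethodDefinition
def rsc_bundle?(response_body)
  return nil if response_body.nil? # probe failed, bundle type unknown

  response_body.strip == "true"
end
# rubocop:enable Naming/PredicateMethod, Style/ReturnNilInPredicateMethodDefinition
```

Listing both cops in each directive keeps the suppression balanced, so RuboCop's Lint/RedundantCopDisableDirective stays quiet as well.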
benchmarks/convert_to_benchmark_json.rb:

```ruby
#!/usr/bin/env ruby
# frozen_string_literal: true

# Converts benchmark summary files to JSON format for github-action-benchmark
# Outputs a single file with all metrics using customSmallerIsBetter:
# - benchmark.json (customSmallerIsBetter)
# - RPS values are negated (so higher RPS = lower negative value = better)
# - Latencies are kept as-is (lower is better)
# - Failed percentage is kept as-is (lower is better)
#
# Usage: ruby convert_to_benchmark_json.rb [prefix] [--append]
#   prefix: Optional prefix for benchmark names (e.g., "Core: " or "Pro: ")
#   --append: Append to existing benchmark.json instead of overwriting

require "json"

BENCH_RESULTS_DIR = "bench_results"
PREFIX = ARGV[0] || ""
APPEND_MODE = ARGV.include?("--append")

# rubocop:disable Metrics/AbcSize, Metrics/CyclomaticComplexity, Metrics/PerceivedComplexity

# Parse a summary file and return array of hashes with metrics
# Expected format (tab-separated):
#   Route  RPS  p50(ms)  p90(ms)  p99(ms)  max(ms)  Status
# or for node renderer:
#   Test  Bundle  RPS  p50(ms)  p90(ms)  p99(ms)  max(ms)  Status
def parse_summary_file(file_path, prefix: "")
  return [] unless File.exist?(file_path)

  lines = File.readlines(file_path).map(&:strip).reject(&:empty?)
  return [] if lines.length < 2

  header = lines.first.split("\t")
  results = []

  lines[1..].each do |line|
    cols = line.split("\t")
    row = header.zip(cols).to_h

    # Determine the name based on available columns
    name = row["Route"] || row["Test"] || "unknown"
    bundle_suffix = row["Bundle"] ? " (#{row['Bundle']})" : ""
    full_name = "#{prefix}#{name}#{bundle_suffix}"

    # Skip if we got FAILED values
    next if row["RPS"] == "FAILED"

    # Parse numeric values
    rps = row["RPS"]&.to_f
    p50 = row["p50(ms)"]&.to_f
    p90 = row["p90(ms)"]&.to_f
    p99 = row["p99(ms)"]&.to_f

    # Calculate failed percentage from Status column
    failed_pct = calculate_failed_percentage(row["Status"])

    results << {
      name: full_name,
      rps: rps,
      p50: p50,
      p90: p90,
      p99: p99,
      failed_pct: failed_pct
    }
  end

  results
end

# rubocop:enable Metrics/AbcSize, Metrics/CyclomaticComplexity, Metrics/PerceivedComplexity

# Calculate failed request percentage from status string
# Status format: "200=7508,302=100,5xx=10" etc.
def calculate_failed_percentage(status_str)
  return 0.0 if status_str.nil? || status_str == "missing"

  total = 0
  failed = 0

  status_str.split(",").each do |part|
    code, count = part.split("=")
    count = count.to_i
    total += count

    # Consider 0 (for Vegeta), 4xx and 5xx as failures, also "other"
    failed += count if code.match?(/^[045]/) || code == "other"
  end

  return 0.0 if total.zero?

  (failed.to_f / total * 100).round(2)
end

# Convert all results to customSmallerIsBetter format
# RPS is negated (higher RPS = lower negative value = better)
# Latencies and failure rates are kept as-is (lower is better)
def to_unified_json(results)
  output = []

  results.each do |r|
    # Add negated RPS (higher RPS becomes lower negative value, which is better)
    output << {
      name: "#{r[:name]} - RPS",
      unit: "requests/sec (negated)",
      value: -r[:rps]
    }

    # Add latencies (lower is better)
    output << {
      name: "#{r[:name]} - p50 latency",
      unit: "ms",
      value: r[:p50]
    }
    output << {
      name: "#{r[:name]} - p90 latency",
      unit: "ms",
      value: r[:p90]
    }
    output << {
      name: "#{r[:name]} - p99 latency",
      unit: "ms",
      value: r[:p99]
    }

    # Add failure percentage (lower is better)
    output << {
      name: "#{r[:name]} - failed requests",
      unit: "%",
      value: r[:failed_pct]
    }
  end

  output
end

# Main execution
all_results = []

# Parse Rails benchmark
rails_summary = File.join(BENCH_RESULTS_DIR, "summary.txt")
all_results.concat(parse_summary_file(rails_summary, prefix: PREFIX)) if File.exist?(rails_summary)

# Parse Node Renderer benchmark
node_renderer_summary = File.join(BENCH_RESULTS_DIR, "node_renderer_summary.txt")
if File.exist?(node_renderer_summary)
  all_results.concat(parse_summary_file(node_renderer_summary, prefix: "#{PREFIX}NodeRenderer: "))
end

if all_results.empty?
  puts "No benchmark results found to convert"
  exit 0
end

# Convert current results to JSON
new_metrics = to_unified_json(all_results)
output_path = File.join(BENCH_RESULTS_DIR, "benchmark.json")

# In append mode, merge with existing metrics
if APPEND_MODE && File.exist?(output_path)
  existing_metrics = JSON.parse(File.read(output_path))
  unified_json = existing_metrics + new_metrics
  puts "Appended #{new_metrics.length} metrics to existing #{existing_metrics.length} metrics"
else
  unified_json = new_metrics
  puts "Created #{unified_json.length} new metrics"
end

# Write unified JSON (all metrics using customSmallerIsBetter with negated RPS)
File.write(output_path, JSON.pretty_generate(unified_json))
puts "Wrote #{unified_json.length} total metrics to benchmark.json (from #{all_results.length} benchmark results)"
puts "  - RPS values are negated (higher RPS = lower negative value = better)"
puts "  - Latencies and failure rates use original values (lower is better)"
```
🧩 Analysis chain
🏁 Script executed:
bundle exec rubocop benchmarks/convert_to_benchmark_json.rb
Repository: shakacode/react_on_rails
Length of output: 114
🏁 Script executed:
# Find RuboCop configuration
find . -maxdepth 2 -name ".rubocop.yml" -o -name ".rubocop.yaml" -o -name "rubocop.yml"
# Check file for basic issues
wc -l benchmarks/convert_to_benchmark_json.rb
# Look at the file structure
head -50 benchmarks/convert_to_benchmark_json.rb
Repository: shakacode/react_on_rails
Length of output: 1966
🏁 Script executed:
# Check RuboCop configuration
cat .rubocop.yml | head -100
# Get the rest of the file
tail -n +51 benchmarks/convert_to_benchmark_json.rb
Repository: shakacode/react_on_rails
Length of output: 5167
🏁 Script executed:
# Count the exact lines of parse_summary_file method (lines 28-69)
sed -n '28,69p' benchmarks/convert_to_benchmark_json.rb | wc -l
# Check for any double-quote violations or obvious formatting issues
grep -n "'" benchmarks/convert_to_benchmark_json.rb | head -20
Repository: shakacode/react_on_rails
Length of output: 136
Fix RuboCop violations before committing.
This script has two RuboCop violations that must be corrected:
- Metrics/MethodLength: The `parse_summary_file` method is 42 lines, exceeding the max of 41. Consider extracting logic into helper methods.
- Style/StringLiterals: Line 43 uses single quotes (`row['Bundle']`) but the project enforces double quotes. Change to `row["Bundle"]`.
The existing RuboCop disables (Metrics/AbcSize, Metrics/CyclomaticComplexity, Metrics/PerceivedComplexity) do not cover these violations. After fixing, verify with bundle exec rubocop benchmarks/convert_to_benchmark_json.rb.
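One way to address both violations at once, sketched below: extract the per-row work into a helper, which shortens `parse_summary_file` and gives a natural place for the double-quoted `row["Bundle"]` access. The helper name `summary_row_metrics` is hypothetical; `calculate_failed_percentage` is reproduced from the script:

```ruby
# Reproduced from convert_to_benchmark_json.rb: percentage of 0/4xx/5xx/other codes.
def calculate_failed_percentage(status_str)
  return 0.0 if status_str.nil? || status_str == "missing"

  total = 0
  failed = 0
  status_str.split(",").each do |part|
    code, count = part.split("=")
    count = count.to_i
    total += count
    failed += count if code.match?(/^[045]/) || code == "other"
  end
  return 0.0 if total.zero?

  (failed.to_f / total * 100).round(2)
end

# Hypothetical helper: build one result hash from a parsed header/row pair.
# Note the double quotes around "Bundle", keeping Style/StringLiterals happy.
def summary_row_metrics(row, prefix: "")
  name = row["Route"] || row["Test"] || "unknown"
  bundle_suffix = row["Bundle"] ? " (#{row["Bundle"]})" : ""
  {
    name: "#{prefix}#{name}#{bundle_suffix}",
    rps: row["RPS"]&.to_f,
    p50: row["p50(ms)"]&.to_f,
    failed_pct: calculate_failed_percentage(row["Status"])
  }
end
```

With the row logic extracted, `parse_summary_file` only needs to split lines, zip the header, skip `FAILED` rows, and collect `summary_row_metrics(row, prefix: prefix)` results, which should bring it well under the 41-line limit.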
Summary
Add performance tests. Fixes #1777.
Pull Request checklist
- [ ] Add/update test to cover these changes
Summary by CodeRabbit
New Features
Configuration
Documentation
Chores