improve readme and update minimal example #198
miparnisari wants to merge 13 commits into benchmark-action:master
Conversation
Diff context:

```
## Screenshots
```
IMO, this introduction has some value to users in understanding the tool. If we don't want it to distract in the main doc, we can move it to a separate doc and link it here. Completely deleting it may create a gap and a user-experience regression.
The screenshots are still there, but I moved them to the very top (except this one https://raw.githubusercontent.com/rhysd/ss/master/github-action-benchmark/tooltip.png which I feel adds no value - people using this tool will notice on their own about this, and people that don't yet know this tool will not care about this detail).
Please check out the readme locally and see how it looks.
Diff context:

```
Please add a step to push the branch to the remote.

### Tool specific setup
```
Same. These have value to users.
I know, but this section is repeated thrice.
- https://github.com/benchmark-action/github-action-benchmark?tab=readme-ov-file#github-action-for-continuous-benchmarking
- and here https://github.com/benchmark-action/github-action-benchmark?tab=readme-ov-file#tool-specific-setup
- and here https://github.com/benchmark-action/github-action-benchmark?tab=readme-ov-file#examples
So I put everything in one place (the table).
Please check out the readme locally and see how it looks.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. Use the following commands to manage reviews, or the checkboxes below for quick actions.
📝 Walkthrough

Modified the minimal GitHub Actions workflow.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Dev as Developer
    participant GH as GitHub Actions Runner
    participant Repo as Repository
    participant Cache as Cache Store
    participant Bench as benchmark-action
    Dev->>GH: push / trigger workflow
    GH->>Repo: actions/checkout@v6
    GH->>GH: actions/setup-go@v6
    GH->>GH: run go test ./examples/go -bench 'BenchmarkFib' | tee output.txt
    GH->>Bench: run benchmark-action/github-action-benchmark@v1 (output-file-path: output.txt, external-data-json-path: ./cache/benchmark-data.json, fail-on-alert: true, auto-push: "true")
    Bench->>Cache: read ./cache/benchmark-data.json (compare historical data)
    Bench->>GH: emit alert / fail job if regression detected
    GH->>Cache: actions/cache/save@v5 (store ./cache/benchmark-data.json with key ${{ github.sha }}-${{ runner.os }}-go-benchmark)
    GH->>Dev: workflow result / configured notifications
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 2 passed
Welcome to Codecov 🎉 Once you merge this PR into your default branch, you're all set! Codecov will compare coverage reports and display results in all future pull requests. Thanks for integrating Codecov - We've got you covered ☂️
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/minimal.yml:
- Around line 23-29: The workflow downloads benchmark-data.json via the
"Download previous benchmark data" step using actions/cache/restore@v4 but never
saves updates and will fail on first run due to fail-on-cache-miss: true; update
that step to remove or set fail-on-cache-miss to false and add a new "Save
benchmark data" step after the job that runs actions/cache/save@v4 with the same
path (./cache/benchmark-data.json) and key (${{ runner.os }}-benchmark) so
updated benchmark-data.json is persisted between runs; ensure the restore and
save steps use identical key and path values and place the save step after the
job that generates/updates benchmark-data.json.
In `@README.md`:
- Around line 160-162: Replace the incorrect branding "Github Actions" with the
proper "GitHub Actions" in the README text (look for the string "Github Actions"
in the paragraph mentioning Job Summaries and the `summary-always` option) so
the documentation uses the correct capitalization "GitHub Actions".
- Around line 111-116: The workflow example uses mixed cache action
versions—`actions/cache/restore@v4` and `actions/cache/save@v3`—which is
inconsistent; update the save action to the same major version as the restore
action (or vice versa) so both use the same major release (e.g., change
`actions/cache/save@v3` to `actions/cache/save@v4`), and ensure any other
`actions/cache/*` usages in the file match that major version for consistency.
- Around line 85-86: README example lists actions/checkout@v6 and
actions/setup-go@v6 which are inconsistent with the actual workflow
(minimal.yml); update the README to use the same versions as the workflow by
changing the checkout and Go setup entries to match the workflow's versions
(e.g., actions/checkout@v4 and actions/setup-go@v5) so the README aligns with
minimal.yml and avoids confusion.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 5a180f38-d49b-4a55-902c-75f8d4a1767f
📒 Files selected for processing (2)
.github/workflows/minimal.yml, README.md
Actionable comments posted: 2
🧹 Nitpick comments (1)
README.md (1)
22-22: Normalize heading style to satisfy markdownlint (MD003). These headings are flagged as ATX while the document style expects setext. Consider normalizing to one style across the file.
As per coding guidelines, "**/*.{js,ts,tsx,json,md}: Format code with Prettier using `npm run format` or check formatting with `npm run format:check`." Also applies to: 41-41, 127-127, 159-159.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` at line 22, The heading "## Supported languages & examples" (and the other ATX headings with the same pattern such as "Supported languages & examples" occurrences) should be converted from ATX-style (## ...) to the file's setext-style headings to satisfy markdownlint MD003; replace the "## Supported languages & examples" line with the setext equivalent by leaving the heading text on its own line and adding an underline of hyphens (for an H2) directly beneath it, and do the same for the other flagged headings (the occurrences at the other positions) so all headings in README.md use the same setext style.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Around line 90-100: Update the README minimal workflow snippet to match the
live `.github/workflows/minimal.yml`: change the "Run benchmark" step command to
use the same working directory as the workflow (use `cd examples/go && go test
-bench 'BenchmarkFib' | tee output.txt` instead of plain `go test ...`) and
rename the "Compare results" step to "Compare benchmark result" so the step
names ("Run benchmark", "Download previous benchmark data", "Compare benchmark
result") and the benchmark command are identical to the actual `minimal.yml`.
- Line 35: The Luau table row is inconsistent: it shows "Supported: Yes" while
other cells say "Coming soon"; update the Luau row so the Supported column
reflects the planned state (e.g., change "Yes" to "Planned" or "No/Planned") to
match the "Coming soon" notes—locate the row containing the "Luau" label in the
README table and replace the Supported value accordingly so the row is
consistent.
---
Nitpick comments:
In `@README.md`:
- Line 22: The heading "## Supported languages & examples" (and the other ATX
headings with the same pattern such as "Supported languages & examples"
occurrences) should be converted from ATX-style (## ...) to the file's
setext-style headings to satisfy markdownlint MD003; replace the "## Supported
languages & examples" line with the setext equivalent by leaving the heading
text on its own line and adding an underline of hyphens (for an H2) directly
beneath it, and do the same for the other flagged headings (the occurrences at
the other positions) so all headings in README.md use the same setext style.
@ktrz @ningziwen could you review? also note that this PR will render a few other PRs obsolete (#287, #288, #290) Thanks! |
@miparnisari thank you very much for reviving this PR! Added a couple of comments. Could you also align readme example with the actual minimal example file? |
```yaml
          # Workflow will fail when an alert happens
          fail-on-alert: true
```
I'm not sure this line makes sense here. This step isn't comparing benchmark results, so how would it fail?
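For context, a sketch of the pairing under which `fail-on-alert` would be meaningful: the action needs previously restored data to compare against. Step names and the cache key here are illustrative, not part of this PR:

```yaml
      # Hypothetical sketch: restore prior data so the action has a baseline
      - name: Restore previous benchmark data
        uses: actions/cache/restore@v5
        with:
          path: ./cache/benchmark-data.json
          key: ${{ runner.os }}-go-benchmark
      - name: Compare against baseline
        uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: 'go'
          output-file-path: output.txt
          external-data-json-path: ./cache/benchmark-data.json
          # Only fails when the restored baseline shows a regression
          fail-on-alert: true
```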
```diff
 jobs:
   benchmark:
-    name: Run minimal steps to run github-action-benchmark
+    name: Run benchmark and save results
```
I feel like there is a companion minimal example missing: one where a PR runs benchmarks and compares them against the benchmarks saved in here...
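A rough sketch of what such a companion workflow could look like (the workflow name, trigger, and step names are illustrative assumptions, not part of this PR):

```yaml
# Hypothetical companion example: compare PR benchmarks against a saved baseline
name: Example for minimal PR comparison
on:
  pull_request:

permissions:
  contents: read

jobs:
  benchmark:
    name: Run benchmark and compare against baseline
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - uses: actions/setup-go@v6
        with:
          go-version: "stable"
      - name: Run benchmark
        run: go test ./examples/go -bench 'BenchmarkFib' | tee output.txt
      - name: Restore baseline benchmark data
        uses: actions/cache/restore@v5
        with:
          path: ./cache/benchmark-data.json
          key: ${{ runner.os }}-go-benchmark
      - name: Compare benchmark result
        uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: 'go'
          output-file-path: output.txt
          external-data-json-path: ./cache/benchmark-data.json
          # Fail the PR check on regression; never push from PR runs
          fail-on-alert: true
          auto-push: "false"
```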
Actionable comments posted: 4
♻️ Duplicate comments (1)
.github/workflows/minimal.yml (1)
41-41: ⚠️ Potential issue | 🟡 Minor — Add newline at end of file.
Static analysis reports a missing newline character at the end of the file, which is a POSIX standard requirement and helps with tools like `cat` and version control diffs.
Proposed fix:

```diff
-      key: "${{ github.sha }}-${{ runner.os }}-go-benchmark"
+      key: "${{ github.sha }}-${{ runner.os }}-go-benchmark"
+
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/minimal.yml at line 41, The file is missing a trailing newline after the last line containing key: "${{ github.sha }}-${{ runner.os }}-go-benchmark"; open the workflow file and add a single newline character at the end of file (ensure the final line ends with a newline) so POSIX tools and static analysis no longer report an EOF newline warning.
🧹 Nitpick comments (1)
README.md (1)
159-160: Minor style improvement: simplify "In order to". Consider using "To" instead of "In order to" for conciseness.
Proposed fix:

```diff
 GitHub Actions [Job Summaries](https://github.blog/2022-05-09-supercharging-github-actions-with-job-summaries/) are
-also supported. In order to use Job Summaries, turn on the `summary-always` option.
+also supported. To use Job Summaries, turn on the `summary-always` option.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 159 - 160, Replace the phrase "In order to use Job Summaries, turn on the `summary-always` option." with a more concise sentence by changing "In order to" to "To" so it reads "To use Job Summaries, turn on the `summary-always` option."; locate the sentence in README.md containing "In order to use Job Summaries" and update that exact phrase.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/minimal.yml:
- Around line 22-29: The workflow fails because the run step uses "cd
examples/go && go test -bench 'BenchmarkFib' | tee output.txt" which writes
examples/go/output.txt but the benchmark action is configured with
output-file-path: output.txt (repo root); update the action input to point to
the actual file produced (set output-file-path to examples/go/output.txt) or
alternatively change the run command to write the file to the repo root; ensure
you update the value referenced by the "output-file-path" key so the
benchmark-action/github-action-benchmark@v1 can find the output.
In `@README.md`:
- Line 74: In the README workflow example replace the misspelled key "nname:
Example for minimal setup" with the correct YAML key "name: Example for minimal
setup" so the workflow header uses "name" (not "nname")—edit the example line
that contains "nname" and update it to "name".
- Line 253: The anchor link currently pointing to "['Examples'
section](`#examples`)" is broken; update that link text/target to reference the
actual heading "Supported languages & examples" (e.g. change the markdown to
"['Supported languages & examples' section](`#supported-languages--examples`)" or
otherwise match the exact slug of that heading) so the link navigates correctly;
locate the offending link string "['Examples' section](`#examples`)" in README.md
and replace it with the corrected link that matches the heading slug.
- Around line 95-102: The README's benchmark example uses "run: cd examples/go
&& go test -bench 'BenchmarkFib' | tee output.txt" which writes output.txt into
examples/go/, but the action block still sets output-file-path: output.txt
(root); update the example to point to the correct path by changing
output-file-path to examples/go/output.txt so the benchmark-action (key:
output-file-path) reads the actual file produced by the run command.
---
Duplicate comments:
In @.github/workflows/minimal.yml:
- Line 41: The file is missing a trailing newline after the last line containing
key: "${{ github.sha }}-${{ runner.os }}-go-benchmark"; open the workflow file
and add a single newline character at the end of file (ensure the final line
ends with a newline) so POSIX tools and static analysis no longer report an EOF
newline warning.
---
Nitpick comments:
In `@README.md`:
- Around line 159-160: Replace the phrase "In order to use Job Summaries, turn
on the `summary-always` option." with a more concise sentence by changing "In
order to" to "To" so it reads "To use Job Summaries, turn on the
`summary-always` option."; locate the sentence in README.md containing "In order
to use Job Summaries" and update that exact phrase.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 812f6aa3-d70e-4828-8d3f-bd6164b423b5
📒 Files selected for processing (2)
.github/workflows/minimal.yml, README.md
Actionable comments posted: 3
🧹 Nitpick comments (2)
README.md (2)
219-229: Minor inconsistency: `auto-push` value style differs from minimal example. Line 229 uses `auto-push: true` (YAML boolean) while the minimal setup example at line 108 uses `auto-push: "true"` (quoted string). While both work, consistent style across examples would be cleaner.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 219 - 229, The examples use two styles for the auto-push input; update the snippet that currently has auto-push: true to use the same style as the minimal setup (auto-push: "true") for consistency; locate the action usage block containing name: My Project Go Benchmark and change the auto-push value to a quoted string "true" so both examples use the same YAML style.
159-160: Consider more concise wording. "In order to use" can be simplified to "To use" for conciseness.
Proposed fix:

```diff
 GitHub Actions [Job Summaries](https://github.blog/2022-05-09-supercharging-github-actions-with-job-summaries/) are
-also supported. In order to use Job Summaries, turn on the `summary-always` option.
+also supported. To use Job Summaries, turn on the `summary-always` option.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 159 - 160, Update the sentence that reads "In order to use Job Summaries, turn on the `summary-always` option." to the more concise wording "To use Job Summaries, turn on the `summary-always` option." — replace the phrase "In order to use" with "To use" in the README text describing GitHub Actions Job Summaries.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/minimal.yml:
- Line 41: Add a trailing newline to the end of the workflow file by ensuring
the last line (the line containing key: "${{ github.sha }}-${{ runner.os
}}-go-benchmark") is followed by a single newline character so the file ends
with a newline.
- Around line 36-41: The workflow saves benchmark-data.json but never restores
it before the benchmark run; add a cache restore step (using
actions/cache/restore@v5) before the benchmark action that uses the same path
"./cache/benchmark-data.json" and the same key expression "${{ github.sha }}-${{
runner.os }}-go-benchmark" so the previous benchmark-data.json is available for
comparison by the step that performs the benchmark and the "fail-on-alert"
check; ensure the restore step is named (e.g., "Restore JSON from cache") and
placed before the benchmark invocation so missing-cache cases are handled
gracefully.
In `@README.md`:
- Around line 74-115: Add a cache restore step before running the benchmark and
change the cache keys to allow restores across commits: insert a step named
"Restore benchmark JSON from cache" (matching the job name benchmark and placed
before the "Run benchmark and stores the output to a file" step) using
actions/cache/restore@v5 with path ./cache/benchmark-data.json, key: "${{
runner.os }}-go-benchmark" and restore-keys: |
${{ runner.os }}-go-benchmark; then update the existing "Save JSON in cache"
step (which currently uses key "${{ github.sha }}-${{ runner.os
}}-go-benchmark") to use the same prefix key pattern "${{ runner.os
}}-go-benchmark" so future runs can restore previous benchmark-data.json across
commits.
---
Nitpick comments:
In `@README.md`:
- Around line 219-229: The examples use two styles for the auto-push input;
update the snippet that currently has auto-push: true to use the same style as
the minimal setup (auto-push: "true") for consistency; locate the action usage
block containing name: My Project Go Benchmark and change the auto-push value to
a quoted string "true" so both examples use the same YAML style.
- Around line 159-160: Update the sentence that reads "In order to use Job
Summaries, turn on the `summary-always` option." to the more concise wording "To
use Job Summaries, turn on the `summary-always` option." — replace the phrase
"In order to use" with "To use" in the README text describing GitHub Actions Job
Summaries.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: f925a28e-4f4e-4cc5-ba8e-ee91136dfb10
📒 Files selected for processing (2)
.github/workflows/minimal.yml, README.md
.github/workflows/minimal.yml (Outdated)
```yaml
      - name: Save JSON in cache
        uses: actions/cache/save@v5
        with:
          path: ./cache/benchmark-data.json
          # Save with commit hash to avoid "cache already exists"
          key: "${{ github.sha }}-${{ runner.os }}-go-benchmark"
```

(No newline at end of file)
Missing cache restore step prevents comparison with previous benchmarks.
The workflow saves benchmark-data.json to cache but never restores it. Without a restore step before the benchmark action, each run starts with an empty/missing benchmark-data.json, so there's no historical data to compare against. The fail-on-alert setting becomes ineffective since there's nothing to compare.
For a functional minimal example, add a cache restore step:
Proposed fix:

```diff
       - name: Run benchmark and stores the output to a file
         run: go test ./examples/go -bench 'BenchmarkFib' | tee output.txt
+      - name: Restore benchmark JSON from cache
+        uses: actions/cache/restore@v5
+        with:
+          path: ./cache/benchmark-data.json
+          key: "${{ runner.os }}-go-benchmark"
+          restore-keys: |
+            ${{ runner.os }}-go-benchmark
       - name: Get JSON for benchmark
         uses: benchmark-action/github-action-benchmark@v1
```

🧰 Tools
🪛 GitHub Check: Run unit tests
[failure] 41-41:
41:65 [new-line-at-end-of-file] no new line character at the end of file
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/minimal.yml around lines 36 - 41, The workflow saves
benchmark-data.json but never restores it before the benchmark run; add a cache
restore step (using actions/cache/restore@v5) before the benchmark action that
uses the same path "./cache/benchmark-data.json" and the same key expression
"${{ github.sha }}-${{ runner.os }}-go-benchmark" so the previous
benchmark-data.json is available for comparison by the step that performs the
benchmark and the "fail-on-alert" check; ensure the restore step is named (e.g.,
"Restore JSON from cache") and placed before the benchmark invocation so
missing-cache cases are handled gracefully.
```diff
 name: Example for minimal setup
 on:
+  # NOTE! Do NOT add any other "on", because this workflow has permission to write to the repo!
   push:
     branches:
       - master

 permissions:
+  # permission to update benchmark contents in gh-pages branch
+  contents: "write"

 jobs:
   benchmark:
-    name: Performance regression check
+    name: Run benchmark and save results
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
-      - uses: actions/setup-go@v4
+      - uses: actions/checkout@v6
+      - uses: actions/setup-go@v6
         with:
           go-version: "stable"
-      # Run benchmark with `go test -bench` and stores the output to a file
-      - name: Run benchmark
-        run: go test -bench 'BenchmarkFib' | tee output.txt
-      # Download previous benchmark result from cache (if exists)
-      - name: Download previous benchmark data
-        uses: actions/cache@v4
-        with:
-          path: ./cache
-          key: ${{ runner.os }}-benchmark
-      # Run `github-action-benchmark` action
-      - name: Store benchmark result
+      - name: Run benchmark and stores the output to a file
+        run: go test ./examples/go -bench 'BenchmarkFib' | tee output.txt
+      - name: Get JSON for benchmark
         uses: benchmark-action/github-action-benchmark@v1
         with:
           # What benchmark tool the output.txt came from
           tool: 'go'
           # Where the output from the benchmark tool is stored
           output-file-path: output.txt
           # Where the previous data file is stored
+          # Updates this file
           external-data-json-path: ./cache/benchmark-data.json
           # Workflow will fail when an alert happens
           fail-on-alert: true
-          # Upload the updated cache file for the next job by actions/cache
+          # Writes to gh-pages-branch
+          auto-push: "true"
+      - name: Save JSON in cache
+        uses: actions/cache/save@v5
+        with:
+          path: ./cache/benchmark-data.json
+          # Save with commit hash to avoid "cache already exists"
+          key: "${{ github.sha }}-${{ runner.os }}-go-benchmark"
```
README example matches workflow but shares the missing cache restore issue.
The README minimal setup example is now synchronized with .github/workflows/minimal.yml. However, both are missing the cache restore step, which means benchmarks won't compare against previous runs.
Additionally, the cache key includes ${{ github.sha }}, making exact-match restores impossible across commits. If a restore step is added, it should use restore-keys with a prefix pattern like ${{ runner.os }}-go-benchmark.
Proposed addition to restore cache before the benchmark step:

```yaml
      - name: Restore benchmark JSON from cache
        uses: actions/cache/restore@v5
        with:
          path: ./cache/benchmark-data.json
          key: "${{ runner.os }}-go-benchmark"
          restore-keys: |
            ${{ runner.os }}-go-benchmark
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` around lines 74 - 115, Add a cache restore step before running the
benchmark and change the cache keys to allow restores across commits: insert a
step named "Restore benchmark JSON from cache" (matching the job name benchmark
and placed before the "Run benchmark and stores the output to a file" step)
using actions/cache/restore@v5 with path ./cache/benchmark-data.json, key: "${{
runner.os }}-go-benchmark" and restore-keys: |
${{ runner.os }}-go-benchmark; then update the existing "Save JSON in cache"
step (which currently uses key "${{ github.sha }}-${{ runner.os
}}-go-benchmark") to use the same prefix key pattern "${{ runner.os
}}-go-benchmark" so future runs can restore previous benchmark-data.json across
commits.
Summary by CodeRabbit:
- Documentation
- Chores