📝 Walkthrough
A new GitHub Actions workflow file is added that automatically deploys Kubernetes deployments after the Build workflow completes, or on manual trigger. The workflow authenticates with AWS via OIDC, retrieves the EKS kubeconfig, and updates container images using kubectl commands.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
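Based on the walkthrough, the reviewed workflow follows this general shape (a minimal sketch reconstructed from the review context; the EKS cluster secret name, region, and exact step ordering are assumptions, not confirmed by the review):

```yaml
name: Deploy

on:
  workflow_dispatch:
  workflow_run:
    workflows: ["Build"]
    types:
      - completed

jobs:
  deploy:
    if: ${{ github.event_name == 'workflow_dispatch' || github.event.workflow_run.conclusion == 'success' }}
    runs-on: ubuntu-latest
    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-region: us-west-2
          role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
          role-session-name: ${{ secrets.AWS_ROLE_SESSION_NAME }}

      - name: Retrieve EKS kubeconfig
        run: aws eks update-kubeconfig --name ${{ secrets.AWS_EKS_CLUSTER_NAME }} --region us-west-2

      - name: Update Kubernetes Docs deployment
        run: |
          kubectl set image ${{ secrets.AWS_EKS_DEPLOYMENT_NAME }} main=ghcr.io/txpipe/docs:${{ github.sha }} --namespace=${{ secrets.AWS_EKS_NAMESPACE }}
```

The four findings below each target a specific part of this structure.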
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/deploy.yml:
- Around line 34-36: The deployment step "Update Kubernetes Docs deployment"
currently runs only `kubectl set image` which returns immediately; modify this
step to run `kubectl set image` followed by `kubectl rollout status
deployment/${{ secrets.AWS_EKS_DEPLOYMENT_NAME }} --namespace=${{
secrets.AWS_EKS_NAMESPACE }} --timeout=3m` (or other appropriate timeout) so the
workflow waits for the Deployment rollout to complete and fails the job if pods
don’t become ready; ensure you reference the same deployment name and namespace
used in the existing `kubectl set image` invocation.
- Around line 24-27: Remove the static AWS credentials by deleting the
aws-access-key-id and aws-secret-access-key entries and rely solely on OIDC role
assumption (role-to-assume and role-session-name) with id-token write enabled;
ensure the workflow only provides role-to-assume and role-session-name (and any
id-token settings) so authentication uses temporary OIDC tokens rather than
long-lived static keys.
- Around line 4-8: The workflow_run trigger is unfiltered and will run on all
branches; update the workflow_run block (the one containing workflow_run,
workflows: ["Build"], and types: - completed) to include a branches filter such
as branches: ["main"] so only completed Build runs on main can trigger this
deploy workflow.
- Line 36: The kubectl image update uses `${{ github.sha }}` which is unset in a
workflow_run trigger; update the image tag expression in the kubectl set image
command to use `${{ github.event.workflow_run.head_sha }}` so the deployment
(referenced by `${{ secrets.AWS_EKS_DEPLOYMENT_NAME }}`) points to the correct
commit SHA from the triggering workflow run; change the image tag in the
`kubectl set image ${{ secrets.AWS_EKS_DEPLOYMENT_NAME }}
main=ghcr.io/txpipe/docs:${{ github.sha }} --namespace=${{
secrets.AWS_EKS_NAMESPACE }}` command to use `${{
github.event.workflow_run.head_sha }}` instead of `${{ github.sha }}`.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8bf3305c-f34d-48e1-9a47-e4a1ab6c5cdc
📒 Files selected for processing (1)
.github/workflows/deploy.yml
```yaml
  workflow_run:
    workflows: ["Build"]
    types:
      - completed
```
🧩 Analysis chain
🌐 Web query:
For GitHub Actions workflow_run, does the trigger include runs from all branches unless branches is explicitly set?
💡 Result:
Yes, the GitHub Actions workflow_run trigger includes runs from all branches unless the branches or branches-ignore filter is explicitly set. Official documentation shows basic examples without branch filters, such as: on: workflow_run: workflows: [Build] types: [completed] This configuration triggers on completions of the "Build" workflow regardless of branch. When limiting is needed, docs explicitly state: "You can use the branches or branches-ignore filter to specify what branches the triggering workflow must run on in order to trigger your workflow." The filter is optional for restriction; without it, all branches trigger the workflow.
Citations:
- 1: https://docs.github.com/actions/using-workflows/events-that-trigger-workflows
- 2: https://docs.github.com/en/actions/reference/workflows-and-actions/events-that-trigger-workflows
- 3: https://docs.github.com/actions/reference/workflows-and-actions/events-that-trigger-workflows
- 4: https://docs.github.com/actions/using-workflows/workflow-syntax-for-github-actions
- 5: https://docs.github.com/actions/reference/events-that-trigger-workflows
- 6: https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows
- 7: https://docs.github.com/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows
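Applying the filter described in the result above, a complete trigger block could look like this (a sketch; restricting to `main` is the reviewer's suggestion, not an existing part of the workflow):

```yaml
on:
  workflow_run:
    workflows: ["Build"]
    types:
      - completed
    branches:
      - main
```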
Restrict workflow_run deploys to main to avoid unintended environment updates.
Without an explicit branches filter, workflow_run triggers on all branches. This means feature/PR branch artifacts can deploy to production if their Build succeeds.
Suggested fix

```diff
 on:
   workflow_run:
     workflows: ["Build"]
     types:
       - completed
+    branches:
+      - main
@@
 jobs:
   deploy:
-    if: ${{ github.event_name == 'workflow_dispatch' || github.event.workflow_run.conclusion == 'success' }}
+    if: ${{ github.event_name == 'workflow_dispatch' || (github.event.workflow_run.conclusion == 'success' && github.event.workflow_run.head_branch == 'main') }}
```
```yaml
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
          role-session-name: ${{ secrets.AWS_ROLE_SESSION_NAME }}
```
🧩 Analysis chain
🌐 Web query:
For aws-actions/configure-aws-credentials@v4, if role-to-assume is set, what is the recommended auth mode when OIDC (id-token: write) is enabled? Is providing static access keys still advised?
💡 Result:
When using aws-actions/configure-aws-credentials@v4 with role-to-assume set and OIDC enabled via permissions: id-token: write in the workflow, the recommended authentication mode is GitHub OIDC. This uses core.getIDToken to obtain a JWT token for assuming the role via sts:AssumeRoleWithWebIdentity, providing short-lived temporary credentials without needing static access keys. Providing static access keys (aws-access-key-id and aws-secret-access-key) is not advised in this setup. The official documentation marks GitHub OIDC as the "[✅ Recommended]" method, explicitly states "Use temporary credentials when possible. OIDC is recommended because it provides temporary credentials," and lists static credentials as an alternative for other scenarios. The action prioritizes OIDC when id-token: write is permitted and role-to-assume is provided, without requiring static keys. Static keys are only for cases like assuming roles without OIDC (e.g., cross-account with existing creds).
Citations:
- 1: https://github.com/aws-actions/configure-aws-credentials
- 2: https://github.com/aws-actions/configure-aws-credentials/tree/v5.0.0
- 3: https://github.com/aws-actions/configure-aws-credentials/tree/v6.0.0
- 4: https://github.com/aws-actions/configure-aws-credentials/blob/v4/action.yml
- 5: https://github.com/aws-actions/configure-aws-credentials/blob/main/CHANGELOG.md
- 6: https://github.com/aws-actions/configure-aws-credentials/tree/v4
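The OIDC-only setup described in the result above also requires the `id-token: write` permission at the workflow or job level; a sketch of the full pattern (region and secret names mirror the ones used elsewhere in this review):

```yaml
permissions:
  id-token: write   # lets the runner request an OIDC token for sts:AssumeRoleWithWebIdentity
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-region: us-west-2
          role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
          role-session-name: ${{ secrets.AWS_ROLE_SESSION_NAME }}
          # no static aws-access-key-id / aws-secret-access-key needed
```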
Remove static AWS keys when using OIDC role assumption.
Lines 24–25 provide static credentials (aws-access-key-id and aws-secret-access-key) while line 26 configures role-to-assume. With id-token: write enabled, AWS recommends OIDC-only authentication. Static keys in this setup are unnecessary and weaken security posture by introducing long-lived credentials that contradict temporary OIDC credentials.
Suggested fix

```diff
       - uses: aws-actions/configure-aws-credentials@v4
         with:
           aws-region: us-west-2
-          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
-          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
           role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
           role-session-name: ${{ secrets.AWS_ROLE_SESSION_NAME }}
           role-duration-seconds: 1200
```

📝 Committable suggestion
‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
      - name: Update Kubernetes Docs deployment
        run: |
          kubectl set image ${{ secrets.AWS_EKS_DEPLOYMENT_NAME }} main=ghcr.io/txpipe/docs:${{ github.sha }} --namespace=${{ secrets.AWS_EKS_NAMESPACE }}
```
🧩 Analysis chain
🌐 Web query:
Does `kubectl set image` wait for rollout completion by default, or is `kubectl rollout status` required to fail CI on unsuccessful deployment?
💡 Result:
kubectl set image does not wait for rollout completion by default. It updates the image in the Deployment spec, triggering a rolling update by the Deployment controller, but the command returns immediately after patching the API server. To wait for rollout completion and fail on unsuccessful deployment (e.g., in CI), kubectl rollout status is required, as it watches until all new replicas are updated, available (passing readiness probes), and old replicas are scaled down, exiting with non-zero code on timeout or failure.
Citations:
- 1: https://kubernetes.io/docs/reference/kubectl/generated/kubectl_set/kubectl_set_image/
- 2: https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_status/
- 3: https://thelinuxcode.com/kubectl-set-image-command/
- 4: https://www.jeffgeerling.com/blog/2018/updating-kubernetes-deployment-and-waiting-it-roll-out-shell-script/
- 5: https://github.com/kubernetes/kubectl/blob/master/pkg/cmd/set/set_image.go
- 6: https://stackoverflow.com/questions/40366192/kubernetes-how-to-make-deployment-to-update-image
- 7: https://stackoverflow.com/questions/36920171/how-can-i-edit-a-deployment-without-modify-the-file-manually
- 8: https://kubernetes.io/docs/tasks/run-application/update-api-object-kubectl-patch/
- 9: https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_status
- 10: https://stackoverflow.com/questions/68540187/kubectl-rollout-status-when-the-command-complete
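Following the result above, a deployment step that both updates the image and gates the job on rollout success could look like this (a sketch; the `--timeout` value is an assumption, not part of the reviewed workflow):

```yaml
      - name: Update and verify deployment
        run: |
          kubectl set image ${{ secrets.AWS_EKS_DEPLOYMENT_NAME }} \
            main=ghcr.io/txpipe/docs:${{ github.sha }} \
            --namespace=${{ secrets.AWS_EKS_NAMESPACE }}
          # rollout status exits non-zero if pods never become ready within the
          # timeout, which fails this step and therefore the job
          kubectl rollout status deployment/${{ secrets.AWS_EKS_DEPLOYMENT_NAME }} \
            --namespace=${{ secrets.AWS_EKS_NAMESPACE }} --timeout=300s
```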
Add kubectl rollout status to wait for deployment completion before marking the workflow successful.
kubectl set image returns immediately after updating the Deployment spec and does not wait for rollout completion. Without explicit rollout status verification, the job can pass even if pods fail to become ready, creating false-green deployments.
Suggested fix

```diff
       - name: Update Kubernetes Docs deployment
         run: |
           kubectl set image ${{ secrets.AWS_EKS_DEPLOYMENT_NAME }} main=ghcr.io/txpipe/docs:${{ github.sha }} --namespace=${{ secrets.AWS_EKS_NAMESPACE }}
+          kubectl rollout status deployment/${{ secrets.AWS_EKS_DEPLOYMENT_NAME }} --namespace=${{ secrets.AWS_EKS_NAMESPACE }} --timeout=300s
```
```yaml
      - name: Update Kubernetes Docs deployment
        run: |
          kubectl set image ${{ secrets.AWS_EKS_DEPLOYMENT_NAME }} main=ghcr.io/txpipe/docs:${{ github.sha }} --namespace=${{ secrets.AWS_EKS_NAMESPACE }}
```
🧩 Analysis chain
🌐 Web query:
In GitHub Actions, what value does `github.sha` have for a `workflow_run` event, and should `github.event.workflow_run.head_sha` be used to target the triggering run's commit?
💡 Result:
In GitHub Actions for a workflow_run event, github.sha (which corresponds to the GITHUB_SHA environment variable) is empty or unset, as indicated by the official documentation table in "Events that trigger workflows" where the GITHUB_SHA column for workflow_run is blank (unlike events like push or pull_request which have specific SHA values). This means the commit SHA that triggered the current workflow run is not available in github.sha. To target the commit SHA of the triggering (child) workflow run, use github.event.workflow_run.head_sha. This is confirmed by community usage (e.g., Stack Overflow examples) and aligns with the workflow_run webhook payload structure, where event.workflow_run.head_sha provides the head commit SHA of the completed workflow run that triggered the current one. Official docs link to the webhook payload, and the REST API for workflow runs shows head_sha as the relevant field.
Citations:
- 1: https://docs.github.com/en/actions/reference/contexts-reference
- 2: https://docs.github.com/en/actions/learn-github-actions/contexts
- 3: https://docs.github.com/actions/using-workflows/events-that-trigger-workflows
- 4: https://stackoverflow.com/questions/61393981/getting-base-branch-sha-on-pull-request-in-github-action-workflow
- 5: https://docs.github.com/en/actions/reference/events-that-trigger-workflows
- 6: https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows
- 7: https://docs.github.com/en/rest/actions/workflow-runs?apiVersion=2026-03-10
🏁 Script executed:
# Check deploy.yml structure and trigger
cat .github/workflows/deploy.ymlRepository: txpipe/docs
Length of output: 2069
🏁 Script executed:
# Check build.yml for reference
cat .github/workflows/build.ymlRepository: txpipe/docs
Length of output: 925
Use `github.event.workflow_run.head_sha` instead of `github.sha` for the image tag in a `workflow_run` context.
Line 36 uses `${{ github.sha }}`, which is empty/unset for `workflow_run` events per GitHub Actions documentation. This causes the deployment to reference an invalid or non-existent image tag. Since this workflow is triggered when the Build workflow completes (via `workflow_run`), you must use `${{ github.event.workflow_run.head_sha }}` to get the correct commit SHA from the triggering workflow run.
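As an illustration of the payload field, a step that checks out the exact commit from the triggering run might look like this (a sketch; a checkout step is not part of the reviewed workflow):

```yaml
      - name: Check out the commit that was built
        uses: actions/checkout@v4
        with:
          # github.sha is empty under workflow_run; read the SHA from the event payload
          ref: ${{ github.event.workflow_run.head_sha }}
```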
Suggested fix

```diff
 jobs:
   deploy:
+    env:
+      IMAGE_TAG: ${{ github.event_name == 'workflow_run' && github.event.workflow_run.head_sha || github.sha }}
@@
     steps:
       - name: Update Kubernetes Docs deployment
         run: |
-          kubectl set image ${{ secrets.AWS_EKS_DEPLOYMENT_NAME }} main=ghcr.io/txpipe/docs:${{ github.sha }} --namespace=${{ secrets.AWS_EKS_NAMESPACE }}
+          kubectl set image ${{ secrets.AWS_EKS_DEPLOYMENT_NAME }} main=ghcr.io/txpipe/docs:${IMAGE_TAG} --namespace=${{ secrets.AWS_EKS_NAMESPACE }}
```