
Adapt search performance trace documentation #3555

Open

ManyTheFish wants to merge 4 commits into `main` from `adapt-search-performance-trace-documentation`

Conversation


ManyTheFish (Member) commented Apr 14, 2026

Some of the search performance trace steps have been renamed in the engine.

related to: meilisearch/meilisearch#6323

Checklist

For internal Meilisearch team members only.

Summary by CodeRabbit

Release Notes

  • Documentation
    • Updated search performance debugging documentation with clearer stage naming conventions and improved guidance for identifying performance bottlenecks.


mintlify bot commented Apr 14, 2026

Preview deployment for your docs. Learn more about Mintlify Previews.

Project: meilisearch-documentation
Status: 🟢 Ready
Updated (UTC): Apr 14, 2026, 12:34 PM



coderabbitai bot commented Apr 14, 2026

Warning

Rate limit exceeded

@ManyTheFish has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 52 minutes and 27 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 52 minutes and 27 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 15326ad0-7b6f-486b-b281-52f4a1e63e7e

📥 Commits

Reviewing files that changed from the base of the PR and between eb0c090 and cebbfc7.

📒 Files selected for processing (1)
  • capabilities/full_text_search/advanced/debug_search_performance.mdx
📝 Walkthrough

Walkthrough

Updated documentation for performance debugging by renaming and reorganizing stage names in performanceDetails. Changes include replacing "wait for permit" with "wait in queue," renaming search operation stages (e.g., tokenize to tokenize query, keyword search to keyword ranking), and consolidating filter/resolve stages under "evaluate" terminology.

Changes

Cohort: Performance Debug Documentation
File(s): capabilities/full_text_search/advanced/debug_search_performance.mdx
Summary: Updated stage name mappings and explanatory text, replacing metric labels with a new hierarchical naming convention (e.g., wait in queue, evaluate query, keyword ranking), and adjusted bottleneck guidance examples to reference the updated stage identifiers.
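The renamed stages read naturally as a hierarchy. As a minimal sketch of how a caller might locate a bottleneck among them (assuming a flat mapping of stage name to duration, which is an invented shape, not the documented Meilisearch response format):

```python
# Hypothetical sketch of inspecting renamed performanceDetails stages.
# The stage names below come from this PR's summary ("wait in queue",
# "tokenize query", "evaluate query", "keyword ranking"); the dictionary
# shape and the millisecond values are invented for illustration only.
performance_details = {
    "wait in queue": 1.2,
    "search > tokenize query": 0.4,
    "search > evaluate query": 3.1,
    "search > keyword ranking": 7.8,
}

def slowest_stage(details):
    """Return the (stage, duration) pair with the largest duration."""
    return max(details.items(), key=lambda item: item[1])

stage, duration = slowest_stage(performance_details)
# Stage names are hierarchical, using ">" as a separator, so splitting
# on ">" recovers the pipeline path.
path = [part.strip() for part in stage.split(">")]
print(f"bottleneck: {stage} ({duration} ms), path: {path}")
```

With the sample values above, this reports `search > keyword ranking` as the bottleneck, matching the kind of stage the updated guidance tells readers to watch for.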

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~5 minutes

Poem

🐰 ✨
Names have changed in search's dance,
Stages shuffled, new expanse,
Queues renamed and ranking bright,
Metrics mapped with fresh insight!

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Description Check — ✅ Passed — Check skipped: CodeRabbit’s high-level summary is enabled.
Title Check — ✅ Passed — The title 'Adapt search performance trace documentation' directly and clearly summarizes the main change: updating documentation for renamed search performance trace steps in the engine.
Docstring Coverage — ✅ Passed — No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.



coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@capabilities/full_text_search/advanced/debug_search_performance.mdx`:
- Line 64: Update the sentence describing stage naming in the performanceDetails
section to use plain language: replace the abbreviation "e.g.," with "for
example" so it reads like "Stage names are hierarchical, using `>` as a
separator (for example `search > keyword ranking`). Locate the text that
references performanceDetails and the phrase "Stage names are hierarchical,
using `>` as a separator (e.g., `search > keyword ranking`)" and change only the
abbreviation to "for example" to match documentation style.
- Around line 188-189: The two dense bullets "High `search > evaluate query`"
and "High `search > keyword ranking`" should be rewritten into shorter, clearer
sentences: for `search > evaluate query` split the run-on into two sentences
that say a complex query with many words or synonyms builds an expensive query
tree and suggest simplifying the query or reducing synonyms; for `search >
keyword ranking` split into two sentences that state many ranking iterations
occur when offset/limit or result size is large and suggest reducing
offset/limit, constraining [searchable attributes], or lowering `maxTotalHits`;
update the exact bullet texts so each action recommendation is its own short
sentence and remove extra commas to improve scanability.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: f01d27a0-0fd3-48b1-a836-070db9a2579f

📥 Commits

Reviewing files that changed from the base of the PR and between 04e9e1f and eb0c090.

📒 Files selected for processing (1)
  • capabilities/full_text_search/advanced/debug_search_performance.mdx

## Understanding performance stages

-Each key in `performanceDetails` represents a stage of the search pipeline. Stage names are hierarchical, using `>` as a separator (e.g., `search > keyword search`).
+Each key in `performanceDetails` represents a stage of the search pipeline. Stage names are hierarchical, using `>` as a separator (e.g., `search > keyword ranking`).

⚠️ Potential issue | 🟡 Minor

Replace e.g., with for example on Line 64

Please switch to the plain-language form to match docs style.

Suggested edit
-Each key in `performanceDetails` represents a stage of the search pipeline. Stage names are hierarchical, using `>` as a separator (e.g., `search > keyword ranking`).
+Each key in `performanceDetails` represents a stage of the search pipeline. Stage names are hierarchical, using `>` as a separator (for example, `search > keyword ranking`).

As per coding guidelines, "Prefer 'for example' and 'that is' over 'e.g.' and 'i.e.' so the text stays accessible."

