
Commit 71efe48

rgambee authored and github-actions[bot] committed
Instruct CC agent to prefer objective metrics when ranking (#4874)
Lately, the CC agent has been prone to ranking by subjective 0-100 scores, even when there's an objective metric. Two examples are:

1. ranking AI startups by revenue growth
2. ranking AI analysts by AGI timelines

Instead of using revenue growth rate or predicted year for reaching AGI, CC would produce an arbitrary score column ranging from 0 to 100. Now both examples give me the expected output field.

I remember fixing this for Autocohort a few months ago. There might be other instructions from the Autocohort prompts that are worth copying into the system prompt or skill for CC. But I'll leave that for a future PR.

Sourced from commit ee9f0c910c0be47b50605fb716eb12a107edac4a
1 parent bca7d8f commit 71efe48

1 file changed

Lines changed: 3 additions & 3 deletions

File tree

everyrow-mcp/src/everyrow_mcp/tools.py

```diff
@@ -448,9 +448,9 @@ async def everyrow_rank(params: RankInput, ctx: EveryRowContext) -> list[TextCon
     criteria are qualitative.
 
     Examples:
-    - "Score this lead from 0 to 10 by likelihood to need data integration solutions"
-    - "Score this company out of 100 by AI/ML adoption maturity"
-    - "Score this candidate by fit for a senior engineering role, with 100 being the best"
+    - "Estimate this drug's peak annual sales in billions of dollars"
+    - "What is this country's 5-year GDP growth rate as a percentage?"
+    - "Score this candidate from 0 to 100 by fit for a senior engineering role"
 
     This function submits the task and returns immediately with a task_id and session_url.
```
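The contrast the commit message draws can be sketched in a few lines of Python: for each ranking task it mentions, an arbitrary subjective score prompt versus the objective-metric prompt the author expected. This is purely illustrative; the dictionary fields and the `prefers_objective_metric` helper are assumptions for the sketch, not part of the `everyrow_rank` API or its schema.

```python
# Illustrative only: contrasts the arbitrary 0-100 "score" column the agent
# used to emit with the objective metric the commit message says it should
# prefer. Field names here are hypothetical, not the everyrow_rank schema.
RANKING_TASKS = {
    "AI startups": {
        "subjective": "Score this startup from 0 to 100 by growth potential",
        "objective": "What is this startup's annual revenue growth rate as a percentage?",
    },
    "AI analysts": {
        "subjective": "Score this analyst from 0 to 100 by AGI optimism",
        "objective": "What year does this analyst predict AGI will be reached?",
    },
}

def prefers_objective_metric(criterion: str) -> bool:
    # Crude heuristic for the pattern the commit discourages: falling back
    # to an arbitrary 0-100 scale when a measurable quantity exists.
    return "0 to 100" not in criterion and "out of 100" not in criterion

for task in RANKING_TASKS.values():
    assert prefers_objective_metric(task["objective"])
    assert not prefers_objective_metric(task["subjective"])
```

Note that the updated docstring still keeps one 0-100 example ("fit for a senior engineering role"), since subjective scales remain appropriate when the criteria are genuinely qualitative.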
