fix: remove global add_function_to_prompt — breaks native tool calling (Groq, OpenAI)#4985
Open
vitas wants to merge 2 commits into google:main from
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
Collaborator
Response from ADK Triaging Agent Hello @vitas, thank you for creating this PR! Before we can merge this, could you please:
This information will help reviewers review your PR more efficiently. Thanks!
vitas force-pushed from e385ebd to dfb6eba
Setting `litellm.add_function_to_prompt = True` globally forces ALL
models through text-based tool calling, even models that support
native function calling (Groq, OpenAI, Anthropic).
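The problematic pattern can be sketched as follows; a `SimpleNamespace` stands in for the real `litellm` module so the sketch is self-contained:

```python
from types import SimpleNamespace

# Stand-in for the litellm module (hypothetical; avoids importing litellm).
litellm = SimpleNamespace(add_function_to_prompt=False)

def _ensure_litellm_imported():
    # The pattern this PR removes: a module-level side effect at import
    # time that flips the flag for every model in the process.
    litellm.add_function_to_prompt = True

_ensure_litellm_imported()
print(litellm.add_function_to_prompt)  # True -> all models take the text path
```

Because the assignment is process-wide, no later per-model configuration can undo it for models that should use native tool calling.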
When this flag is set, LiteLLM injects tool definitions into the
system prompt as text. Models then output XML-style function tags
(`<function=name {...} </function>`) instead of proper `tool_calls`
JSON. Providers like Groq reject this with `tool_use_failed`.
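To make the two failure shapes concrete, here is a small illustrative helper (hypothetical, not part of ADK or LiteLLM) that classifies which tool-calling path a response message took:

```python
import json
import re

def classify_tool_response(message: dict) -> str:
    """Classify a chat completion message by tool-calling style (sketch)."""
    # Native path: the provider returns structured tool_calls entries.
    if message.get("tool_calls"):
        return "native"
    # Text-based path: the model emits XML-style function tags in content.
    if re.search(r"<function=\w+", message.get("content") or ""):
        return "text"
    return "none"

native = {"tool_calls": [{"function": {"name": "get_weather",
                                       "arguments": json.dumps({"city": "Oslo"})}}]}
textual = {"content": '<function=get_weather {"city": "Oslo"} </function>'}
print(classify_tool_response(native))   # native
print(classify_tool_response(textual))  # text
```

Providers with strict native tool-call validation (like Groq) reject the second shape outright rather than parsing the tags.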
Proof: Direct `litellm.completion()` without this flag returns proper
`tool_calls` JSON with `finish_reason: "tool_calls"`. With the flag,
the same model fails.
The fix removes the global default. Models that need text-based tool
calling can opt in per-instance:

    LiteLlm(model="ollama/qwen2", add_function_to_prompt=True)

Models with native tool calling work without any flag:

    LiteLlm(model="groq/llama-3.3-70b-versatile")
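For context, "native tool calling" means the tool definition travels as an OpenAI-style JSON schema in the `tools` parameter rather than as prompt text. A sketch of such a schema (the `get_weather` tool is a made-up example):

```python
# OpenAI-style tool schema; providers with native function calling accept
# this via the `tools` parameter of the completion call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# With add_function_to_prompt=True, LiteLLM serializes schemas like this
# into the system prompt as text instead, which native providers reject.
print(tools[0]["function"]["name"])  # get_weather
```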
Fixes: kagent-dev/kagent#1532
Related: huggingface/smolagents#1119, BerriAI/litellm#11001
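The per-instance opt-in described above can be sketched as follows; `FakeLiteLlm` is a simplified stand-in, not ADK's real `LiteLlm` class:

```python
class FakeLiteLlm:
    """Simplified stand-in (hypothetical, not ADK's real class) showing
    how extra constructor kwargs flow into each completion call."""

    def __init__(self, model: str, **kwargs):
        self.model = model
        self._additional_args = kwargs  # e.g. add_function_to_prompt=True

    def completion_args(self, messages: list) -> dict:
        args = {"model": self.model, "messages": messages}
        args.update(self._additional_args)  # per-instance opt-in flows through
        return args

ollama = FakeLiteLlm("ollama/qwen2", add_function_to_prompt=True)
groq = FakeLiteLlm("groq/llama-3.3-70b-versatile")
print(ollama.completion_args([])["add_function_to_prompt"])   # True
print("add_function_to_prompt" in groq.completion_args([]))   # False
```

Only instances that explicitly pass the flag take the text-based path; all others are unaffected.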
vitas force-pushed from dfb6eba to b04d192
vitas added a commit to vitas/evidra-kagent-bench that referenced this pull request on Mar 24, 2026:
Install google-adk from vitas/adk-python@fix/groq-tool-calling which removes the global add_function_to_prompt=True that broke native tool calling for Groq/OpenAI/Anthropic. Verified: first LLM call now uses proper tool_calls JSON (9563 tokens used). Second call hits Groq free tier rate limit (12K TPM) but the tool calling format is correct. PR: google/adk-python#4985 Signed-off-by: Vitas <vitas@users.noreply.github.com>
Problem
`_ensure_litellm_imported()` sets `litellm.add_function_to_prompt = True` globally at import time (line 188). This forces ALL models through LiteLLM's text-based tool calling path: tool definitions are injected into the system prompt as text instead of being passed as the `tools` parameter. Models that support native function calling (Groq, OpenAI, Anthropic) then output XML-style function tags instead of proper `tool_calls` JSON. Groq rejects with:

    {"error":{"message":"Failed to call a function. See 'failed_generation'","code":"tool_use_failed"}}

Proof
Direct `litellm.completion()` inside the same environment without the global flag returns proper `tool_calls` JSON with `finish_reason: "tool_calls"`.

Fix
Remove the global `litellm.add_function_to_prompt = True`. Models that need text-based tool calling (e.g., some Ollama models without native support) can opt in per-instance. This kwarg flows through `_additional_args` into `acompletion()`, so per-model opt-in already works.

Impact
Models that need text-based tool calling must now set `add_function_to_prompt=True` explicitly on their `LiteLlm` instance.

References