
fix: remove global add_function_to_prompt — breaks native tool calling (Groq, OpenAI)#4985

Open
vitas wants to merge 2 commits into google:main from vitas:fix/groq-tool-calling

Conversation

@vitas

@vitas vitas commented Mar 24, 2026

Problem

_ensure_litellm_imported() sets litellm.add_function_to_prompt = True globally at import time (line 188). This forces ALL models through LiteLLM's text-based tool calling path — tool definitions are injected into the system prompt as text instead of being passed as the tools parameter.
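The failure mode is a classic leaky-global pattern. The following is an illustrative sketch (not ADK or LiteLLM source; `FakeLiteLLM` and the `mode` return values are invented stand-ins) of why mutating a module-level flag at import time affects every subsequent caller of the library:

```python
# Sketch only: a stand-in for the litellm module, showing how one global
# flag set at import time changes behavior for ALL callers.

class FakeLiteLLM:
    """Stand-in for the litellm module: one global flag, one completion()."""
    add_function_to_prompt = False  # library default

    @classmethod
    def completion(cls, model, tools=None, **kwargs):
        # A per-call kwarg overrides the global; otherwise the global wins.
        flag = kwargs.get("add_function_to_prompt", cls.add_function_to_prompt)
        if flag:
            return {"mode": "text-prompt"}   # tools injected as prompt text
        return {"mode": "native-tools"}      # tools passed as the tools param

# Direct call before anyone touches the global: native tool-calling path.
assert FakeLiteLLM.completion("groq/llama-3.3-70b-versatile")["mode"] == "native-tools"

# Some other module flips the global at import time (the bug this PR fixes)...
FakeLiteLLM.add_function_to_prompt = True

# ...and now every caller gets the text-prompt path, even native-capable models.
assert FakeLiteLLM.completion("groq/llama-3.3-70b-versatile")["mode"] == "text-prompt"
```

The key point: because the flag lives on the module rather than on an instance or a call, there is no way for an unrelated caller to get the default behavior back without knowing the flag was flipped.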

Models that support native function calling (Groq, OpenAI, Anthropic) then output XML-style function tags instead of proper tool_calls JSON:

<function=run_command {"command": "kubectl get pods -n demo"} </function>

Groq rejects with:

{"error":{"message":"Failed to call a function. See 'failed_generation'","code":"tool_use_failed"}}

Proof

Direct litellm.completion() inside the same environment without the global flag returns proper tool_calls JSON with finish_reason: "tool_calls":

# Works (direct LiteLLM, no global flag):
litellm.completion(model="groq/llama-3.3-70b-versatile", tools=[...])
# → finish_reason: "tool_calls", proper JSON ✓

# Fails (via ADK, which sets the global flag):
Agent(model=LiteLlm(model="groq/llama-3.3-70b-versatile"), tools=[...])
# → XML tags, Groq 400 ✗

Fix

Remove the global litellm.add_function_to_prompt = True. Models that need text-based tool calling (e.g., some Ollama models without native support) can opt in per-instance:

LiteLlm(model="ollama/qwen2", add_function_to_prompt=True)

This kwarg flows through _additional_args into acompletion(), so per-model opt-in already works.
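To make the flow concrete, here is a minimal sketch of the per-instance mechanism described above. Names are simplified (`LiteLlmSketch`, `completion`, `generate` are illustrative stand-ins, not the actual ADK classes); the real wrapper forwards `_additional_args` into `acompletion()`:

```python
# Sketch only: per-instance kwargs flow into each completion call,
# with no library-level global involved.

def completion(model, tools=None, add_function_to_prompt=False, **kwargs):
    """Stand-in for litellm.acompletion(): honors a per-call flag only."""
    return "text-prompt" if add_function_to_prompt else "native-tools"

class LiteLlmSketch:
    """Stand-in for ADK's LiteLlm wrapper: extra kwargs are stored per instance."""
    def __init__(self, model, **extra):
        self.model = model
        self._additional_args = extra  # scoped to this instance, not global

    def generate(self, tools=None):
        # Forward the stored per-instance kwargs into every completion call.
        return completion(self.model, tools=tools, **self._additional_args)

# Native-capable model: no flag, native tool calling.
groq = LiteLlmSketch("groq/llama-3.3-70b-versatile")
assert groq.generate(tools=[]) == "native-tools"

# A model without native support opts in explicitly, affecting only itself.
ollama = LiteLlmSketch("ollama/qwen2", add_function_to_prompt=True)
assert ollama.generate(tools=[]) == "text-prompt"
assert groq.generate(tools=[]) == "native-tools"  # unaffected
```

The opt-in is thus scoped to exactly the model instances that need it, which is why removing the global default is safe.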

Impact

  • Fixes: All models with native tool calling (Groq, OpenAI, Anthropic) through LiteLlm
  • No regression for models without native tool calling: They can pass add_function_to_prompt=True explicitly
  • One line removed, comment added explaining why

References

@google-cla

google-cla bot commented Mar 24, 2026

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@adk-bot adk-bot added the models [Component] Issues related to model support label Mar 24, 2026
@adk-bot
Collaborator

adk-bot commented Mar 24, 2026

Response from ADK Triaging Agent

Hello @vitas, thank you for creating this PR!

Before we can merge this, could you please:

  1. Sign the Contributor License Agreement (CLA). You can find more information at https://cla.developers.google.com/.
  2. Associate a GitHub issue with this PR. If there is no existing issue, could you please create one?
  3. Add a testing plan section to your PR description to describe how you tested these changes.

This information will help reviewers to review your PR more efficiently. Thanks!

@vitas vitas force-pushed the fix/groq-tool-calling branch 2 times, most recently from e385ebd to dfb6eba on March 24, 2026 at 18:48
Setting `litellm.add_function_to_prompt = True` globally forces ALL
models through text-based tool calling, even models that support
native function calling (Groq, OpenAI, Anthropic).

When this flag is set, LiteLLM injects tool definitions into the
system prompt as text. Models then output XML-style function tags
(`<function=name {...} </function>`) instead of proper `tool_calls`
JSON. Providers like Groq reject this with `tool_use_failed`.

Proof: Direct `litellm.completion()` without this flag returns proper
`tool_calls` JSON with `finish_reason: "tool_calls"`. With the flag,
the same model fails.

The fix removes the global default. Models that need text-based tool
calling can opt in per-instance:

    LiteLlm(model="ollama/qwen2", add_function_to_prompt=True)

Models with native tool calling work without any flag:

    LiteLlm(model="groq/llama-3.3-70b-versatile")

Fixes: kagent-dev/kagent#1532
Related: huggingface/smolagents#1119, BerriAI/litellm#11001
@vitas vitas force-pushed the fix/groq-tool-calling branch from dfb6eba to b04d192 on March 24, 2026 at 18:51
vitas added a commit to vitas/evidra-kagent-bench that referenced this pull request Mar 24, 2026
Install google-adk from vitas/adk-python@fix/groq-tool-calling
which removes the global add_function_to_prompt=True that broke
native tool calling for Groq/OpenAI/Anthropic.

Verified: first LLM call now uses proper tool_calls JSON (9563
tokens used). Second call hits Groq free tier rate limit (12K TPM)
but the tool calling format is correct.

PR: google/adk-python#4985

Signed-off-by: Vitas <vitas@users.noreply.github.com>
@rohityan rohityan self-assigned this Mar 26, 2026
@rohityan rohityan added needs review [Status] The PR/issue is awaiting review from the maintainer and removed needs review [Status] The PR/issue is awaiting review from the maintainer labels Mar 26, 2026