
Commit b04d192

fix: remove global add_function_to_prompt — breaks native tool calling
Setting `litellm.add_function_to_prompt = True` globally forces ALL models through text-based tool calling, even models that support native function calling (Groq, OpenAI, Anthropic). When this flag is set, LiteLLM injects tool definitions into the system prompt as text. Models then output XML-style function tags (`<function=name {...} </function>`) instead of proper `tool_calls` JSON. Providers like Groq reject this with `tool_use_failed`.

Proof: a direct `litellm.completion()` call without this flag returns proper `tool_calls` JSON with `finish_reason: "tool_calls"`. With the flag, the same model fails.

The fix removes the global default. Models that need text-based tool calling can opt in per instance:

    LiteLlm(model="ollama/qwen2", add_function_to_prompt=True)

Models with native tool calling work without any flag:

    LiteLlm(model="groq/llama-3.3-70b-versatile")

Fixes: kagent-dev/kagent#1532
Related: huggingface/smolagents#1119, BerriAI/litellm#11001
1 parent 30b904e commit b04d192

1 file changed: src/google/adk/models/lite_llm.py (6 additions, 1 deletion)
```diff
@@ -185,7 +185,12 @@ def _ensure_litellm_imported() -> None:

   import litellm as litellm_module

-  litellm_module.add_function_to_prompt = True
+  # Do NOT set litellm.add_function_to_prompt = True globally.
+  # That flag injects tool definitions into the system prompt as text,
+  # which breaks models that support native tool calling (Groq, OpenAI,
+  # Anthropic). Those models then output XML-style function tags instead
+  # of proper tool_calls JSON, causing "tool_use_failed" errors.
+  # See: https://github.com/google/adk-python/issues/XXXX

   globals()["litellm"] = litellm_module
   for symbol in _LITELLM_GLOBAL_SYMBOLS:
```
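
The failure mode described above can be sketched offline. This is an illustrative example, not code from the repo: the response shapes, the `get_weather` tool name, and the helper function are hypothetical, but they mirror the two output styles the commit message contrasts. A text-prompted model emits an XML-style tag inside plain `content`, while a native-tool-calling model returns a structured `tool_calls` entry with machine-readable JSON arguments.

```python
import json
import re

# Hypothetical response content from a model that received tool
# definitions as injected prompt text (add_function_to_prompt=True):
TEXT_STYLE_CONTENT = '<function=get_weather {"city": "Paris"}</function>'

# Hypothetical response from a model with native tool calling: the
# arguments arrive as a JSON string in a structured tool_calls entry.
NATIVE_STYLE_RESPONSE = {
    "finish_reason": "tool_calls",
    "tool_calls": [
        {"function": {"name": "get_weather",
                      "arguments": '{"city": "Paris"}'}}
    ],
}

def looks_like_text_tool_call(content: str) -> bool:
    """Detect the XML-style function tag a text-prompted model emits."""
    return re.search(r"<function=\w+", content) is not None

# The text-style output is just content, not a structured tool call --
# providers such as Groq reject it with "tool_use_failed":
assert looks_like_text_tool_call(TEXT_STYLE_CONTENT)

# The native-style response parses cleanly without any tag scraping:
args = json.loads(
    NATIVE_STYLE_RESPONSE["tool_calls"][0]["function"]["arguments"])
print(args["city"])  # -> Paris
```

This is why the fix keeps the flag off by default: models that can return the structured form should, and only models that cannot (e.g. some Ollama-served models) opt in to the text-based fallback per instance.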
