Commit c7c248c

Add agent tools and final response tool examples to documentation
1 parent 1515127 commit c7c248c

1 file changed

Lines changed: 53 additions & 0 deletions

File tree

docs/examples/index.rst

@@ -90,6 +90,59 @@ Let's start with something fun and straightforward—creating a simple dialogue
Individual agents can be served and exposed as an OpenAI-compatible API endpoint with the :meth:`~sdialog.agents.Agent.serve` method (e.g. ``mentor.serve(port=1333)``); see :ref:`here <serving_agents>` for more details.

.. _ex-agent-tools:

Agent Tools (Function Calling)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can attach plain Python functions as tools. When the backend supports tool/function calling,
the agent can call them during response generation.

.. code-block:: python

    import sdialog
    from sdialog.agents import Agent

    sdialog.config.llm("openai:gpt-4.1")

    def get_weather(city: str) -> dict:
        """Return weather information for a city."""
        return {"city": city, "temperature_c": 21, "condition": "sunny"}

    assistant = Agent(
        name="WeatherAssistant",
        tools=[get_weather],
        system_prompt="Use tools when needed and answer concisely."
    )

    print(assistant("What's the weather in Geneva?"))
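Conceptually, tool calling is a loop: the model either answers or requests a tool, the framework dispatches the requested function by name, and the result is fed back until a final answer emerges. The following framework-independent sketch illustrates that loop only; it is not sdialog's implementation, and ``llm_step``, ``run_agent``, and the message shapes are hypothetical stand-ins.

.. code-block:: python

    import json

    def get_weather(city: str) -> dict:
        """Return weather information for a city (stub data)."""
        return {"city": city, "temperature_c": 21, "condition": "sunny"}

    TOOLS = {"get_weather": get_weather}  # dispatch table: tool name -> function

    def llm_step(messages):
        """Stand-in for a real LLM call: request the tool once, then answer."""
        if not any(m["role"] == "tool" for m in messages):
            return {"tool_call": {"name": "get_weather",
                                  "arguments": {"city": "Geneva"}}}
        weather = json.loads(messages[-1]["content"])
        return {"content": f"It is {weather['condition']} and "
                           f"{weather['temperature_c']} C in {weather['city']}."}

    def run_agent(user_message: str) -> str:
        messages = [{"role": "user", "content": user_message}]
        while True:
            reply = llm_step(messages)
            call = reply.get("tool_call")
            if call is None:
                return reply["content"]  # no tool requested: final answer
            # Dispatch the requested tool by name and feed the result back.
            result = TOOLS[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "content": json.dumps(result)})

    print(run_agent("What's the weather in Geneva?"))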
.. _ex-final-response-tool:

Direct Tool Output with ``@final_response_tool``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If a tool returns a pre-formatted result (e.g., a Markdown table), you can mark it with
``@final_response_tool`` so the agent returns the tool output directly as the final response.

This is especially useful when the tool already produces exactly the text you want the user to see.
Without the decorator, the LLM would typically read the tool output and generate a new answer from it,
which may add extra wording, reformat the content, or spend unnecessary tokens reproducing a large block
of structured text. With ``@final_response_tool``, the tool output becomes the final answer directly.

.. code-block:: python

    from sdialog.agents import Agent, final_response_tool

    @final_response_tool
    def get_report_table(topic: str) -> str:
        return "| Item | Value |\n|---|---|\n| example | 42 |"

    agent = Agent(tools=[get_report_table])

Notes:

- Non-empty tool output is returned directly as the agent's final answer.
- Empty tool output falls back to the regular tool flow (the LLM can continue and synthesize a response).
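The pattern behind the decorator can be sketched in a few lines: the decorator tags the function, and the dispatch step short-circuits when a tagged tool returns non-empty text. This is an illustrative sketch of the mechanism, not sdialog's source; the ``dispatch`` helper is hypothetical.

.. code-block:: python

    def final_response_tool(func):
        """Mark a tool so its non-empty output is returned verbatim."""
        func.is_final_response = True
        return func

    @final_response_tool
    def get_report_table(topic: str) -> str:
        return "| Item | Value |\n|---|---|\n| example | 42 |"

    def dispatch(tool, **arguments) -> str:
        """Run a tool; short-circuit if it is marked final and returned text."""
        output = tool(**arguments)
        if getattr(tool, "is_final_response", False) and output:
            return output  # becomes the agent's final answer as-is
        # Otherwise the output would go back to the LLM for a synthesized reply.
        return f"(LLM continues with tool output: {output!r})"

    print(dispatch(get_report_table, topic="example"))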
Few-Shot Learning with Example Dialogs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now let's explore one of SDialog's most powerful features! We can guide our dialogues by providing examples that show the system what style, structure, or format we want. This technique, called few-shot learning, works by supplying ``example_dialogs`` to generation components. These exemplar dialogs are injected into the system prompt to steer tone, task format, and conversation flow.
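The injection step amounts to prompt assembly: the exemplars are rendered as text and prepended to the system prompt. The exact wording sdialog uses internally is not shown here, so the ``build_system_prompt`` helper and its format below are illustrative only.

.. code-block:: python

    def build_system_prompt(persona: str, example_dialogs: list) -> str:
        """Prepend exemplar dialogs to a system prompt (illustrative format)."""
        parts = [persona, "Follow the style of these example dialogues:"]
        for i, dialog in enumerate(example_dialogs, 1):
            parts.append(f"Example {i}:\n" + "\n".join(dialog))
        return "\n\n".join(parts)

    prompt = build_system_prompt(
        "You are a friendly tutor.",
        [["User: Hi!", "Tutor: Hello! Ready to learn?"]],
    )
    print(prompt)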
