Individual agents can be served and exposed as an OpenAI-compatible API endpoint with the :meth:`~sdialog.agents.Agent.serve` method (e.g. ``mentor.serve(port=1333)``); see :ref:`here <serving_agents>` for more details.

.. _ex-agent-tools:

Agent Tools (Function Calling)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can attach plain Python functions as tools. When the backend supports tool/function calling,
the agent can call them during response generation.

.. code-block:: python

    import sdialog
    from sdialog.agents import Agent

    sdialog.config.llm("openai:gpt-4.1")

    def get_weather(city: str) -> dict:
        """Return weather information for a city."""
        return {"city": city, "temperature_c": 21, "condition": "sunny"}

    assistant = Agent(
        name="WeatherAssistant",
        tools=[get_weather],
        system_prompt="Use tools when needed and answer concisely."
    )

    print(assistant("What's the weather in Geneva?"))

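Under the hood, tool-calling backends typically turn each plain function into a JSON-schema tool description built from its name, docstring, and type annotations. The sketch below illustrates that conversion with a hypothetical ``build_tool_schema`` helper (this is not part of the sdialog API, and the exact schema format used internally may differ):

```python
import inspect

def get_weather(city: str) -> dict:
    """Return weather information for a city."""
    return {"city": city, "temperature_c": 21, "condition": "sunny"}

# Hypothetical helper (not sdialog API): derive an OpenAI-style tool
# spec from the function's name, docstring, and type annotations.
def build_tool_schema(fn):
    properties = {}
    for name, param in inspect.signature(fn).parameters.items():
        # Map the Python annotation to a JSON Schema type (simplified).
        json_type = {str: "string", int: "integer", float: "number",
                     bool: "boolean"}.get(param.annotation, "object")
        properties[name] = {"type": json_type}
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": list(properties),
        },
    }

schema = build_tool_schema(get_weather)
```

Because the model only ever sees this derived metadata, tools with precise annotations and informative docstrings tend to get called more reliably.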
.. _ex-final-response-tool:

Direct Tool Output with ``@final_response_tool``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If a tool returns a pre-formatted result (e.g., a markdown table), you can mark it with
``@final_response_tool`` so the agent returns the tool output directly as the final response.

This is especially useful when the tool already produces exactly the text you want the user to see.
Without the decorator, the LLM would typically read the tool output and generate a new answer from it,
which may add extra wording, reformat the content, or spend unnecessary tokens reproducing a large block
of structured text. With ``@final_response_tool``, the tool output becomes the final answer directly.

.. code-block:: python

    from sdialog.agents import Agent, final_response_tool

    @final_response_tool
    def get_report_table(topic: str) -> str:
        """Return a pre-formatted markdown report table."""
        return "| Item | Value |\n|---|---|\n| example | 42 |"

    agent = Agent(tools=[get_report_table])

Notes:

- Non-empty tool output is returned directly as the agent's final answer.
- Empty tool output falls back to the regular tool flow (the LLM can continue and synthesize a response).
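
This fallback rule can be sketched in plain Python. The ``finalize`` helper below is purely illustrative (it is not part of the sdialog API): it returns non-empty tool output verbatim and signals the regular flow with ``None`` otherwise.

```python
def finalize(tool_output: str):
    """Illustrative only: mimic the @final_response_tool contract."""
    # Non-empty output short-circuits generation and is returned verbatim.
    if tool_output:
        return tool_output
    # Empty output: fall back to the regular tool flow (the LLM continues).
    return None

table = "| Item | Value |\n|---|---|\n| example | 42 |"
assert finalize(table) == table  # returned directly as the final answer
assert finalize("") is None      # the LLM synthesizes the response instead
```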

Few-Shot Learning with Example Dialogs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now let's explore one of SDialog's most powerful features! We can guide our dialogues by providing examples that show the system what style, structure, or format we want. This technique, called few-shot learning, works by supplying ``example_dialogs`` to generation components. These exemplar dialogs are injected into the system prompt to steer tone, task format, and conversation flow.
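
Mechanically, this amounts to serializing the exemplar dialogs into the system prompt before generation. The sketch below shows one plausible serialization; the helper name ``build_system_prompt`` and the exact template are assumptions for illustration, not sdialog's internal format.

```python
def build_system_prompt(base_prompt: str, example_dialogs: list) -> str:
    """Sketch: inject exemplar dialogs into a system prompt (assumed format)."""
    blocks = []
    for i, dialog in enumerate(example_dialogs, start=1):
        # Each dialog is a list of (speaker, text) turns.
        turns = "\n".join(f"{speaker}: {text}" for speaker, text in dialog)
        blocks.append(f"Example dialog {i}:\n{turns}")
    return base_prompt + "\n\n" + "\n\n".join(blocks)

prompt = build_system_prompt(
    "You are a helpful mentor.",
    [[("Mentor", "Hello! What would you like to learn?"),
      ("Student", "Binary search, please.")]],
)
```

Seeing a couple of concrete exemplars in the prompt is usually enough to steer the style and structure of the generated turns.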