Releases: charmbracelet/fantasy
v0.20.0
Changelog
New!
- 95dcd6e: feat(anthropic): add document support for pdf and text file content blocks (#197) (@Nic-vdwalt)
Other stuff
- 471520c: fix(anthropic/openai/google): wrap io.ErrUnexpectedEOF as ProviderError (#198) (@ljuti)
- b852eff: v0.20.0 (@andreynering)
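The PDF and text document support added in this release maps onto Anthropic's "document" content block. As a hedged sketch (the wire shape below follows Anthropic's Messages API; the struct names are illustrative, not fantasy's own types), a PDF attachment looks like this:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// source describes where the document bytes come from. For PDFs this is
// a base64 payload; plain text uses media_type "text/plain".
type source struct {
	Type      string `json:"type"`
	MediaType string `json:"media_type"`
	Data      string `json:"data"`
}

// documentBlock is Anthropic's "document" content block.
type documentBlock struct {
	Type   string `json:"type"` // always "document"
	Source source `json:"source"`
}

// pdfBlockJSON builds a document block for a (truncated) base64 PDF.
func pdfBlockJSON() string {
	b := documentBlock{
		Type: "document",
		Source: source{
			Type:      "base64",
			MediaType: "application/pdf",
			Data:      "JVBERi0xLjQK...", // base64-encoded PDF bytes, truncated here
		},
	}
	out, _ := json.Marshal(b)
	return string(out)
}

func main() {
	fmt.Println(pdfBlockJSON())
}
```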
Thoughts? Questions? We love hearing from you. Feel free to reach out on X, Discord, Slack, The Fediverse, Bluesky.
v0.19.0
Changelog
New!
- f60d4fe: feat(anthropic): add EffortXHigh constant (#204) (@DanielleMaywood)
Other stuff
- b2c4b61: v0.19.0 (@andreynering)
v0.18.0
Changelog
New!
- 61bc0b2: feat(agent): add the ability to stop a turn and end the agent loop (@meowgorithm)
Other stuff
- 402a113: v0.18.0 (@meowgorithm)
v0.17.2
Changelog
Fixed
- bbf53dc: fix(agent): buffer tool calls (@meowgorithm)
Other stuff
- ce26050: ci: fix govulncheck by updating to go 1.26.2 (#201) (@andreynering)
- d9b6308: v0.17.2 (@meowgorithm)
v0.17.1
OpenAI & Compat fixes
This release includes a couple of fixes for OpenAI and OpenAI-compatible providers.
Some missing pieces for OpenAI streaming were added.
We also added some missing constants for reasoning effort levels (none, minimal, xhigh) and made sure they are respected.
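The effort levels named above are plain string values on the wire. As an illustration only (the constant names below are hypothetical stand-ins, not fantasy's real identifiers), a string-typed enum with a validity check might look like this:

```go
package main

import "fmt"

// ReasoningEffort is a hypothetical string enum for the effort levels
// this release mentions (none, minimal, xhigh) alongside the usual
// low/medium/high values.
type ReasoningEffort string

const (
	EffortNone    ReasoningEffort = "none"
	EffortMinimal ReasoningEffort = "minimal"
	EffortLow     ReasoningEffort = "low"
	EffortMedium  ReasoningEffort = "medium"
	EffortHigh    ReasoningEffort = "high"
	EffortXHigh   ReasoningEffort = "xhigh"
)

// validEffort reports whether e is one of the recognized levels, the
// kind of check "made sure they are respected" implies.
func validEffort(e ReasoningEffort) bool {
	switch e {
	case EffortNone, EffortMinimal, EffortLow, EffortMedium, EffortHigh, EffortXHigh:
		return true
	}
	return false
}

func main() {
	fmt.Println(validEffort("xhigh"), validEffort("ultra"))
}
```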
Changelog
Other stuff
- 4620329: fix(providers/openai): emit source parts for Responses API streaming annotations (#187) (@kylecarbs)
- d13521a: chore(openai): add missing constants and checks for some thinking effort levels (#86) (@ibetitsmike and @andreynering)
v0.17.0
Anthropic Computer Use
Fantasy now supports Anthropic Computer Use, thanks to a contribution from @hugodutka from our friends at @coder.
Want to see how it works? Check out this example.
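Under the hood, Anthropic's computer-use beta declares a built-in "computer" tool by versioned type and display geometry rather than by a JSON schema. A hedged sketch of that tool declaration follows; the field names track Anthropic's documented shape, but treat the exact version string as an assumption that can change between betas:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// computerTool mirrors Anthropic's built-in computer-use tool declaration.
type computerTool struct {
	Type            string `json:"type"` // versioned tool type
	Name            string `json:"name"` // must be "computer"
	DisplayWidthPx  int    `json:"display_width_px"`
	DisplayHeightPx int    `json:"display_height_px"`
}

// computerToolJSON builds the tool entry sent in the request's tools array.
func computerToolJSON() string {
	t := computerTool{
		Type:            "computer_20250124", // assumed beta version string
		Name:            "computer",
		DisplayWidthPx:  1024,
		DisplayHeightPx: 768,
	}
	out, _ := json.Marshal(t)
	return string(out)
}

func main() {
	fmt.Println(computerToolJSON())
}
```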
Changelog
New!
- 0cab8bf: feat: anthropic computer use (#185) (@hugodutka)
v0.16.0
Friday patch
Hey all. Here's a small list of changes:
- Added new bedrock.WithBaseURL
- Fixed errors related to thinking replays with OpenAI when using store: false (the default)
- Fixed an issue with tool calls in GitHub Copilot
- Improved compatibility of tool calls with Ollama
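bedrock.WithBaseURL follows Go's functional-options pattern. As a minimal, self-contained sketch of how such an option typically works (the struct and constructor below are hypothetical stand-ins, not fantasy's real API):

```go
package main

import "fmt"

// config holds client settings; baseURL defaults to the regional
// Bedrock runtime endpoint and can be overridden by an option.
type config struct {
	baseURL string
}

// Option mutates a config; this is the standard functional-options shape.
type Option func(*config)

// WithBaseURL overrides the endpoint the client talks to, e.g. to point
// at a Bedrock-compatible gateway instead of the default AWS endpoint.
func WithBaseURL(u string) Option {
	return func(c *config) { c.baseURL = u }
}

// newClient applies options on top of the defaults, so later options win.
func newClient(opts ...Option) *config {
	c := &config{baseURL: "https://bedrock-runtime.us-east-1.amazonaws.com"}
	for _, o := range opts {
		o(c)
	}
	return c
}

func main() {
	c := newClient(WithBaseURL("https://bedrock.example.internal"))
	fmt.Println(c.baseURL)
}
```

Applying options after the defaults is also why the related fix (8924b01) matters: the override must run after bedrock.WithConfig, or the config would clobber it.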
Changelog
New!
- 277f9fb: feat(bedrock): add WithBaseURL option (@aleksclark)
Fixed
- 8924b01: fix(bedrock): apply base URL override after bedrock.WithConfig (@aleksclark)
- aa7e82f: fix(bedrock): don't default baseURL to anthropic API when using bedrock (@aleksclark)
- a5bee40: fix(openai): relax tool call validation for ollama compatibility (#113) (@Gustave-241021)
- 238e34d: fix: address tool calls with empty arguments in copilot (#156) (@mavaa)
Other stuff
- ee77281: fix(providers/openai): skip ephemeral replay items (@ibetitsmike)
- 236fedf: fix(providertests/testdata): update summary thinking fixtures (@ibetitsmike)
v0.15.1
Fixes
A fix was made for the store: true option released yesterday. This ensures you can use it without errors. Thanks, @kylecarbs!
Also, we're now using an internal fork of the OpenAI SDK because of a fix for SSE events. For more information, see this PR. Hopefully the fix will be merged upstream soon so we can target upstream again. Known affected providers are Avian and OpenRouter (when used with a custom User-Agent header).
Changelog
Fixed
- dff62fa: fix(openai): skip reasoning items in Responses API replay (#181) (@kylecarbs)
- 6bb474f: fix: migrate the openai sdk to our internal fork (@andreynering)
Other stuff
- c3f0da5: test(openrouter): simplify list of providers and models to test (@andreynering)
v0.15.0
Changelog
New!
- f910b4c: feat(agent): allow empty prompt when messages exist (#179) (@andreynering)
- 09d2b74: feat: support text/* files (#100) (@caarlos0)
Fixed
- d749d13: fix(anthropic): ToolChoiceNone should send tool_choice:{type:"none"} to API (#178) (@andreynering)
- 1568c97: fix(anthropic): anthropic with vertex works with service account json keys (#157) (@Cali0707)
Other stuff
- d120cc3: v0.15.0 (@andreynering)
v0.14.0
Better OpenAI prompt caching + fixed token reporting
This release has two changes for OpenAI:
Better prompt caching
This release adds proper support for OpenAI's Responses API store: true and previous_response_id fields. With this setting, you can ask OpenAI to store the conversation as it happens and continue from there by passing previous_response_id in the next request. This means a faster agent and lower token usage.
You can read more about it here:
- https://developers.openai.com/cookbook/examples/prompt_caching_201
- https://developers.openai.com/api/docs/guides/migrate-to-responses
Fixed OpenAI token reporting
We fixed the usage.InputTokens field, which was reporting a higher value than expected: cached tokens need to be subtracted from that value, and as of this release they are, so the number is now correct.
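The fix itself is simple arithmetic: cached tokens were already included in the raw input-token figure, so reporting both double-counted them. A sketch of the correction (function name is illustrative):

```go
package main

import "fmt"

// uncachedInputTokens removes the cached portion from the raw input-token
// count, leaving only the tokens actually processed as fresh input.
func uncachedInputTokens(rawInput, cached int64) int64 {
	return rawInput - cached
}

func main() {
	var rawInput, cached int64 = 12000, 9000
	fmt.Println(uncachedInputTokens(rawInput, cached)) // 3000 fresh input tokens
}
```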
Keep fantasizing 👻
Charm
Changelog
New!
- 0c8663f: feat(openai): add Responses API store, previous_response_id, and response.id support (#175) (@ibetitsmike)
Fixed
- 22c3e9a: fix(openai): subtract cached tokens from input tokens to avoid double counting (#176) (@andreynering)
