Typed async Rust client for Anthropic's Messages API with structured requests, SSE streaming, and tool-use payloads.
```text
Prompt / image blocks / tool schema
                     |
                     v
+-------------------------------------------+
|           MessagesRequestBuilder          |
| model | max_tokens | temperature | tools  |
+---------------------------+---------------+
                     | build()
                     v
+-------------------------------------------+
|           Client / ClientBuilder          |
|    env | headers | timeout | backoff      |
+---------------+-------------------+-------+
        |                           |
        | messages()                | messages_stream()
        v                           v
 POST /v1/messages          POST /v1/messages + stream=true
        |                           |
        v                           v
+-------------------------+  +-----------------------------+
|    MessagesResponse     |  |    MessagesStreamEvent      |
| content | usage | stop  |  |    start | delta | stop     |
+-------------------------+  +-----------------------------+
```
429 responses are retried with exponential backoff before surfacing an error.
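As a rough illustration, the retry schedule is capped exponential growth between attempts. The base delay, factor, and cap below are hypothetical placeholders, not the crate's actual defaults; the real policy is configured via `ClientBuilder::backoff(...)`:

```rust
use std::time::Duration;

/// Sketch of a capped exponential backoff schedule.
/// base 500ms, factor 2, cap 8s are illustrative values only.
fn retry_delay(attempt: u32) -> Duration {
    let base_ms: u64 = 500;
    // Double the delay on each attempt, saturating instead of overflowing.
    let delay_ms = base_ms.saturating_mul(2u64.saturating_pow(attempt));
    // Never wait longer than the cap.
    Duration::from_millis(delay_ms.min(8_000))
}

fn main() {
    for attempt in 0..5 {
        println!("attempt {attempt}: wait {:?}", retry_delay(attempt));
    }
}
```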
- Clone the repo.

  ```sh
  git clone https://github.com/AbdelStark/anthropic-rs.git
  cd anthropic-rs
  ```

- Export your API key.

  ```sh
  export ANTHROPIC_API_KEY="sk-ant-..."
  ```

- Run the basic example.

  ```sh
  cargo run --manifest-path examples/basic-messages/Cargo.toml
  ```

Expected output:

```text
messages response:
MessagesResponse {
    id: "msg_...",
    message_type: "message",
    role: Assistant,
    content: [
        Text {
            text: "Hello ...",
        },
    ],
    usage: Usage {
        input_tokens: ...,
        output_tokens: ...,
    },
}
```
```rust
use anthropic::types::{Message, MessagesRequestBuilder};
use anthropic::Client;

let client = Client::from_env()?;
let request = MessagesRequestBuilder::new(
    "claude-3-5-sonnet-20240620",
    vec![Message::user("Summarize this diff in one sentence.")],
    256,
)
.temperature(0.2)
.build()?;
let response = client.messages(request).await?;
println!("{}", response.text());
```

- `Message::user` / `Message::assistant` wrap a single text block; no enum literals required.
- `MessagesResponse::text()`, `first_text()`, and `tool_uses()` pull structured data back out without matching on `ContentBlock` by hand.
- `temperature(0.2)` keeps output tighter and less varied; the builder also validates non-empty `model`/`messages` and non-zero `max_tokens`.
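The helper behavior can be sketched in plain Rust. The types below are simplified stand-ins for illustration, not the crate's actual definitions:

```rust
/// Hypothetical stand-in for `ContentBlock`: just enough shape to show
/// how a `text()`-style helper concatenates text blocks and skips the rest.
enum Block {
    Text(String),
    ToolUse { name: String },
}

/// Join every text block into one string, ignoring non-text blocks.
fn text(content: &[Block]) -> String {
    content
        .iter()
        .filter_map(|b| match b {
            Block::Text(t) => Some(t.as_str()),
            _ => None,
        })
        .collect()
}

fn main() {
    let content = vec![
        Block::Text("Hello ".into()),
        Block::ToolUse { name: "get_weather".into() },
        Block::Text("world".into()),
    ];
    assert_eq!(text(&content), "Hello world");
}
```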
```rust
use anthropic::stream::StreamAccumulator;
use anthropic::types::{ContentBlockDelta, Message, MessagesRequestBuilder, MessagesStreamEvent};
use anthropic::Client;
use tokio_stream::StreamExt;

let client = Client::from_env()?;
let request = MessagesRequestBuilder::new(
    "claude-3-5-sonnet-20240620",
    vec![Message::user("Stream a short release note.")],
    128,
)
.build()?;

let mut stream = client.messages_stream(request).await?;
let mut accumulator = StreamAccumulator::new();
while let Some(event) = stream.next().await {
    let event = event?;
    if let MessagesStreamEvent::ContentBlockDelta {
        delta: ContentBlockDelta::TextDelta { text },
        ..
    } = &event
    {
        print!("{text}");
    }
    accumulator.push(event)?;
}
let response = accumulator.finish()?;
```

- `StreamAccumulator` folds every `MessagesStreamEvent` into a `MessagesResponse`, handling text, tool-use `input_json_delta` chunks, extended-thinking `thinking_delta`/`signature_delta`, and usage / stop-reason updates.
- Prefer `anthropic::stream::collect(stream).await` when you just want the final response without any per-event processing.
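At its core, the accumulation step is a fold over the event stream. This sketch uses a hypothetical stand-in enum (not the crate's `MessagesStreamEvent`) to show the idea:

```rust
/// Simplified stand-in for a stream event: either a text delta or a stop marker.
enum Event {
    TextDelta(String),
    Stop,
}

/// Fold a sequence of events into the final text, the way an
/// accumulator folds deltas into a full response.
fn accumulate(events: Vec<Event>) -> String {
    let mut text = String::new();
    for event in events {
        if let Event::TextDelta(chunk) = event {
            text.push_str(&chunk);
        }
    }
    text
}

fn main() {
    let events = vec![
        Event::TextDelta("Hello".into()),
        Event::TextDelta(", world".into()),
        Event::Stop,
    ];
    assert_eq!(accumulate(events), "Hello, world");
}
```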
```rust
use anthropic::tool_loop::{run_tool_loop, ToolLoopConfig, ToolOutput};
use anthropic::types::{Message, MessagesRequestBuilder, Tool, ToolChoice};
use anthropic::Client;
use serde_json::json;

let client = Client::from_env()?;
let request = MessagesRequestBuilder::new(
    "claude-3-5-sonnet-20240620",
    vec![Message::user("What's the weather in Paris?")],
    512,
)
.tools(vec![Tool::new(
    "get_weather",
    "Fetch current weather for a city",
    json!({
        "type": "object",
        "properties": { "city": { "type": "string" } },
        "required": ["city"]
    }),
)])
.tool_choice(ToolChoice::Auto)
.build()?;

let response = run_tool_loop(
    &client,
    request,
    |name, input| async move {
        assert_eq!(name, "get_weather");
        let city = input["city"].as_str().unwrap_or("");
        Ok(ToolOutput::ok(format!("{city}: 22C and sunny")))
    },
    ToolLoopConfig::default(),
)
.await?;
println!("{}", response.text());
```

- `run_tool_loop` handles the entire call-execute-reply cycle: it clones the original request each iteration (keeping `tools`/`tool_choice`/`system` intact), collects every `tool_use` block from the assistant turn, awaits your executor on each one in order, appends the matching `tool_result` blocks, and stops once the model returns a tool-free response or `max_iterations` is hit.
- Return `ToolOutput::error("...")` to surface a tool-level failure to the model; return `Err(AnthropicError)` to abort the loop instead.
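The control flow of such a loop can be sketched in plain Rust. The `Reply` type and both closures below are hypothetical stand-ins (the real loop works on `MessagesRequest`/`MessagesResponse`), but the shape is the same: call the model, execute any tool calls, feed results back, and stop on a tool-free reply or an exhausted budget:

```rust
/// Hypothetical stand-in for an assistant turn: the tools it asked
/// to call, and its final text when it asked for none.
struct Reply {
    tool_calls: Vec<String>,
    text: String,
}

/// Call the model, run requested tools, append their results to the
/// transcript, and repeat until the model answers without tools or
/// the iteration budget runs out.
fn tool_loop(
    mut call_model: impl FnMut(&[String]) -> Reply,
    mut execute: impl FnMut(&str) -> String,
    max_iterations: usize,
) -> Option<String> {
    let mut transcript: Vec<String> = Vec::new();
    for _ in 0..max_iterations {
        let reply = call_model(&transcript);
        if reply.tool_calls.is_empty() {
            return Some(reply.text); // tool-free response ends the loop
        }
        for call in &reply.tool_calls {
            transcript.push(execute(call)); // append the tool results
        }
    }
    None // iteration budget exhausted
}

fn main() {
    let mut turn = 0;
    let answer = tool_loop(
        |_transcript| {
            turn += 1;
            if turn == 1 {
                Reply { tool_calls: vec!["get_weather".into()], text: String::new() }
            } else {
                Reply { tool_calls: vec![], text: "22C and sunny".into() }
            }
        },
        |_call| "Paris: 22C and sunny".into(),
        5,
    );
    assert_eq!(answer.as_deref(), Some("22C and sunny"));
}
```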
```rust
use anthropic::types::{
    CacheControl, ContentBlock, Message, MessagesRequestBuilder, Role, ServiceTier, ThinkingConfig,
};

let request = MessagesRequestBuilder::new(
    "claude-3-5-sonnet-20240620",
    vec![Message::new(
        Role::User,
        vec![
            ContentBlock::image_url("https://example.com/chart.png"),
            ContentBlock::document_url("https://example.com/handbook.pdf"),
            ContentBlock::text("Summarize both attachments."),
        ],
    )],
    1024,
)
.system("You are a careful analyst.")
.thinking(ThinkingConfig::enabled(2048))
.service_tier(ServiceTier::Auto)
.tools(vec![]) // add tool schemas as needed
.build()?;

// Tag any cacheable block or tool with a CacheControl marker:
let cached_prompt = ContentBlock::text("...long system context...")
    .with_cache_control(CacheControl::ephemeral());
```

- `CacheControl::ephemeral()` / `::ephemeral_ttl("1h")` attach a cache marker to any `Text`/`Image`/`Document`/`ToolUse`/`ToolResult` block or tool definition.
- `ContentBlock` constructors cover base64 / URL images, base64 / URL / inline text documents, tool-use + tool-result (ok and error), thinking (with optional signature), and plain text.
- `ThinkingConfig::enabled(budget)` turns on extended thinking; `ServiceTier::StandardOnly` opts out of priority routing.
```rust
use anthropic::count_tokens::CountTokensRequestBuilder;
use anthropic::models::ListModelsParams;
use anthropic::types::Message;

let count = client
    .count_tokens(
        CountTokensRequestBuilder::new("claude-3-5-sonnet-20240620", vec![Message::user("hi")])
            .build()?,
    )
    .await?;
println!("this request would cost {} input tokens", count.input_tokens);

let models = client.list_models(&ListModelsParams::new().limit(20)).await?;
for m in &models.data {
    println!("{} - {}", m.id, m.display_name);
}
let detail = client.get_model("claude-3-5-sonnet-20240620").await?;
```

```rust
use anthropic::batches::{BatchRequest, CreateBatchRequest, ListBatchesParams};
use anthropic::types::{Message, MessagesRequestBuilder};

let batch = client
    .create_batch(CreateBatchRequest::new(vec![
        BatchRequest::new(
            "req_1",
            MessagesRequestBuilder::new("claude-3-5-sonnet-20240620", vec![Message::user("hi")], 64)
                .build()?,
        ),
        BatchRequest::new(
            "req_2",
            MessagesRequestBuilder::new("claude-3-5-sonnet-20240620", vec![Message::user("bye")], 64)
                .build()?,
        ),
    ]))
    .await?;

// Poll until the batch finishes...
let batch = client.get_batch(&batch.id).await?;
if batch.is_complete() {
    for item in client.get_batch_results(&batch.id).await? {
        println!("{}: {:?}", item.custom_id, item.result);
    }
}

// Or page through every batch on the workspace:
let _list = client.list_batches(&ListBatchesParams::new().limit(10)).await?;
// ...cancel_batch / delete_batch round out the CRUD surface.
```

| Variable | Default | Description |
|---|---|---|
| `ANTHROPIC_API_KEY` | none | Required API key used for the `x-api-key` header. |
| `ANTHROPIC_API_BASE` | `https://api.anthropic.com` | Override the API base URL. |
| `ANTHROPIC_API_VERSION` | `2023-06-01` | Sets the `anthropic-version` header. |
| `ANTHROPIC_BETA` | none | Optional `anthropic-beta` header for beta features. |
| `ANTHROPIC_TIMEOUT_SECS` | `60` | Request timeout in seconds when building from env. |
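A typical local setup only needs the key; the other variables fall back to the defaults in the table above. The key and timeout values here are placeholders:

```shell
# Only ANTHROPIC_API_KEY is required; everything else is optional.
export ANTHROPIC_API_KEY="sk-ant-..."
# Example override: stretch the timeout for slow networks (hypothetical value).
export ANTHROPIC_TIMEOUT_SECS=120
```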
| Call | Returns | Notes |
|---|---|---|
| `Client::new(api_key)` / `Client::builder()` | `Result<Client, AnthropicError>` | Manual setup when you do not want env-based config. |
| `Client::from_env()` | `Result<Client, AnthropicError>` | Reads the environment variables above. |
| `client.messages(request)` | `Result<MessagesResponse, AnthropicError>` | Rejects `stream=true` requests. |
| `client.messages_stream(request)` | `Result<MessagesResponseStream, AnthropicError>` | Opens an SSE stream and yields typed events. |
| `client.count_tokens(request)` | `Result<CountTokensResponse, AnthropicError>` | `POST /v1/messages/count_tokens`. |
| `client.list_models(&params)` / `client.get_model(id)` | `Result<ModelList / Model, AnthropicError>` | `GET /v1/models` with pagination. |
| `client.create_batch(request)` | `Result<MessageBatch, AnthropicError>` | `POST /v1/messages/batches` with local non-empty validation. |
| `client.list_batches(&params)` / `client.get_batch(id)` | `Result<MessageBatchList / MessageBatch, AnthropicError>` | List and poll batches. |
| `client.cancel_batch(id)` / `client.delete_batch(id)` | `Result<.., AnthropicError>` | Batch lifecycle management. |
| `client.get_batch_results(id)` | `Result<Vec<BatchResultItem>, AnthropicError>` | Download + parse the JSONL results file. |
| `StreamAccumulator` / `anthropic::stream::collect` | `Result<MessagesResponse, AnthropicError>` | Folds a live SSE stream into a full response. |
| `run_tool_loop(&client, request, executor, config)` | `Result<MessagesResponse, AnthropicError>` | Agentic call/execute/reply loop with iteration budget. |
| `ClientBuilder::backoff(...)` | `ClientBuilder` | Customizes retry behavior for cloneable requests. |
| `MessagesRequestBuilder::backoff(...)` / `.no_retries()` / `.retry_policy(...)` | `MessagesRequestBuilder` | Per-call retry override: opt out of retries on interactive paths or stretch them for background workers without rebuilding the client. Also available on `CountTokensRequestBuilder` and `CreateBatchRequest`. |
| Feature | Default | What it does |
|---|---|---|
| `rustls` | ✓ | TLS via rustls + native root certs (pulled from reqwest). |
| `native-tls` | | Swap to the system-native TLS stack. |
| `tracing` | | Emit structured tracing spans around every HTTP call on the transport critical path (`anthropic.http`), carrying `method`, `path`, `status`, `attempts`, and `duration_ms` fields, plus per-attempt debug events. Compiled out entirely when the feature is off. |
Enable tracing in your `Cargo.toml`:

```toml
[dependencies]
anthropic = { version = "0.1", features = ["tracing"] }
tracing-subscriber = "0.3"
```

Then install any tracing subscriber at startup (for example
`tracing_subscriber::fmt::init()`), and every `/v1/*` call will show up
as an `anthropic.http` span with the fields listed above.
Use the example crate as a CI smoke test inside GitHub Actions:

```yaml
name: anthropic-smoke
on:
  workflow_dispatch:
jobs:
  basic-messages:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable
      - run: cargo run --manifest-path examples/basic-messages/Cargo.toml
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
```

That exercises `Client::from_env()`, request building, and `/v1/messages` end to end.
User-visible changes are tracked in CHANGELOG.md.
Thanks goes to these wonderful people (emoji key):

- AbdelStark
- ofalvai
- JohnAllen
- Philipp-M
- wyatt-avilla
- aoikurokawa
This project follows the all-contributors specification. Contributions of any kind welcome!





