Communicate with AI agents inside sandboxes over WebSocket.
| Package | Language | Role | Install |
|---|---|---|---|
| `runtimeuse` | TypeScript | Agent runtime (runs inside the sandbox) | `npm install runtimeuse` |
| `runtimeuse-client` | Python | Client (connects from outside the sandbox) | `pip install runtimeuse-client` |
Start the runtime inside the sandbox:

```bash
npx -y runtimeuse@latest
```

This starts a WebSocket server on port 8080 using the OpenAI agent handler by default. Use `--agent claude` for Claude.
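If you provision sandboxes from code rather than a shell, the same command can be spawned programmatically. A minimal sketch, illustrative only: the command and the `--agent` flag come from the quick start above, while the `subprocess` wrapper is not part of either package.

```python
import subprocess

# Illustrative sketch: launch the runtime the same way the shell one-liner
# does. Only the command itself comes from the quick start above.
runtime = subprocess.Popen(["npx", "-y", "runtimeuse@latest", "--agent", "claude"])

try:
    # ... connect a client to ws://localhost:8080 and run invocations ...
    pass
finally:
    runtime.terminate()
    runtime.wait()
```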
Then, from outside the sandbox, invoke it with the Python client:

```python
import asyncio
import json

from runtimeuse_client import RuntimeUseClient, InvocationMessage, ResultMessageInterface


async def main():
    client = RuntimeUseClient(ws_url="ws://localhost:8080")

    invocation = InvocationMessage(
        message_type="invocation_message",
        source_id="my-run-001",
        model="gpt-4.1",
        system_prompt="You are a helpful assistant.",
        user_prompt="What is 2 + 2?",
        # Ask for structured output matching this JSON schema.
        output_format_json_schema_str=json.dumps({
            "type": "json_schema",
            "schema": {
                "type": "object",
                "properties": {"answer": {"type": "string"}},
            },
        }),
        secrets_to_redact=[],
    )

    async def on_result(result: ResultMessageInterface):
        print(result.structured_output)

    await client.invoke(
        invocation=invocation,
        on_result_message=on_result,
        result_message_cls=ResultMessageInterface,
    )


asyncio.run(main())
```
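With the schema above, the printed `result.structured_output` should be an object along the lines of `{"answer": "4"}`; the exact value depends on the model, and the example assumes the runtime from the quick start is already listening on port 8080.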
Or start the server programmatically instead of via `npx`:

```ts
import { RuntimeUseServer, openaiHandler } from "runtimeuse";

const server = new RuntimeUseServer({ handler: openaiHandler, port: 8080 });
await server.start();
```

The pieces fit together like this:

```
Python Client ──> WebSocket ──> Runtime (in sandbox) ──> AgentHandler
                                                         ├── openai (default)
                                                         └── claude
```
End-to-end, an invocation proceeds as follows (a wire-level sketch follows the list):

1. The client sends an `InvocationMessage` over WebSocket.
2. The runtime downloads files and runs pre-commands (if any).
3. The `AgentHandler` executes the agent with the given prompts and model.
4. Intermediate `AssistantMessage`s stream back to the client.
5. Files in the artifacts directory are auto-detected and uploaded via a presigned-URL handshake.
6. A final `ResultMessage` with structured output is sent back.
7. The runtime runs post-commands (if any).
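For debugging, the first and last steps can be exercised without the client package by speaking the protocol over a raw WebSocket. A minimal sketch, assuming JSON frames whose fields mirror the `InvocationMessage` example above; the `message_type` values of the runtime's replies (`assistant_message`, `result_message`) are assumptions here, not documented API — see the client README for the canonical schema.

```python
import asyncio
import json

import websockets  # pip install websockets; illustrative, not a runtimeuse dependency


async def raw_invoke() -> None:
    # Field names mirror the InvocationMessage example above.
    invocation = {
        "message_type": "invocation_message",
        "source_id": "my-run-002",
        "model": "gpt-4.1",
        "system_prompt": "You are a helpful assistant.",
        "user_prompt": "What is 2 + 2?",
        "output_format_json_schema_str": json.dumps({
            "type": "json_schema",
            "schema": {"type": "object", "properties": {"answer": {"type": "string"}}},
        }),
        "secrets_to_redact": [],
    }

    async with websockets.connect("ws://localhost:8080") as ws:
        await ws.send(json.dumps(invocation))
        async for frame in ws:
            message = json.loads(frame)
            print(message.get("message_type"), message)
            # "result_message" as the terminal frame type is an assumption.
            if message.get("message_type") == "result_message":
                break


asyncio.run(raw_invoke())
```

Note that a real client must also answer the presigned-URL handshake for artifact uploads (step 5), which this sketch ignores.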
See the runtime README and client README for full API docs.
Licensed under BSL-1.1.