Communicate with AI agents inside sandboxes over WebSocket.
| Package | Language | Role | Install |
|---|---|---|---|
| `runtimeuse` | TypeScript | Agent runtime (runs inside the sandbox) | `npm install runtimeuse` |
| `runtimeuse-client` | Python | Client (connects from outside the sandbox) | `pip install runtimeuse-client` |
```bash
npx -y runtimeuse@latest
```

This starts a WebSocket server on port 8080 using the OpenAI agent handler by default. Use `--agent claude` for Claude.
```python
import asyncio

from runtimeuse_client import RuntimeUseClient, QueryOptions


async def main():
    client = RuntimeUseClient(ws_url="ws://localhost:8080")
    result = await client.query(
        prompt="What is 2 + 2?",
        options=QueryOptions(
            system_prompt="You are a helpful assistant.",
            model="gpt-4.1",
        ),
    )
    print(result.data.text)


asyncio.run(main())
```

```typescript
import { RuntimeUseServer, openaiHandler } from "runtimeuse";

const server = new RuntimeUseServer({ handler: openaiHandler, port: 8080 });
await server.startListening();
```

```
Python Client ──> WebSocket ──> Runtime (in sandbox) ──> AgentHandler
                                                          ├── openai (default)
                                                          └── claude
```
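The handler layer in the diagram above can be pictured as a name-to-handler registry keyed by the agent name, which is what the `--agent` flag selects. A minimal Python sketch follows; the registry, the handler functions, and `run_agent` are illustrative stand-ins, not the actual runtimeuse API:

```python
# Illustrative sketch of agent-handler selection. The handlers and
# registry here are hypothetical, not the runtimeuse API.

def openai_handler(prompt: str) -> str:
    # Placeholder: a real handler would invoke the OpenAI agent.
    return f"[openai] {prompt}"

def claude_handler(prompt: str) -> str:
    # Placeholder: a real handler would invoke the Claude agent.
    return f"[claude] {prompt}"

HANDLERS = {
    "openai": openai_handler,  # default, as with the CLI
    "claude": claude_handler,
}

def run_agent(prompt: str, agent: str = "openai") -> str:
    # Fail fast on an unrecognized name, like a bad --agent value.
    if agent not in HANDLERS:
        raise ValueError(f"unknown agent: {agent}")
    return HANDLERS[agent](prompt)

print(run_agent("hello"))            # default handler
print(run_agent("hello", "claude"))  # explicit selection
```

The design point is that the runtime stays agnostic: adding a new agent backend only means registering one more handler.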
- The client sends an `InvocationMessage` over WebSocket
- The runtime downloads files and runs pre-commands (if any)
- The `AgentHandler` executes the agent with the given prompts and model
- Intermediate `AssistantMessage`s stream back to the client
- Files in the artifacts directory are auto-detected and uploaded via a presigned URL handshake
- A final `ResultMessage` with structured output is sent back
- The runtime runs post-commands (if any)
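The message flow above can be sketched in-process with asyncio queues standing in for the WebSocket. The dict payloads below are simplified stand-ins for the real `InvocationMessage`/`AssistantMessage`/`ResultMessage` wire format, which this sketch does not reproduce:

```python
import asyncio

async def runtime(inbox: asyncio.Queue, outbox: asyncio.Queue) -> None:
    # Stand-in for the sandboxed runtime: receive an invocation,
    # stream intermediate assistant messages, then send a final result.
    invocation = await inbox.get()
    assert invocation["type"] == "invocation"
    for chunk in ["2 + 2", " = 4"]:
        await outbox.put({"type": "assistant", "text": chunk})
    await outbox.put({"type": "result", "data": {"text": "4"}})

async def client() -> dict:
    # Stand-in for the Python client: send a prompt, then consume the
    # stream until a ResultMessage-like payload arrives.
    to_runtime: asyncio.Queue = asyncio.Queue()
    from_runtime: asyncio.Queue = asyncio.Queue()
    task = asyncio.create_task(runtime(to_runtime, from_runtime))
    await to_runtime.put({"type": "invocation", "prompt": "What is 2 + 2?"})
    while True:
        msg = await from_runtime.get()
        if msg["type"] == "result":
            await task
            return msg
        print("stream:", msg["text"])

result = asyncio.run(client())
print(result["data"]["text"])
```

The shape to notice is that the client treats the connection as a stream: it keeps reading until it sees the result-typed message, handling intermediate messages as they arrive.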
See the runtime README and client README for full API docs.
Licensed under BSL-1.1.