Unified SDKs for verifiable ML inference on Starknet. Prove any model and verify it on-chain in a single transaction with full OODS + Merkle + FRI + PoW (trustless) verification.
| Package | Language | Install | Description |
|---|---|---|---|
| `@obelyzk/sdk` | TypeScript | `npm install @obelyzk/sdk` | Full-featured prover client with async jobs |
| `obelyzk` | Python | `pip install obelyzk` | Pythonic API with sync and async support |
| `@obelyzk/cli` | CLI | `npm install -g @obelyzk/cli` | Command-line proving and submission |
| `@obelyzk/mcp-server` | MCP | `npm install @obelyzk/mcp-server` | Claude AI tool integration (40+ tools) |
| Feature | TypeScript | Python | CLI |
|---|---|---|---|
| Prove a model | `client.prove({ model, input })` | `client.prove(model, input)` | `obelysk prove --model --input` |
| List models | `client.getModels()` | `client.models()` | `obelysk models` |
| Attestation | `client.attest({ model, input })` | `client.attest(model, input)` | `obelysk prove --on-chain` |
| Async support | Native Promises | `AsyncObelyzkClient` | Background jobs |
| Job polling | `client.getJob(id)` | `client.job(id)` | `obelysk status --job` |
| Config | Constructor options | Constructor args | `obelysk config` |
```typescript
import { createProverClient } from "@obelyzk/sdk";

const client = createProverClient();

const result = await client.prove({
  model: "smollm2-135m",
  input: [1.0, 2.0, 3.0],
  onChain: true,
});

console.log("Proof TX:", result.txHash);
console.log("Verified:", result.verified);
```

See the full TypeScript SDK README for API reference and examples.
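The comparison table above lists `client.getJob(id)` for job polling. A generic polling helper along those lines might look like the sketch below; `pollJob`, its options, and the `Job` shape are illustrative assumptions, not part of the published SDK API.

```typescript
// Generic job poller: repeatedly call `fetchJob` (e.g. () => client.getJob(id))
// until the job reaches a terminal state or the timeout elapses.
type JobStatus = "pending" | "running" | "completed" | "failed";

interface Job {
  id: string;
  status: JobStatus;
  result?: unknown;
}

async function pollJob(
  fetchJob: () => Promise<Job>,
  { intervalMs = 2000, timeoutMs = 300_000 } = {}
): Promise<Job> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const job = await fetchJob();
    // Stop on any terminal state; let the caller inspect success/failure.
    if (job.status === "completed" || job.status === "failed") return job;
    if (Date.now() > deadline) throw new Error(`job ${job.id} timed out`);
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

Separating the fetch function from the loop keeps the helper testable and reusable across the hosted and self-hosted provers.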
```python
from obelyzk import ObelyzkClient

client = ObelyzkClient()

result = client.prove(
    model="smollm2-135m",
    input=[1.0, 2.0, 3.0],
    on_chain=True,
)

print(f"Proof TX: {result.tx_hash}")
print(f"Verified: {result.verified}")
```

See the full Python SDK README for API reference and async usage.
```bash
# Install
npm install -g @obelyzk/cli

# Prove and verify on-chain
obelysk prove --model smollm2-135m --input "Hello world" --on-chain

# List available models
obelysk models
```

See the full CLI README for all commands and flags.
If you prefer raw HTTP, the hosted API works with plain curl. No SDK installation needed.
```bash
# Health check (no auth)
curl https://api.bitsage.network/health

# Prove a model
curl -X POST https://api.bitsage.network/api/v1/infer \
  -H "Authorization: Bearer $OBELYSK_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model_id":"smollm2-135m","input":[1.0,2.0,3.0],"gpu":true}'

# Check on-chain verification
starkli call 0x1c208a5fe731c0d03b098b524f274c537587ea1d43d903838cc4a2bf90c40c7 \
  get_recursive_verification_count 0x7214ee0e9c30e3e6748651d42f941c4b875a5b0a549223f92471a58585c980
```

Auth: send an `Authorization: Bearer <API_KEY>` header on every request. Set `OBELYSK_API_KEY` in your environment.
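The same `/api/v1/infer` call can be made from any HTTP client. The helper below just assembles the request from the curl example into plain objects (the endpoint, headers, and payload fields mirror the curl command; the helper itself is illustrative, not part of the SDK):

```typescript
// Build the request options for POST /api/v1/infer, mirroring the curl
// example above. Returning plain objects makes the shape easy to inspect.
function buildInferRequest(
  apiKey: string,
  modelId: string,
  input: number[],
  gpu = true
) {
  return {
    url: "https://api.bitsage.network/api/v1/infer",
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      // Field names match the hosted API's JSON payload.
      body: JSON.stringify({ model_id: modelId, input, gpu }),
    },
  };
}

// Usage (sketch):
// const { url, options } = buildInferRequest(process.env.OBELYSK_API_KEY!, "smollm2-135m", [1.0, 2.0, 3.0]);
// const res = await fetch(url, options);
```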
- Your SDK call hits the hosted GPU prover at https://api.bitsage.network
- The prover executes the model over the M31 field and generates a GKR sumcheck proof
- A recursive STARK compresses the proof to ~942 felts (constant size, 49x compression)
- The proof is verified on Starknet Sepolia in a single transaction using full OODS + Merkle + FRI + PoW (trustless)
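The quoted figures can be sanity-checked against each other: a ~942-felt recursive proof at 49x compression implies a raw GKR payload on the order of 46k felts before recursion. A trivial check:

```typescript
// Back-of-envelope estimate of the pre-recursion proof size implied by
// the numbers above: recursive felts times the compression ratio.
function rawFeltEstimate(recursiveFelts: number, compressionRatio: number): number {
  return recursiveFelts * compressionRatio;
}

console.log(rawFeltEstimate(942, 49)); // 46158
```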
All proofs are verified by the ObelyZK Recursive Verifier contract on Starknet Sepolia:
- Contract: `0x1c208a5fe731c0d03b098b524f274c537587ea1d43d903838cc4a2bf90c40c7`
- Verification: Full OODS + Merkle + FRI + PoW (trustless)
- Network: Starknet Sepolia
- Felts: ~942 per proof
- Compression: 49x vs raw GKR data
- Cost: ~$0.02 per verification
| Model | Params | Prove Time (GPU) | Recursive Felts |
|---|---|---|---|
| SmolLM2-135M | 135M | ~102s | 942 |
| Qwen2-0.5B | 500M | ~45s | ~900 |
| Phi-3-mini | 3.8B | ~180s | ~950 |
| Custom HuggingFace | Any LLaMA/Qwen/Phi | Varies | ~950 |
| Variable | Description | Required |
|---|---|---|
| `OBELYSK_API_KEY` | API key for the hosted prover | For hosted |
| `OBELYSK_PROVER_URL` | Custom prover URL | For self-hosted |
| `STARKNET_ACCOUNT` | Starknet account address | For on-chain |
| `STARKNET_PRIVATE_KEY` | Starknet private key | For on-chain |
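The variables in the table above can be collected into one config object up front. The `resolveConfig` helper below is a hypothetical sketch (only the variable names and the hosted-service default URL come from this document); making it a pure function over an env record keeps it easy to test.

```typescript
// Resolve SDK configuration from the environment variables listed above.
// Which fields are required depends on the mode: hosted needs apiKey,
// self-hosted needs proverUrl, on-chain needs the Starknet credentials.
interface ObelyskConfig {
  apiKey?: string;
  proverUrl: string;
  starknetAccount?: string;
  starknetPrivateKey?: string;
}

function resolveConfig(env: Record<string, string | undefined>): ObelyskConfig {
  return {
    apiKey: env.OBELYSK_API_KEY,
    // Fall back to the hosted service when no custom prover is configured.
    proverUrl: env.OBELYSK_PROVER_URL ?? "https://api.bitsage.network",
    starknetAccount: env.STARKNET_ACCOUNT,
    starknetPrivateKey: env.STARKNET_PRIVATE_KEY,
  };
}

// Usage (sketch): const config = resolveConfig(process.env);
```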
All SDKs accept a prover URL. The default is the hosted service:

```typescript
// Use the hosted prover (default)
const hostedClient = createProverClient();

// Use your own GPU prover
const selfHostedClient = createProverClient({ url: "http://your-gpu:8080" });
```