Run AI agents with TEE-derived wallet keys. The agent calls a confidential LLM (redpill.ai), so prompts never leave encrypted memory.
```bash
phala auth login
phala deploy -n my-agent -c docker-compose.yaml \
  -e LLM_API_KEY=your-redpill-key
```

Your API key is encrypted client-side and only decrypted inside the TEE.
Test it:
```bash
# Get agent info and wallet address
curl https://<endpoint>/

# Chat with the agent
curl -X POST https://<endpoint>/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What is your wallet address?"}'

# Sign a message
curl -X POST https://<endpoint>/sign \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello from TEE"}'
```

```mermaid
graph TB
    User -->|TLS| Agent
    subgraph TEE1[Agent CVM]
        Agent[Agent Code]
        Agent --> Wallet[TEE-derived wallet]
    end
    Agent -->|TLS| LLM
    subgraph TEE2[LLM CVM]
        LLM[redpill.ai]
    end
```
The agent derives an Ethereum wallet from TEE keys:

```python
from dstack_sdk import DstackClient
from dstack_sdk.ethereum import to_account

client = DstackClient()
eth_key = client.get_key("agent/wallet", "mainnet")
account = to_account(eth_key)
# Same path = same key, even across restarts
```

Both the agent and the LLM run in separate TEEs. User queries stay encrypted from browser to agent to LLM and back.
| Endpoint | Method | Description |
|---|---|---|
| `/` | GET | Agent info, wallet address, TCB info |
| `/attestation` | GET | TEE attestation quote |
| `/chat` | POST | Chat with the agent |
| `/sign` | POST | Sign a message with agent's wallet |
The agent uses redpill.ai by default for end-to-end confidentiality. To use a different OpenAI-compatible endpoint:
```bash
phala deploy -n my-agent -c docker-compose.yaml \
  -e LLM_BASE_URL=https://api.openai.com/v1 \
  -e LLM_API_KEY=sk-xxxxx
```

Note: Using a non-confidential LLM means prompts leave the encrypted environment.
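Inside the agent, these variables would typically configure an OpenAI-compatible chat-completions call. A hypothetical sketch of that wiring (the default base URL, the `LLM_MODEL` variable, and the helper itself are illustrative assumptions, not the agent's actual code):

```python
import json
import os
import urllib.request

def build_llm_request(message: str) -> urllib.request.Request:
    """Build an OpenAI-compatible /chat/completions request from env config."""
    # The agent's real default points at redpill.ai; placeholder shown here
    base_url = os.environ.get("LLM_BASE_URL", "https://<redpill-endpoint>/v1")
    api_key = os.environ.get("LLM_API_KEY", "")
    body = {
        "model": os.environ.get("LLM_MODEL", "gpt-4o-mini"),  # assumed env var
        "messages": [{"role": "user", "content": message}],
    }
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(body).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
```

Because the request format is the same either way, swapping `LLM_BASE_URL` changes only where prompts go, which is exactly why the confidentiality note above matters.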
When you're done, tear down the CVM:

```bash
phala cvms delete my-agent --force
```