[Feature] External Agent Trust Scoring for AI Agent Identity #64994
The proposal frames this as a perimeter problem: query a trust score before granting access. That's necessary but not sufficient. Teleport's existing model answers "can this identity enter?" The gap is everything that happens after the agent is inside: what scope is it authorized to operate within, what is the maximum resource it can consume, and if it exceeds those bounds mid-session, what triggers revocation?

An on-chain trust score is a good admission signal. But trust scores are computed over historical behavior; they don't constrain future behavior within a session. A high-trust agent with no session-level scope constraint can still go rogue inside the perimeter.

The architectural complement is something closer to an employment contract per agent session: scope of work, max spend, and an audit trail that satisfies regulatory obligations (EU AI Act Art. 12 specifically requires logging for autonomous systems making consequential decisions). Teleport handles the perimeter; the session contract governs what happens after entry.

We built this as a formal hiring standard (HAHS) in Hive. It's an open spec, not proprietary infrastructure. The interesting design question for Teleport integration is whether session contracts should be Teleport-native, or whether Teleport should treat them as an external authorization signal, the same way this proposal treats trust scores.
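To make the session-contract idea concrete, here is a minimal sketch of a per-session contract with a per-action check. Everything in it is illustrative: the field names, the `max_spend_usd` ceiling, and the revoke-on-violation behavior are my assumptions for this sketch, not fields or semantics taken from the HAHS spec.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SessionContract:
    """Illustrative per-session 'employment contract' for an agent.

    Field names are hypothetical, not taken from the HAHS spec.
    """
    agent_id: str
    allowed_scopes: frozenset        # scope of work, e.g. {"read:db"}
    max_spend_usd: float             # hard resource ceiling for the session
    spent_usd: float = 0.0
    revoked: bool = False
    audit_log: list = field(default_factory=list)

    def authorize(self, scope: str, cost_usd: float) -> bool:
        """Check one action against the contract; log it either way."""
        ok = (
            not self.revoked
            and scope in self.allowed_scopes
            and self.spent_usd + cost_usd <= self.max_spend_usd
        )
        # Append-only record of every decision: the audit trail.
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "scope": scope,
            "cost_usd": cost_usd,
            "allowed": ok,
        })
        if ok:
            self.spent_usd += cost_usd
        else:
            # Exceeding the contract's bounds mid-session triggers revocation.
            self.revoked = True
        return ok
```

A real deployment would persist `audit_log` durably rather than in memory, since the whole point is satisfying logging obligations after the fact.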
Context
RSAC 2026 has made one thing clear: AI agent identity is a top-tier security concern. More than 20 vendors announced agent identity solutions this week, including Microsoft Agent 365, Cisco Duo Agentic Identity, and BeyondTrust Pathfinder.
Teleport already unifies identity across humans, machines, and workloads. AI agents are the natural next frontier.
The Gap
Current agent identity solutions focus on authentication and authorization inside enterprise boundaries. But agents increasingly operate across organizational boundaries — calling external APIs, interacting with third-party agents, accessing shared infrastructure.
What's missing is a trust layer that works across boundaries. Authentication answers "who is this agent?" Trust scoring answers "should I let this agent do this specific thing right now?"
Proposal: External Agent Trust Scoring via SATP
SATP (Solana Agent Trust Protocol) provides:
Integration idea for Teleport:
When an AI agent requests access through Teleport, in addition to standard authentication, Teleport could query the agent's on-chain trust score as an additional authorization signal. High trust score = standard access. Low or no trust score = restricted access or additional verification required.
This adds a cross-boundary reputation layer on top of Teleport's existing identity infrastructure.
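A minimal sketch of the gating logic described above. The `fetch_trust_score` lookup is a stub standing in for the on-chain SATP query; the 0–100 score range, the threshold values, and the split of the low/no-score case into two tiers are all my assumptions for illustration, not part of the proposal or the SATP API.

```python
from enum import Enum
from typing import Optional

class AccessTier(Enum):
    STANDARD = "standard"                 # high trust: normal access
    RESTRICTED = "restricted"             # middling trust: reduced scope
    VERIFY = "additional_verification"    # low/no trust: step-up check

# Hypothetical thresholds; a real deployment would tune these per policy.
HIGH_TRUST = 75
LOW_TRUST = 40

def fetch_trust_score(agent_id: str) -> Optional[int]:
    """Stub for the on-chain SATP lookup. Returns None when the agent
    has no score yet (e.g. a brand-new identity)."""
    raise NotImplementedError

def access_tier(score: Optional[int]) -> AccessTier:
    """Map a trust score to an access decision. This layers on top of
    (and never replaces) standard authentication."""
    if score is None or score < LOW_TRUST:
        return AccessTier.VERIFY
    if score < HIGH_TRUST:
        return AccessTier.RESTRICTED
    return AccessTier.STANDARD
```

The key design choice is fail-closed: an absent score routes to additional verification rather than default access, so agents can't gain standard access simply by never accumulating a reputation.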
Key stats (RSAC 2026)
Resources
Happy to discuss integration architecture or provide test endpoints.