Hey CrewAI community! As agent crews get more complex and pull in tools from different authors, we think trust verification is going to become essential. Sharing what we've built.
AgentGraph is open-source trust infrastructure for AI agents. The core idea: every agent and tool should have a verifiable identity and a trust score based on actual security analysis — not self-reported claims.
What you get (~2 min setup)
- Import your tool/agent from GitHub — capabilities, framework, and metadata auto-detected
- Verified identity — your agent gets a W3C DID (decentralized identifier), so its identity is cryptographically verifiable
- Automated security scan — checks for hardcoded secrets, unsafe execution, data exfiltration, code obfuscation
- Trust score (0-100) — deductions for findings, bonuses for best practices (auth, input validation, rate limiting)
- README badge — embeddable SVG that updates with each scan:
[![AgentGraph trust badge](YOUR_BADGE_SVG_URL)](https://agentgraph.co/profile/YOUR_ENTITY_ID)
- Public profile — trust breakdown, scan results, community endorsements, and an auditable trail of your agent's evolution
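To make the "hardcoded secrets" check above concrete, here is a minimal sketch of the kind of heuristic a scan might run: regex patterns over source text. The pattern names and regexes are illustrative assumptions, not AgentGraph's actual rules.

```python
# Illustrative hardcoded-secret scan: regex heuristics over source code.
# Patterns here are examples, not AgentGraph's real detection rules.
import re

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"""(?i)api[_-]?key\s*[:=]\s*['"][A-Za-z0-9_\-]{16,}['"]"""
    ),
}

def find_hardcoded_secrets(source: str):
    """Return (pattern_name, matched_text) pairs found in the source."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(source):
            hits.append((name, match.group()))
    return hits
```

Real scanners layer entropy checks and allowlists on top of patterns like these to cut false positives.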
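The scoring model (deductions for findings, bonuses for best practices, clamped to 0-100) could be sketched like this. The category names, point values, and baseline are assumptions for illustration only; AgentGraph's actual rubric may differ.

```python
# Hypothetical 0-100 trust score: subtract per security finding,
# add per best practice, clamp to range. All numbers are illustrative.

FINDING_PENALTIES = {
    "hardcoded_secret": 25,
    "unsafe_execution": 20,
    "data_exfiltration": 30,
    "code_obfuscation": 15,
}

PRACTICE_BONUSES = {
    "auth": 5,
    "input_validation": 5,
    "rate_limiting": 5,
}

def trust_score(findings, practices):
    """Baseline score, minus penalties for findings, plus bonuses
    for best practices, clamped to 0-100."""
    score = 85  # illustrative baseline
    score -= sum(FINDING_PENALTIES.get(f, 10) for f in findings)
    score += sum(PRACTICE_BONUSES.get(p, 0) for p in practices)
    return max(0, min(100, score))
```

For example, a clean repo with all three best practices would max out, while a single hardcoded secret would drop it well below that.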
Why this matters for CrewAI
When you're assembling a crew with tools from different authors, trust is implicit: you're hoping each tool does what it says and nothing else. A verified identity plus a security-scan-backed trust badge gives you (and your users) a quick signal about whether a tool has been vetted.
We're building toward runtime trust checks — verify a tool's identity and trust score before your crew uses it — but the foundation starts with getting tools scanned, verified, and scored.
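A runtime trust check like the one described above could look something like this sketch. The API endpoint, JSON shape, and threshold are assumptions; AgentGraph's real interface may differ.

```python
# Sketch of a runtime trust gate: fetch a tool's trust score and refuse
# to wire it into a crew below a threshold. Endpoint path and response
# shape are hypothetical.
import json
from urllib.request import urlopen

MIN_TRUST_SCORE = 70  # illustrative threshold

def fetch_trust_score(entity_id, fetcher=urlopen):
    """Fetch the entity's trust profile (URL is an assumed endpoint)."""
    with fetcher(f"https://agentgraph.co/api/profile/{entity_id}") as resp:
        profile = json.load(resp)
    return profile["trust_score"]

def assert_trusted(entity_id, fetcher=urlopen):
    """Raise before an unvetted tool is handed to a crew."""
    score = fetch_trust_score(entity_id, fetcher)
    if score < MIN_TRUST_SCORE:
        raise PermissionError(
            f"{entity_id}: trust score {score} is below {MIN_TRUST_SCORE}"
        )
    return score
```

The injectable `fetcher` keeps the gate testable offline; in production you'd also want to verify the DID signature, not just the score.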
Free for all open-source projects. We're in early access at agentgraph.co and would love feedback.
GitHub — contributions welcome.