The Model Context Protocol is becoming the standard for connecting AI agents to tools. But MCP includes no built-in authentication, authorization, or audit capabilities — its rapid adoption has outpaced its security design. Behavry is the governance layer the protocol doesn't include.
// the mcp attack surface
The OWASP GenAI Security Project identified six categories of MCP vulnerabilities. They cannot be fixed at the implementation layer alone — protocol-layer enforcement is required.
An MCP server modifies its tool definitions after initial connection to inject hidden behaviors. The agent trusts the schema from the first handshake — the server changes it later. No client-side validation catches this.
An MCP server behaves legitimately during testing and evaluation, then changes behavior in production. The tool description says "read file" — but the server now exfiltrates the contents. No schema enforcement prevents this.
MCP server responses contain executable payloads that the agent processes as instructions. The boundary between "data returned by a tool" and "instructions for the agent" is undefined in the protocol.
MCP's credential passthrough model means agent credentials are exposed to every server the agent connects to. A compromised server captures credentials intended for other services. No token delegation by default.
MCP has no built-in permission model. Agents connect with whatever credentials they have. There's no per-tool RBAC, no least-privilege enforcement, and no way to scope what a specific tool call can access.
Multiple MCP servers share the agent's context and credentials. A compromised server can influence the agent's behavior with other servers. No trust boundary between tools connected to the same agent.
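Several of these categories stem from the agent trusting tool definitions it saw once, at the first handshake. One client-side mitigation (a simplified version of the schema validation and signing described later in this page) is to pin a fingerprint of each tool definition at enrollment and refuse any definition that drifts. A minimal sketch, with a hypothetical `read_file` tool as the example:

```python
import hashlib
import json

def schema_fingerprint(tool_def: dict) -> str:
    """Hash a tool definition in canonical form so any later mutation is detectable."""
    canonical = json.dumps(tool_def, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Fingerprint pinned at the first handshake (hypothetical tool definition).
pinned = schema_fingerprint({
    "name": "read_file",
    "description": "Read a file",
    "inputSchema": {"type": "object"},
})

def verify_tool(tool_def: dict, pinned_fp: str) -> bool:
    """Reject the tool if its definition no longer matches the pinned fingerprint."""
    return schema_fingerprint(tool_def) == pinned_fp
```

Because the fingerprint covers the description as well as the input schema, a server that quietly rewrites "Read a file" into something with hidden instructions fails verification even though the schema itself is unchanged.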
// transport layer vs. governance layer
MCP gateways and Behavry operate at different layers. They're complementary, not competitive.
MCP gateways handle protocol routing, connection management, TLS termination, and basic rate limiting. They ensure the connection between the agent and the server is secure and properly routed. They have no concept of agent identity, behavioral baselines, or what the tool call actually does.
Behavry operates above the transport. Every tool call is authenticated to a specific agent identity, scanned for sensitive data, evaluated against OPA policies, and immutably logged. The Decision Trace links intent to action to outcome as a causal chain. Behavry works alongside your existing MCP gateway — or directly with MCP servers.
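The per-call pipeline described above can be sketched as a single decision function. The agent IDs, allowlist, and DLP pattern below are hypothetical stand-ins, not Behavry's actual API; real enforcement uses OPA Rego policies and production DLP matching rather than glob patterns:

```python
import fnmatch

# Hypothetical per-agent tool allowlist and DLP patterns for illustration.
ALLOWED_TOOLS = {"agent-billing-01": {"read_file", "query_db"}}
DLP_PATTERNS = ["*4111-1111-*"]  # e.g. a test credit card number

def govern(agent_id: str, tool: str, arguments: str):
    """Evaluate a tool call before it is forwarded to the MCP server."""
    # 1. Identity: is this a known, authenticated agent?
    if agent_id not in ALLOWED_TOOLS:
        return ("deny", "unknown agent identity")
    # 2. Policy: is the tool in this agent's allowlist?
    if tool not in ALLOWED_TOOLS[agent_id]:
        return ("deny", f"tool '{tool}' not permitted for {agent_id}")
    # 3. DLP: does the payload match a sensitive-data pattern?
    if any(fnmatch.fnmatch(arguments, p) for p in DLP_PATTERNS):
        return ("redact", "sensitive pattern matched")
    # 4. Allow: forward unchanged, then append to the audit trail (elided here).
    return ("allow", None)
```

The ordering matters: identity is checked before policy, and policy before content inspection, so a denied call is never scanned or forwarded at all.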
// owasp mcp guide alignment
The OWASP GenAI Security Project published comprehensive guidelines for secure MCP deployment. Here's how Behavry implements each one.
| OWASP Recommendation | Behavry Implementation |
|---|---|
| Treat agents as first-class identities with unique credentials and scoped permissions | Agent Identity Service: per-agent JWT RS256 credentials, short-lived tokens, no shared API keys |
| Centralize policy enforcement through a dedicated gateway layer | Inline MCP proxy with OPA Rego policy evaluation on every tool call before execution |
| Implement token delegation over credential passthrough (RFC 8693) | Short-lived scoped tokens per agent per resource; vault-based secret management; delegation token chains for multi-agent workflows |
| Maintain comprehensive audit logs for all tool invocations | Immutable TimescaleDB audit trail with SHA-256 hash chaining; Decision Trace as causal chain-of-custody artifact |
| Validate tool schemas and detect tool poisoning | Tool allowlists in OPA policy; cryptographic signing; schema validation at policy layer |
| Enforce least privilege and per-tool RBAC | Per-agent, per-scenario Rego policies; blast radius limits; risk-adaptive permission tiers driven by BRF score |
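The SHA-256 hash chaining mentioned in the audit row works the way blockchain-style append-only logs do: each entry's hash covers the previous entry's hash, so rewriting any historical record invalidates every hash after it. A minimal sketch of the idea (the entry fields are illustrative, not Behavry's actual audit schema):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(chain: list, entry: dict) -> None:
    """Append an audit entry whose hash covers the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"entry": entry, "prev": prev, "hash": digest})

def verify_chain(chain: list) -> bool:
    """Recompute every link; any tampered entry breaks all subsequent hashes."""
    prev = GENESIS
    for link in chain:
        payload = json.dumps(link["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if link["prev"] != prev or link["hash"] != expected:
            return False
        prev = link["hash"]
    return True
```

An auditor only needs the most recent hash to anchor the entire history, which is what makes the trail usable as a chain-of-custody artifact.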
// enrolled mcp clients
Behavry ships with fingerprinted support for the most widely deployed MCP clients. Agents point their MCP configuration at the Behavry proxy — no client-side changes required.
Plus 20 AI surfaces covered via browser extension and API proxies — including ChatGPT, Gemini, Ollama, and custom agent frameworks. See the full integration map →
// frequently asked questions
What does the agent see when Behavry allows, blocks, or redacts a call?
Behavry is a transparent proxy. For allowed actions, messages pass through unmodified. For blocked actions, the proxy returns an error response to the agent — the tool call never reaches the target server. For DLP-flagged content, the proxy can redact sensitive data before forwarding. The agent and server see standard MCP protocol behavior in all cases.
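Because MCP messages are JSON-RPC 2.0, "standard protocol behavior" for a blocked call means the agent receives a well-formed JSON-RPC error instead of a tool result. A sketch of what such a response could look like; the specific error code is an illustrative choice from JSON-RPC's implementation-defined server-error range (-32000 to -32099), not a documented Behavry constant:

```python
import json

def blocked_response(request_id, reason: str) -> str:
    """Build a JSON-RPC 2.0 error returned in place of the tool result.

    The original tools/call request never reaches the target server; the
    agent sees an ordinary protocol-level error it already knows how to handle.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,  # must echo the id of the blocked request
        "error": {
            "code": -32000,  # illustrative: implementation-defined server error
            "message": f"Blocked by policy: {reason}",
        },
    })
```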
How much latency does the proxy add?
OPA policy evaluation happens in microseconds. DLP pattern matching is sub-millisecond. Total proxy overhead is under 5ms per request. For context, a typical LLM inference call takes 500ms–5s. The governance layer is negligible relative to the agent's own processing time.
Can Behavry run in our own environment?
Yes. Behavry supports four deployment models: SaaS (hosted, fastest start), Hybrid (control plane SaaS, data plane on-prem), BYOC (Bring Your Own Cloud — full stack in your cloud account), and Self-Hosted (air-gapped, no external dependencies). Every deployment model provides identical governance capabilities.
Does Behavry work with custom or third-party MCP servers?
Yes. Behavry's proxy works with any MCP server that implements the Streamable HTTP or stdio transport. No server-side changes required. The proxy authenticates the agent, evaluates policy on the tool call, and forwards to whatever target server is configured — whether it's a standard open-source MCP server or your custom internal tooling.