Security platform for AI agents.
Detect threats, enforce policies, and investigate incidents across your entire agent fleet. Sub-100ms. Fail-open by design. One line to integrate.
Platform capabilities
Multi-layered detection
Pattern filters, ML classification, DLP scanning, custom rules, and cross-event correlation working in sequence. Each layer catches what the previous layer missed.
Intent-based policies
Write policies in plain English. “Deny Bash when command contains curl.” Scope to tenant, cluster, or individual agent. Monitor, flag, or block.
Threat intelligence
Graph-correlated risk scoring across agents, sessions, and events. Composite scores weight threat rate, resolution speed, coverage, and compliance.
Fleet management
Enroll devices, manage groups, deploy policies, and detect tampered hooks across your entire agent fleet from a single dashboard.
Session intelligence
Full session history with semantic search, tool forensics, and AI-generated summaries. Trace an attack from first contact to containment.
Codebase scanning
SBOM generation, agent topology mapping, and dependency risk scoring. Know what your agents can do before they do it.
Detection pipeline
Every scan passes through multiple detection layers in sequence:
Pattern filters run first and catch known attack signatures in under 1ms. ML classification handles novel attacks. DLP catches API keys, PII, and secrets. Custom rules map to MITRE ATT&CK techniques. Correlation links events across sessions to detect multi-step exfiltration chains.
Integrations
Works with the frameworks you already use.
| Framework | Integration | Lines of code |
|---|---|---|
| LangChain | Callback handler | 3 |
| CrewAI | Step callback | 3 |
| Claude SDK | Async hooks | 3 |
| OpenAI Agents | Input guardrail | 3 |
| Google ADK | Before-model callback | 3 |
| LiteLLM | SDK callback + proxy guardrail | 3 |
| Strands | Hook provider | 3 |
| Vertex AI | Model wrapper | 3 |
| Any framework | Direct scan() call | 4 |
The CLI installs hooks into Claude Code, Cursor, Gemini CLI, Windsurf, and other coding agents.
Quick example
from burrow.sdk import BurrowGuard
from burrow.sdk.langchain import create_langchain_callback
from langchain_openai import ChatOpenAI
guard = BurrowGuard(client_id="...", client_secret="...")
callback = create_langchain_callback(guard, agent="my-chain")
model = ChatOpenAI(callbacks=[callback])Every message and tool call now runs through Burrow before reaching your LLM.
BLOCK 95% prompt_injection
request_id=req_abc123 latency=42ms