Open-source explainable observability for AI agent systems. Causal attribution + constitutional governance + visual debugging.

| Tool | Causal | Governance | Sankey | Open Source |
|---|---|---|---|---|
| LangSmith | No | No | No | No |
| Langfuse | No | No | No | Yes |
| Fiddler AI | No | Yes | No | No |
| Braintrust | No | No | No | No |
| Helicone | No | No | No | Yes |
| AgentOps | No | No | No | No |
| AuditTrail | Yes | Yes | Yes | Yes |

Constitutional governance with real-time amber/red boundary detection. Define safety rules in YAML. Monitor compliance across all agents.
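A minimal sketch of what such a YAML rule file could look like. The schema, keys, and rule names below are illustrative assumptions, not AuditTrail's actual format:

```yaml
# Hypothetical rule file -- keys and structure are illustrative only
rules:
  - id: no-external-email
    description: Agents must not email addresses outside the org
    severity: red          # hard boundary: block and alert
    match:
      tool: send_email
  - id: large-refund
    description: Refunds over $500 require human review
    severity: amber        # soft boundary: flag for review
    match:
      tool: issue_refund
      condition: amount > 500
```

The amber/red split mirrors the boundary model described above: amber rules surface for review without halting the agent, red rules stop execution.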
Dashboard views: Compliance Dashboard, Sankey Attribution, Trace Detail DAG, Analytics Dashboard, and Timeline Waterfall.

Same trace data, two powerful perspectives for understanding agent behavior.

- **Sankey Attribution:** causal attribution from prompt phrases through reasoning steps to tool calls.
- **Trace Detail DAG:** interactive decision-chain visualization with constitutional compliance overlay.
Three lines of code. Full explainability. No configuration needed.

`pip install audittrail` works with LangGraph, CrewAI, and the raw OpenAI SDK.
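To make the "three lines of code" claim concrete, here is a toy, self-contained illustration of the decorator-based capture pattern tracing SDKs of this kind typically use. Every name here (`traced`, `TRACE`, `search_docs`) is invented for illustration; consult the actual package for the real API:

```python
import functools
import time

# In-memory trace store; a real SDK would ship these records to a UI.
TRACE = []

def traced(fn):
    """Record each tool call (name, args, result, duration) into TRACE."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACE.append({
            "tool": fn.__name__,
            "args": args,
            "kwargs": kwargs,
            "result": result,
            "duration_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@traced
def search_docs(query):
    # Stand-in for a real tool call (RAG lookup, API request, etc.)
    return f"results for {query!r}"

search_docs("refund policy")
print(TRACE[0]["tool"])  # -> search_docs
```

Because the decorator wraps tools transparently, existing LangGraph or CrewAI code keeps working unchanged; only the instrumentation lines are added.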
One command. Full observability UI at localhost:3000. No cloud. No config.
Six core capabilities that transform how you debug, monitor, and govern AI agent systems:

- Decision-chain visualization with collapse, zoom, and minimap
- Prompt-to-tool causal mapping with attribution scores
- YAML safety rules with amber boundary detection
- Live agent execution streaming via WebSocket
- Feature importance for tool-selection decisions
- One-click compliance reports for external auditors
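The prompt-to-tool causal mapping above can be sketched as follows. This is not AuditTrail's actual algorithm; it is an illustrative example (with made-up phrases, tools, and scores) of normalizing raw phrase-to-tool influence scores into the edge weights a Sankey diagram consumes:

```python
from collections import defaultdict

# (prompt phrase, tool call, raw influence score) -- illustrative data
raw_edges = [
    ("check the refund policy", "search_docs",  0.8),
    ("for order #1234",         "search_docs",  0.2),
    ("for order #1234",         "lookup_order", 0.9),
]

def normalize_attribution(edges):
    """Scale each tool's incoming scores so they sum to 1.0."""
    totals = defaultdict(float)
    for _, tool, score in edges:
        totals[tool] += score
    return [
        (phrase, tool, score / totals[tool])
        for phrase, tool, score in edges
    ]

for phrase, tool, weight in normalize_attribution(raw_edges):
    print(f"{phrase!r} -> {tool}: {weight:.2f}")
```

Normalizing per tool call means the Sankey widths answer "which prompt phrases drove this decision, and in what proportion," which is the question the attribution view is built around.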