# AuditTrail Quickstart Guide

## Prerequisites

- Docker and Docker Compose (recommended)
- Python 3.12+ -- for running agents
- An OpenAI API key -- for the example LangGraph agent (uses gpt-4o-mini)
## 1. Start AuditTrail

```bash
cd AuditTrailCodebase
docker compose up --build
```
| Service | URL | Description |
|---|---|---|
| Dashboard | http://localhost:3000 | Next.js frontend |
| API | http://localhost:8000 | FastAPI backend |
| Proxy | http://localhost:80 | Nginx reverse proxy |
Verify the API is running:

```bash
curl http://localhost:8000/api/v1/health
# {"status":"ok","version":"1.0.0","db_connected":true,...}
```
## 2. Create Your Account
- Open http://localhost:3000
- Click Register -- create an account with email and password
- Log in with your credentials
- You'll see an empty dashboard -- that's correct, you have no traces yet
## 3. Create an API Key
Your agents need an API key to send traces to your account.
- Go to Profile (click your avatar in the sidebar)
- In the API Keys section, enter a key name (e.g., "My Agent")
- Select scope ingest and click the + button
- A yellow box appears with your secret key -- copy it immediately!
The secret key starts with `sk-at-...` and is shown only once. If you lose it, delete the key and create a new one.
- Save the key in your agent's environment:

```bash
export AUDITTRAIL_API_KEY=sk-at-your-secret-key-here
```
## 4. Run Your First Agent

```bash
cd examples/langgraph-agent

# Install dependencies
pip install -r requirements.txt

# Set your keys
export OPENAI_API_KEY=sk-proj-...
export AUDITTRAIL_API_KEY=sk-at-...

# Run the template agent
python template.py "Search for the latest AI developments and summarize them"
```

You'll see:

```
[OK] API key configured (sk-at-384327...)
[OK] Trace registered: abc123...
[LIVE] Open http://localhost:3000/traces/abc123... to watch live
...
[LIVE] All spans sent incrementally. Trace complete.
```
## 5. View Your Trace
Open the trace URL printed by the agent. You'll see:
- Spans -- Every step the agent took (LLM calls, tool executions)
- DAG -- Interactive decision-chain graph with hierarchical grouping
- Timeline -- Waterfall view of sequential and parallel execution
- Sankey -- Causal attribution diagram (click "Run Ablation" to generate)
The trace shows as Running while the agent executes and transitions to Complete when done.
### Try the Full Agent (7 tools, rich DAG)

```bash
python agent.py "Research climate change impacts, check current time, look up reports from the database, read the agent_config.json file, calculate crop yield at 8 percent per degree, compare output of 92 tons against target of 100, and summarize findings"
```
# Agent Integration Guide

## Authentication

All agents authenticate via API key in the Authorization header:

```python
headers = {"Authorization": "Bearer sk-at-your-key-here"}
httpx.post(f"{API}/api/v1/ingest/spans", json={"batch": spans}, headers=headers)
```
## Cost Calculation
AuditTrail calculates costs automatically using a three-tier system:
| Priority | Source | How | Badge Color |
|---|---|---|---|
| 1 | Agent-reported | Agent sends cost field in span | Green "Agent" |
| 2 | Pricing table | Backend calculates from model + tokens | Blue "Estimated" |
| 3 | Default rate | Model not in pricing table, uses fallback | Yellow "Default Rate" |
You don't need to calculate cost in your agent code. Just send `model` and `tokens_in`/`tokens_out` -- the backend applies rates from `rules/pricing.yaml`. Each span shows a badge indicating where the cost came from.

- To override: send `cost=0.05` in your span and it will be used as-is (source: "agent").
- To customize rates: edit `rules/pricing.yaml` -- rates are per 1M tokens in USD.
- To view current rates: Settings page → "Model Pricing" section, or `GET /api/v1/settings/pricing`.
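The tier order above can be sketched in Python. This is a simplified illustration of the lookup logic, not the backend's actual code, and the `PRICING` and `DEFAULT_RATE` values here are hypothetical stand-ins for `rules/pricing.yaml`:

```python
# Hypothetical subset of rules/pricing.yaml -- rates are per 1M tokens, USD.
PRICING = {
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
}
DEFAULT_RATE = {"input": 1.00, "output": 2.00}  # made-up fallback rate

def resolve_cost(span: dict) -> tuple[float, str]:
    """Return (cost_usd, source) for a span, mirroring the three badge tiers."""
    if span.get("cost") is not None:           # tier 1: agent-reported
        return span["cost"], "agent"
    rates = PRICING.get(span.get("model"))     # tier 2: pricing table
    source = "estimated" if rates else "default_rate"
    rates = rates or DEFAULT_RATE              # tier 3: fallback rate
    cost = (span.get("tokens_in", 0) * rates["input"]
            + span.get("tokens_out", 0) * rates["output"]) / 1_000_000
    return cost, source
```

With these sample rates, a span reporting `cost=0.05` resolves to tier 1 regardless of tokens, while a `gpt-4o-mini` span with only token counts gets a blue "Estimated" cost.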
Traces are automatically assigned to the user who owns the API key. Each user only sees their own traces.
## Option 1: Copy the Template (Fastest)

```bash
cp examples/langgraph-agent/template.py my_agent.py
# Edit: replace tools, update AGENT_NAME, set AUDITTRAIL_API_KEY
python my_agent.py "Your prompt"
```
The template includes the AuditTrailIngestor class with authentication, live tracing, and correct formats built in.
## Option 2: HTTP Ingest API (Any Framework)
Send spans directly via HTTP. Works with any language or framework.
### Register a Trace

```python
import os
from uuid import uuid4

import httpx

API = "http://localhost:8000"
headers = {"Authorization": f"Bearer {os.environ['AUDITTRAIL_API_KEY']}"}

trace_id = str(uuid4())
httpx.post(f"{API}/api/v1/ingest/traces", json={
    "trace": {
        "trace_id": trace_id,
        "agent_name": "my-agent",
        "environment": "development",
        "metadata": {"user_prompt": "..."},
    }
}, headers=headers)
```
### Send Spans

```python
# Continues from the registration snippet: httpx, API, headers, trace_id.
from datetime import datetime, timedelta, UTC
from uuid import uuid4

spans = [{
    "span_id": str(uuid4()),
    "trace_id": trace_id,
    "parent_span_id": None,
    "name": "my_llm_call",
    "span_type": "llm",  # llm | tool | agent | chain | retriever | custom
    "start_time": datetime.now(UTC).isoformat(),
    "end_time": (datetime.now(UTC) + timedelta(milliseconds=500)).isoformat(),
    "status": "ok",  # ok | error | running
    "input": {"prompt": "..."},
    "output": {"response": "..."},
    "model": "gpt-4o-mini",
    "tokens_in": 100,
    "tokens_out": 50,
    "cost": 0.0001,
    "tool_calls": None,
    "attributes": {},
}]
resp = httpx.post(f"{API}/api/v1/ingest/spans", json={"batch": spans}, headers=headers)
```
### Live Tracing Pattern

For real-time dashboard updates, send spans incrementally:

```python
# When a step STARTS — send with status="running", no end_time:
httpx.post(f"{API}/api/v1/ingest/spans", json={"batch": [{
    "span_id": span_id,
    "trace_id": trace_id,
    "name": "my_tool",
    "span_type": "tool",
    "start_time": datetime.now(UTC).isoformat(),
    "end_time": None,
    "status": "running",
    ...
}]}, headers=headers)

# When the step COMPLETES — send the same span_id with status="ok":
httpx.post(f"{API}/api/v1/ingest/spans", json={"batch": [{
    "span_id": span_id,  # same span_id = upsert
    "trace_id": trace_id,
    "name": "my_tool",
    "span_type": "tool",
    "start_time": start.isoformat(),
    "end_time": datetime.now(UTC).isoformat(),
    "status": "ok",
    "output": {"result": "..."},
    ...
}]}, headers=headers)
```
The API upserts on span_id. WebSocket broadcasts notify the dashboard to re-render live.
### Tool Calls Format

```python
"tool_calls": [
    {
        "tool_name": "web_search",
        "arguments": {"query": "AI safety 2026"},
        "result": {"output": "Found 3 papers..."},  # MUST be dict or null
        "selected_from": ["web_search", "calculator"],
    }
]
```
The `result` field must be a dict (e.g., `{"output": "..."}`) or `null`. Passing a plain string causes a `422` error.
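If your tools return plain strings, a tiny normalizer (hypothetical, not part of AuditTrail) keeps the `result` field in the dict-or-null shape the ingest API expects:

```python
# Coerce tool results into dict-or-None so tool_calls[].result validates.
def normalize_result(result):
    """Pass dicts and None through; wrap anything else as {"output": ...}."""
    if result is None or isinstance(result, dict):
        return result
    return {"output": str(result)}
```

Run each tool's return value through this before putting it in `tool_calls` and the 422 goes away.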
### Reading API Responses

All GET endpoints wrap responses in `{"data": ...}`:

```python
resp = httpx.get(f"{API}/api/v1/traces/{trace_id}", headers=headers)
trace = resp.json()["data"]  # NOT resp.json()
```
## Option 3: LangChain Callback Handler

For LangGraph/LangChain agents, use the built-in callback handler:

```bash
cd apps/api && pip install -e ".[agent]"
```

```python
from langchain_core.messages import HumanMessage

from audittrail.callback_handler import AuditTrailCallbackHandler
from audittrail.traceable import init_tracer, start_trace, shutdown_tracer

await init_tracer()
async with start_trace("my-agent", "development") as trace_id:
    handler = AuditTrailCallbackHandler(trace_id=trace_id)
    result = await graph.ainvoke(  # graph: your compiled LangGraph
        {"messages": [HumanMessage(content="...")]},
        config={"callbacks": [handler]},
    )
await shutdown_tracer()
```
## Common Pitfalls
| Issue | Cause | Fix |
|---|---|---|
| 422 on span ingestion | tool_calls[].result is a string | Use {"output": "..."} not "..." |
| Empty trace data | Reading resp.json() | Use resp.json()["data"] |
| Traces not showing | No API key or wrong key | Check AUDITTRAIL_API_KEY env var |
| Flat DAG | All spans same parent | Create intermediate chain/agent spans |
| Trace stuck on "Running" | Root span not completed | Send final send_complete() for root |
| 401 on all requests | Session expired | Re-login or use API key |
| Cost shows $0.00 | No model field in span | Send model="gpt-4o-mini" with tokens |
| Cost shows "Default Rate" | Model not in pricing.yaml | Add your model to rules/pricing.yaml |
## Demo Mode
Click View Demo on the landing page to explore AuditTrail with sample data (no account needed).
## Manual Development Setup

For development without Docker:

### Backend

```bash
cd apps/api
cp ../../.env.example ../../.env  # Edit: set AUDITTRAIL_SECRET_KEY
pip install -e ".[dev]"
uvicorn audittrail.main:app --reload --port 8000
```

### Frontend

```bash
cd apps/web
npm install
npm run dev
```
## Environment Variables

| Variable | Default | Description |
|---|---|---|
| AUDITTRAIL_SECRET_KEY | dev-secret-... | JWT signing secret. Change in production. |
| AUDITTRAIL_DEBUG | false | Verbose logging |
| AUDITTRAIL_RULES_DIR | ./rules | Constitutional rule YAML directory |
| AUDITTRAIL_CORS_ORIGINS | ["http://localhost:3000"] | Allowed CORS origins |
| DATABASE_URL | sqlite+aiosqlite:///./data/audittrail.db | Database connection |
| OPENAI_API_KEY | (empty) | For agents and ablation |
| NEXT_PUBLIC_API_URL | http://localhost:8000/api | Frontend API base URL |
| NEXT_PUBLIC_WS_URL | ws://localhost:8000/ws | Frontend WebSocket URL |
## Project Structure

```
AuditTrailCodebase/
  apps/
    api/                  # FastAPI backend (Python 3.12)
    web/                  # Next.js frontend (TypeScript)
  rules/                  # Constitutional rule YAML files
  examples/
    langgraph-agent/      # Real LangGraph agent + template
      agent.py            # Full 7-tool agent with live tracing
      template.py         # Minimal copy-paste template
      requirements.txt
    demo_happy_path.py    # Mock demo scenarios (no API key needed)
    demo_constitutional.py
    demo_debugging.py
  docs/                   # Documentation
  docker-compose.yml      # Production compose
```