SDK & Integration
The Aberon Python SDK instruments your AI agents with three lines of code. It works with LangChain, CrewAI, AutoGen, LlamaIndex, or any custom agent.
Installation
From the distribution archive (air-gap safe):

```bash
pip install sdk/aberon-*.whl
```

From source:

```bash
cd sdk && pip install .
```
Authentication
Create an API key in the dashboard: Settings → API Keys → Create.
```python
import aberon

client = aberon.Client(
    endpoint="http://<your-server>:3000",
    api_key="aberon_sk_...",
)
```

Environment variable alternative:

```bash
export ABERON_URL=http://<your-server>:3000
export ABERON_API_KEY=aberon_sk_...
```

```python
client = aberon.Client()  # reads from env
```
Register an Agent
```python
agent = client.register(
    name="my-agent",
    framework="langchain",      # langchain, crewai, autogen, custom
    model="gpt-4o",             # optional
    project="my-project",       # optional, for grouping
    environment="production",   # optional
    capture_mode="full",        # full | redacted | metadata_only
)
```

`register()` is idempotent: if the agent already exists (matched by `external_ref` or name), the existing record is returned.
Capture modes
| Mode | What's stored | Use case |
|---|---|---|
| `full` | Everything: prompts, responses, PII | Internal debugging |
| `redacted` | PII automatically masked by Presidio | Production with compliance requirements |
| `metadata_only` | Metrics only: tokens, cost, latency | Maximum privacy |
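Conceptually, the three modes differ only in how much of each event survives capture. The sketch below illustrates this with a simple regex standing in for Presidio's PII recognizers; the `capture()` helper and its field names are hypothetical:

```python
# Illustrative sketch of the three capture modes; the email regex is a
# stand-in for Presidio's PII detection, not the real pipeline.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def capture(prompt: str, tokens: int, mode: str) -> dict:
    if mode == "metadata_only":
        return {"tokens": tokens}                    # metrics only, no content
    if mode == "redacted":
        return {"prompt": EMAIL.sub("<EMAIL>", prompt), "tokens": tokens}
    return {"prompt": prompt, "tokens": tokens}      # full: everything stored

record = capture("Email john@example.com the report", 42, "redacted")
print(record["prompt"])  # → Email <EMAIL> the report
```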
Create Traces and Spans
```python
with agent.trace() as t:
    with t.span("preprocess", kind="other") as s:
        s.set_input({"query": "What is Aberon?"})
        result = preprocess(query)
        s.set_output({"preprocessed": True})

    with t.span("llm_call", kind="llm") as s:
        s.set_input({"prompt": query})
        response = call_llm(query)
        s.set_output({"response": response})
        s.set_tokens(input=120, output=85)

    with t.span("postprocess", kind="other") as s:
        final = format_response(response)
        s.set_output(final)

    t.set_cost(0.02)
```

Span kinds
| Kind | Description | Examples |
|---|---|---|
| `llm` | LLM API call | OpenAI, Anthropic, local model |
| `tool` | Tool/function call | `search_kb`, `create_ticket`, `execute_sql` |
| `agent` | Sub-agent invocation | Calling another agent |
| `guardrail` | Guardrail check | PII scan, tool restriction check |
| `reasoning` | Planning/reasoning step | ReAct, Chain-of-Thought |
| `other` | Everything else | Preprocessing, formatting |
Multi-Agent Trace Linking
Link child agent traces to a parent trace:
```python
# Parent agent
with coordinator.trace() as parent:
    with parent.span("plan", kind="reasoning"):
        subtasks = plan(query)

    # Child agent, linked via parent_trace_id
    with researcher.trace(parent_trace_id=parent.trace_id) as child:
        with child.span("search", kind="tool"):
            results = search(subtasks[0])
        with child.span("summarize", kind="llm"):
            summary = summarize(results)

    with parent.span("synthesize", kind="llm"):
        final = synthesize(summary)
```

The dashboard shows the full chain (coordinator → researcher) with a waterfall timeline.
@traced_function Decorator
Auto-create spans for regular Python functions:
```python
from aberon import traced_function

@traced_function(kind="tool")
def fetch_documents(query: str) -> list[str]:
    return search_engine.query(query)

@traced_function(kind="llm", name="generate_answer")
def call_llm(prompt: str, docs: list[str]) -> str:
    return openai.chat(prompt, context=docs)

with agent.traced_context() as ctx:
    docs = fetch_documents("Aberon")  # auto-creates "fetch_documents" span
    answer = call_llm("How?", docs)   # auto-creates "generate_answer" span
```

Guardrail Checks
Check policies before executing a tool. See policy types and configuration for details on setting up policies.
```python
result = agent.check_guardrails(
    input_text="Send email to john@example.com",
    tool_name="send_email",
    trace_id=t.trace_id,  # optional, for linking
)

if result.allowed:
    send_email(...)
elif result.blocked_by:
    print(f"Blocked: {result.blocked_by[0].reason}")
elif result.requires_approval:
    # Human approval needed: wait or skip
    from aberon.approval import PendingApproval

    pending = PendingApproval(client._transport, result.requires_approval)
    try:
        decision = pending.wait(timeout=120, poll_interval=3)
        send_email(...)  # approved
    except aberon.ApprovalDeniedError:
        print("Denied by human reviewer")
```

Fail Modes
```python
client = aberon.Client(
    endpoint="http://aberon:3000",
    api_key="aberon_sk_...",
    fail_mode="open",  # open | closed
)
```

| Mode | Behavior when Aberon is unreachable |
|---|---|
| `open` (default) | The agent continues working; traces are lost. |
| `closed` | The agent raises an exception. Use for critical compliance workloads. |
See what if Aberon is unreachable for debugging connectivity issues.
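The two behaviors above can be sketched as a small dispatch around the send path. The `emit()` helper below is hypothetical; it only mimics the behavior described in the table:

```python
# Sketch of fail-open vs fail-closed trace delivery.
def emit(event: dict, send, fail_mode: str = "open") -> bool:
    """Try to send a trace event; return True if it was delivered."""
    try:
        send(event)
        return True
    except ConnectionError:
        if fail_mode == "closed":
            raise            # critical compliance: stop the agent
        return False         # fail open: drop the trace, keep running

def unreachable(event):
    raise ConnectionError("Aberon endpoint is down")

print(emit({"span": "llm_call"}, unreachable))  # → False (trace lost, agent continues)
try:
    emit({"span": "llm_call"}, unreachable, fail_mode="closed")
except ConnectionError:
    print("fail_mode=closed raised")  # the agent sees the exception, as documented
```

Fail-open is the right default for most workloads; switch to `closed` only where an untraced action is worse than a stopped agent.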
Examples
The distribution archive includes ready-to-run examples:
| File | What it demonstrates |
|---|---|
| `examples/quickstart.py` | Basics: register an agent, create a trace with spans, run a guardrail check |
| `examples/sub_agent.py` | Parent-child agent hierarchy, trace linking |
| `examples/approval_flow.py` | Guardrail → approval required → wait for human decision |
| `examples/traced_decorator.py` | `@traced_function` decorator for auto-span creation |