Bundled together they form a trace – a structured replay of what happened, step by step.
| Event | When to use |
|---|---|
| `trace` | The root container for a whole conversation / agent run. |
| `llm` | Start and end of every LLM call. |
| `tool` | Calls to a function / external tool invoked by the model. |
| `retriever` | RAG queries and the chunks they return. |
| `log` | Anything else worth seeing while debugging (system prompts, branches, errors…). |
The full schema lives in API ▸ Ingestion.
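For orientation, the sketch below shows roughly what individual events might look like. Every field name here is an illustrative assumption – treat the schema in API ▸ Ingestion as the source of truth.

```typescript
// Illustrative event shapes only — field names are assumptions,
// not the documented ingestion schema.
const traceEvent = {
  type: "trace",                        // root container for the whole run
  timestamp: new Date().toISOString(),
};

const llmEvent = {
  type: "llm",
  model: "gpt-4o",                      // whichever model you called
  input: "What's our refund policy?",   // raw prompt, no provider wrapper
  output: "Refunds are available within 30 days.", // raw completion
};

const toolEvent = {
  type: "tool",
  name: "lookup_order",                 // the function the model invoked
  input: { orderId: "A-1001" },
  output: { status: "shipped" },
};
```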
## Recommended workflow
- Collect events in memory as they happen.
- Flush once at the end (or on fatal error).
- Add a `log` event describing the error if things blow up.
- Keep tracing async – never block your user.
- Evaluation‑only mode? Only ingest when the run came from an Avido test → check for the `testId` from the Webhook.
- LLM events should contain the raw prompt & completion – strip provider JSON wrappers.
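A minimal sketch that pulls these rules together, assuming a hypothetical `TraceBuffer` helper and `sendEvents` transport (neither is part of the SDK – they stand in for whatever you wire up):

```typescript
// Hypothetical event shape — see API ▸ Ingestion for the real schema.
type TraceEvent = { type: string; timestamp: string; [key: string]: unknown };

class TraceBuffer {
  private events: TraceEvent[] = [];

  // Collect events in memory as they happen — cheap and synchronous.
  record(type: string, data: Record<string, unknown>): void {
    this.events.push({ type, timestamp: new Date().toISOString(), ...data });
  }

  // Flush once at the end of the run. Fire-and-forget so tracing
  // never blocks the user-facing response.
  async flush(): Promise<void> {
    const batch = this.events;
    this.events = [];
    void sendEvents(batch).catch((err) =>
      console.error("trace flush failed", err),
    );
  }
}

async function handleRun(testId?: string) {
  const tracer = new TraceBuffer();
  try {
    // ... your agent logic, calling tracer.record("llm", ...) etc. ...
  } catch (err) {
    // Describe the failure in a log event before rethrowing.
    tracer.record("log", { level: "error", message: String(err) });
    throw err;
  } finally {
    // In evaluation-only mode, gate on the webhook's testId instead:
    // if (testId) await tracer.flush();
    await tracer.flush();
  }
}

// Hypothetical transport — see "Ingesting events" below for a direct HTTP sketch.
async function sendEvents(events: TraceEvent[]): Promise<void> {
  // POST `events` to the ingestion endpoint.
}
```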
## Ingesting events
You can send events:

- Directly via HTTP
- Via our SDKs (`avido`)
When authenticating with an API key, include both the `x-api-key` and `x-application-id` headers. The application ID should match the application that owns the key so the request can be authorized.
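A minimal sketch of a direct HTTP call – only the two headers above come from this page; the endpoint URL, environment-variable names, and body shape are placeholders you'd swap for the real values. This could serve as the `sendEvents` transport from the workflow sketch above.

```typescript
// Placeholder URL — replace with the real ingestion endpoint from API ▸ Ingestion.
const INGEST_URL = "https://api.example.com/ingest";

async function sendEvents(events: object[]): Promise<void> {
  const res = await fetch(INGEST_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-api-key": process.env.AVIDO_API_KEY!,       // your API key
      "x-application-id": process.env.AVIDO_APP_ID!, // app that owns the key
    },
    body: JSON.stringify({ events }),
  });
  if (!res.ok) throw new Error(`Ingestion failed: ${res.status}`);
}
```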
**Tip: map your IDs.** If you already track a conversation / run in your own DB, pass that same ID as `referenceId`. It makes mapping records between your system and Avido effortless.
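For example – only `referenceId` is the documented field here; the other fields are illustrative:

```typescript
const traceEvent = {
  type: "trace",
  referenceId: "conv_8f3a2c", // the conversation/run ID from your own DB
  timestamp: new Date().toISOString(),
};
```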
## Next steps
- Inspect traces in the Traces view of the dashboard.