Capture every step of your LLM workflow and send it to Avido for replay, evaluation, and monitoring.
When your chatbot conversation or agent run is in flight, every action becomes an event.
Bundled together they form a trace – a structured replay of what happened, step‑by‑step.
| Event | When to use |
|---|---|
| `trace` | The root container for a whole conversation / agent run. |
| `llm` | Start and end of every LLM call. |
| `tool` | Calls to a function / external tool invoked by the model. |
| `retriever` | RAG queries and the chunks they return. |
| `log` | Anything else worth seeing while debugging (system prompts, branches, errors…). |
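
As a concrete illustration, here is how a single chatbot turn might be expressed as a bundle of events. This is a minimal sketch only: the property names (`traceId`, `timestamp`, payload fields and so on) are assumptions chosen for readability, not the canonical ingestion schema.

```typescript
// Minimal sketch – field names here are illustrative assumptions,
// not the canonical ingestion schema (see API ▸ Ingestion).
type EventType = "trace" | "llm" | "tool" | "retriever" | "log";

interface IngestionEvent {
  type: EventType;
  traceId: string;        // groups every event of one conversation / agent run
  timestamp: string;      // ISO‑8601
  [key: string]: unknown; // event‑specific payload (prompt, chunks, tool args, …)
}

const traceId = crypto.randomUUID();
const now = () => new Date().toISOString();

// One chatbot turn, replayed step by step:
const events: IngestionEvent[] = [
  { type: "trace", traceId, timestamp: now(), input: "Where is my order?" },
  { type: "retriever", traceId, timestamp: now(), query: "order status", chunks: ["Order #123 shipped Monday."] },
  { type: "llm", traceId, timestamp: now(), model: "gpt-4o", completion: "Your order shipped on Monday." },
  { type: "log", traceId, timestamp: now(), message: "No tool call was needed for this turn." },
];
```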
The full schema lives in API ▸ Ingestion.
A few tips:

- If things blow up, send a `log` event describing the error.
- Include the `testId` you received from the Webhook.
- You can send events via the SDK (`avido`) or directly to the ingestion API – a rough sketch follows below.
- If you already track a conversation / run in your own DB, pass that same ID as `referenceId`. It makes mapping between your system and Avido effortless.
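
Here is a rough sketch of the sending side, tying a trace back to your own record via `referenceId` and falling back to a `log` event when something goes wrong. The endpoint URL, auth header and payload envelope below are placeholders, not the real Avido API – check API ▸ Ingestion (or the `avido` SDK) for the actual contract.

```typescript
// Illustrative only – the endpoint URL, auth header and payload envelope are
// assumptions, not the canonical Avido API. See API ▸ Ingestion for the real schema.
const INGEST_URL = "https://ingest.example.invalid/v0/events"; // placeholder endpoint
const apiKey = "YOUR_API_KEY"; // replace with your actual key

async function sendEvents(events: Record<string, unknown>[]): Promise<void> {
  const res = await fetch(INGEST_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // auth scheme assumed
    },
    body: JSON.stringify({ events }),
  });
  if (!res.ok) throw new Error(`Ingestion failed with status ${res.status}`);
}

async function handleTurn(question: string): Promise<void> {
  const traceId = crypto.randomUUID();
  const referenceId = "conv_8731"; // your own DB's conversation / run ID (hypothetical)
  const now = () => new Date().toISOString();

  try {
    // ...call your LLM, tools and retriever here, emitting llm / tool / retriever events...
    await sendEvents([{ type: "trace", traceId, referenceId, timestamp: now(), input: question }]);
  } catch (err) {
    // If things blow up, describe the error in a `log` event so the replay stays complete.
    await sendEvents([{ type: "log", traceId, referenceId, level: "error", message: String(err), timestamp: now() }]);
  }
}
```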
Need more examples or have a tricky edge case? Contact us and we’ll expand the docs! 🎯