Already using OpenTelemetry or a framework with built-in tracing (Vercel AI SDK, LangChain, LlamaIndex)?
You can send traces straight to Avido — no custom integration code required.
Avido accepts standard OTLP JSON payloads and automatically maps OpenInference span attributes into its trace model.
This feature is in beta. We’d love your feedback — reach out if you run into anything.
## Quick setup

Point your OpenTelemetry exporter at Avido by setting three environment variables:

```bash
OTEL_EXPORTER_OTLP_PROTOCOL="http/json"
OTEL_EXPORTER_OTLP_ENDPOINT="https://api.avidoai.com/v0/otel/traces"
OTEL_EXPORTER_OTLP_HEADERS="x-application-id=<application-id>,x-api-key=<api-key>"
```

| Variable | Description |
|---|---|
| `OTEL_EXPORTER_OTLP_PROTOCOL` | Must be `http/json`. Avido does not support gRPC or protobuf. |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | `https://api.avidoai.com/v0/otel/traces` |
| `OTEL_EXPORTER_OTLP_HEADERS` | Your Avido `x-application-id` and `x-api-key`, comma-separated. |
You can find your Application ID and API key in the Avido dashboard under Settings > API Keys.
## Sending traces
Once your exporter is configured, traces are sent automatically by your instrumentation library.
You can also send an OTLP payload manually:
```bash
curl -X POST https://api.avidoai.com/v0/otel/traces \
  -H "Content-Type: application/json" \
  -H "x-application-id: <application-id>" \
  -H "x-api-key: <api-key>" \
  -d '{
    "resourceSpans": [
      {
        "scopeSpans": [
          {
            "spans": [
              {
                "traceId": "4bf92f3577b34da6a3ce929d0e0e4736",
                "spanId": "00f067aa0ba902b7",
                "name": "llm.generate",
                "startTimeUnixNano": "1737052800000000000",
                "endTimeUnixNano": "1737052800500000000",
                "attributes": [
                  { "key": "openinference.span.kind", "value": { "stringValue": "LLM" } },
                  { "key": "llm.model_name", "value": { "stringValue": "gpt-4o-2024-08-06" } },
                  { "key": "input.value", "value": { "stringValue": "Tell me a joke." } },
                  { "key": "output.value", "value": { "stringValue": "Why did the chicken cross the road?" } },
                  { "key": "llm.token_count.prompt", "value": { "intValue": 12 } },
                  { "key": "llm.token_count.completion", "value": { "intValue": 18 } }
                ]
              }
            ]
          }
        ]
      }
    ]
  }'
```
A successful request returns the created trace and step IDs — the same response shape as the `/v0/ingest` endpoint.
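If you need to build a payload like this programmatically (for example in a test harness), the same request can be assembled with only the Python standard library. The sketch below mirrors the curl example; `build_otlp_payload` and `send_trace` are illustrative helper names, not part of any Avido SDK:

```python
import json
import urllib.request

AVIDO_OTEL_ENDPOINT = "https://api.avidoai.com/v0/otel/traces"


def build_otlp_payload(trace_id: str, span_id: str, name: str,
                       start_ns: int, end_ns: int, attributes: dict) -> dict:
    """Assemble a minimal OTLP JSON trace payload containing a single span."""
    def to_value(v):
        # OTLP JSON wraps each attribute value in a typed object.
        return {"intValue": v} if isinstance(v, int) else {"stringValue": str(v)}

    return {
        "resourceSpans": [{
            "scopeSpans": [{
                "spans": [{
                    "traceId": trace_id,
                    "spanId": span_id,
                    "name": name,
                    # OTLP JSON carries nanosecond timestamps as strings.
                    "startTimeUnixNano": str(start_ns),
                    "endTimeUnixNano": str(end_ns),
                    "attributes": [{"key": k, "value": to_value(v)}
                                   for k, v in attributes.items()],
                }]
            }]
        }]
    }


def send_trace(payload: dict, app_id: str, api_key: str) -> None:
    """POST the payload to Avido with the required auth headers."""
    req = urllib.request.Request(
        AVIDO_OTEL_ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "x-application-id": app_id,
            "x-api-key": api_key,
        },
    )
    urllib.request.urlopen(req)  # raises on non-2xx responses
```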
## How spans are mapped

Avido reads the `openinference.span.kind` attribute on each span and converts it into the matching Avido step type:

| OpenInference span kind | Avido step type | What gets extracted |
|---|---|---|
| `LLM` | `llm` | Model, input/output messages, token usage |
| `TOOL` | `tool` | Tool name, parameters, output |
| `RETRIEVER` / `RERANKER` | `retriever` | Query, retrieved documents |
| `CHAIN`, `EMBEDDING`, `AGENT`, `GUARDRAIL`, `EVALUATOR` | `log` | Name and metadata |

Spans without a recognised `openinference.span.kind` are stored as `log` steps so nothing is lost.
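The mapping above amounts to a lookup table with a `log` fallback. A sketch of that rule (Avido's actual server-side implementation is not published; this is illustrative):

```python
# Maps openinference.span.kind values to Avido step types.
SPAN_KIND_TO_STEP_TYPE = {
    "LLM": "llm",
    "TOOL": "tool",
    "RETRIEVER": "retriever",
    "RERANKER": "retriever",
    "CHAIN": "log",
    "EMBEDDING": "log",
    "AGENT": "log",
    "GUARDRAIL": "log",
    "EVALUATOR": "log",
}


def step_type_for(span_kind):
    """Return the Avido step type for an openinference.span.kind value.

    Unrecognised (or missing) span kinds fall back to "log" so no span
    is ever dropped.
    """
    return SPAN_KIND_TO_STEP_TYPE.get((span_kind or "").upper(), "log")
```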
## Attribute reference

The tables below list the OpenInference attributes Avido extracts. Any attributes not listed here are preserved in the step's `metadata` field.
### LLM spans

| Attribute | Mapped to |
|---|---|
| `llm.model_name` | Model ID |
| `input.value` | Input |
| `output.value` | Output |
| `llm.input_messages` | Input (preferred over `input.value`) |
| `llm.output_messages` | Output (preferred over `output.value`) |
| `llm.token_count.prompt` | Prompt token count |
| `llm.token_count.completion` | Completion token count |
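The "preferred over" rules boil down to ordered lookups. A sketch, assuming span attributes have already been flattened into a plain dict (`extract_llm_fields` is a hypothetical helper, not Avido code):

```python
def extract_llm_fields(attrs: dict) -> dict:
    """Pick LLM step fields from a flat attribute dict, honouring the
    preference for structured messages over plain string values."""
    return {
        "model": attrs.get("llm.model_name"),
        # llm.input_messages / llm.output_messages win over input.value / output.value.
        "input": attrs.get("llm.input_messages", attrs.get("input.value")),
        "output": attrs.get("llm.output_messages", attrs.get("output.value")),
        "prompt_tokens": attrs.get("llm.token_count.prompt"),
        "completion_tokens": attrs.get("llm.token_count.completion"),
    }
```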
### Tool spans

| Attribute | Mapped to |
|---|---|
| `tool.name` | Step name |
| `tool.parameters` | Tool input |
| `tool.output` | Tool output |
| `tool_call.function.name` | Step name (fallback) |
| `tool_call.function.arguments` | Tool input (fallback) |
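The same pattern applies to tool spans, with the `tool_call.function.*` attributes serving as fallbacks (again a hypothetical helper, not Avido code):

```python
def extract_tool_fields(attrs: dict) -> dict:
    """Pick tool step fields from a flat attribute dict.

    The tool.* attributes take precedence; tool_call.function.* values
    are used only when the primary attributes are absent.
    """
    return {
        "name": attrs.get("tool.name") or attrs.get("tool_call.function.name"),
        "input": attrs.get("tool.parameters") or attrs.get("tool_call.function.arguments"),
        "output": attrs.get("tool.output"),
    }
```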
### Retriever spans

| Attribute | Mapped to |
|---|---|
| `retrieval.query` | Query |
| `retrieval.documents` | Result |
### Common attributes

| Attribute | Mapped to |
|---|---|
| `session.id` | Trace reference ID (links conversations) |
| `avido.test.id` | Test ID (connects the trace to an Avido test run) |
**Linking test runs:** `avido.test.id` is a custom Avido attribute; it is not part of the OpenInference spec. If you're running Avido tests via webhooks, set this span attribute to the `testId` from the webhook payload so the trace is automatically connected to the test run and its evaluation results are linked.
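If you build OTLP JSON by hand, attaching the test ID is a one-line append (`attach_test_id` is a hypothetical helper). When using an OpenTelemetry SDK you would instead call `span.set_attribute("avido.test.id", test_id)` on the active span:

```python
def attach_test_id(span: dict, test_id: str) -> dict:
    """Add the avido.test.id attribute to an OTLP JSON span dict,
    creating the attributes list if the span has none yet."""
    span.setdefault("attributes", []).append(
        {"key": "avido.test.id", "value": {"stringValue": test_id}}
    )
    return span
```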
## Trace structure

Each OTLP batch creates one trace in Avido:

- If a root span (no `parentSpanId`) is present, it becomes the trace container. Its `session.id` attribute is used as the trace's `referenceId`.
- If no root span exists, the first span in the batch is used.
- All spans become steps nested under the trace, preserving parent-child relationships via `parentSpanId`.
- Timing fields (`startTimeUnixNano`, `endTimeUnixNano`) are stored as step timestamps with millisecond duration.
## Vercel AI SDK

If you're using the Vercel AI SDK, Avido also recognises its telemetry attributes as fallbacks:

| Vercel AI SDK attribute | Mapped to |
|---|---|
| `ai.response.model` | Model ID (highest priority) |
| `ai.model.id` | Model ID (fallback) |
| `ai.response.text` | Output (fallback) |
| `ai.usage.promptTokens` | Prompt token count (fallback) |
| `ai.usage.completionTokens` | Completion token count (fallback) |
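Combining this with the LLM attribute table earlier, one plausible resolution order for the model ID is sketched below. The exact precedence between `llm.model_name` and the Vercel fallbacks is an assumption beyond what the tables state; only `ai.response.model` being highest priority and `ai.model.id` being a fallback is documented:

```python
def resolve_model_id(attrs: dict):
    """Return the first available model identifier, in assumed priority order."""
    for key in ("ai.response.model",   # documented highest priority
                "llm.model_name",      # standard OpenInference attribute
                "ai.model.id"):        # documented last fallback
        if attrs.get(key):
            return attrs[key]
    return None
```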
## Next steps
Need help wiring up your stack? Contact us and we’ll help you get connected.