Already using OpenTelemetry or a framework with built-in tracing (Vercel AI SDK, LangChain, LlamaIndex)? You can send traces straight to Avido — no custom integration code required. Avido accepts standard OTLP JSON payloads and automatically maps OpenInference span attributes into its trace model.
This feature is in beta. We’d love your feedback — reach out if you run into anything.

Quick setup

Point your OpenTelemetry exporter at Avido by setting three environment variables:
OTEL_EXPORTER_OTLP_PROTOCOL="http/json"
OTEL_EXPORTER_OTLP_ENDPOINT="https://api.avidoai.com/v0/otel/traces"
OTEL_EXPORTER_OTLP_HEADERS="x-application-id=<application-id>,x-api-key=<api-key>"
| Variable | Description |
| --- | --- |
| `OTEL_EXPORTER_OTLP_PROTOCOL` | Must be `http/json`. Avido does not support gRPC or protobuf. |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | `https://api.avidoai.com/v0/otel/traces` |
| `OTEL_EXPORTER_OTLP_HEADERS` | Your Avido `x-application-id` and `x-api-key`, comma-separated. |
You can find your Application ID and API key in the Avido dashboard under Settings > API Keys.
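The headers value uses the standard OTLP `key=value,key=value` encoding, which exporters split on commas and the first `=`. A quick sanity check of that format in plain Python (`parse_otlp_headers` is an illustrative helper, not part of any SDK):

```python
def parse_otlp_headers(raw):
    """Split the standard OTLP 'k1=v1,k2=v2' header string into a dict."""
    headers = {}
    for pair in raw.split(","):
        key, _, value = pair.partition("=")  # split on the first '=' only
        headers[key.strip()] = value.strip()
    return headers

headers = parse_otlp_headers("x-application-id=<application-id>,x-api-key=<api-key>")
# headers == {"x-application-id": "<application-id>", "x-api-key": "<api-key>"}
```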

Sending traces

Once your exporter is configured, traces are sent automatically by your instrumentation library. You can also send an OTLP payload manually:
cURL
curl -X POST https://api.avidoai.com/v0/otel/traces \
  -H "Content-Type: application/json" \
  -H "x-application-id: <application-id>" \
  -H "x-api-key: <api-key>" \
  -d '{
  "resourceSpans": [
    {
      "scopeSpans": [
        {
          "spans": [
            {
              "traceId": "4bf92f3577b34da6a3ce929d0e0e4736",
              "spanId": "00f067aa0ba902b7",
              "name": "llm.generate",
              "startTimeUnixNano": "1737052800000000000",
              "endTimeUnixNano": "1737052800500000000",
              "attributes": [
                {
                  "key": "openinference.span.kind",
                  "value": { "stringValue": "LLM" }
                },
                {
                  "key": "llm.model_name",
                  "value": { "stringValue": "gpt-4o-2024-08-06" }
                },
                {
                  "key": "input.value",
                  "value": { "stringValue": "Tell me a joke." }
                },
                {
                  "key": "output.value",
                  "value": { "stringValue": "Why did the chicken cross the road?" }
                },
                {
                  "key": "llm.token_count.prompt",
                  "value": { "intValue": 12 }
                },
                {
                  "key": "llm.token_count.completion",
                  "value": { "intValue": 18 }
                }
              ]
            }
          ]
        }
      ]
    }
  ]
}'
A successful request returns the created trace and step IDs — the same response shape as the /v0/ingest endpoint.
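The same request can be assembled from Python with only the standard library. This is an illustrative sketch, not an official client; `build_otlp_payload` and `send_trace` are hypothetical helper names, and the credentials are placeholders:

```python
import json
import urllib.request

AVIDO_OTEL_ENDPOINT = "https://api.avidoai.com/v0/otel/traces"

def build_otlp_payload(trace_id, span_id, name, start_ns, end_ns, attributes):
    """Wrap a single span in the OTLP JSON envelope shown in the cURL example."""
    return {
        "resourceSpans": [{
            "scopeSpans": [{
                "spans": [{
                    "traceId": trace_id,
                    "spanId": span_id,
                    "name": name,
                    # OTLP JSON encodes 64-bit nanosecond timestamps as strings.
                    "startTimeUnixNano": str(start_ns),
                    "endTimeUnixNano": str(end_ns),
                    "attributes": attributes,
                }]
            }]
        }]
    }

def send_trace(payload, application_id, api_key):
    """POST the payload to Avido with the required auth headers."""
    req = urllib.request.Request(
        AVIDO_OTEL_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "x-application-id": application_id,
            "x-api-key": api_key,
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```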

How spans are mapped

Avido reads the openinference.span.kind attribute on each span and converts it into the matching Avido step type:
| OpenInference span kind | Avido step type | What gets extracted |
| --- | --- | --- |
| `LLM` | `llm` | Model, input/output messages, token usage |
| `TOOL` | `tool` | Tool name, parameters, output |
| `RETRIEVER` / `RERANKER` | `retriever` | Query, retrieved documents |
| `CHAIN`, `EMBEDDING`, `AGENT`, `GUARDRAIL`, `EVALUATOR` | `log` | Name and metadata |
Spans without a recognised openinference.span.kind are stored as log steps so nothing is lost.
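The mapping above, including the fallback to `log`, can be sketched in plain Python (names are illustrative, not Avido internals):

```python
# Mirrors the span-kind table above.
SPAN_KIND_TO_STEP_TYPE = {
    "LLM": "llm",
    "TOOL": "tool",
    "RETRIEVER": "retriever",
    "RERANKER": "retriever",
    "CHAIN": "log",
    "EMBEDDING": "log",
    "AGENT": "log",
    "GUARDRAIL": "log",
    "EVALUATOR": "log",
}

def step_type_for(span_attributes):
    """Look up openinference.span.kind; unrecognised kinds become log steps."""
    for attr in span_attributes:
        if attr["key"] == "openinference.span.kind":
            kind = attr["value"].get("stringValue", "")
            return SPAN_KIND_TO_STEP_TYPE.get(kind, "log")
    return "log"  # no span kind at all: keep the span as a log step
```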

Attribute reference

The table below lists the OpenInference attributes Avido extracts. Any attributes not listed here are preserved in the step’s metadata field.

LLM spans

| Attribute | Mapped to |
| --- | --- |
| `llm.model_name` | Model ID |
| `input.value` | Input |
| `output.value` | Output |
| `llm.input_messages` | Input (preferred over `input.value`) |
| `llm.output_messages` | Output (preferred over `output.value`) |
| `llm.token_count.prompt` | Prompt token count |
| `llm.token_count.completion` | Completion token count |
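The precedence rules (structured messages win over the flat `input.value`/`output.value`) can be expressed as a short sketch over OTLP-style attribute lists; the helper names are illustrative:

```python
def get_attr(attrs, key):
    """Fetch one value from an OTLP attribute list ({"stringValue": ...} etc.)."""
    for attr in attrs:
        if attr["key"] == key:
            return next(iter(attr["value"].values()))
    return None

def extract_llm_step(attrs):
    """Prefer llm.input_messages / llm.output_messages over the flat values."""
    return {
        "model": get_attr(attrs, "llm.model_name"),
        "input": get_attr(attrs, "llm.input_messages") or get_attr(attrs, "input.value"),
        "output": get_attr(attrs, "llm.output_messages") or get_attr(attrs, "output.value"),
        "prompt_tokens": get_attr(attrs, "llm.token_count.prompt"),
        "completion_tokens": get_attr(attrs, "llm.token_count.completion"),
    }
```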

Tool spans

| Attribute | Mapped to |
| --- | --- |
| `tool.name` | Step name |
| `tool.parameters` | Tool input |
| `tool.output` | Tool output |
| `tool_call.function.name` | Step name (fallback) |
| `tool_call.function.arguments` | Tool input (fallback) |
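The same fallback pattern applies to tool spans: `tool.*` attributes win, with `tool_call.function.*` used when they are absent. A minimal sketch (illustrative names, not Avido's code):

```python
def extract_tool_step(attrs):
    """tool.* attributes take precedence; tool_call.function.* are fallbacks."""
    def get(key):
        for attr in attrs:
            if attr["key"] == key:
                return next(iter(attr["value"].values()))
        return None

    return {
        "name": get("tool.name") or get("tool_call.function.name"),
        "input": get("tool.parameters") or get("tool_call.function.arguments"),
        "output": get("tool.output"),
    }
```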

Retriever spans

| Attribute | Mapped to |
| --- | --- |
| `retrieval.query` | Query |
| `retrieval.documents` | Result |

Common attributes

| Attribute | Mapped to |
| --- | --- |
| `session.id` | Trace reference ID (links conversations) |
| `avido.test.id` | Test ID (connects the trace to an Avido test run) |
Linking test runs: avido.test.id is a custom Avido attribute — it is not part of the OpenInference spec. If you’re running Avido tests via webhooks, set this span attribute to the testId from the webhook payload so the trace is automatically connected to the test run and evaluation results are linked.
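On a raw OTLP span, that attribute is just one more entry in the attributes list. A sketch of tagging a span dict before export (`link_to_test_run` is a hypothetical helper):

```python
def link_to_test_run(span, test_id):
    """Attach Avido's custom avido.test.id attribute to a raw OTLP span dict."""
    span.setdefault("attributes", []).append({
        "key": "avido.test.id",
        "value": {"stringValue": test_id},  # the testId from the webhook payload
    })
    return span
```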

Trace structure

Each OTLP batch creates one trace in Avido:
  • If a root span (no parentSpanId) is present, it becomes the trace container. Its session.id attribute is used as the trace’s referenceId.
  • If no root span exists, the first span in the batch is used.
  • All spans become steps nested under the trace, preserving parent-child relationships via parentSpanId.
  • Timing fields (startTimeUnixNano, endTimeUnixNano) are stored as step timestamps with millisecond duration.
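The container-selection and timing rules above amount to a few lines of logic; a sketch under those stated rules (function names are illustrative):

```python
def pick_trace_container(spans):
    """A root span (no parentSpanId) becomes the trace; otherwise the first span."""
    for span in spans:
        if not span.get("parentSpanId"):
            return span
    return spans[0]

def duration_ms(span):
    """OTLP timestamps are nanoseconds since the epoch; durations are stored in ms."""
    return (int(span["endTimeUnixNano"]) - int(span["startTimeUnixNano"])) / 1_000_000
```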

Vercel AI SDK

If you’re using the Vercel AI SDK, Avido also recognises its telemetry attributes as fallbacks:
| Vercel AI SDK attribute | Mapped to |
| --- | --- |
| `ai.response.model` | Model ID (highest priority) |
| `ai.model.id` | Model ID (fallback) |
| `ai.response.text` | Output (fallback) |
| `ai.usage.promptTokens` | Prompt token count (fallback) |
| `ai.usage.completionTokens` | Completion token count (fallback) |
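Put together with the OpenInference attributes, the resolution order might look like the sketch below. Attributes are shown as a flat dict for brevity, and the exact ordering relative to `llm.model_name` is an assumption based on the priority notes in the table:

```python
def resolve_llm_fields(attrs):
    """Resolve model/output/token fields with Vercel AI SDK fallbacks."""
    model = (attrs.get("ai.response.model")   # highest priority
             or attrs.get("llm.model_name")   # OpenInference attribute
             or attrs.get("ai.model.id"))     # last-resort fallback
    output = attrs.get("output.value") or attrs.get("ai.response.text")
    prompt = attrs.get("llm.token_count.prompt", attrs.get("ai.usage.promptTokens"))
    completion = attrs.get("llm.token_count.completion",
                           attrs.get("ai.usage.completionTokens"))
    return {"model": model, "output": output,
            "prompt_tokens": prompt, "completion_tokens": completion}
```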

Next steps

Need help wiring up your stack? Contact us and we’ll help you get connected.