# Create a new annotation Source: https://docs.avidoai.com/api-reference/annotations/create-a-new-annotation openapi.json post /v0/annotations Creates a new annotation. # Delete an annotation Source: https://docs.avidoai.com/api-reference/annotations/delete-an-annotation openapi.json delete /v0/annotations/{id} Deletes an existing annotation. # Get a single annotation by ID Source: https://docs.avidoai.com/api-reference/annotations/get-a-single-annotation-by-id openapi.json get /v0/annotations/{id} Retrieves detailed information about a specific annotation. # List annotations Source: https://docs.avidoai.com/api-reference/annotations/list-annotations openapi.json get /v0/annotations Retrieves a paginated list of annotations with optional filtering. # Update an annotation Source: https://docs.avidoai.com/api-reference/annotations/update-an-annotation openapi.json put /v0/annotations/{id} Updates an existing annotation. # Create a new application Source: https://docs.avidoai.com/api-reference/applications/create-a-new-application openapi.json post /v0/applications Creates a new application configuration. # Get a single application by ID Source: https://docs.avidoai.com/api-reference/applications/get-a-single-application-by-id openapi.json get /v0/applications/{id} Retrieves detailed information about a specific application. # List applications Source: https://docs.avidoai.com/api-reference/applications/list-applications openapi.json get /v0/applications Retrieves a paginated list of applications with optional filtering. # List document chunks Source: https://docs.avidoai.com/api-reference/document-chunks/list-document-chunks openapi.json get /v0/documents/chunked Retrieves a paginated list of document chunks with optional filtering by document ID. 
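The list endpoints above (`/v0/annotations`, `/v0/applications`, `/v0/documents/chunked`) all return paginated results. A minimal sketch of walking such a listing, assuming hypothetical `page`/`limit` query parameters and `data`/`has_more` response fields (check the OpenAPI spec for the actual names):

```python
from typing import Callable, Iterator

def paginate(fetch_page: Callable[[int, int], dict], limit: int = 50) -> Iterator[dict]:
    """Yield items from a paginated list endpoint, page by page.

    `fetch_page(page, limit)` stands in for an HTTP GET such as
    GET /v0/annotations?page=1&limit=50 (field names are assumptions).
    """
    page = 1
    while True:
        body = fetch_page(page, limit)
        yield from body["data"]
        if not body.get("has_more"):
            break
        page += 1

# Stubbed fetcher simulating a 3-item collection with a page size of 2.
def fake_fetch(page: int, limit: int) -> dict:
    items = [{"id": i} for i in range(3)]
    chunk = items[(page - 1) * limit : page * limit]
    return {"data": chunk, "has_more": page * limit < len(items)}

all_items = list(paginate(fake_fetch, limit=2))  # two round trips
```

The same loop works against any of the list endpoints once `fetch_page` is replaced with a real authenticated HTTP call.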
# Get documents in a document folder Source: https://docs.avidoai.com/api-reference/document-folder-documents/get-documents-in-a-document-folder openapi.json get /v0/documents/folders/{id}/documents Retrieves a paginated list of documents that belong to a specific document folder based on tag rules. # Create a new document folder Source: https://docs.avidoai.com/api-reference/document-folders/create-a-new-document-folder openapi.json post /v0/documents/folders Creates a new document folder with tag-based membership rules. Documents matching the specified tags will automatically be included. # Delete a document folder Source: https://docs.avidoai.com/api-reference/document-folders/delete-a-document-folder openapi.json delete /v0/documents/folders/{id} Deletes a document folder by ID. This does not affect the documents, only the folder organization. # Get a single document folder by ID Source: https://docs.avidoai.com/api-reference/document-folders/get-a-single-document-folder-by-id openapi.json get /v0/documents/folders/{id} Retrieves detailed information about a specific document folder, including document count. # List document folders Source: https://docs.avidoai.com/api-reference/document-folders/list-document-folders openapi.json get /v0/documents/folders Retrieves a paginated list of document folders with optional search filtering. # Update an existing document folder Source: https://docs.avidoai.com/api-reference/document-folders/update-an-existing-document-folder openapi.json put /v0/documents/folders/{id} Updates an existing document folder. Changing tag rules will trigger recalculation of folder membership. # Get tags for a document Source: https://docs.avidoai.com/api-reference/document-tags/get-tags-for-a-document openapi.json get /v0/documents/{id}/tags Retrieves all tags assigned to a specific document. 
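Folder membership above is tag-driven: a document belongs to a folder when it matches the folder's tag rules, and changing the rules recalculates membership. A toy sketch of that matching, assuming a simple "document must carry all of the folder's tags" rule (the real rule engine may support other operators):

```python
def folder_members(folder_tags: set[str], documents: list[dict]) -> list[str]:
    """Return IDs of documents whose tags satisfy the folder's tag rule.

    Here the rule is "contains every folder tag"; Avido's actual rule
    semantics may differ.
    """
    return [
        doc["id"]
        for doc in documents
        if folder_tags <= set(doc["tags"])  # subset test
    ]

docs = [
    {"id": "doc-1", "tags": ["billing", "faq"]},
    {"id": "doc-2", "tags": ["billing"]},
    {"id": "doc-3", "tags": ["faq", "billing", "legacy"]},
]
members = folder_members({"billing", "faq"}, docs)  # doc-1 and doc-3 qualify
```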
# Update document tags Source: https://docs.avidoai.com/api-reference/document-tags/update-document-tags openapi.json put /v0/documents/{id}/tags Updates the tags assigned to a specific document. This replaces all existing tags. # Activate a specific version of a document Source: https://docs.avidoai.com/api-reference/document-versions/activate-a-specific-version-of-a-document openapi.json put /v0/documents/{id}/versions/{versionNumber}/activate Makes a specific version the active version of a document. This is the version that will be returned by default when fetching the document. # Create a new version of a document Source: https://docs.avidoai.com/api-reference/document-versions/create-a-new-version-of-a-document openapi.json post /v0/documents/{id}/versions Creates a new version of an existing document. The new version will have the next version number. # Get a specific version of a document Source: https://docs.avidoai.com/api-reference/document-versions/get-a-specific-version-of-a-document openapi.json get /v0/documents/{id}/versions/{versionNumber} Retrieves a specific version of a document by version number. # List all versions of a document Source: https://docs.avidoai.com/api-reference/document-versions/list-all-versions-of-a-document openapi.json get /v0/documents/{id}/versions Retrieves all versions of a specific document, ordered by version number descending. # Create a new document Source: https://docs.avidoai.com/api-reference/documents/create-a-new-document openapi.json post /v0/documents Creates a new document with the provided information. # Delete a document Source: https://docs.avidoai.com/api-reference/documents/delete-a-document openapi.json delete /v0/documents/{id} Deletes a document by ID. Note: This will also affect any child documents that reference this document as a parent. 
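The version endpoints above imply a simple lifecycle: creating a version assigns the next version number, and activating a version makes it the default returned when fetching the document. A sketch of that bookkeeping, assuming the Draft/Approved/Archived states described in the changelog (illustration only, not the server's implementation):

```python
class Document:
    """Minimal model of a versioned document."""

    def __init__(self) -> None:
        self.versions: dict[int, str] = {}  # version number -> state
        self.active: int | None = None

    def create_version(self) -> int:
        """POST /v0/documents/{id}/versions: next number, starts as a draft."""
        number = max(self.versions, default=0) + 1
        self.versions[number] = "DRAFT"
        return number

    def activate(self, number: int) -> None:
        """PUT .../versions/{n}/activate: approve n, archive the old active."""
        if self.active is not None:
            self.versions[self.active] = "ARCHIVED"
        self.versions[number] = "APPROVED"
        self.active = number

doc = Document()
v1 = doc.create_version()  # version 1
doc.activate(v1)
v2 = doc.create_version()  # version 2 is a draft; v1 stays live
doc.activate(v2)           # v1 is archived, v2 becomes the default
```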
# Get a single document by ID Source: https://docs.avidoai.com/api-reference/documents/get-a-single-document-by-id openapi.json get /v0/documents/{id} Retrieves detailed information about a specific document, including its parent-child relationships and active version details. # List documents Source: https://docs.avidoai.com/api-reference/documents/list-documents openapi.json get /v0/documents Retrieves a paginated list of documents with optional filtering by status, assignee, parent, and other criteria. Only returns documents with active approved versions unless otherwise specified. # Get a single evaluation by ID Source: https://docs.avidoai.com/api-reference/evals/get-a-single-evaluation-by-id openapi.json get /v0/evals/{id} Retrieves detailed information about a specific evaluation. # List evaluations Source: https://docs.avidoai.com/api-reference/evals/list-evaluations openapi.json get /v0/evals Retrieves a paginated list of evaluations with optional filtering. # List tests Source: https://docs.avidoai.com/api-reference/evals/list-tests openapi.json get /v0/tests Retrieves a paginated list of tests with optional filtering. # Ingest events Source: https://docs.avidoai.com/api-reference/ingestion/ingest-events openapi.json post /v0/ingest Ingest an array of events (threads or traces) to store and process. # Create a new issue Source: https://docs.avidoai.com/api-reference/issues/create-a-new-issue openapi.json post /v0/issues Creates a new issue for tracking problems or improvements. # Delete an issue Source: https://docs.avidoai.com/api-reference/issues/delete-an-issue openapi.json delete /v0/issues/{id} Deletes an existing issue permanently. # Get a single issue by ID Source: https://docs.avidoai.com/api-reference/issues/get-a-single-issue-by-id openapi.json get /v0/issues/{id} Retrieves detailed information about a specific issue. 
# List issues Source: https://docs.avidoai.com/api-reference/issues/list-issues openapi.json get /v0/issues Retrieves a paginated list of issues with optional filtering by date range, status, priority, assignee, and more. # Update an issue Source: https://docs.avidoai.com/api-reference/issues/update-an-issue openapi.json put /v0/issues/{id} Updates an existing issue. Can be used to reassign, change status, update priority, or modify any other issue fields. # Get a single run by ID Source: https://docs.avidoai.com/api-reference/runs/get-a-single-run-by-id openapi.json get /v0/runs/{id} Retrieves detailed information about a specific run. # List runs Source: https://docs.avidoai.com/api-reference/runs/list-runs openapi.json get /v0/runs Retrieves a paginated list of runs with optional filtering. # Create a new style guide Source: https://docs.avidoai.com/api-reference/style-guides/create-a-new-style-guide openapi.json post /v0/style-guides Creates a new style guide. # Get a single style guide by ID Source: https://docs.avidoai.com/api-reference/style-guides/get-a-single-style-guide-by-id openapi.json get /v0/style-guides/{id} Retrieves detailed information about a specific style guide. # List style guides Source: https://docs.avidoai.com/api-reference/style-guides/list-style-guides openapi.json get /v0/style-guides Retrieves a paginated list of style guides with optional filtering. # Create a new tag Source: https://docs.avidoai.com/api-reference/tags/create-a-new-tag openapi.json post /v0/tags Creates a new tag with the provided information. # Delete a tag Source: https://docs.avidoai.com/api-reference/tags/delete-a-tag openapi.json delete /v0/tags/{id} Deletes a tag by ID. This will also remove the tag from all documents. # Get a single tag by ID Source: https://docs.avidoai.com/api-reference/tags/get-a-single-tag-by-id openapi.json get /v0/tags/{id} Retrieves detailed information about a specific tag. 
# List tags Source: https://docs.avidoai.com/api-reference/tags/list-tags openapi.json get /v0/tags Retrieves a paginated list of tags with optional search filtering. # Update an existing tag Source: https://docs.avidoai.com/api-reference/tags/update-an-existing-tag openapi.json put /v0/tags/{id} Updates an existing tag with the provided information. # Create a new task Source: https://docs.avidoai.com/api-reference/tasks/create-a-new-task openapi.json post /v0/tasks Creates a new task. # Get a single task by ID Source: https://docs.avidoai.com/api-reference/tasks/get-a-single-task-by-id openapi.json get /v0/tasks/{id} Retrieves detailed information about a specific task. # List tasks Source: https://docs.avidoai.com/api-reference/tasks/list-tasks openapi.json get /v0/tasks Retrieves a paginated list of tasks with optional filtering. # Run a task Source: https://docs.avidoai.com/api-reference/tasks/run-a-task openapi.json post /v0/tasks/trigger Triggers the execution of a task. # Update an existing task Source: https://docs.avidoai.com/api-reference/tasks/update-an-existing-task openapi.json put /v0/tasks/{id} Updates an existing task with the provided information. # Get a single test by ID Source: https://docs.avidoai.com/api-reference/tests/get-a-single-test-by-id openapi.json get /v0/tests/{id} Retrieves detailed information about a specific test. # Get a single trace by ID Source: https://docs.avidoai.com/api-reference/threads/get-a-single-trace-by-id openapi.json get /v0/traces/{id} Retrieves detailed information about a specific trace. # List Traces Source: https://docs.avidoai.com/api-reference/threads/list-traces openapi.json get /v0/traces Retrieve threads with associated traces, filtered by application ID and optional date parameters. # Create a new topic Source: https://docs.avidoai.com/api-reference/topics/create-a-new-topic openapi.json post /v0/topics Creates a new topic. 
# Get a single topic by ID
Source: https://docs.avidoai.com/api-reference/topics/get-a-single-topic-by-id openapi.json get /v0/topics/{id}
Retrieves detailed information about a specific topic.

# List topics
Source: https://docs.avidoai.com/api-reference/topics/list-topics openapi.json get /v0/topics
Retrieves a paginated list of topics with optional filtering.

# Validate an incoming webhook request
Source: https://docs.avidoai.com/api-reference/webhook/validate-an-incoming-webhook-request openapi.json post /v0/validate-webhook
Checks the request body (including timestamp and signature) against the configured webhook secret. Returns `{ valid: true }` if the signature is valid.

# Changelog
Source: https://docs.avidoai.com/changelog
Track product releases and improvements across Avido versions.

v0.2.0 update

Avido v0.2.0 introduces Document Versioning and Knowledge Base Testing, powerful features that enable teams to maintain stable production content while continuously improving their AI knowledge bases. This release empowers organizations to collaborate on documentation updates without risking production stability, while systematically identifying and fixing knowledge gaps.

## Document Versioning

Document versioning provides comprehensive version control for your AI knowledge base, ensuring production stability while enabling continuous improvement. Teams can now iterate on content safely, with clear workflows that separate work-in-progress from production-ready documentation.
### Key Capabilities

* **Four-State Version Lifecycle**: Documents support Approved (production), Draft (work-in-progress), Review (pending approval), and Archived (historical) states
* **Production Stability by Default**: APIs and AI responses use only approved versions unless explicitly requested otherwise
* **Collaborative Workflows**: Multiple team members can work on drafts simultaneously with version notes and clear approval processes
* **Complete Audit Trail**: Track who made changes, when, and why, which is critical for compliance requirements

### Why We Built This

Document Versioning and Knowledge Base Testing work together to create a robust content management system that balances stability with continuous improvement. Teams in regulated industries now have the tools to maintain high-quality, evolving documentation that powers AI systems, with the confidence that changes won't compromise production stability or compliance requirements.

**For Content Teams**: Safely iterate on documentation without affecting production systems. Create drafts from any document version, collaborate with teammates, and deploy updates only when ready.

**For Engineering Teams**: Maintain API stability while content evolves. Production systems automatically use only approved content, with optional access to draft versions for testing.

**For Compliance Teams**: Full version history with user attribution meets regulatory requirements. Track every change with clear audit trails and approval workflows.

## Knowledge Base Testing

Systematic testing ensures your knowledge base remains comprehensive and consistent. Two new test types help identify gaps and conflicts before they impact your AI applications.

### Document Coverage Test

Automatically identifies gaps in your knowledge base by testing how well your documents cover defined tasks. This ensures AI agents have the information needed to handle all scenarios effectively.
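In spirit, the Document Coverage Test maps each defined task to the documents that address it and flags tasks left uncovered. A toy sketch of that gap analysis (the production test uses semantic matching, not the keyword overlap shown here):

```python
def coverage_gaps(tasks: dict[str, set[str]], documents: dict[str, set[str]]) -> list[str]:
    """Return task names that no single document's keyword set covers.

    tasks:     task name -> keywords an answer must touch on
    documents: doc name  -> keywords the document contains
    """
    gaps = []
    for task, needed in tasks.items():
        covered = any(needed <= keywords for keywords in documents.values())
        if not covered:
            gaps.append(task)
    return gaps

tasks = {
    "cancel subscription": {"cancel", "subscription"},
    "reset password": {"reset", "password"},
}
docs = {"billing-faq": {"cancel", "subscription", "refund"}}
missing = coverage_gaps(tasks, docs)  # "reset password" is uncovered
```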
### Overlap and Contradictions Analysis

Analyzes your knowledge base to identify:

* **Overlapping Information**: Find redundant content across documents
* **Contradictory Instructions**: Detect conflicting guidance that could confuse AI agents

v0.1.0 update

Avido v0.1.0 introduces the System Journal—an intelligent monitoring system that automatically tracks and documents all significant changes in your AI deployments. This release also includes performance improvements and stability fixes to make Avido more reliable in production.

The System Journal acts as a "black box recorder" for your AI systems, capturing every meaningful change that could impact model behavior, compliance, or evaluation accuracy. Small changes to AI configurations can cause unexpected regressions—a model version bump, temperature adjustment, or prompt tweak can break functionality in unexpected ways.

### Key Features

* **Automatic Change Detection**: Identifies when model parameters (temperature, max tokens, top-k, etc.) change between deployments
* **Missing Parameter Alerts**: Flags when critical parameters required for evaluation or compliance are absent
* **Intelligent Journal Entries**: Generates human-readable descriptions of what changed, when, and by how much
* **Complete Audit Trail**: Maintains a tamper-proof history for regulatory compliance
* **Zero Manual Overhead**: Operates completely automatically in the background

### Why We Built This

**For Engineering Teams**: Prevent configuration drift, accelerate incident response, and maintain evaluation integrity. Know immediately when model parameters change unexpectedly.

**For Compliance Teams**: Achieve regulatory readiness with comprehensive audit trails, track risk-relevant parameter changes, and reduce compliance reporting time from weeks to minutes.
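The change detection described above can be pictured as a diff over deployment configurations plus a check for required parameters. A minimal sketch (the parameter names, the `REQUIRED` set, and the journal-entry wording are illustrative, not Avido's actual output):

```python
REQUIRED = {"model", "temperature"}  # params evals depend on (assumed)

def journal_entries(old: dict, new: dict) -> list[str]:
    """Produce human-readable entries for what changed between deployments."""
    entries = []
    for key in sorted(old.keys() | new.keys()):
        before, after = old.get(key), new.get(key)
        if before == after:
            continue  # unchanged parameter, nothing to record
        if before is None:
            entries.append(f"{key} added: {after}")
        elif after is None:
            entries.append(f"{key} removed (was {before})")
        else:
            entries.append(f"{key} changed: {before} -> {after}")
    # Flag critical parameters that vanished entirely.
    for key in sorted(REQUIRED - new.keys()):
        entries.append(f"ALERT: required parameter {key} is missing")
    return entries

old = {"model": "gpt-4o", "temperature": 0.2, "top_k": 40}
new = {"model": "gpt-4o", "temperature": 0.7}
log = journal_entries(old, new)
```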
## Improvements and Fixes

* Manage multiple evaluations per task simultaneously
* Schedule evaluations with custom criticality levels and intervals
* System journal entries are now visible in dashboard graphs
* Performance optimizations for faster response times
* Enhanced security for production deployments

v0.0.5 update

Optimise your articles for RAG straight in Avido – use our best practices to process your original knowledge base, help site, or similar into split, optimised articles with proper metadata, ready to ingest into RAG. Much more to come!

### 📖 Recall (RAG Evaluation)

The **Recall** feature in Avido provides a comprehensive way to assess how well your AI application's Retrieval-Augmented Generation (RAG) system is performing.

* Measure key aspects of quality, correctness, and relevancy within your RAG workflow.
* **No-code interface** empowers both technical and non-technical stakeholders to interpret metrics.
* Ensure systems meet required quality standards before production.

### 🛠️ SDK & Trace Improvements

* Microsecond precision when ingesting data.
* Group traces to visualise workflow structure at a glance.

### ☁️ OpenAI on Azure Support

* EU customers can now run all inference on models hosted in Europe.
* Regardless of geography, neither we nor any of our providers ever train on any data.

### 🐞 Bug Fixes & Polishing

Lots of improvements and paper-cut fixes to make your experience with Avido even smoother, faster, and more enjoyable.

v0.0.4 update

We're excited to announce the latest product updates for Avido. Our newest features make it easier and safer to deploy Generative AI, providing peace of mind for critical applications.

### 🔍 Enhanced Test View

* Easily dive into each evaluation to pinpoint exactly what's working—and what needs improvement.
* Clearly understand AI performance to rapidly iterate and optimize.

### 📌 System Journal

* Track application changes seamlessly and visualize how these updates impact individual eval performance.
* Stay informed and make confident deployment decisions with clear version tracking. ### 🔐 Single Sign-On (SSO) * Support for all major identity providers, making it even easier to roll out Avido in enterprises. ### ⚙️ Custom Evaluations * Create custom evals directly from our UI or via API. * Test specific business logic, compliance requirements, brand-specific wording, and other critical aspects of your application, ensuring unmatched accuracy and reliability. With these updates, Avido continues to empower financial services by ensuring safe, transparent, and high-quality deployment of Generative AI. v0.0.3 update We're thrilled to share our first public changelog, marking a step forward in our commitment to enhancing the understanding of AI applications and helping enterprises maximize the value of AI with Avido. ### 🚀 Quickstart Workflow * Upload existing outputs via CSV to automatically generate evaluation cases * Smart AI-powered categorization of topics and tasks * Interactive review interface for selecting benchmark outputs * Automated evaluation criteria generation based on selected examples ### 📊 Improved Scoring System * Simplified scoring scale (1-5) for more intuitive evaluation * Updated benchmarking system for better quality assessment * Refined evaluation criteria for clearer quality metrics ### 🤖 Smart Analysis * Automatic topic detection from output patterns * Task identification based on user intentions * Intelligent grouping of similar outputs * Automated quality scoring of historical outputs ### 💡 Enhanced Review Experience * Visual topic distribution analysis * Side-by-side conversation comparison * Guided selection of benchmark outputs * Contextual feedback collection for evaluation criteria # Documents Source: https://docs.avidoai.com/documents Knowledge management system for creating, versioning, and optimizing RAG-ready content ![Avido Documents](https://docs.avidoai.com/images/documents.jpg) The Documents tool allows you to easily format 
and split your content into RAG-ready documents. It's your central knowledge base where teams can collaborate on creating, refining, and approving the content that powers your AI application. Whether you're building customer support bots, internal knowledge assistants, or any RAG-based system, Documents ensures your content is properly structured, versioned, and optimized for AI retrieval. ## What are Documents? Documents in Avido are structured content pieces designed specifically for Retrieval-Augmented Generation (RAG) systems. Unlike traditional document management, Avido Documents are: * **AI-Optimized**: Automatically chunked and formatted for optimal retrieval * **Version-Controlled**: Maintain approved versions in production while working on improvements * **Collaborative**: Multiple team members can work on drafts without affecting live content * **Traceable**: Every change is tracked for compliance and quality control ## Key Features ### Document Creation & Import Documents can be created in multiple ways: * **Manual Creation**: Write and format content directly in the Avido editor * **Web Scraping**: Import content from any public URL * **File Upload**: Upload existing documents (coming soon) * **API Integration**: Programmatically create and manage documents ### AI-Powered Optimization The platform includes intelligent document optimization that: * **Reformats for RAG**: Structures content for better chunking and retrieval * **Improves Clarity**: Enhances readability while preserving meaning * **Maintains Consistency**: Ensures uniform formatting across your knowledge base * **Preserves Intent**: Keeps the original message and tone intact ### Version Management Every document supports comprehensive versioning: #### Version States * **APPROVED**: Live production version served by APIs * **DRAFT**: Work-in-progress version for collaboration * **REVIEW**: Pending approval from designated reviewers * **ARCHIVED**: Historical versions for reference #### 
Version Workflow 1. Create new versions from any existing version 2. Collaborate on drafts without affecting production 3. Submit for review when ready 4. Approve to make it the live version 5. Previous approved versions are automatically archived ### Approval Workflow Documents can require approval before going live: * **Assign Reviewers**: Designate who needs to approve changes * **Email Notifications**: Reviewers are notified when approval is needed * **Audit Trail**: Track who approved what and when * **Compliance Ready**: Meet regulatory requirements for content control ## Using Documents ### Creating Your First Document 1. Navigate to the **Documents** section in your Avido dashboard 2. Click **New Document** 3. Choose your creation method: * **Write**: Start with the built-in editor * **Import from URL**: Scrape content from a website * **Upload**: Import existing files (if enabled) ### Document Editor The editor provides a rich set of formatting tools: * **Markdown Support**: Write in markdown for quick formatting * **Visual Editor**: Use the toolbar for formatting without markdown knowledge * **Preview Mode**: See how your document will appear to users * **Auto-Save**: Never lose your work with automatic saving ### Working with Versions #### Creating a New Version 1. Open any document 2. Click **Create New Version** in the version sidebar 3. Add version notes describing your changes 4. Edit the content as needed 5. Save as draft or submit for review #### Version History Sidebar The sidebar shows: * All versions with their status badges * Creator and approval information * Version notes and timestamps * Quick actions for each version #### Comparing Versions 1. Select two versions to compare 2. View side-by-side differences 3. See what was added, removed, or changed 4. Understand the evolution of your content ### Document Optimization Use AI to improve your documents: 1. Open any document 2. Click **Optimize Document** 3. 
Review the AI-suggested improvements 4. Accept, reject, or modify suggestions 5. Save the optimized version

The optimizer helps with:

* Breaking content into logical sections
* Improving readability and clarity
* Standardizing formatting
* Enhancing retrieval effectiveness

### API Access

Documents are accessible via the Avido API:

```bash
# Get all approved documents
GET /v0/documents
Authorization: Bearer YOUR_API_KEY

# Get a specific document
GET /v0/documents/{id}

# Include draft versions
GET /v0/documents?include_drafts=true

# Create a new document
POST /v0/documents
Content-Type: application/json

{
  "title": "Product FAQ",
  "content": "# Frequently Asked Questions...",
  "metadata": {
    "category": "support",
    "product": "main"
  }
}
```

### Integration with Testing

Documents integrate seamlessly with Avido's testing framework:

* **Knowledge Coverage**: Test if your documents cover all required topics
* **MECE Analysis**: Ensure content is Mutually Exclusive and Collectively Exhaustive
* **Task Mapping**: Verify documents address all user tasks

Tests automatically use approved versions unless configured otherwise.
## Best Practices ### Content Organization * **Use Clear Titles**: Make documents easily discoverable * **Add Metadata**: Tag documents with categories, products, or teams * **Structure Hierarchically**: Use headings to create logical sections * **Keep Focused**: One topic per document for better retrieval ### Version Management * **Document Changes**: Always add clear version notes * **Review Before Approval**: Have subject matter experts review changes * **Test Before Production**: Run coverage tests on new versions * **Archive Strategically**: Keep important historical versions accessible ### Collaboration * **Assign Ownership**: Each document should have a clear owner * **Use Draft Status**: Work on improvements without affecting production * **Communicate Changes**: Notify stakeholders of significant updates * **Regular Reviews**: Schedule periodic content audits ### RAG Optimization * **Chunk-Friendly Content**: Write in digestible sections * **Avoid Redundancy**: Don't duplicate information across documents * **Use Examples**: Include concrete examples for better context * **Update Regularly**: Keep content current and accurate ## Document Lifecycle ### 1. Creation Phase * Identify knowledge gaps * Create initial content * Format for readability * Add relevant metadata ### 2. Optimization Phase * Run AI optimization * Test with knowledge coverage * Refine based on feedback * Ensure completeness ### 3. Review Phase * Submit for approval * Gather stakeholder feedback * Make necessary revisions * Document decisions ### 4. Production Phase * Approve for production use * Monitor retrieval performance * Track usage in traces * Gather user feedback ### 5. 
Maintenance Phase * Regular content audits * Update outdated information * Create new versions as needed * Archive obsolete content ## Advanced Features ### Traceability When documents are used in AI responses, Avido tracks: * Which documents were retrieved * How they influenced the response This creates a feedback loop for continuous improvement. ## Getting Started 1. **Define Your Knowledge Base**: Identify what content your AI needs 2. **Create Initial Documents**: Start with your most critical content 3. **Optimize and Test**: Use AI optimization and run coverage tests 4. **Review and Approve**: Get stakeholder sign-off 5. **Monitor and Iterate**: Track usage and improve based on feedback Documents transform static knowledge into dynamic, AI-ready content that evolves with your application's needs, ensuring your AI always has access to accurate, approved, and optimized information. # Inbox Source: https://docs.avidoai.com/inbox Central hub for triaging and managing all AI application issues and regressions ![Avido Inbox](https://docs.avidoai.com/images/inbox.jpg) The Avido Inbox is where all issues and regressions in your AI application are gathered, triaged, and assigned. With the Inbox, you know where to start your day, and what issues to tackle first. It also allows for smooth and frictionless collaboration between stakeholders. A response is flagged as providing wrong answers? Assign it to the document owner. An answer about a feature is vague or contradictory? Assign the product manager who can clear it up. Technical error? Route it to the dev in charge. An AI application is never done, nor perfect. It's a continuous cycle of improvement through collaboration. This is what the Inbox solves. ## What is the Inbox? The Inbox is a real-time, multi-source queue for all high-priority platform issues. It consolidates test failures, System Journal alerts, customer reports, and API-submitted errors into a single, actionable queue. 
Every incoming item is automatically summarized, categorized, and prioritized before surfacing to users, who can rapidly triage and assign. This is a focused, high-signal triage and assignment tool—not a deep workbench or collaborative analysis surface. It's designed to help you quickly understand what's wrong and get it to the right person.

## Issues

Issues are the core of the Inbox. Every regression, bug, potential hallucination, or actual error is processed and added to the Inbox as an issue.

### When Issues Appear

Issues show up when:

* **A test run by Avido fails**: Evaluation tests that don't meet their thresholds automatically create issues
* **System Journal entries flag critical changes**: Configuration changes outside acceptable ranges trigger issues
* **External API submissions**: Your application or third-party services call the Avido `/issues` endpoint
* **Future**: Customer reports and AI Core alerts (planned)

Before an issue reaches the Inbox, Avido's AI pre-processes it to make it actionable for your entire team.

### Pre-Processing Pipeline

One of the core benefits of using Avido is the ability to work from the same truth, even when some stakeholders are technical, like engineers, and some are non-technical, like SMEs. To enable this, Avido pre-processes all incoming issues before adding them to the Inbox. During processing, Avido's AI will:

* **Summarize the issue**: Create both business and technical summaries
* **Estimate impact**: Assess the expected effect this issue will have
* **Assign priority**: Set criticality level (Critical, High, Medium, Low)
* **Find similar issues**: Detect duplicates and related problems using AI embeddings
* **Add metadata**: Include source, timestamps, and relevant context

This allows you to focus on the errors that matter most and keeps everyone working from the same truth about what's happening and how it's affecting the application. Less coordination, faster time-to-fix.
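The pre-processing steps above can be pictured as a small pipeline that turns a raw report into a triage-ready record. Everything here (field names, the keyword-based priority heuristic) is illustrative; Avido's pipeline is LLM-driven rather than rule-based:

```python
def assign_priority(description: str) -> str:
    """Toy heuristic standing in for the AI priority model."""
    text = description.lower()
    if "outage" in text or "data loss" in text:
        return "CRITICAL"
    if "wrong answer" in text or "hallucination" in text:
        return "HIGH"
    return "MEDIUM"

def preprocess(raw: dict) -> dict:
    """Wrap a raw submission with the metadata the Inbox displays."""
    return {
        "title": raw["title"],
        "source": raw.get("source", "API"),
        "priority": assign_priority(raw["description"]),
        "business_summary": raw["description"][:140],  # placeholder summary
    }

issue = preprocess({
    "title": "Pricing answer wrong",
    "description": "Bot gave a wrong answer about premium plan pricing",
})
```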
## Using the Inbox ### Accessing Your Inbox Navigate to the **Inbox** section in your Avido dashboard. You'll see a real-time queue of all unresolved issues, sorted by priority and creation time. ### Understanding Issue Cards Each issue displays: * **Priority Badge**: Visual indicator of criticality * **Source**: Where the issue originated (Test, System Journal, API) * **Title**: Clear description of the problem * **Business Summary**: Plain-language explanation (default view) * **Technical Summary**: Detailed technical context (toggle to view) * **Metadata**: Creation time, affected systems, related traces * **Similar Issues**: Count of potentially related problems ### Core Actions #### Triage Workflow 1. **Review**: Scan the queue, starting with highest priority items 2. **Understand**: Read AI-generated summaries to grasp impact 3. **Act**: Take one of these actions: * **Assign**: Route to a specific user or team with optional notes * **Dismiss**: Mark as resolved or non-actionable with a reason * **Merge**: Combine with similar issues to reduce duplication #### Bulk Operations Select multiple issues using checkboxes to: * Assign all to one person * Dismiss multiple non-issues * Apply the same action to related problems ### Business vs Technical Views Every issue comes with two perspectives: * **Business Summary** (default): What happened and why it matters, in plain English * **Technical Summary**: Root cause details, stack traces, and technical context Toggle between views based on your needs and audience. ## The Issues API The `/issues` endpoint allows you to create issues and send them straight into the Avido Inbox from any application. 
### Example Use Cases * **Catch technical errors**: Send application errors directly to the Inbox * **User feedback**: Allow users to report hallucinations or unhelpful answers * **Support integration**: Let customer service reps create issues from support tools ### API Usage ```bash POST /v0/issues Authorization: Bearer YOUR_API_KEY Content-Type: application/json { "title": "User reported incorrect product pricing", "description": "AI provided outdated pricing information", "priority": "HIGH", "metadata": { "conversation_id": "conv_123", "affected_product": "premium_plan" } } ``` Issues sent via the API go through the same pre-processing pipeline as any issue that Avido creates. Include as much context as possible for better summarization and categorization. **Important**: Always PII-scrub any personal information before sending to the API. ## Deduplication & Similar Issues The Inbox uses AI embeddings to automatically detect similar issues: * **Automatic Detection**: Flags potential duplicates with similarity percentages * **Merge Suggestions**: Recommends combining highly similar issues (>90% similarity) * **Pattern Recognition**: Groups related issues to reveal systemic problems This prevents alert fatigue and helps you see the bigger picture when multiple related issues occur. 
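The similarity percentages above come from comparing AI embeddings; cosine similarity is a common measure for this. A minimal TypeScript sketch, assuming you already have embedding vectors for two issues (the 0.9 threshold mirrors the >90% merge-suggestion figure above):

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Issues scoring above the threshold are candidates for merging.
function isMergeCandidate(a: number[], b: number[], threshold = 0.9): boolean {
  return cosineSimilarity(a, b) > threshold;
}
```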
## Best Practices ### Daily Triage * Start your day by reviewing new issues in the Inbox * Focus on High priority items first * Aim for "Inbox Zero" - clearing all unassigned urgent issues ### Effective Assignment * Assign based on expertise and current workload * Add notes when assigning to provide context * Reassign if someone is overwhelmed ### Maintain Inbox Hygiene * Dismiss false positives promptly with clear reasons * Merge duplicates to keep the inbox clean * Don't let issues accumulate - triage regularly ### Leverage AI Summaries * Use business summaries for quick understanding * Switch to technical view when diving into implementation * Trust the AI prioritization but apply your judgment ## Success Metrics The Inbox helps you track: * **Response Time**: How quickly issues are triaged and assigned * **Resolution Rate**: Percentage of issues successfully resolved * **False Positive Rate**: How many dismissed issues were non-actionable * **Pattern Detection**: Recurring issues that indicate systemic problems ## Current Limitations The Inbox is designed as a triage tool, not a full incident management platform: * No multi-step workflows or root-cause analysis features * No deep collaboration workspaces within the Inbox * Limited to standardized `/issues` API for external submissions ## Getting Started 1. **Automatic Setup**: The Inbox is ready to use immediately 2. **First Issues**: Will appear as soon as tests fail or issues are submitted 3. **Customization**: Contact your Avido team to adjust priority thresholds or configure integrations The Inbox transforms issue management from scattered alerts across multiple systems into a single source of truth, ensuring your team addresses what matters most and nothing falls through the cracks. # Welcome to Avido Docs 🚀 Source: https://docs.avidoai.com/introduction The AI Quality Assurance platform built for fintechs and financial services enterprises. ## Why Avido? 
Only 10% of financial enterprises have AI in full production. Avido changes that by providing the quality safety net your AI needs.

* **Evaluation & Testing** – Simulate user interactions to rigorously test AI systems before and after deployment
* **Continuous Monitoring** – Track safety, accuracy, and performance in live production environments
* **Collaborative Tools** – SMEs and developers work together through an intuitive GUI – no coding required for domain experts
* **Compliance-First** – Built for GDPR and the EU AI Act, with audit trails and standardized QA processes
* **Automated System Journal** – Detect configuration changes automatically and prevent hidden regressions
* **Quickstart** – Upload existing conversations to auto-generate test cases and evaluation criteria
* **Documents** – AI-optimized knowledge management with version control and approval workflows

Whether you're building a support assistant, an autonomous agent, or a RAG pipeline, Avido ensures your AI performs safely, accurately, and in compliance from launch through ongoing operations.

***

## How Avido fits into your app

Diagram showing how Avido fits into your app

1. **Avido sends a webhook** – When a test is triggered, Avido sends a POST request to your endpoint with synthetic input and a testId.
2. **You validate the request** – Verify the webhook signature to ensure it's really from Avido.
3. **Run your AI workflow** – Process the synthetic input through your normal application flow.
4. **Log events along the way** – Capture LLM calls, tool usage, retrievals, and other key steps.
5. **Send the trace to Avido** – When your workflow completes, send the full event trace back to Avido.
6. **View evaluation results** – Avido runs your configured evaluations and displays results in the dashboard.

***

## Getting Started

1. **Install an SDK**

   ```bash
   npm i @avidoai/sdk-node   # Node
   pip install avido          # Python
   ```

2. **Set up a webhook endpoint** in your application [Learn more](/webhooks)

3.
**Start tracing events** in your application [Learn more](/traces)

   ```ts
   client.ingest.create({ events })
   ```

4. **Create your knowledge base** with Documents [Learn more](/documents)

5. **Upload existing data** to auto-generate test cases and evaluations [Learn more](/quickstart)

6. **Review your baseline performance** in the dashboard

> Prefer pure HTTP? All endpoints are [documented here](/api-reference).

***

## Core concepts

| Concept            | TL;DR                                                                                 |
| ------------------ | ------------------------------------------------------------------------------------- |
| **Tests**          | Automated runs of your workflow using synthetic input without exposing customer data. |
| **Tasks**          | Test cases that can be auto-generated from existing data or created manually.         |
| **Webhooks**       | Avido triggers tests via POST requests – automated or through the UI.                 |
| **Traces**         | Ordered lists of events that reconstruct a conversation / agent run.                  |
| **Events**         | Atomic pieces of work (`llm`, `tool`, `retriever`, `log`).                            |
| **Evaluations**    | Built-in metrics + custom business logic tests created without code.                  |
| **Documents**      | Version-controlled, RAG-ready content that powers your AI's knowledge base.           |
| **Inbox**          | Central hub where all issues are captured, summarized, and triaged automatically.     |
| **System Journal** | Automatic log of configuration changes and their impact on performance.               |

Dive deeper with the sidebar or jump straight to **[Traces](/traces)** to see how instrumentation works.

***

## Need help?

* **Slack** – join via the *? Help* menu in Avido.
* **Email** – [support@avidoai.com](mailto:support@avidoai.com)

Happy building! ✨

# System Journal

Source: https://docs.avidoai.com/system-journal

Automated monitoring and documentation of AI model changes for compliance and operational excellence

![Avido System Journal](https://docs.avidoai.com/images/system-journal.jpg)

## What is the System Journal?
The System Journal is an intelligent monitoring system that automatically tracks and documents all significant changes in your AI model deployments. It serves as your automated compliance officer and operational guardian, creating a comprehensive audit trail of configuration changes, parameter modifications, and data anomalies across all your AI traces. Think of it as a "black box recorder" for your AI systems—capturing every meaningful change that could impact model behavior, compliance, or evaluation accuracy. ### Key Capabilities * **Automatic Change Detection**: Identifies when model parameters (temperature, max tokens, top-k, etc.) change between deployments * **Missing Parameter Alerts**: Flags when critical parameters required for evaluation or compliance are absent * **Intelligent Journal Entries**: Generates human-readable descriptions of what changed, when, and by how much * **Complete Audit Trail**: Maintains a tamper-proof history of all configuration changes for regulatory compliance * **Zero Manual Overhead**: Operates completely automatically in the background ## Why Use the System Journal? ### For Engineering Teams **Prevent Configuration Drift** You'll immediately know when model parameters change unexpectedly. No more debugging mysterious behavior changes or performance degradations caused by unnoticed configuration updates. **Accelerate Incident Response** When something goes wrong, the System Journal provides instant visibility into what changed and when. Instead of digging through logs, you can see at a glance: "LLM Temperature Changed: 0.4 → 0.9" with timestamps and context. **Maintain Evaluation Integrity** Missing parameters can invalidate your evaluation results. The System Journal ensures you catch these gaps before they impact your metrics or decision-making. ### For Compliance Teams **Regulatory Readiness** Financial institutions face strict requirements for AI transparency and explainability. 
The System Journal automatically maintains the comprehensive audit trails regulators expect, reducing preparation time from weeks to minutes. **Risk Management** Track when risk-relevant parameters change (like content filtering thresholds or decision boundaries) and ensure they stay within approved ranges. **Evidence Collection** Every journal entry includes timestamps, before/after values, and trace context—everything needed for compliance reporting or incident investigation. ### Real-World Impact Consider this scenario: A wealth management firm's AI-powered client communication system suddenly starts generating longer, more creative responses. Without the System Journal, this could go unnoticed for weeks. With it, compliance immediately sees: "LLM Temperature Changed: 0.2 → 0.8" and "Max Tokens Changed: 150 → 500"—catching a potentially risky configuration change before it impacts clients. ## How to Use the System Journal ### Accessing the System Journal 1. Navigate to the **System Journal** section in your Avido dashboard 2. You'll see a chronological list of all journal entries across your AI systems ### Understanding Journal Entries Each System Journal entry includes: * **Type Indicator**: Visual icon showing the category (change, missing parameter, error) * **Clear Description**: Plain-language explanation like "Model version changed from gpt-4-turbo to gpt-4o" * **Timestamp**: Exact time the change was detected * **Trace Link**: Direct connection to the affected trace for deeper investigation * **Change Details**: Before and after values for parameter changes ### Viewing Journal Entries in Context #### On the Dashboard System Journal entries appear as reference lines on your metric graphs, showing exactly when changes occurred relative to performance shifts. #### In Trace Details When viewing individual traces, associated journal entries are displayed inline, providing immediate context for that specific execution. 
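Conceptually, entries like "LLM Temperature Changed: 0.2 → 0.8" amount to a diff over the parameters of consecutive traces. A minimal TypeScript sketch of that detection (illustrative only: Avido performs this automatically, and the function and field names here are our assumptions):

```typescript
// Illustrative change detection; Avido does this automatically in the background.
type Params = Record<string, number | string>;

function detectChanges(prev: Params, curr: Params, required: string[] = []): string[] {
  const entries: string[] = [];
  // Flag parameters whose value differs from the previous trace.
  for (const key of Object.keys(curr)) {
    if (key in prev && prev[key] !== curr[key]) {
      entries.push(`${key} changed: ${prev[key]} → ${curr[key]}`);
    }
  }
  // Flag required parameters that are absent entirely.
  for (const key of required) {
    if (!(key in curr)) entries.push(`Missing parameter: ${key}`);
  }
  return entries;
}
```

For example, `detectChanges({ temperature: 0.2, maxTokens: 150 }, { temperature: 0.8, maxTokens: 500 }, ["topK"])` yields two change entries and one missing-parameter entry, matching the wealth-management scenario above.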
#### In the System Journal List The dedicated System Journal page provides: * Filtering by entry type, date range, or affected system * Sorting by relevance, time, or severity * Bulk actions for reviewing multiple entries ### Common Workflows #### Investigating Performance Changes 1. Notice a metric shift on your dashboard 2. Check for System Journal markers at the same timestamp 3. Click through to see what parameters changed 4. Review the trace details to understand the impact #### Compliance Review 1. Access the System Journal for your reporting period 2. Filter by the relevant AI system or model 3. Export the journal entries for your compliance report 4. Document any required explanations or approvals #### Responding to Alerts When the System Journal detects missing required parameters: 1. The entry clearly identifies what's missing (e.g., "Missing Parameter: topK") 2. Click through to the affected trace 3. Update your configuration to include the missing parameter 4. Verify the fix in subsequent traces ### Best Practices **Regular Review Schedule** Set up a weekly review of System Journal entries to catch trends before they become issues. **Tag Critical Parameters** Work with your compliance team to identify which parameters are critical for your use case, ensuring the System Journal prioritizes these in its monitoring. **Integrate with Your Workflow** * Set up alerts for critical changes * Include System Journal reviews in your deployment checklist * Reference journal entries in incident post-mortems **Use Journal Entries for Documentation** The System Journal creates a natural documentation trail. 
Use it to: * Track experiment results * Document approved configuration changes * Build a knowledge base of parameter impacts ### Advanced Features #### Entry Classification Different System Journal entry types use distinct visual indicators: * 🔄 **Configuration Changes**: Parameter modifications * ⚠️ **Missing Data**: Required fields not present * 🔍 **Anomalies**: Unusual patterns detected * ✅ **Resolutions**: Issues that have been addressed #### Intelligent Grouping When multiple parameters change simultaneously, the System Journal groups related changes into a single entry for easier understanding. #### Historical Analysis Compare current configurations against any point in history to understand evolution and drift over time. ## Getting Started 1. **No Setup Required**: The System Journal begins working automatically as soon as you start sending traces to Avido 2. **First Journal Entries**: You'll see your first entries within minutes of trace ingestion 3. **Customization**: Contact your Avido representative to configure specific parameters to monitor or adjust sensitivity thresholds The System Journal transforms AI observability from reactive firefighting to proactive governance, ensuring your AI deployments remain compliant, performant, and predictable. # Tracing Source: https://docs.avidoai.com/traces Capture every step of your LLM workflow and send it to Avido for replay, evaluation, and monitoring. When your chatbot conversation or agent run is in flight, **every action becomes an *event***.\ Bundled together they form a **trace** – a structured replay of what happened, step‑by‑step. | Event | When to use | | ----------- | ------------------------------------------------------------------------------- | | `trace` | The root container for a whole conversation / agent run. | | `llm` | Start and end of every LLM call. | | `tool` | Calls to a function / external tool invoked by the model. | | `retriever` | RAG queries and the chunks they return. 
| | `log` | Anything else worth seeing while debugging (system prompts, branches, errors…). | The full schema lives in API ▸ Ingestion. *** ## Recommended workflow 1. **Collect events in memory** as they happen. 2. **Flush** once at the end (or on fatal error). 3. **Add a `log` event** describing the error if things blow up. 4. **Keep tracing async** – never block your user. 5. **Evaluation‑only mode?** Only ingest when the run came from an Avido test → check for `testId` from the [Webhook](/webhooks). 6. **LLM events** should contain the *raw* prompt & completion – strip provider JSON wrappers. *** ## Ingesting events You can send events: * **Directly via HTTP** * **Via our SDKs** (`avido`) ```bash cURL (default) curl --request POST \ --url https://api.avidoai.com/v0/ingest \ --header 'Content-Type: application/json' \ --header 'x-api-key: ' \ --data '{ "events": [ { "type": "trace", "timestamp": "2025-05-15T12:34:56.123455Z", "referenceId": "123e4567-e89b-12d3-a456-426614174000", "metadata": { "source": "chatbot" } }, { "type": "llm", "event": "start", "traceId": "123e4567-e89b-12d3-a456-426614174000", "timestamp": "2025-05-15T12:34:56.123456Z", "modelId": "gpt-4o-2024-08-06", "params": { "temperature": 1.2 }, "input": [ { "role": "user", "content": "Tell me a joke." 
        }
      ]
    }
  ]
}'
```

```ts Node
import Avido from 'avido';

const client = new Avido({
  applicationId: 'My Application ID',
  apiKey: process.env['AVIDO_API_KEY'], // optional – defaults to env
});

const ingest = await client.ingest.create({
  events: [
    {
      timestamp: '2025-05-15T12:34:56.123456Z',
      type: 'trace',
      testId: 'INSERT UUID'
    },
    {
      timestamp: '2025-05-15T12:34:56.123456Z',
      type: 'tool',
      toolInput: 'the input to a tool call',
      toolOutput: 'the output from the tool call'
    }
  ],
});
console.log(ingest.data);
```

```python Python
import os
from avido import Avido

client = Avido(
    api_key=os.environ.get("AVIDO_API_KEY"),  # optional – defaults to env
)
ingest = client.ingest.create(
    events=[
        {
            "timestamp": "2025-05-15T12:34:56.123456Z",
            "type": "trace",
        },
    ],
)
print(ingest.data)
```

***

## Tip: map your IDs

If you already track a conversation / run in your own DB, pass that same ID as `referenceId`.\
It makes mapping between your system and Avido effortless.

***

## Next steps

* Inspect traces in **Traces** inside the dashboard.

Need more examples or have a tricky edge case? [Contact us](mailto:support@avidoai.com) and we’ll expand the docs! 🎯

# Webhooks

Source: https://docs.avidoai.com/webhooks

Trigger automated Avido tests from outside your code and validate payloads securely.

## Why webhook‑triggered tests?

**Part of Avido's secret sauce is that you can kick off a test *without touching your code*.**\
Instead of waiting for CI or redeploys, Avido sends an HTTP `POST` to an endpoint that **you** control.

| Benefit                 | What it unlocks                                                  |
| ----------------------- | ---------------------------------------------------------------- |
| **Continuous coverage** | Run tests on prod/staging as often as you like, fully automated. |
| **SME‑friendly**        | Non‑developers trigger & tweak tasks from the Avido UI.          |

***

## How it works

1. **A test is triggered** in the dashboard or automatically.
2.
**Avido POSTs** to your configured endpoint. 3. **Validate** the `signature` + `timestamp` with our API/SDK. 4. **Run your LLM flow** using `prompt` from the payload. 5. **Emit a trace** that includes `testId` to connect results in Avido. 6. **Return `200 OK`** – any other status tells Avido the test failed. ### Payload example ```json Webhook payload { "testId": "123e4567-e89b-12d3-a456-426614174000", "prompt": "Write a concise onboarding email for new users." } ``` Headers: | Header | Purpose | | ------------------- | ---------------------------------------- | | `x-avido-signature` | HMAC signature of the payload | | `x-avido-timestamp` | Unix ms timestamp the request was signed | *** ## Verification flow ```mermaid sequenceDiagram Avido->>Your Endpoint: POST payload + headers Your Endpoint->>Avido API: /v0/validate-webhook Avido API-->>Your Endpoint: { valid: true } Your Endpoint->>LLM: run(prompt) Your Endpoint->>Avido Traces API: POST /v0/traces (testId) ``` If validation fails, respond **401** (or other 4xx/5xx). Avido marks the test as **failed**. *** ## Code examples ```bash cURL (default) curl --request POST --url https://api.avidoai.com/v0/validate-webhook --header 'Content-Type: application/json' --header 'x-api-key: ' --data '{ "signature": "abc123signature", "timestamp": 1687802842609, "body": { "testId": "123e4567-e89b-12d3-a456-426614174000", "prompt": "Write a concise onboarding email for new users." } }' ``` ```ts Node import express from 'express'; import { Avido } from '@avidoai/sdk-node'; const app = express(); app.use(express.json()); const client = new Avido({ apiKey: process.env.AVIDO_API_KEY! 
});

app.post('/avido/webhook', async (req, res) => {
  const signature = req.get('x-avido-signature');
  const timestamp = req.get('x-avido-timestamp');
  const body = req.body;

  try {
    const { valid } = await client.validateWebhook({ signature, timestamp, body });
    if (!valid) return res.status(401).send('Invalid webhook');

    const result = await runAgent(body.prompt); // 🤖 your LLM call

    await client.traces.create({
      testId: body.testId,
      input: body.prompt,
      output: result
    });

    return res.status(200).send('OK');
  } catch (err) {
    console.error(err);
    return res.status(500).send('Internal error');
  }
});
```

```python Python
import os
from flask import Flask, request, jsonify
from avido import Avido

app = Flask(__name__)
client = Avido(api_key=os.environ["AVIDO_API_KEY"])

@app.route("/avido/webhook", methods=["POST"])
def handle_webhook():
    body = request.get_json(force=True) or {}
    signature = request.headers.get("x-avido-signature")
    timestamp = request.headers.get("x-avido-timestamp")

    if not signature or not timestamp:
        return jsonify({"error": "Missing signature or timestamp"}), 400

    try:
        resp = client.validate_webhook.validate(
            signature=signature,
            timestamp=timestamp,
            body=body
        )
        if not resp.valid:
            return jsonify({"error": "Invalid webhook signature"}), 401
    except Exception as e:
        return jsonify({"error": str(e)}), 401

    result = run_agent(body.get("prompt"))  # your LLM pipeline

    client.traces.create(
        test_id=body.get("testId"),
        input=body.get("prompt"),
        output=result
    )
    return jsonify({"status": "ok"}), 200
```

***

## Next steps

* Send us [Trace events](/traces).
* Schedule tasks or trigger them manually from **Tasks** in the dashboard.
* Invite teammates so they can craft evals and eyeball results directly in Avido.

***
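Beyond signature validation, you may also want to reject webhooks whose `x-avido-timestamp` is stale, which limits replay of captured requests. A minimal TypeScript sketch (the 5-minute window is our assumption, not an Avido requirement):

```typescript
// Reject requests signed too far in the past or future (allowing for clock skew).
// The 5-minute window is an illustrative choice, not an Avido requirement.
const MAX_AGE_MS = 5 * 60 * 1000;

function isFresh(timestampMs: number, nowMs: number = Date.now()): boolean {
  return Math.abs(nowMs - timestampMs) <= MAX_AGE_MS;
}

// In the webhook handler, before signature validation:
// if (!isFresh(Number(req.get("x-avido-timestamp")))) {
//   return res.status(401).send("Stale webhook");
// }
```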