Debugging Agent Issues
A step-by-step guide to identifying, analyzing, and resolving issues with your AI agents using your AI assistant's debugging capabilities
If you notice issues with your agent, the easiest way to debug them is to use your AI assistant to investigate.
Describe the Issue to your AI assistant
Open your AI assistant (or your preferred AI coding agent) and describe the issue you're experiencing. Depending on the nature of the issue, your description can include different kinds of information:
Using Specific Completion Links: If you've identified a specific problematic completion, you can copy the completion ID from the completions detail view (the button in the top right of the modal) and share it:

```
This completion anotherai/completion/0198c34b-ff24-73cb-57d8-a67851e0cf10:
the input tone was enthusiastic, but the rewritten email isn't very enthusiastic.
Help me understand what's going wrong.
```

Using Metadata (Especially Useful for Customer Issues): If a user reports an issue and you use metadata - like user emails or IDs - to tie completions to a specific user, you can debug more generally without needing a specific completion ID:
```
john@example.com reported that their email was not rewritten in the correct tone by
@email_reimaginer. Find out why the agent did not work well for customer john@example.com
and help me understand how to fix the issue.
```

If you don't yet use metadata and want to add it, you can learn more here.
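As a rough illustration, here is a minimal sketch of what attaching metadata to a completion can look like. It assumes an OpenAI-compatible Python client; the base URL, model name, and metadata keys are illustrative, so check your provider's documentation for the exact fields it accepts:

```python
# Minimal sketch: tag each completion with metadata identifying the user
# so problematic requests can be looked up later. Assumes an
# OpenAI-compatible endpoint; values below are illustrative.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # illustrative endpoint
    api_key="YOUR_API_KEY",
)

completion = client.chat.completions.create(
    model="email_reimaginer",  # illustrative agent name
    messages=[
        {"role": "user", "content": "Rewrite this email in an enthusiastic tone: ..."}
    ],
    # Hypothetical metadata keys; any key that identifies the user or
    # session works, as long as you apply it consistently.
    metadata={
        "user_email": "john@example.com",
        "session_id": "session-123",
    },
)
print(completion.choices[0].message.content)
```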
Description of the Issue Only: If you don't have a specific completion ID or metadata, you can simply describe the issue you're seeing to your AI assistant:

```
I'm seeing an issue with some of the recent completions on anotherai/agent/email_reimaginer.
The emails are not being rewritten in the requested tones. Help me understand what needs
to be updated to fix the issue.
```

Your AI assistant Does the Rest!
Your AI assistant will debug the issue for you by examining the completion details, the agent's configuration, and its input variables. Once the issue is identified, you can ask your AI assistant to create an experiment that tests potential fixes before you update your agent's code or its deployment.
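For example, once a likely cause has been identified, you might follow up with a prompt like this (the wording is illustrative, not a required format):

```
Create an experiment that compares the current version of @email_reimaginer against the
fix you're proposing, using a few sample emails with different requested tones, so we can
confirm the tone issue is resolved without introducing regressions.
```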
Common Issues your AI assistant Can Help Debug
- Prompt engineering problems - Suboptimal prompts leading to poor outputs
- Input validation issues - A required input variable is empty (see the sketch after this list)
- Model selection problems - Wrong model chosen for the task
- Performance bottlenecks - High latency, slow response times, or requests timing out before completion
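As a quick illustration of the input validation case, a guard like the following catches empty required variables before the request ever reaches the agent. This is a hypothetical sketch; the variable names are illustrative, not part of any real agent:

```python
# Hypothetical pre-flight check: fail fast on empty required input
# variables instead of debugging a degraded completion afterwards.
REQUIRED_VARS = ("email_body", "tone")  # illustrative variable names

def validate_input(variables: dict) -> None:
    missing = [name for name in REQUIRED_VARS if not variables.get(name)]
    if missing:
        raise ValueError(f"Missing or empty required input variables: {missing}")

# Passes; an empty value (e.g. {"tone": ""}) would raise ValueError.
validate_input({"email_body": "Hi team, ...", "tone": "enthusiastic"})
```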
Tips
- Try to provide either a completion link or a specific metadata key and value (like a user ID or session ID) to help your AI assistant locate and analyze the problematic requests.
- Ask your AI assistant to create an experiment that tests your updates against a few different inputs, so surprise regressions don't slip through.