Overview
What is AnotherAI?
AnotherAI turns your AI coding assistant into your AI engineer.
Let Claude Code, Cursor, and Windsurf operate your AI agents through MCP. AnotherAI is compatible with any programming language and SDK (OpenAI, Vercel AI, PydanticAI, Instructor), and gives you access to models from OpenAI, Anthropic, Google, Meta, DeepSeek, Mistral, and more. Fully open-source.
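Because AnotherAI exposes an OpenAI-compatible API, pointing an existing client at it is just a base-URL change. A minimal sketch using only the Python standard library (the model name and payload below are illustrative assumptions, not a prescribed configuration):

```python
import json
import urllib.request

# AnotherAI's OpenAI-compatible endpoint; only the base URL differs
# from a direct call to the provider.
BASE_URL = "https://api.anotherai.dev/v1"

payload = {
    "model": "gpt-4o-mini",  # illustrative model name
    "messages": [{"role": "user", "content": "Summarize this invoice."}],
}

request = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer <your_api_key_here>",
        "Content-Type": "application/json",
    },
    method="POST",
)
print(request.full_url)
```

Any OpenAI-compatible SDK works the same way: keep your existing code and set its base URL to `https://api.anotherai.dev/v1`.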
With AnotherAI, your coding assistant becomes an AI engineer that can:
- Debug production issues: "Why is the email parser failing?" → Queries logs, identifies patterns, deploys fix
- Run experiments: "Compare GPT-4o vs Claude Sonnet 4 on accuracy and cost" → A/B tests models and prompts with production data
- Deploy without code changes: "Make the support bot more concise" → Updates prompts instantly, no PR needed
- Optimize costs: "Reduce AI spend by 30%" → Analyzes usage, switches to cheaper models where quality permits
- Improve from feedback: "Analyze user feedback for the invoice parser" → Reviews collected feedback, identifies issues, suggests prompt improvements
- Analyze performance: "Show response times by model this week" → Generates custom metrics and visualizations
- Evaluate quality: "Test if the new prompt maintains accuracy" → Runs evaluation suites, compares against baselines
By using AnotherAI, you won't pay more than your current inference costs: we price-match the providers and make our margin through volume discounts. Learn more about our pricing.
Get Started
Set Up
To use AnotherAI's hosted service, go to https://anotherai.dev/ and sign up to create a free account.
Interested in a self-hosted, open-source setup? We have that too! You can learn how to get set up here.
MCP
AnotherAI is available as an MCP server in the IDEs below.
- Open Cursor
- Go to `Settings...`
- Navigate to `Cursor Settings`
- Select `Tools and Integrations`
- Select `+ New MCP Server`
Add the following configuration to your MCP servers config:
```json
{
  "mcpServers": {
    "anotherai": {
      "url": "https://api.anotherai.dev/mcp",
      "headers": {
        "Authorization": "Bearer <your_api_key_here>"
      }
    }
  }
}
```
The Cursor UI should now look like this (note the green indicator and the number of enabled tools):
- Complete the setup above
- Get your API key at anotherai.dev
- Open Claude Code in your preferred terminal (standalone or within your IDE).
- Run the following to install:
```shell
claude mcp add --scope user anotherai https://api.anotherai.dev/mcp --transport http --header "Authorization: Bearer YOUR_API_KEY_HERE"
```
Installation Scopes: The `--scope user` flag installs AnotherAI for your personal use across all projects. For team projects, you can use `--scope project` instead to create a shared `.mcp.json` file that can be committed to version control and shared with your team members. Learn more about MCP scopes in the Anthropic documentation.
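For example, the team-scoped variant of the install command would look like this (assuming you run it from the project root):

```shell
# Registers AnotherAI in a shared .mcp.json at the project root,
# so teammates get the same MCP server after pulling the repo.
claude mcp add --scope project anotherai https://api.anotherai.dev/mcp \
  --transport http \
  --header "Authorization: Bearer YOUR_API_KEY_HERE"
```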
- Type the following to verify the server is properly connected:

```shell
claude
```

Then type:

```shell
/mcp
```
You should see the AnotherAI server listed as "connected".
- Open Windsurf
- Go to `Settings...`
- Select `Windsurf Settings`
- Navigate to `Cascade`
- Select `Manage MCPs`
- Select `View Raw Config`
Add the following configuration to your MCP servers config:
```json
{
  "mcpServers": {
    "anotherai": {
      "url": "https://api.anotherai.dev/mcp",
      "headers": {
        "Authorization": "Bearer <your_api_key_here>"
      }
    }
  }
}
```
Navigate back to Manage MCPs and refresh the page. The AnotherAI MCP should appear and show as enabled.
Cursor CLI can access MCPs configured in your IDE's `mcp.json` configuration file, enabling the same MCP servers and tools that you've configured for the IDE.
Setup steps:
- Complete the setup steps described above and ensure the AnotherAI MCP server is running
- Configure the AnotherAI MCP in your IDE's `mcp.json` file, as described in the Cursor tab.
- Once the MCP is enabled and active, you can ask the Cursor CLI to interact with it
Additional Configuration: If the CLI claims it can't find or use the AnotherAI MCP, you may need to give it the specific file path where your `mcp.json` file is located.
Try it out
Once you've completed the setup above, AnotherAI is ready to use!
- If you already have agents created, check out how to migrate them to AnotherAI.
- If you don't have any agents built yet, check out building a new agent to learn how to build an AnotherAI-compatible agent.
Once you have an agent built or migrated, here are some examples of prompts you can send to Claude Code to test how AnotherAI works:
Create an experiment in AnotherAI that compares how GPT-5 mini and GPT-4 mini perform on @[your-agent-name-here].
Use AnotherAI to help me find a faster model for @[your-agent-name-here] while still maintaining accuracy.
AnotherAI's Zero Lock-in Promise
It's easier to leave AnotherAI than to leave OpenAI directly.
To switch away from AnotherAI:
- Remove `base_url="https://api.anotherai.dev/v1"`
- If you're not using deployments: your code stays identical
- If you are using deployments: ask Claude Code (or your preferred coding agent) to fetch your deployment configuration (static content and model) and recreate the equivalent configuration in your code.
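As a sketch of what that means in practice (the helper function and key placeholder here are hypothetical, for illustration only): the base URL is the only AnotherAI-specific piece of client configuration, so removing it reverts the client to the provider's default endpoint.

```python
# Hypothetical helper for illustration: the base_url override is the only
# AnotherAI-specific piece of an otherwise-unchanged OpenAI client config.
ANOTHERAI_BASE_URL = "https://api.anotherai.dev/v1"

def client_config(use_anotherai: bool) -> dict:
    config = {"api_key": "<your_api_key_here>"}
    if use_anotherai:
        # Route requests through AnotherAI's OpenAI-compatible endpoint.
        config["base_url"] = ANOTHERAI_BASE_URL
    return config

print(client_config(True))   # with AnotherAI: base_url override present
print(client_config(False))  # without AnotherAI: provider defaults, code otherwise identical
```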
While using AnotherAI, you:
- Get automatic fallbacks across providers
- Collect valuable performance data about your AI usage
- Can export everything when/if you leave
We built AnotherAI this way because we've been burned by vendor lock-in too. The OpenAI-compatible design isn't just about easy onboarding - it's about respecting your right to walk away.
Bottom line: Try AnotherAI risk-free. If it doesn't provide value, switching back takes minutes, not months.
Join the community
If you have questions about AnotherAI, reach out to our team and community on our community Slack.