Migrating from WorkflowAI to AnotherAI
Context
WorkflowAI is an abstraction layer on top of LLM providers that, like AnotherAI, lets you call many providers through a single interface.
The main difference is that in most cases WorkflowAI takes a "structured data in, structured data out" approach and does not use the message primitive in its basic API.
While not the most common usage, WorkflowAI also offers an OpenAI completion API compatible endpoint.
WorkflowAI and AnotherAI share the same authorization servers, so WorkflowAI users can use the same account on AnotherAI. API keys, however, are not transferable: WorkflowAI API keys start with wai- and AnotherAI API keys start with aai-.
Migration Steps
Very important: most WorkflowAI agents use deployments on WorkflowAI, meaning that the code does not contain the prompt. The prompt is stored and managed by the WorkflowAI backend. It is possible to set up a bridge that forwards data from WorkflowAI to AnotherAI, which makes the prompt and/or deployment available in AnotherAI.
Step 1: Identify the agent id and deployment environment
You will first need to identify:
- the agent id, which will be the same in AnotherAI as in WorkflowAI
- the deployment environment ("production", "staging", "dev")
Both can be extracted from the code.
// WorkflowAI client setup
// This should be replaced with the OpenAI client setup pointing to AnotherAI
const workflowAI = new WorkflowAI({
key: process.env["WORKFLOWAI_API_KEY"],
});
// Set up types
// Input type can likely be re-used in a function
export interface AnalyzeBookCharactersTaskInput {
book_title?: string;
}
// Output type will have to be converted to a Zod schema to be compatible with the OpenAI beta SDK
export interface AnalyzeBookCharactersTaskOutput {
characters?: {
name?: string;
goals?: string[];
weaknesses?: string[];
outcome?: string;
}[];
}
// Set up the agent
const analyzeBookCharacters: Agent<
AnalyzeBookCharactersTaskInput,
AnalyzeBookCharactersTaskOutput
> = workflowAI.agent({
id: "analyze-book-characters", // agent id, will be the same in AnotherAI
schemaId: 1, // schema id, can be ignored, will no longer be used in AnotherAI
version: "production", // deployment envoronment, in WorkflowAI deployments are unique per agent schema
});
WorkflowAI exposes a run endpoint per agent and schema. The full URL will look like https://run.workflowai.com/v1/agents/<agent_id>/schemas/<schema_id>/run or https://run.workflowai.com/v1/tasks/<agent_id>/schemas/<schema_id>/run, where:
- <agent_id> is a slug that identifies the agent
- <schema_id> is an integer that identifies a schema (not used in AnotherAI)
The payload will look like:
POST https://run.workflowai.com/v1/agents/analyze-book-characters/schemas/1/run
Authorization: Bearer wai-***
Content-Type: application/json
{
"version": "production", // deployment_id, in WorkflowAI deployments are unique per agent schema
"task_input": {
"book_title": "The Shadow of the Wind"
},
"metadata": {
// metadata is optional
}
}
Step 2: Check if a deployment has already been migrated
In WorkflowAI, deployments are unique per agent schema. In AnotherAI, deployments are unique across all agents and the concept of schema is removed. Instead, it is good practice to include the agent_id in the deployment id and add a deployment number.
You can fetch all existing deployments for a given agent using the list_deployments tool with the agent_id parameter.
A migrated deployment will likely have the shape <agent_id>:<deployment_environment>#<deployment_number>.
In the example above, the migrated deployment will look like analyze-book-characters:production#1.
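As a purely illustrative sketch, the expected id can be derived from the agent id, environment, and deployment number (the helper name is hypothetical, not part of any SDK):
// Hypothetical helper: derives the expected migrated deployment id
function migratedDeploymentId(agentId: string, environment: string, deploymentNumber = 1): string {
  return `${agentId}:${environment}#${deploymentNumber}`;
}
// migratedDeploymentId("analyze-book-characters", "production")
// => "analyze-book-characters:production#1"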
It is possible that a deployment was created with a slightly different format. Check the available deployments using the list_deployments tool and make sure to adjust the deployment id accordingly.
If the deployment has not been migrated, tell the user to contact WorkflowAI support.
Step 3: Convert the WorkflowAI code to point to AnotherAI when a deployment has already been migrated (standard WorkflowAI agents)
AnotherAI is compatible with the OpenAI completion API, so it should be configured just like any other OpenAI client. You can use the create_api_key tool to create an API key if needed. Be mindful of any existing frameworks that are used to make completion calls and follow the appropriate documentation. In some cases, you might have to adapt the response format (aka output_schema), which is a property of the version in AnotherAI.
To use a deployment, simply pass the deployment id as the model parameter in the completion call.
import OpenAI from "openai";
import { zodResponseFormat } from "openai/helpers/zod";
import { z } from "zod";
const client = new OpenAI({
baseURL: "https://api.anotherai.dev/v1",
apiKey: process.env["ANOTHERAI_API_KEY"],
});
// A plain TypeScript interface is sufficient for the input
export interface AnalyzeBookCharactersTaskInput {
book_title?: string;
}
// ----------------------------------------------------------------
// Without using Zod
export interface AnalyzeBookCharactersTaskOutput {
characters?: {
name?: string;
goals?: string[];
weaknesses?: string[];
outcome?: string;
}[];
}
export async function analyzeBookCharacters(input: AnalyzeBookCharactersTaskInput): Promise<AnalyzeBookCharactersTaskOutput> {
const completion = await client.chat.completions.create({
model: "anotherai/deployment/analyze-book-characters:production#1", // use the "anotherai/deployment/<deployment_id>" id here
messages: [], // messages should remain empty since the prompt is stored in AnotherAI
// response_format is not needed here since it is handled by the deployment
input: input, // input is AnotherAI specific. You may have to silence a TS error
});
  const content = completion.choices[0].message.content;
  if (!content) throw new Error("Empty completion content");
  return JSON.parse(content) as AnalyzeBookCharactersTaskOutput;
}
// ----------------------------------------------------------------
// Or if using Zod and the beta OpenAI client
// Output needs to be converted to a Zod schema to be compatible with the OpenAI beta SDK
// OutputSchema is available in `version.output_schema.json_schema`
const AnalyzeBookCharactersTaskOutputSchema = z.object({
characters: z.array(z.object({
name: z.string(),
goals: z.array(z.string()),
weaknesses: z.array(z.string()),
outcome: z.string(),
})),
});
export type AnalyzeBookCharactersTaskOutput = z.infer<typeof AnalyzeBookCharactersTaskOutputSchema>;
export async function analyzeBookCharacters(input: AnalyzeBookCharactersTaskInput): Promise<AnalyzeBookCharactersTaskOutput> {
// Use .parse() instead of .create() when using the beta client
const completion = await client.beta.chat.completions.parse({
model: "anotherai/deployment/analyze-book-characters:production#1", // use the "anotherai/deployment/<deployment_id>" id here
messages: [], // messages should remain empty since the prompt is stored in AnotherAI
    response_format: zodResponseFormat(AnalyzeBookCharactersTaskOutputSchema, "AnalyzeBookCharactersTaskOutput"),
input: input, // input is AnotherAI specific. You may have to silence a TS error
});
  // .parse() exposes the parsed output on the message
  return completion.choices[0].message.parsed!;
}
The same conversion in Go, using the OpenAI Go SDK:
package main

import (
  "context"
  "encoding/json"
  "os"

  // Make sure to use the v2
  "github.com/openai/openai-go/v2"
  "github.com/openai/openai-go/v2/option"
  "github.com/openai/openai-go/v2/shared"

  "github.com/invopop/jsonschema" // library to generate JSON schemas if needed
)
// setup AnotherAI client
var client = openai.NewClient(
option.WithBaseURL("https://api.anotherai.dev/v1"),
option.WithAPIKey(os.Getenv("ANOTHERAI_API_KEY")),
)
// types are likely already present in the user's code
type AnalyzeBookCharactersTaskInput struct {
BookTitle string `json:"book_title"`
}
type AnalyzeBookCharactersTaskOutput struct {
Characters []struct {
Name string `json:"name"`
Goals []string `json:"goals"`
Weaknesses []string `json:"weaknesses"`
Outcome string `json:"outcome"`
} `json:"characters"`
}
// Generate the JSON schema for the output
var AnalyzeBookCharactersTaskOutputSchema = jsonschema.Reflect(&AnalyzeBookCharactersTaskOutput{})
func AnalyzeBookCharacters(input AnalyzeBookCharactersTaskInput) (AnalyzeBookCharactersTaskOutput, error) {
params := openai.ChatCompletionNewParams{
// Pass a plain string here
Model: "anotherai/deployment/analyze-book-characters:production#1",
// No need to pass messages
Metadata: shared.Metadata{
//...any user provided metadata. No need to pass agent_id here since it will be passed by the deployment
},
ResponseFormat: openai.ChatCompletionNewParamsResponseFormatUnion{
OfJSONSchema: &openai.ResponseFormatJSONSchemaParam{
JSONSchema: openai.ResponseFormatJSONSchemaJSONSchemaParam{
Name: "analyze-book-characters-task-output",
        Schema: AnalyzeBookCharactersTaskOutputSchema,
},
},
},
}
// input is AnotherAI specific
params.SetExtraFields(map[string]any{
"input": input, // will be serialized as JSON
})
chatCompletion, err := client.Chat.Completions.New(
context.TODO(),
params,
)
out := AnalyzeBookCharactersTaskOutput{}
if err != nil {
// handle error if needed
return out, err
}
  content := chatCompletion.Choices[0].Message.Content
if err := json.Unmarshal([]byte(content), &out); err != nil {
// handle error if needed
return out, err
}
return out, nil
}
It is likely that an OpenAI SDK exists for the requested language. The OpenAI SDKs are usually more convenient to use, providing out of the box:
- retries / error management
- structured output parsing
- tool calling handling
It is also possible that the user already has an agent framework set up that is compatible with the completion API. If re-using an existing agent framework, it is a good idea to create a separate client so as to avoid forcing the user to migrate all of their completion calls.
If hitting the API directly is needed, the payload will look like:
{
"model": "anotherai/deployment/analyze-book-characters:production#1", // use the "anotherai/deployment/<deployment_id>" id here
"messages": [],
  // no need to pass the response_format here, it is handled by the deployment
  "input": ..., // corresponds to task_input in WorkflowAI
"metadata": {
// metadata is optional
}
}
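For example, here is a minimal sketch of hitting the endpoint directly from TypeScript, reusing the input from the earlier example (error handling omitted for brevity):
// Sketch: raw call to the completion endpoint, assuming ANOTHERAI_API_KEY is set
const res = await fetch("https://api.anotherai.dev/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env["ANOTHERAI_API_KEY"]}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "anotherai/deployment/analyze-book-characters:production#1",
    messages: [], // the prompt is stored in AnotherAI
    input: { book_title: "The Shadow of the Wind" },
  }),
});
const completion = await res.json();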
The code is now migrated to AnotherAI.
Step 3 bis: Migrating a WorkflowAI agent that uses the OpenAI completion API compatible endpoint
A WorkflowAI agent that uses the completion API will also have a configured OpenAI client. This client should be updated to point to AnotherAI's base URL and API key.
Example code to be converted:
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://run.workflowai.com/v1/",
  apiKey: process.env["WORKFLOWAI_API_KEY"],
});
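After conversion, the same client simply points at AnotherAI (a sketch mirroring the client setup shown earlier):
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.anotherai.dev/v1",
  apiKey: process.env["ANOTHERAI_API_KEY"],
});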
When hitting the API directly, an http client is usually configured with the base URL and API key. The exact client depends on the language and libraries that are used.
Simply make sure that:
- the base URL points to https://api.anotherai.dev. The full completion URL should look like https://api.anotherai.dev/v1/chat/completions
- the Authorization header looks like Authorization: Bearer aai-***, where aai-*** is the AnotherAI API key
You can tell whether or not the agent uses a deployment by checking the model in the OpenAI completion call.
Any model with the format <agent_id>/#<number>/<environment> (e.g. production, staging, or dev) is a WorkflowAI deployment.
// This is a WorkflowAI deployment
const completion = await client.chat.completions.create({
model: "travel-assistant/#1/production",
});
// This is not a WorkflowAI deployment
const completion = await client.chat.completions.create({
model: "gpt-4o",
});
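An illustrative check for this format (the regex and helper name are assumptions, not part of either SDK):
// Matches "<agent_id>/#<number>/<environment>", e.g. "travel-assistant/#1/production"
const WORKFLOWAI_DEPLOYMENT_RE = /^[a-z0-9-]+\/#\d+\/(production|staging|dev)$/;

function usesWorkflowAIDeployment(model: string): boolean {
  return WORKFLOWAI_DEPLOYMENT_RE.test(model);
}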
Uses deployments:
POST https://api.anotherai.dev/v1/chat/completions
Authorization: Bearer aai-***
Content-Type: application/json
{
  "model": "travel-assistant/#1/production",
  // messages are optional here
  ...
}
Does not use deployments, calls a model directly:
POST https://api.anotherai.dev/v1/chat/completions
Authorization: Bearer aai-***
Content-Type: application/json
{
  "model": "gpt-4o",
  // messages are required here
  ...
}
If the agent uses a model directly, there is nothing to do: changing the OpenAI client config will be enough.
If the agent uses a deployment, you will need to adjust the deployment id to match the imported one; don't forget to prefix it with anotherai/deployment/.
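A hypothetical sketch of that mapping, assuming the deployment was migrated with the default <agent_id>:<environment>#<deployment_number> scheme (verify the actual id with list_deployments):
// Hypothetical mapping from a WorkflowAI deployment model id to the AnotherAI one.
// The deployment number is assumed to be 1; adjust it to the real deployment.
function toAnotherAIModel(model: string): string {
  const match = model.match(/^([a-z0-9-]+)\/#\d+\/(production|staging|dev)$/);
  if (!match) return model; // plain model id, nothing to change
  const [, agentId, environment] = match;
  return `anotherai/deployment/${agentId}:${environment}#1`;
}
// toAnotherAIModel("travel-assistant/#1/production")
// => "anotherai/deployment/travel-assistant:production#1"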