
How to Build an AI Agent

Build custom AI agents with persistent instructions, knowledge bases, connected tools, and multi-agent workflows using Squid's client SDK.

Why Build an AI Agent with Squid

Adding AI capabilities to your app typically means stitching together an LLM API, a vector database for context retrieval, tool-calling logic, conversation memory, and security rules. Each piece requires its own integration work.

Squid handles all of this in a unified platform. You define an agent with instructions and abilities, connect it to your data sources and tools, and interact with it through a single SDK. Squid manages prompt construction, context retrieval, memory, and orchestration so you can focus on the experience you want to create.

How it Works

Under the hood, agents use a Large Language Model (LLM) to generate answers to users' questions. When a user asks a question, the persistent instructions and most relevant context are passed to the LLM as part of the prompt, providing the user with a contextualized answer.

Squid lets you choose the LLM for your AI agent, allowing you to find the best fit for your use case. Models from providers such as OpenAI, Anthropic, and Google are available out of the box.

You can also connect to additional providers by adding an AI connector. This allows you to use self-hosted models (e.g., Ollama, vLLM), AWS Bedrock models, or any other OpenAI-compatible endpoint.

Building an Agent

An agent represents a distinct personality or setup for your AI workflow. Each agent is like a different persona or use case, distinguished by its own set of instructions and abilities. This design lets each agent produce responses tailored to its specific role.

Note

The following examples show how to create an agent using Squid's SDKs. If you are unfamiliar with developing using the Squid platform and SDKs, please read our documentation on fullstack development.

Upserting an Agent

To create or update an AI agent programmatically, use the upsert() method, specifying which agent ID should be created or updated:

Client code
await squid
.ai()
.agent('banking-copilot')
.upsert({
options: {
model: 'gpt-4o',
},
isPublic: true,
});

When upserting an agent, pass an options object with a model field indicating the model the agent will use.

The isPublic parameter determines whether the chat functionality of the given agent can be accessed without setting up security rules.

Deleting an Agent

To delete an existing agent, use the delete() method:

Client code
await squid.ai().agent('banking-copilot').delete();

This function results in an error if no agent exists for the provided agent ID.
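If cleanup code should succeed whether or not the agent exists, you can wrap delete() in a small helper like the sketch below. The helper name deleteIfExists is hypothetical, not part of the SDK; it simply treats the "agent not found" error as a non-fatal result:

```typescript
// Hypothetical wrapper: deletes an agent and swallows the error thrown when
// no agent exists for the given ID, returning whether a deletion happened.
async function deleteIfExists(agent: { delete: () => Promise<void> }): Promise<boolean> {
  try {
    await agent.delete();
    return true; // the agent existed and was deleted
  } catch {
    return false; // no agent for this ID (or deletion failed)
  }
}

// Usage (sketch): await deleteIfExists(squid.ai().agent('banking-copilot'));
```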

Updating the Model

To change the LLM model an agent uses, call updateModel():

Client code
await squid.ai().agent('banking-copilot').updateModel('claude-sonnet-4-6');

The model can be a vendor model name (e.g. 'gpt-4o', 'claude-sonnet-4-6', 'gemini-2.0-flash') or an integration-based model for additional providers like Ollama, AWS Bedrock, or any OpenAI-compatible endpoint:

Client code
await squid.ai().agent('banking-copilot').updateModel({
integrationId: 'my-ollama',
model: 'llama3',
});

You can also override the model per-request when calling ask() or chat() using the model option. See Ask Options for details.

Setting the Agent Description

To set or update a human-readable description for the agent, use the setAgentDescription() method. This updates only the description without affecting other agent configuration:

Client code
await squid
.ai()
.agent('banking-copilot')
.setAgentDescription('Assists customer support staff with banking and finance questions');

Alternatively, you can use upsert() to set the description along with all other agent values in a single call. Note that upsert() replaces the entire agent configuration, so any fields not included will be cleared.
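For example, a single upsert() call might set the description together with the rest of the configuration. The options.model and isPublic fields follow the upsert example above; the top-level description field name is an assumption based on setAgentDescription():

```typescript
// Full agent configuration for a single upsert() call. Because upsert()
// replaces the entire configuration, include every field you want to keep.
const agentConfig = {
  description: 'Assists customer support staff with banking and finance questions', // assumed field name
  options: { model: 'gpt-4o' },
  isPublic: true,
};

// Usage (sketch): await squid.ai().agent('banking-copilot').upsert(agentConfig);
```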

Instructions

Instructions set the rules for how the agent responds to prompts and answers questions. They should be direct and simple, and explain the purpose of the agent. Instructions are supplied as a block of text.

Adding Instructions

To add or edit instructions for an AI agent, use the updateInstructions() method, passing the instruction data as a string.

Client code
const instruction = 'You are a helpful copilot that assists customer support staff by providing answers to their questions about banking and finance products.';
await squid.ai().agent('banking-copilot').updateInstructions(instruction);

Context

Context tells the agent what knowledge to pull from when answering questions and is the same as the Knowledge Base ability in the Agent Studio. Adding context allows the agent to provide relevant answers on specific topics that may not be part of the underlying AI model.

The following are simple code examples, though the context you add can be much more complex. Some good examples of context include resources like code documentation, product manuals, business operations (e.g., store hours) and user-specific data. You can mix and match context types to create a robust knowledge base for your AI agent, ensuring that it can provide any information your users will need.

Creating a Knowledge Base

To add or update agent context, you must first create and connect a knowledge base.

First, create the new knowledge base with an embedding model that we provide out of the box:

Client code
await squid.ai().knowledgeBase('banking-knowledgebase').upsertKnowledgeBase({
description: 'This Knowledge Base contains information on card data',
name: 'banking knowledgebase',
embeddingModel: 'text-embedding-3-small',
chatModel: 'gpt-4o',
metadataFields: [],
});

Or, you can use an integration-based embedding model by passing an object with the connector ID, model name, and dimensions. See the OpenAI Compatible Embedding connector for setup instructions.

Client code
await squid
.ai()
.knowledgeBase('banking-knowledgebase')
.upsertKnowledgeBase({
description: 'This Knowledge Base contains information on card data',
name: 'banking knowledgebase',
embeddingModel: {
integrationId: 'my-embeddings',
model: 'text-embedding-3-small',
dimensions: 1536,
},
chatModel: 'gpt-4o',
metadataFields: [],
});

Upserting Context

To add context to the knowledge base, use the upsertContext() method, passing the context and its type.

The upsertContext() method accepts a context ID. Providing a context ID makes it easier to access the context later when you want to make changes.

Client code
const data = `Platinum Mastercard® Fair Credit, No annual fee. Flexible due dates...`;

await squid.ai().knowledgeBase('banking-knowledgebase').upsertContext({
type: 'text',
title: 'Credit Card Info',
text: data,
contextId: 'credit-cards',
});

Alternatively, use upsertContexts() to upsert an array of contexts.

Client code
const creditCard1 = `Platinum Mastercard® Fair Credit, No annual fee. Flexible due dates...`;
const creditCard2 = `Gold Mastercard®, $50 annual fee. Due dates once a month...`;

await squid
.ai()
.knowledgeBase('banking-knowledgebase')
.upsertContexts([
{
type: 'text',
title: 'Credit Card 1 Info',
text: creditCard1,
contextId: 'credit-cards1',
},
{
type: 'text',
title: 'Credit Card 2 Info',
text: creditCard2,
contextId: 'credit-cards2',
},
]);

Connect a Knowledge Base to an Agent

Use setAgentOptionInPath() to give an agent access to one or more knowledge bases without affecting other agent configuration. The description tells the agent when to consult each knowledge base:

Client code
await squid
.ai()
.agent('banking-copilot')
.setAgentOptionInPath('connectedKnowledgeBases', [
{
knowledgeBaseId: 'banking-knowledgebase',
description: 'Use for information on credit cards',
},
]);

To disconnect all knowledge bases, pass an empty array:

Client code
await squid
.ai()
.agent('banking-copilot')
.setAgentOptionInPath('connectedKnowledgeBases', []);

You can also use upsert() to set connected knowledge bases along with all other agent values in a single call. Note that upsert() replaces the entire agent configuration, so any fields not included will be cleared.

To include document metadata in the knowledge base results provided to the agent, set includeMetadata to true in the agent options or per-request:

Client code
await squid.ai().agent('banking-copilot').ask('Tell me about our credit cards', {
includeMetadata: true,
});

Context Types

Two types of contexts are supported: text and file.

Text context is created with a string that contains the context:

Client code
const data = `Platinum Mastercard® Fair Credit, No annual fee. Flexible due dates...`;

await squid.ai().knowledgeBase('banking-knowledgebase').upsertContext({
type: 'text',
title: 'Credit Card Info',
text: data,
contextId: 'credit-cards',
});

File context is created by providing a File object as a second parameter to the upsertContext() method. The file is then uploaded to Squid and the context is created from the file contents.

Client code
const file = new File([contextBlob], 'CreditCardList.pdf', { type: 'application/pdf' });

await squid.ai().knowledgeBase('banking-knowledgebase').upsertContext(
{
type: 'file',
contextId: 'credit-cards',
},
file
);
Note

Your context can be as long as you like; however, because LLM prompts have length limits, only a portion of your context may actually be included alongside the user's inquiry. When constructing a prompt, Squid decides which portions of the supplied context are most relevant to the user's question.

Getting Context

To get a list of all contexts, use the listContexts() method. This method returns an array of agent context objects, which includes the contextId:

Client code
await squid.ai().knowledgeBase('banking-knowledgebase').listContexts();
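For example, you might collect the IDs of all stored contexts before updating or deleting them. The guide confirms each returned item includes a contextId; the title field and the ContextSummary type below are assumptions for illustration:

```typescript
// Hypothetical shape of the items returned by listContexts().
interface ContextSummary {
  contextId: string;
  title?: string;
}

// Extract the IDs from a list of contexts.
function contextIds(contexts: ContextSummary[]): string[] {
  return contexts.map((c) => c.contextId);
}

// Usage (sketch):
// const contexts = await squid.ai().knowledgeBase('banking-knowledgebase').listContexts();
// console.log(contextIds(contexts));
```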

To get a specific context item, use the getContext() method, passing the context ID:

Client code
await squid.ai().knowledgeBase('banking-knowledgebase').getContext('credit-cards');

Deleting Context

To delete a context entry, use the deleteContext() method:

Client code
await squid.ai().knowledgeBase('banking-knowledgebase').deleteContext('credit-cards');

This method results in an error if an entry has not yet been created for the context ID provided.

Context Metadata

When adding or updating the context of an AI knowledge base, you can optionally provide context metadata. Metadata is an object where keys can have a type of string, number, or boolean. Adding metadata provides additional information about the context that can then be used when interacting with your agent. The following example shows adding a PDF as context and providing two key/value pairs as metadata:

Client code
const file = new File([contextBlob], 'CreditCardList.pdf', { type: 'application/pdf' });

await squid
.ai()
.knowledgeBase('banking-knowledgebase')
.upsertContext(
{
contextId: 'credit-cards',
type: 'file',
metadata: { company: 'Bank of America', year: 2023 },
},
file
);

You can then use metadata when chatting with your AI agent, as shown in the filtering context with metadata section.

Interacting with your Agent

Once an agent has been created, you're ready to start asking questions or giving prompts.

Getting a Full Response with ask()

Use the ask() method to send a prompt and receive the complete response as a string:

Client code
const response = await squid
.ai()
.agent('banking-copilot')
.ask('Which credit card is best for students?');

Getting Responses with Annotations

Use askWithAnnotations() to receive the response along with any file annotations (e.g., generated images or documents):

Client code
const { responseString, annotations } = await squid
.ai()
.agent('banking-copilot')
.askWithAnnotations('Generate a comparison chart of our credit cards');

Streaming Responses with chat()

Use the chat() method to stream responses token by token. This returns an RxJS Observable<string> that emits the accumulated response as each token arrives, which is ideal for displaying real-time responses in a UI:

Client code
import { Subscription } from 'rxjs';

const stream = squid
.ai()
.agent('banking-copilot')
.chat('Which credit card is best for students?');

const subscription: Subscription = stream.subscribe({
next: (accumulatedResponse) => {
// Each emission contains the full response so far
console.log(accumulatedResponse);
},
complete: () => {
console.log('Response complete');
},
error: (err) => {
console.error('Error:', err);
},
});

The chat() method accepts the same options as ask() (except voiceOptions), plus an additional smoothTyping option (defaults to true) that adds a slight delay between tokens for a natural typing effect.
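For instance, to receive tokens as fast as they arrive rather than with the typing effect, you could pass an options object like this sketch (smoothTyping and temperature are the options named in this guide):

```typescript
// Per-request options for chat(). smoothTyping defaults to true, so set it
// to false to disable the artificial typing delay between tokens.
const chatOptions = {
  smoothTyping: false,
  temperature: 0.7,
};

// Usage (sketch):
// squid.ai().agent('banking-copilot').chat('Which card is best?', chatOptions).subscribe(...);
```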

Ask Options

Both ask() and chat() accept an optional options parameter to configure the request. To view a full list of available options and their default values, refer to the API reference documentation.

Client code
await squid.ai().agent('banking-copilot').ask('Which credit card is best for students?', {
maxOutputTokens: 4096,
temperature: 0.7,
model: 'claude-sonnet-4-6',
});

Memory and Chat History

By default, agents remember previous messages within a session. Use memoryOptions to control this behavior:

Client code
const response = await squid
.ai()
.agent('banking-copilot')
.ask('What did I ask earlier?', {
memoryOptions: {
memoryMode: 'read-write', // 'none' | 'read-only' | 'read-write'
memoryId: 'user-123-session', // Unique ID for this conversation
expirationMinutes: 60, // How long to keep the history
},
});
  • 'none': No history is used. Each prompt is answered independently.
  • 'read-only': The agent can reference past messages but will not save new ones.
  • 'read-write': The agent reads and writes to history (default behavior).

The memoryId identifies a conversation. Using the same memoryId across requests continues the same conversation. Treat memory IDs with the same security as access tokens since they grant access to the chat history.
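A common pattern is to derive one memory ID per user session so follow-up prompts share a conversation. The helper below is a hypothetical sketch, not an SDK function; the memoryOptions fields match those shown above:

```typescript
// Hypothetical helper: builds memoryOptions keyed to a user's session so that
// every request from that session continues the same conversation.
function memoryOptionsFor(userId: string, expirationMinutes = 60) {
  return {
    memoryMode: 'read-write' as const,
    memoryId: `${userId}-session`, // treat like an access token
    expirationMinutes,
  };
}

// Usage (sketch): both calls share history because the memoryId matches.
// await squid.ai().agent('banking-copilot').ask('My name is Dana.', { memoryOptions: memoryOptionsFor('user-123') });
// await squid.ai().agent('banking-copilot').ask('What is my name?', { memoryOptions: memoryOptionsFor('user-123') });
```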

To retrieve past messages for a given conversation, use getChatHistory():

Client code
const messages = await squid
.ai()
.agent('banking-copilot')
.getChatHistory('user-123-session');

Response Format

Control the format of the agent's response using responseFormat:

Client code
// Get a JSON response
const json = await squid
.ai()
.agent('banking-copilot')
.ask('List our credit cards with their fees', {
responseFormat: 'json_object',
});

// Get a response that strictly conforms to a JSON schema (Anthropic models)
const structured = await squid
.ai()
.agent('banking-copilot')
.ask('Analyze the sentiment of this review', {
model: 'claude-sonnet-4-6',
responseFormat: {
type: 'json_schema',
schema: {
type: 'object',
properties: {
sentiment: { type: 'string', enum: ['positive', 'negative', 'neutral'] },
confidence: { type: 'number' },
},
required: ['sentiment', 'confidence'],
},
},
});

Available formats:

  • 'text' (default): Plain text response.
  • 'json_object': The model attempts to return valid JSON.
  • { type: 'json_schema', schema: ... }: Structured output that guarantees the response conforms to the provided JSON schema. Currently supported by Anthropic models.

Including Files in the Prompt

Pass images or documents as part of the prompt using fileUrls:

Client code
const response = await squid
.ai()
.agent('banking-copilot')
.ask('What does this document say?', {
fileUrls: [
{
id: 'doc-1',
type: 'document',
purpose: 'context',
url: 'https://example.com/statement.pdf',
description: 'Customer bank statement',
},
],
});

Each file URL requires an id (unique per request), a type ('image' or 'document'), and a purpose:

  • 'context': The file is included directly in the prompt for the AI to reference.
  • 'tools': The file is returned as part of a tool/function call result.
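An image attachment follows the same shape as the document example above; the URL and description here are placeholders:

```typescript
// A fileUrls entry for an image included directly in the prompt.
const imageAttachment = {
  id: 'img-1',                 // unique per request
  type: 'image' as const,      // 'image' or 'document'
  purpose: 'context' as const, // include the file in the prompt
  url: 'https://example.com/card-photo.png',
  description: 'Photo of the customer credit card',
};

// Usage (sketch):
// await squid.ai().agent('banking-copilot').ask('What card is this?', { fileUrls: [imageAttachment] });
```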

Overriding the Model Per-Request

Override the agent's default model for a single request:

Client code
const response = await squid.ai().agent('banking-copilot').ask('Summarize this data', {
model: 'gpt-4o',
});

Additional Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| maxTokens | number | Model max | Maximum input tokens Squid can send to the model |
| maxOutputTokens | number | - | Maximum tokens the model should generate |
| temperature | number | 0.5 | Sampling temperature (0-1) |
| timeoutMs | number | 240000 | Request timeout in milliseconds |
| instructions | string | - | Additional instructions appended to the agent's default instructions |
| guardrails | object | - | Override guardrail settings per-request |
| disableContext | boolean | false | Skip knowledge base context for this request |
| includeReference | boolean | false | Include source references in the response |
| reasoningEffort | string | - | 'minimal', 'low', 'medium', or 'high' (for reasoning models) |
| useCodeInterpreter | string | 'none' | 'llm' to enable Python code execution (OpenAI and Gemini only) |
| executionPlanOptions | object | - | Enable the agent to plan before acting |

Filtering Knowledge Base Context with Metadata

When you have added metadata to your context, you can use the contextMetadataFilterForKnowledgeBase chat option to instruct the AI agent to only consult specific contexts. Only contexts that meet the filter requirement will be used to respond to the client prompt.

The following example filters contexts to only include those with a metadata value of "company" that is equal to "Bank of America":

Client code
await squid
.ai()
.agent('banking-copilot')
.ask('Which Bank of America credit card is best for students?', {
contextMetadataFilterForKnowledgeBase: {
['banking-knowledgebase']: { company: { $eq: 'Bank of America' } },
},
});

The following metadata filters are supported:

| Filter | Description | Supported types |
| --- | --- | --- |
| $eq | Matches vectors with metadata values that are equal to a specified value | number, string, boolean |
| $ne | Matches vectors with metadata values that are not equal to a specified value | number, string, boolean |
| $gt | Matches vectors with metadata values that are greater than a specified value | number |
| $gte | Matches vectors with metadata values that are greater than or equal to a specified value | number |
| $lt | Matches vectors with metadata values that are less than a specified value | number |
| $lte | Matches vectors with metadata values that are less than or equal to a specified value | number |
| $in | Matches vectors with metadata values that are in a specified array | string, number |
| $nin | Matches vectors with metadata values that are not in a specified array | string, number |
| $exists | Matches vectors with the specified metadata field | boolean |

AI Functions

Squid AI Agents can handle specific use cases and create more consistent responses using AI functions.

Adding Functions to an Agent

You can attach AI functions to an agent using setAgentOptionInPath(), which updates only the function list without affecting other agent configuration:

Client code
await squid
.ai()
.agent('banking-copilot')
.setAgentOptionInPath('functions', ['getCreditLimit']);

To update the function list, call setAgentOptionInPath() again with the new set of functions. Passing an empty array removes all functions. You can also use upsert() to set functions along with all other agent values in a single call, but note that upsert() replaces the entire agent configuration.

Passing Functions at Ask Time

Alternatively, pass AI function names per-request using the functions option. This overrides the agent's stored function list for that request:

Client code
const response = await squid
.ai()
.agent('banking-copilot')
.ask('What is my current credit limit?', {
functions: ['getCreditLimit', 'getAccountBalance'],
});

To learn more about AI functions, view the documentation. To see an example application that uses AI functions, check out this AI agent tutorial.

Connected Agents

Agents can delegate tasks to other agents, enabling multi-agent workflows. A connected agent appears as a callable tool that the parent agent can invoke when it determines the sub-agent is best suited to handle a specific part of the user's request.

Configuring Connected Agents

Use updateConnectedAgents() to set the list of agents connected to this agent. The description tells the parent agent when to delegate to each connected agent:

Client code
await squid
.ai()
.agent('banking-copilot')
.updateConnectedAgents([
{
agentId: 'fraud-detection-agent',
description: 'Call this agent when the user asks about suspicious transactions or potential fraud',
},
]);

To disconnect all agents, pass an empty array:

Client code
await squid.ai().agent('banking-copilot').updateConnectedAgents([]);

Passing Connected Agents at Ask Time

You can also specify connected agents per-request, which overrides the stored configuration:

Client code
const response = await squid
.ai()
.agent('banking-copilot')
.ask('Is this transaction suspicious?', {
connectedAgents: [
{
agentId: 'fraud-detection-agent',
description: 'Call this agent for fraud analysis',
},
{
agentId: 'compliance-agent',
description: 'Call this agent for regulatory compliance checks',
},
],
});

By default, nested agent calls can recurse up to 5 levels deep. You can adjust this using the quotas option:

Client code
const response = await squid
.ai()
.agent('banking-copilot')
.ask('Analyze this portfolio', {
quotas: { maxAiCallStackSize: 3 },
});

Connected Integrations

Agents can connect to your data sources and external services, allowing them to query databases, call APIs, and interact with SaaS tools as part of answering a prompt.

Configuring Connected Integrations

Use setAgentOptionInPath() to give the agent access to connectors without affecting other agent configuration:

Client code
await squid
.ai()
.agent('banking-copilot')
.setAgentOptionInPath('connectedIntegrations', [
{
integrationId: 'my-postgres',
integrationType: 'postgres',
description: 'Use this database to look up customer account information',
},
]);

The description helps the agent understand when to use this integration. The integrationType must match the type of connector configured in the Squid Console.

To disconnect all integrations, call setAgentOptionInPath() with an empty array. You can also use upsert() to set connected integrations along with all other agent values in a single call, but note that upsert() replaces the entire agent configuration.

Passing Connected Integrations at Ask Time

Like connected agents, integrations can also be specified per-request:

Client code
const response = await squid
.ai()
.agent('banking-copilot')
.ask('What are my recent transactions?', {
connectedIntegrations: [
{
integrationId: 'my-postgres',
integrationType: 'postgres',
description: 'Customer transaction database',
},
],
});

Execution Planning

For complex tasks involving multiple tools, connected agents, or integrations, you can enable execution planning. When enabled, the agent first creates a plan of what actions to take before executing them:

Client code
const response = await squid
.ai()
.agent('banking-copilot')
.ask('Compare our credit card offerings with competitor rates', {
executionPlanOptions: {
enabled: true,
reasoningEffort: 'high', // 'minimal' | 'low' | 'medium' | 'high'
allowClarificationQuestions: true, // Let the agent ask follow-up questions
},
});

You can optionally specify a different model for the planning step using the model field within executionPlanOptions.
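For example, you could plan with one model while the agent's default model produces the final answer. The enabled, reasoningEffort, and model fields are those named in this guide; the specific model choices are placeholders:

```typescript
// Execution-plan options that use a separate model for the planning step only.
const planOptions = {
  executionPlanOptions: {
    enabled: true,
    reasoningEffort: 'medium' as const,
    model: 'gpt-4o', // model used only while planning
  },
};

// Usage (sketch):
// await squid.ai().agent('banking-copilot').ask('Compare our card offerings', planOptions);
```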

Observing Status Updates

When an agent processes a complex request involving tool calls, connected agents, or integrations, you can observe real-time status updates via WebSocket:

Client code
const statusUpdates = squid.ai().agent('banking-copilot').observeStatusUpdates();

statusUpdates.subscribe({
next: (status) => {
console.log(`[${status.title}] ${status.body}`);
},
});

The returned Observable emits AiStatusMessage objects with title and body fields describing each step the agent takes.

Error Handling

Common Errors

| Error | Cause | Solution |
| --- | --- | --- |
| Agent not found | Calling delete(), get(), or other methods on a non-existent agent ID | Verify the agent ID exists by calling get() first, or use upsert() to ensure the agent is created |
| Context not found | Calling deleteContext() or getContext() with a context ID that does not exist | Use listContexts() to verify the context ID before deleting or retrieving |
| Request timeout | The agent takes longer than the configured timeoutMs (default: 4 minutes) | Increase timeoutMs in the options, simplify the prompt, or reduce the number of connected tools |
| Embedding model cannot be modified | Attempting to change the embeddingModel on an existing knowledge base | Create a new knowledge base with the desired embedding model instead |

Handling Errors in Streaming

When using chat(), errors are delivered through the Observable's error callback:

Client code
const stream = squid.ai().agent('banking-copilot').chat('Analyze this data');

stream.subscribe({
next: (response) => console.log(response),
error: (err) => {
console.error('Agent error:', err.message);
},
complete: () => console.log('Done'),
});

Best Practices

Instructions

  • Keep instructions concise and direct. Describe the agent's role and what it should (and should not) do.
  • Use instructions for behavioral rules (tone, scope, response style) rather than for factual content. Put factual content in knowledge bases instead.
  • Test changes to instructions using the Test chat feature in the Agent Studio before deploying.

Knowledge Bases

  • Use descriptive description values when connecting knowledge bases. The description is how the agent decides which knowledge base to consult, especially when multiple are connected.
  • Split large documents into focused knowledge bases by topic. This gives the agent better signal for choosing the right context.
  • Add metadata to your contexts to enable filtering at query time, reducing noise in responses.

Multi-Agent Workflows

  • Give each connected agent a clear, specific description. Vague descriptions lead to incorrect delegation.
  • Set quotas.maxAiCallStackSize to a reasonable limit to avoid runaway recursive calls between agents.
  • Use executionPlanOptions for complex multi-step tasks so the agent reasons about its approach before acting.

Performance

  • Use chat() for user-facing interactions where perceived speed matters. Streaming shows tokens as they arrive rather than waiting for the full response.
  • Set disableContext: true when knowledge base context is not needed for a request to reduce latency.
  • Use memoryOptions.memoryMode: 'none' for stateless, one-off requests that do not need conversation history.
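The last two tips above can be combined into one low-latency options object for stateless requests; this is a sketch using the disableContext and memoryOptions fields described in this guide:

```typescript
// Per-request options for a stateless, low-latency call: skip knowledge base
// retrieval and do not read or write conversation history.
const statelessOptions = {
  disableContext: true,
  memoryOptions: { memoryMode: 'none' as const },
};

// Usage (sketch):
// await squid.ai().agent('banking-copilot').ask('Convert 100 USD to EUR', statelessOptions);
```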

Securing your Agent

Securing your data is vital when using the Squid Client SDK to create agents and enable chatting. AI agents and the chats conducted with them may contain sensitive information, so it is crucial to restrict access and updates to prevent unauthorized usage or modification.

To learn about securing your AI agent, check out the Securing AI agents documentation.

Agent API Keys

Agent API Keys can provide a more granular level of security when calling Agent actions. View the Agent API Keys documentation for more details.