# OpenAI Compatible Embedding

Use any OpenAI-compatible embedding API for knowledge base indexing and context retrieval in your AI agents.

## Overview

The OpenAI Compatible Embedding connector allows you to use any embedding model that implements the OpenAI Embeddings API format. This is useful when you want to use self-hosted embedding models or third-party embedding providers that expose an OpenAI-compatible endpoint.

Embeddings are vector representations of text that enable semantic search. Squid uses embeddings to index and retrieve relevant context from knowledge bases attached to your AI agents, allowing an agent to answer questions using your data.
## Setting up the connector
To add an OpenAI Compatible Embedding connector, complete the following steps:
- Navigate to the Squid Console and select your application.
- Click the Connectors tab.
- Click Available Connectors and find the OpenAI Compatible Embedding connector. Then click Add Connector.
- Provide the following details:
  - Connector ID: A unique ID of your choice (e.g., `my-embeddings`).
  - Base URL: The base URL of the OpenAI-compatible embedding API.
  - API Key (optional): An API key for authentication, if required by the provider.
  - Embedding Models: A JSON array defining the embedding models available through this connector. Each model requires the following fields:
| Field | Type | Description |
|---|---|---|
| `modelName` | string | The embedding model identifier used in API calls |
| `displayName` | string | A human-readable name for the model |
| `dimensions` | number | The number of dimensions in the embedding vector. Must be 1024 or 1536 |
| `maxTokens` | number | Maximum number of input tokens per request |
Example:

```json
[
  {
    "modelName": "text-embedding-3-small",
    "displayName": "Text Embedding 3 Small",
    "dimensions": 1536,
    "maxTokens": 8191
  }
]
```
- Click Add Connector.
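Before pasting the Embedding Models array into the console, you may want to sanity-check it locally. The sketch below validates the required fields and the supported dimension values; the `validateModels` helper is illustrative and not part of the Squid SDK:

```typescript
// Hypothetical helper (not part of the Squid SDK) that checks an
// Embedding Models JSON array against the connector's requirements.
interface EmbeddingModelConfig {
  modelName: string;
  displayName: string;
  dimensions: number;
  maxTokens: number;
}

function validateModels(json: string): EmbeddingModelConfig[] {
  const models = JSON.parse(json) as EmbeddingModelConfig[];
  for (const m of models) {
    // Squid currently supports only these embedding dimensions.
    if (m.dimensions !== 1024 && m.dimensions !== 1536) {
      throw new Error(`${m.modelName}: dimensions must be 1024 or 1536`);
    }
    if (!Number.isInteger(m.maxTokens) || m.maxTokens <= 0) {
      throw new Error(`${m.modelName}: maxTokens must be a positive integer`);
    }
  }
  return models;
}

const models = validateModels(JSON.stringify([
  {
    modelName: 'text-embedding-3-small',
    displayName: 'Text Embedding 3 Small',
    dimensions: 1536,
    maxTokens: 8191,
  },
]));
console.log(models[0].modelName); // → text-embedding-3-small
```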
## Using the connector
To use the embedding connector, specify it as the `embeddingModel` when creating a knowledge base. Pass an object with the connector ID, model name, and dimensions:
```typescript
await squid
  .ai()
  .knowledgeBase('my-knowledgebase')
  .upsertKnowledgeBase({
    description: 'Product documentation and FAQs',
    name: 'product-docs',
    embeddingModel: {
      integrationId: 'my-embeddings',
      model: 'text-embedding-3-small',
      dimensions: 1536,
    },
    chatModel: 'gpt-4o',
    metadataFields: [],
  });
```
The `dimensions` value must match the dimensions you configured for the model in the connector setup. Squid currently supports embedding dimensions of 1024 or 1536.
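If you are unsure what size vectors your provider returns, you can check by calling its embeddings endpoint directly. Per the OpenAI Embeddings API format, the request is a POST to `{baseUrl}/embeddings` and the response carries the vector in `data[0].embedding`. The helper names below are illustrative, and the example base URL is a placeholder:

```typescript
// Build an OpenAI-format embeddings request for a given base URL.
// (Illustrative helpers; the URL below is a placeholder, not a real endpoint.)
function embeddingsRequest(baseUrl: string, model: string, input: string) {
  return {
    url: `${baseUrl.replace(/\/$/, '')}/embeddings`,
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, input }),
  };
}

// After sending the request (e.g., with fetch), confirm the returned
// vector length matches the dimensions configured on the connector.
function matchesConfiguredDimensions(embedding: number[], expected: number): boolean {
  return embedding.length === expected;
}

const req = embeddingsRequest(
  'https://embeddings.example.com/v1',
  'text-embedding-3-small',
  'hello world',
);
console.log(req.url); // → https://embeddings.example.com/v1/embeddings
```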
## How embeddings are used
Once the embedding connector is set up and assigned to a knowledge base, Squid uses it for knowledge base operations:
- Indexing: When you add documents to a knowledge base, Squid converts the text into embedding vectors using this connector for storage and later retrieval.
- Retrieval: When a user asks an AI agent a question, Squid converts the question into an embedding and searches the knowledge base for the most semantically relevant content to include in the agent's context.
The embedding connector works alongside your chat model connector. While the chat model handles conversations, the embedding model handles the knowledge base indexing and retrieval behind the scenes.
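Under the hood, retrieval amounts to ranking indexed chunks by vector similarity to the query embedding. Squid performs this internally; the toy example below, with made-up three-dimensional vectors, only illustrates the idea using cosine similarity:

```typescript
// Conceptual sketch of semantic retrieval: score stored vectors against a
// query vector with cosine similarity and pick the closest match.
// All vectors and documents here are made up for illustration.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

const index = [
  { text: 'How to reset your password', vector: [0.9, 0.1, 0.0] },
  { text: 'Pricing and billing FAQ', vector: [0.1, 0.9, 0.2] },
];
const queryVector = [0.8, 0.2, 0.1]; // embedding of the user's question

const best = index
  .map((doc) => ({ ...doc, score: cosineSimilarity(doc.vector, queryVector) }))
  .sort((a, b) => b.score - a.score)[0];

console.log(best.text); // → How to reset your password
```

Real knowledge bases store vectors with 1024 or 1536 dimensions rather than 3, but the ranking principle is the same.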