@squidcloud/client

    Interface BaseAiChatOptions

    The base AI agent chat options; this interface should not be used directly.

    interface BaseAiChatOptions {
        agentContext?: Record<string, unknown>;
        chatId?: string;
        connectedAgents?: AiConnectedAgentMetadata[];
        connectedIntegrations?: AiConnectedIntegrationMetadata<unknown>[];
        connectedKnowledgeBases?: AiConnectedKnowledgeBaseMetadata[];
        contextMetadataFilter?: AiContextMetadataFilter;
        contextMetadataFilterForKnowledgeBase?: Record<
            string,
            AiContextMetadataFilter,
        >;
        disableContext?: boolean;
        disableHistory?: boolean;
        enablePromptRewriteForRag?: boolean;
        executionPlanOptions?: AiAgentExecutionPlanOptions;
        fileUrls?: AiFileUrl[];
        functions?: (string | AiFunctionIdWithContext)[];
        guardrails?: GuardrailsOptions;
        includeMetadata?: boolean;
        includeReference?: boolean;
        instructions?: string;
        maxOutputTokens?: number;
        maxTokens?: number;
        memoryOptions?: AiAgentMemoryOptions;
        model?: string;
        quotas?: AiChatPromptQuotas;
        reasoningEffort?: AiReasoningEffort;
        rerankProvider?: "cohere" | "none";
        responseFormat?: AiAgentResponseFormat;
        smoothTyping?: boolean;
        temperature?: number;
        verbosity?: AiVerbosityLevel;
        voiceOptions?: OpenAiCreateSpeechOptions;
    }
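    As an illustration, a minimal sketch of constructing a chat-options object. The type below is a local stand-in covering a few of the fields above, not an import from @squidcloud/client, and the model name is hypothetical.

```typescript
// Local stand-ins for a subset of BaseAiChatOptions; the real types live in
// @squidcloud/client and cover many more fields.
type AiReasoningEffort = 'low' | 'medium' | 'high';

interface ChatOptionsSketch {
  model?: string;
  temperature?: number;
  maxOutputTokens?: number;
  reasoningEffort?: AiReasoningEffort;
  instructions?: string;
  smoothTyping?: boolean;
}

const options: ChatOptionsSketch = {
  model: 'gpt-4o',           // hypothetical model name
  temperature: 0.2,          // lower than the 0.5 default for more deterministic output
  maxOutputTokens: 512,      // cap the model's output length
  reasoningEffort: 'low',    // ignored by models without reasoning
  instructions: 'Answer concisely.',
  smoothTyping: false,       // disable when the result is not rendered in a UI
};
```

    All fields are optional, so callers typically set only the handful they need and rely on the documented defaults for the rest.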


    Properties

    agentContext?: Record<string, unknown>

    Global context passed to the agent and all AI functions of the agent.

    chatId?: string

    A unique chat ID. If the same chat ID is used again and history is not disabled, the conversation continues.

    Deprecated: use memoryOptions instead. @internal

    connectedAgents?: AiConnectedAgentMetadata[]

    List of connected AI agents that can be called by the current agent. Overrides the stored value.

    connectedIntegrations?: AiConnectedIntegrationMetadata<unknown>[]

    List of connected integrations that can be called by the current agent. Overrides the stored value.

    connectedKnowledgeBases?: AiConnectedKnowledgeBaseMetadata[]

    List of connected knowledge bases that can be queried by the current agent.

    contextMetadataFilter?: AiContextMetadataFilter

    A set of filters that will limit the context the AI can access.

    Deprecated: use contextMetadataFilterForKnowledgeBase instead.

    contextMetadataFilterForKnowledgeBase?: Record<string, AiContextMetadataFilter>

    A set of filters that will limit the context the AI can access.
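    A sketch of what such a per-knowledge-base filter map might look like. The filter shape and the knowledge-base IDs below are assumptions for illustration; the real AiContextMetadataFilter type may differ.

```typescript
// Simplified stand-in for AiContextMetadataFilter: metadata keys mapped to
// allowed values. The real filter type in @squidcloud/client may be richer.
type FilterSketch = Record<string, string | string[]>;

// Keys are knowledge-base IDs (hypothetical); each value restricts which
// context entries the AI may retrieve from that knowledge base.
const contextMetadataFilterForKnowledgeBase: Record<string, FilterSketch> = {
  'support-docs': { product: 'widgets', locale: ['en', 'de'] },
  'internal-wiki': { visibility: 'public' },
};
```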

    disableContext?: boolean

    Whether to disable the whole context for the request. Defaults to false.

    disableHistory?: boolean

    Disables history for the agent, so each question is answered as if it were the first question in the conversation. Defaults to false.

    Deprecated: use memoryOptions instead. @internal

    enablePromptRewriteForRag?: boolean

    Whether to rewrite the prompt for RAG. Defaults to false.

    executionPlanOptions?: AiAgentExecutionPlanOptions

    Options for AI agent execution plan, allowing the agent to perform an execution plan before invoking connected agents, connected integrations, or functions.

    fileUrls?: AiFileUrl[]

    An array of file URLs to include in the chat context.

    functions?: (string | AiFunctionIdWithContext)[]

    Functions to expose to the AI. Either a function name or a name with an extra function context passed only to this function. The parameter values must be valid serializable JSON values. Overrides the stored value.
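    The mixed array can be sketched as follows. The field names of AiFunctionIdWithContext and the function names used here are assumptions for illustration, not confirmed by the library.

```typescript
// Assumed shape of AiFunctionIdWithContext: a function name plus extra
// context passed only to that function. Field names are guesses.
interface AiFunctionIdWithContextSketch {
  functionId: string;
  context: Record<string, unknown>; // values must be serializable JSON
}

const functions: (string | AiFunctionIdWithContextSketch)[] = [
  'lookupOrder', // hypothetical function, exposed by name only
  {
    functionId: 'createTicket',      // hypothetical function
    context: { priority: 'high' },   // passed only to this function
  },
];
```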

    guardrails?: GuardrailsOptions

    Preset instruction options that can be toggled on.

    includeMetadata?: boolean

    Whether to include metadata in the context.

    includeReference?: boolean

    Whether to include references from the source context in the response. Defaults to false.

    instructions?: string

    Instructions to include with the prompt.

    maxOutputTokens?: number

    The maximum number of tokens the model should output. Passed directly to the AI model. Can be used to control the output verbosity.

    maxTokens?: number

    The maximum number of input tokens that Squid can use when making the request to the AI model. Defaults to the max tokens the model can accept.

    memoryOptions?: AiAgentMemoryOptions

    Memory options for the agent, replacing the deprecated chatId and disableHistory options. If not provided, the agent's default memory configuration is used.

    model?: string

    The LLM model to use.

    quotas?: AiChatPromptQuotas

    Current budget for nested or recursive AI chat calls per single prompt.

    reasoningEffort?: AiReasoningEffort

    The level of reasoning effort to apply; defaults to model-specific value. Effective only for models with reasoning.

    rerankProvider?: "cohere" | "none"

    Which provider's reranker to use for reranking the context. Defaults to 'cohere'.

    responseFormat?: AiAgentResponseFormat

    The format of the response from the AI model. Note that not all models support JSON format. Defaults to 'text'.

    smoothTyping?: boolean

    Whether to respond in a "smooth typing" way, which is beneficial when the chat result is displayed in a UI. Defaults to true.

    temperature?: number

    The temperature to use when sampling from the model. Defaults to 0.5.

    verbosity?: AiVerbosityLevel

    Controls response length and detail level. Use low for brief responses, medium for balanced detail, or high for comprehensive explanations. Default: 'medium'.

    Note: this parameter is only supported for OpenAI plain-text responses and is ignored for other providers. For other providers, request the desired verbosity in the prompt and limit output with maxOutputTokens.
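    Since verbosity is OpenAI-only, a caller targeting another provider can approximate it with instructions plus maxOutputTokens, as documented above. The helper and token budgets below are assumptions for illustration.

```typescript
// Approximates the `verbosity` option for providers that ignore it, using
// only the `instructions` and `maxOutputTokens` fields from this interface.
function verbosityFallback(level: 'low' | 'medium' | 'high') {
  // Assumed token budgets; tune per model and use case.
  const tokenBudget = { low: 256, medium: 1024, high: 4096 }[level];
  return {
    instructions: `Answer with ${level} verbosity.`,
    maxOutputTokens: tokenBudget,
  };
}
```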

    voiceOptions?: OpenAiCreateSpeechOptions

    The options to use for a voice response.