@squidcloud/client

    Interface OpenAiReasoningChatOptions

    Chat options for OpenAI reasoning models, extending OpenAI options.

    interface OpenAiReasoningChatOptions {
        agentContext?: Record<string, unknown>;
        chatId?: string;
        connectedAgents?: AiConnectedAgentMetadata[];
        connectedIntegrations?: AiConnectedIntegrationMetadata[];
        contextMetadataFilter?: AiContextMetadataFilter;
        disableContext?: boolean;
        disableHistory?: boolean;
        fileUrls?: AiFileUrl[];
        functions?: (string | FunctionNameWithContext)[];
        guardrails?: GuardrailsOptions;
        includeMetadata?: boolean;
        includeReference?: boolean;
        instructions?: string;
        maxTokens?: number;
        model?: "o1" | "o1-mini" | "o3" | "o3-mini" | "o4-mini";
        quotas?: AiChatPromptQuotas;
        reasoningEffort?: OpenAiReasoningEffort;
        responseFormat?: AiAgentResponseFormat;
        smoothTyping?: boolean;
        temperature?: number;
        voiceOptions?: OpenAiCreateSpeechOptions;
    }
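As a rough sketch of how these options might be assembled, the snippet below builds an options object using a local mirror of a few of the fields above. Only the interface shape comes from this page; the field values, the chat ID, and the local `ReasoningChatOptionsSketch` type are illustrative, and in a real application the interface would be imported from `@squidcloud/client` instead.

```typescript
// Local mirror of a subset of OpenAiReasoningChatOptions; the real
// interface is defined in @squidcloud/client.
type ReasoningModel = 'o1' | 'o1-mini' | 'o3' | 'o3-mini' | 'o4-mini';

interface ReasoningChatOptionsSketch {
  model?: ReasoningModel;
  chatId?: string;
  instructions?: string;
  maxTokens?: number;
  disableHistory?: boolean;
  smoothTyping?: boolean;
}

// Hypothetical options for a reasoning chat request.
const options: ReasoningChatOptionsSketch = {
  model: 'o3-mini',
  chatId: 'support-session-42', // reusing this ID continues the conversation
  instructions: 'Answer concisely.',
  maxTokens: 4096,
  disableHistory: false,
  smoothTyping: true, // smoother incremental output for UI display
};

console.log(options.model); // 'o3-mini'
```

Because every property is optional, an empty object `{}` is also valid; the defaults noted below then apply.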


    Properties

    agentContext?: Record<string, unknown>

    Global context passed to the agent and all AI functions of the agent.

    chatId?: string

A unique chat ID. If the same chat ID is used again and history is not disabled, the conversation continues from where it left off.

    connectedAgents?: AiConnectedAgentMetadata[]

List of connected AI agents that can be called by the current agent. Overrides the stored value.

    connectedIntegrations?: AiConnectedIntegrationMetadata[]

List of connected integrations that can be called by the current agent. Overrides the stored value.

    contextMetadataFilter?: AiContextMetadataFilter

    A set of filters that will limit the context the AI can access.

    disableContext?: boolean

Whether to disable the entire context for the request. Defaults to false.

    disableHistory?: boolean

Disables history for the agent, so each question is answered as if it were the first question in the conversation. Defaults to false.

    fileUrls?: AiFileUrl[]

    An array of file URLs to include in the chat context.

    functions?: (string | FunctionNameWithContext)[]

    Functions to expose to the AI. Either a function name or a name with an extra function context passed only to this function. The parameter values must be valid serializable JSON values. Overrides the stored value.
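The union type admits two forms per entry. The sketch below assumes `FunctionNameWithContext` pairs a name with a JSON-serializable context object; the actual shape is defined in `@squidcloud/client` and may differ, and the function names shown are hypothetical.

```typescript
// Assumed shape for FunctionNameWithContext; the real type lives in
// @squidcloud/client and may differ.
interface FunctionNameWithContextSketch {
  name: string;
  context: Record<string, unknown>; // must be serializable JSON
}

// Mixing the two forms: a bare function name, and a name carrying
// extra context passed only to that function.
const functions: (string | FunctionNameWithContextSketch)[] = [
  'lookupOrder', // hypothetical function name
  { name: 'applyDiscount', context: { maxPercent: 15 } }, // hypothetical
];

console.log(functions.length); // 2
```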

    guardrails?: GuardrailsOptions

Preset instruction options that can be toggled on.

    includeMetadata?: boolean

Whether to include metadata in the context.

    includeReference?: boolean

Whether to include references from the source context in the response. Defaults to false.

    instructions?: string

    Instructions to include with the prompt.

    maxTokens?: number

The maximum number of tokens to use when making the request to the AI model. Defaults to the maximum number of tokens the model can accept.

    model?: "o1" | "o1-mini" | "o3" | "o3-mini" | "o4-mini"

    The OpenAI reasoning model to use for the chat.

quotas?: AiChatPromptQuotas

Current budget for nested or recursive AI chat calls per single prompt.

    reasoningEffort?: OpenAiReasoningEffort

    The level of reasoning effort to apply; defaults to model-specific value.

    responseFormat?: AiAgentResponseFormat

The format of the response from the AI model. Note that not all models support JSON format. Defaults to 'text'.

    smoothTyping?: boolean

Whether to respond in a "smooth typing" way, beneficial when the chat result is displayed in a UI. Defaults to true.

    temperature?: number

The temperature to use when sampling from the model. Defaults to 0.5.

voiceOptions?: OpenAiCreateSpeechOptions

The options to use for voice responses.