Interface: AiAgentChatOptions
The options for the AI chat method.
Properties
agentContext
• Optional
agentContext: Record<string, unknown>
Global context passed to the agent and all AI functions of the agent.
chatId
• Optional
chatId: string
A unique chat ID. If the same chat ID is used again and history is not disabled, the conversation continues from where it left off.
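For illustration, a minimal sketch of how `chatId` and `disableHistory` interact. The interface below is a partial stand-in, not the full `AiAgentChatOptions` type; the field names are taken from this reference, and the call pattern is an assumption:

```typescript
// Partial sketch of the documented options; only the two fields
// relevant to conversation continuity are modeled here.
interface ChatTurnOptions {
  chatId?: string;
  disableHistory?: boolean;
}

const chatId = "support-session-42"; // any stable unique ID you choose

// Passing the same chatId on later calls continues the conversation.
const firstTurn: ChatTurnOptions = { chatId };
const secondTurn: ChatTurnOptions = { chatId };

// With disableHistory, the question is answered as if it were the first.
const statelessTurn: ChatTurnOptions = { chatId, disableHistory: true };
```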
connectedAgents
• Optional
connectedAgents: AiConnectedAgentMetadata[]
List of connected AI agents that can be called by the current agent. Overrides the stored value.
connectedIntegrations
• Optional
connectedIntegrations: AiConnectedIntegrationMetadata[]
List of connected AI integrations that can be called by the current agent. Overrides the stored value.
contextMetadataFilter
• Optional
contextMetadataFilter: AiContextMetadataFilter
A set of filters that will limit the context the AI can access.
disableContext
• Optional
disableContext: boolean
Whether to disable the whole context for the request. Defaults to false.
disableHistory
• Optional
disableHistory: boolean
Disables history for the agent, so each question is answered as if it were the first question in the conversation. Defaults to false.
fileUrls
• Optional
fileUrls: AiFileUrl[]
File URLs (only images supported at the moment).
functions
• Optional
functions: (string | FunctionNameWithContext)[]
Functions to expose to the AI. Each entry is either a function name or a name with extra function context passed only to that function. The parameter values must be valid serializable JSON values. Overrides the stored value.
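A sketch of the two accepted entry shapes for `functions`. The exact structure of `FunctionNameWithContext` is an assumption here, not something this reference specifies:

```typescript
// Assumed shape of FunctionNameWithContext: a name plus JSON-serializable
// context delivered only to that function.
type FunctionNameWithContext = {
  name: string;
  context: Record<string, unknown>;
};

const functions: (string | FunctionNameWithContext)[] = [
  "lookupOrder",                                          // exposed by name only
  { name: "createTicket", context: { priority: "high" } } // extra context for this one function
];
```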
groundingWithWebSearch
• Optional
groundingWithWebSearch: boolean
Enables grounding with real-time web search to enhance AI responses with up-to-date information. Currently supported only for gemini-2.0-flash.
includeMetadata
• Optional
includeMetadata: boolean
Whether to include metadata in the context.
includeReference
• Optional
includeReference: boolean
Whether to include references from the source context in the response. Defaults to false.
instructions
• Optional
instructions: string[]
A list of instructions to include with the prompt.
maxTokens
• Optional
maxTokens: number
The maximum number of tokens to use when making the request to the AI model. Defaults to the maximum the model can accept.
overrideModel
• Optional
overrideModel: "gpt-4o" | "gpt-4o-mini" | "o1" | "o1-mini" | "o3-mini" | "gemini-1.5-pro" | "gemini-2.0-flash" | "claude-3-5-haiku-latest" | "claude-3-5-sonnet-latest"
The model to use for this chat. If not provided, the profile model will be used.
quotas
• Optional
quotas: AiChatPromptQuotas
Current budget for nested or recursive AI chat calls per single prompt.
reasoningEffort
• Optional
reasoningEffort: "low" | "medium" | "high"
Constrains the effort spent on reasoning for reasoning models. Supported by o1 models only.
responseFormat
• Optional
responseFormat: OpenAiResponseFormat
The format of the response from the AI model. Note that not all models support JSON format. Defaults to 'text'.
smoothTyping
• Optional
smoothTyping: boolean
Whether to stream the response in a "smooth typing" manner, which is beneficial when the chat result is displayed in a UI. Defaults to true.
temperature
• Optional
temperature: number
The temperature to use when sampling from the model. Defaults to 0.5.
topP
• Optional
topP: number
The top P value to use when sampling from the model. Defaults to 1.
voiceOptions
• Optional
voiceOptions: OpenAiCreateSpeechOptions
The options to use for a voice response.
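Putting several of the documented options together. The partial interface below is a sketch for illustration, not the full `AiAgentChatOptions` type; the field names, model names, and defaults are taken from this reference:

```typescript
// Partial sketch of AiAgentChatOptions covering the scalar options above.
interface AiAgentChatOptionsSketch {
  chatId?: string;
  overrideModel?: "gpt-4o" | "gpt-4o-mini" | "o1" | "gemini-2.0-flash";
  temperature?: number;
  topP?: number;
  maxTokens?: number;
  instructions?: string[];
  groundingWithWebSearch?: boolean;
  smoothTyping?: boolean;
}

const options: AiAgentChatOptionsSketch = {
  chatId: "demo-chat-1",
  overrideModel: "gemini-2.0-flash",  // web-search grounding is only supported on this model
  groundingWithWebSearch: true,
  temperature: 0.5,                   // documented default
  topP: 1,                            // documented default
  instructions: ["Answer concisely.", "Cite sources when available."],
  smoothTyping: true                  // documented default; smooth UI streaming
};
```

Note that `groundingWithWebSearch` is paired with `overrideModel: "gemini-2.0-flash"` here because the reference states web search is currently supported only for that model.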