The base AI agent chat options; not intended to be used directly.

agent (Optional)
Global context passed to the agent and all AI functions of the agent.

chat (Optional)
A unique chat ID. If the same chat ID is used again and history is not disabled, the conversation is continued.

connected (Optional)
List of connected AI agents that can be called by the current agent. Overrides the stored value.

context (Optional)
A set of filters that limit the context the AI can access.

disable (Optional)
Whether to disable the whole context for the request. Defaults to false.

disable (Optional)
Disables history for the agent, so each question is answered as if it were the first question in the conversation. Defaults to false.

functions (Optional)
Functions to expose to the AI. Either a function name or a name with an extra function context passed only to this function. The parameter values must be valid serializable JSON values. Overrides the stored value.

guardrails (Optional)
Preset instruction options that can be toggled on.

include (Optional)
Whether to include metadata in the context.

include (Optional)
Whether to include references from the source context in the response. Defaults to false.

instructions (Optional)
Instructions to include with the prompt.

max (Optional)
The maximum number of tokens to use when making the request to the AI model. Defaults to the maximum number of tokens the model can accept.

model (Optional)
The LLM model to use.

quotas (Optional)
The current budget for nested or recursive AI chat calls per single prompt.

response (Optional)
The format of the response from the AI model. Note that not all models support JSON format. Defaults to 'text'.

smooth (Optional)
Whether to respond in a "smooth typing" manner, which is beneficial when the chat result is displayed in a UI. Defaults to true.

temperature (Optional)
The temperature to use when sampling from the model. Defaults to 0.5.

voice (Optional)
The options to use for a voice response.
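Taken together, the entries above describe a plain configuration object with documented defaults. Below is a minimal sketch of what such an object might look like and how the stated defaults could be applied. All identifiers here (`AgentChatOptions`, the full property names, and the `resolve` helper) are assumptions for illustration only, since this excerpt does not show the actual API surface:

```typescript
// Hypothetical shape of the options described above; the real property
// names are not shown in this reference, so these are illustrative only.
interface AgentChatOptions {
  chat?: string;                    // unique chat ID; reusing it continues the conversation
  instructions?: string;            // instructions to include with the prompt
  model?: string;                   // the LLM model to use
  temperature?: number;             // sampling temperature, documented default 0.5
  smoothTyping?: boolean;           // "smooth typing" output, documented default true
  responseFormat?: "text" | "json"; // not all models support JSON; default 'text'
}

// Merge caller-supplied options over the documented defaults.
function resolve(opts: AgentChatOptions): AgentChatOptions {
  return {
    temperature: 0.5,
    smoothTyping: true,
    responseFormat: "text",
    ...opts,
  };
}
```

A caller would then pass `resolve({ model: "…", temperature: 0.9 })` (or similar) to whatever chat entry point the library exposes; spreading `opts` last ensures explicit values always win over the defaults.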