Optional agent
Global context passed to the agent and all AI functions of the agent.
Optional chat
A unique chat ID. If the same chat ID is used again and history is not disabled, the conversation continues.
Optional connected
List of connected AI agents that can be called by the current agent. Overrides the stored value.
Optional connected
List of connected AiKnowledgeBases that can be called by the current agent.
Optional context
A set of filters that will limit the context the AI can access.
Optional disable
Whether to disable the whole context for the request. Defaults to false.
Optional disable
Disables history for the agent, so each question is answered as if it were the first question in the conversation. Defaults to false.
Optional enable
Whether to enable prompt rewriting for RAG. Defaults to false.
Optional execution
Options for the AI agent execution plan, allowing the agent to build an execution plan before invoking connected agents, connected integrations, or functions.
Optional file
An array of file URLs to include in the chat context.
Optional functions
Functions to expose to the AI: either a function name, or a name with extra function context passed only to that function. The parameter values must be valid serializable JSON values. Overrides the stored value.
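For illustration, here is a minimal sketch of the functions option, assuming a generic agent.chat(prompt, options) entry point; the method name, the agent stub, the function names, and the { name, context } object shape are assumptions, not confirmed client API:

```typescript
// Hypothetical sketch: 'agent.chat' and the function names are assumptions.
declare const agent: {
  chat(prompt: string, options: object): Promise<string>;
};

const options = {
  functions: [
    // Expose a function by name only.
    'lookupOrder',
    // Assumed encoding of "a name with extra function context": the context
    // is passed only to this function and must be serializable JSON.
    { name: 'applyDiscount', context: { maxPercent: 15 } },
  ],
};

const answer = await agent.chat('Where is order #1234?', options);
```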
Optional guardrails
Preset instruction options that can be toggled on.
Optional include
Include metadata in the context.
Optional include
Whether to include references from the source context in the response. Defaults to false.
Optional instructions
Instructions to include with the prompt.
Optional max
The maximum number of tokens the model should output. Passed directly to the AI model. Can be used to control the output verbosity.
Optional max
The maximum number of input tokens that Squid can use when making the request to the AI model. Defaults to the max tokens the model can accept.
Optional memory
The context ID to use for the request. If not provided, the agent's default context will be used.
Optional model
The LLM model to use.
Optional quotas
Current budget for nested or recursive AI chat calls per single prompt.
Optional reasoning
The level of reasoning effort to apply; defaults to a model-specific value. Effective only for models that support reasoning.
Optional rerank
Which provider's reranker to use for reranking the context. Defaults to 'cohere'.
Optional response
The format of the response from the AI model. Note that not all models support JSON format. Defaults to 'text'.
Optional smooth
Whether to respond in a "smooth typing" way, beneficial when the chat result is displayed in a UI. Defaults to true.
Optional temperature
The temperature to use when sampling from the model. Defaults to 0.5.
Optional verbosity
Controls response length and detail level. Use 'low' for brief responses, 'medium' for balanced detail, or 'high' for comprehensive explanations. Default: 'medium'. Note: this parameter is only supported for OpenAI plain-text responses and is ignored for other providers; for those, request the desired verbosity in the prompt and use maxOutputTokens instead.
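As a hedged sketch of the fallback just described, assuming a generic agent.chat(prompt, options) entry point (the method name and the model names are placeholders, not confirmed values):

```typescript
// Hypothetical sketch: 'agent.chat' and the model names are placeholders.
declare const agent: {
  chat(prompt: string, options: object): Promise<string>;
};

// OpenAI plain-text response: verbosity is honored directly.
await agent.chat('Explain vector indexes.', {
  model: 'an-openai-model', // placeholder
  verbosity: 'high',
});

// Other providers: steer length in the prompt and cap the output instead.
await agent.chat('Explain vector indexes. Keep the answer brief.', {
  model: 'another-provider-model', // placeholder
  maxOutputTokens: 300,
});
```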
Optional voice
The options to use for a voice response.
The base AI agent chat options; this type should not be used directly.
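As a rough end-to-end illustration, the sketch below combines several of the options above as they might be passed through a concrete subtype. The agent.chat entry point and the expanded option names (chatId, responseFormat, smoothTyping, includeReference) are assumptions inferred from the truncated names and descriptions above, not confirmed identifiers:

```typescript
// Hypothetical sketch: the method and the full option names are assumptions.
declare const agent: {
  chat(prompt: string, options: object): Promise<string>;
};

const chatId = 'support-session-42'; // hypothetical chat ID

// First turn: starts a conversation under this chat ID.
await agent.chat('What is your refund policy?', {
  chatId,                 // reusing this ID later continues the conversation
  instructions: 'Answer as a polite support assistant.',
  temperature: 0.5,
  responseFormat: 'text',
  smoothTyping: true,
  includeReference: true, // cite the source context in the response
});

// Second turn: same chat ID, so history carries over (history not disabled).
await agent.chat('Does that also apply to digital goods?', { chatId });
```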