agentContext (optional): Global context passed to the agent and all AI functions of the agent.
chatId (optional): A unique chat ID. If the same chat ID is used again and history is not disabled, the conversation continues where it left off.
connectedAgents (optional): List of connected AI agents that can be called by the current agent. Overrides the stored value.
connectedKnowledgeBases (optional): List of connected AiKnowledgeBases that can be called by the current agent.
contextMetadataFilter (optional): A set of filters that will limit the context the AI can access.
disableContext (optional): Whether to disable the whole context for the request. Defaults to false.
disableHistory (optional): Disables history for the agent, so each question is answered as if it were the first question in the conversation. Defaults to false.
enablePromptRewrite (optional): Whether to rewrite the prompt for RAG. Defaults to false.
executionPlanOptions (optional): Options for the AI agent execution plan, allowing the agent to build and follow an execution plan before invoking connected agents, connected integrations, or functions.
fileUrls (optional): An array of file URLs to include in the chat context.
functions (optional): Functions to expose to the AI, each given either as a function name or as a name with extra function context that is passed only to that function (see the sketch after this list). The context parameter values must be valid serializable JSON values. Overrides the stored value.
guardrails (optional): Preset instruction options that can be toggled on.
includeMetadata (optional): Whether to include metadata in the context.
includeReference (optional): Whether to include references from the source context in the response. Defaults to false.
instructions (optional): Instructions to include with the prompt.
maxOutputTokens (optional): The maximum number of tokens the model should output. Passed directly to the AI model; can be used to control output verbosity.
maxInputTokens (optional): The maximum number of input tokens Squid can use when making the request to the AI model. Defaults to the maximum number of tokens the model can accept.
memoryContextId (optional): The context ID to use for the request. If not provided, the agent's default context will be used.
model: The LLM model to use for the chat query.
quotas (optional): Current budget for nested or recursive AI chat calls per single prompt.
reasoningEffort (optional): The level of reasoning effort to apply; defaults to a model-specific value. Effective only for models that support reasoning.
rerankProvider (optional): Which provider's reranker to use for reranking the context. Defaults to 'cohere'.
responseFormat (optional): The format of the response from the AI model. Note that not all models support JSON format. Defaults to 'text'.
smoothTyping (optional): Whether to respond in a "smooth typing" way, which is beneficial when the chat result is displayed in a UI. Defaults to true.
temperature (optional): The temperature to use when sampling from the model. Defaults to 0.5.
useCodeInterpreter (optional): Enables the LLM's built-in code interpreter for executing Python code. Note: only supported by OpenAI and Gemini models; ignored for other providers.
verbosity (optional): Controls response length and detail level. Use 'low' for brief responses, 'medium' for balanced detail, or 'high' for comprehensive explanations. Default: 'medium'. Note: this parameter is only supported for OpenAI plain-text responses and is ignored for other providers; with other providers, request the desired verbosity in the prompt and use maxOutputTokens.
voiceOptions (optional): The options to use for a voice response.
Options for user-provided LLM models.
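As an illustration of how these options combine on a call, the sketch below assembles a typical options object. The interface is a hand-written mirror of a subset of the entries above, using the property names as reconstructed in this list; it is not the SDK's own type, so check the actual typings before relying on it.

```typescript
// Hand-written mirror of a subset of the documented options.
// Names follow the reconstructed list above; the real SDK type may differ.
interface ChatOptionsSketch {
  chatId?: string;                  // reuse the same ID to continue a conversation
  instructions?: string;            // instructions included with the prompt
  model?: string;                   // LLM model for the chat query
  temperature?: number;             // sampling temperature, 0.5 by default
  maxOutputTokens?: number;         // cap on tokens the model may output
  responseFormat?: 'text' | 'json'; // not all models support 'json'
  smoothTyping?: boolean;           // stream gradually for UI display, true by default
  disableHistory?: boolean;         // answer each prompt as if it were the first
  includeReference?: boolean;       // cite the source context in the response
  fileUrls?: string[];              // files added to the chat context
}

const options: ChatOptionsSketch = {
  chatId: 'support-session-42',     // hypothetical ID; reusing it continues the thread
  instructions: 'Answer in one short paragraph.',
  temperature: 0.2,                 // below the 0.5 default for steadier answers
  maxOutputTokens: 300,             // keep replies brief
  responseFormat: 'text',
  includeReference: true,
};
```

Since history is keyed by the chat ID, passing the same ID on a later call continues the conversation, while disableHistory: true makes every prompt stand alone even under a reused ID.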
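The functions entry above allows two shapes: a bare function name, or a name plus extra context passed only to that function, with context values restricted to serializable JSON. A minimal sketch of that shape, with hypothetical function names (lookupOrder, createTicket) and a type inferred from the description rather than taken from the SDK:

```typescript
// JSON-serializable values, as the per-function context requires.
type JsonValue =
  | string
  | number
  | boolean
  | null
  | JsonValue[]
  | { [key: string]: JsonValue };

// Inferred shape: a bare name, or a name with extra per-function context.
type FunctionRef = string | { name: string; context: Record<string, JsonValue> };

const functions: FunctionRef[] = [
  'lookupOrder', // hypothetical function, exposed by name only
  {
    name: 'createTicket', // hypothetical function
    context: { queue: 'billing', priority: 'high' }, // visible only to createTicket
  },
];
```

Values such as Date objects, class instances, or callbacks would fail the serializable-JSON requirement; keep context to plain strings, numbers, booleans, nulls, arrays, and objects.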