
Agent Operator

  • Multi-turn tool calling with Tool Turn Budget parameter
  • Parallel tool execution capability
  • Turn table system for conversation event capture
  • Chain ID tracking for multi-turn conversations
  • Reasoning model support for thinking models with reasoning level parameter
  • Added Anthropic model support
  • Enhanced streaming tool detection across providers
  • Improved tool call deduplication and history management
  • Better error handling and logging levels
  • Force option for tool choice and auto-generated chain IDs
  • Enhanced tool loading robustness for backward compatibility

The Agent operator is the central component for managing interactions with Large Language Models (LLMs). It handles assembling prompts, sending requests (including images and audio), processing responses, managing conversation history, executing tools, and handling callbacks.

Agent Operator

  • Connect directly to multiple AI providers through LiteLLM integration
  • Send and receive text, images, and audio (with provider support)
  • Control LLM parameters (temperature, max tokens, etc.)
  • Access advanced features like streaming responses
  • Configurable output formats (conversation, table, parameters)
  • Enable dynamic AI tools for parameter control and custom functionality
  • Grab contextual information from other operators
  • Callback system for integrating with other components
Call Agent (Call) op('agent').par.Call Pulse

Pulse this parameter to initiate a call to the language model using the current settings.

Default:
False
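
A call can also be triggered from a script. A minimal sketch, assuming the operator is named 'agent' as in the parameter paths above:
# Trigger an LLM call from Python
agent = op('agent')
agent.par.Call.pulse()
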
Call on in1 Table Change (Onin1) op('agent').par.Onin1 Toggle

When enabled, the Agent will automatically call the LLM whenever the input table changes.

Default:
False
Use Streaming (Streaming) op('agent').par.Streaming Toggle

When enabled, responses are delivered in chunks as they are generated.

Default:
False
Update Table When Streaming (Streamingupdatetable) op('agent').par.Streamingupdatetable Toggle

When enabled, the conversation table is updated as streaming chunks arrive.

Default:
False
Current Task (Taskcurrent) op('agent').par.Taskcurrent Str

Displays the current state of the Agent. Read-only parameter updated by the system.

Default:
"" (Empty String)
Timer (Timer) op('agent').par.Timer Float

Displays timing information for the last LLM call.

Default:
0.0
Active (Active) op('agent').par.Active Toggle

Indicates if the Agent is currently processing a request. Read-only parameter.

Default:
False
Cancel Current (Cancelcall) op('agent').par.Cancelcall Pulse

Pulse to cancel any currently active API call or tool execution.

Default:
False
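
A minimal sketch of using these status parameters from a script to cancel a call that is still running (operator path as above):
# Cancel the current request if the Agent is still busy
agent = op('agent')
if agent.par.Active.eval():
    agent.par.Cancelcall.pulse()
    print('Cancelled task:', agent.par.Taskcurrent.eval())
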
Agent Role Definition / Info Header
System Message DAT (Systemmessagedat) op('agent').par.Systemmessagedat OP

The DAT containing the system message text to send to the LLM. This defines the agent's role/persona.

Default:
./system_message
System Message (Last) (Displaysysmess) op('agent').par.Displaysysmess Str

Displays the last system message that was sent. Read-only parameter.

Default:
"" (Empty String)
Edit System Message (Editsysmess) op('agent').par.Editsysmess Pulse

Pulse to open the system message DAT for editing.

Default:
False
Use System Message (Usesystemmessage) op('agent').par.Usesystemmessage Toggle

When disabled, system messages are not sent to the model (for models that do not support system messages).

Default:
False
Chain ID (Chainid) op('agent').par.Chainid Str

Chain ID for tracking calls in orchestration systems. When set, this will be used instead of auto-generating one.

Default:
"" (Empty String)

Understanding Model Selection

Operators utilizing LLMs (LOPs) offer flexible ways to configure the AI model used:

  • ChatTD Model (Default): By default, LOPs inherit model settings (API Server and Model) from the central ChatTD component. You can configure ChatTD via the "Controls" section in the Operator Create Dialog or its parameter page.
  • Custom Model: Select this option in "Use Model From" to override the ChatTD settings and specify the API Server and AI Model directly within this operator.
  • Controller Model: Choose this to have the LOP inherit its API Server and AI Model parameters from another operator (like a different Agent or any LOP with model parameters) specified in the Controller [ Model ] parameter. This allows centralizing model control.

The Search toggle filters the AI Model dropdown based on keywords entered in Model Search. The Show Model Info toggle (if available) displays detailed information about the selected model directly in the operator's viewer, including cost and token limits.
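
A sketch of overriding the ChatTD defaults from Python, using menu values documented on this page ('other_agent' is a hypothetical operator path):
# Use a custom provider and model instead of the ChatTD defaults
agent = op('agent')
agent.par.Modelselection = 'custom_model'         # chattd_model, custom_model, or controller_model
agent.par.Apiserver = 'openrouter'                # one of the listed providers
agent.par.Model = 'llama-3.2-11b-vision-preview'  # example model from the menu
# Alternatively, inherit the model settings from another LOP:
# agent.par.Modelselection = 'controller_model'
# agent.par.Modelcontroller = op('other_agent')   # hypothetical operator path
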

Output Settings Header
Max Tokens (Maxtokens) op('agent').par.Maxtokens Int

The maximum number of tokens the model should generate.

Default:
256
Temperature (Temperature) op('agent').par.Temperature Float

Controls randomness in the response. Lower values are more deterministic.

Default:
0
Model Selection Header
Use Model From (Modelselection) op('agent').par.Modelselection Menu

Choose where the model configuration comes from.

Default:
chattd_model
Options:
chattd_model, custom_model, controller_model
Controller [ Model ] (Modelcontroller) op('agent').par.Modelcontroller OP

Operator providing model settings when 'Use Model From' is set to controller_model.

Default:
None
Select API Server (Apiserver) op('agent').par.Apiserver StrMenu

Select the LiteLLM provider (API server).

Default:
openrouter
Menu Options:
  • openrouter (openrouter)
  • openai (openai)
  • groq (groq)
  • gemini (gemini)
  • ollama (ollama)
  • lmstudio (lmstudio)
  • custom (custom)
AI Model (Model) op('agent').par.Model StrMenu

Specific model to request. Available options depend on the selected provider.

Default:
llama-3.2-11b-vision-preview
Menu Options:
  • llama-3.2-11b-vision-preview (llama-3.2-11b-vision-preview)
Search (Search) op('agent').par.Search Toggle

Enable dynamic model search based on a pattern.

Default:
off
Options:
off, on
Model Search (Modelsearch) op('agent').par.Modelsearch Str

Pattern to filter models when Search is enabled.

Default:
"" (Empty String)
Show Model Info (Showmodelinfo) op('agent').par.Showmodelinfo Toggle

Displays detailed information about the selected model in the operator's viewer.

Default:
1
Options:
off, on
Use LOP Tools (Usetools) op('agent').par.Usetools Toggle

When enabled, the Agent can use tools defined below during its interactions.

Default:
False
Tool Follow-up Response (Toolfollowup) op('agent').par.Toolfollowup Toggle

When enabled, the agent makes a follow-up API call after tool execution to generate a final response. When disabled, the agent only executes tools without generating responses.

Default:
True
Tool Turn Budget (Toolturnbudget) op('agent').par.Toolturnbudget Int

Maximum number of tool turns the agent may take (the initial tool turn counts as 1). Only applies when Tool Follow-up Response is enabled.

Default:
1
Parallel Tool Calls (Paralleltoolcalls) op('agent').par.Paralleltoolcalls Toggle

When enabled and tools are present, parallel tool calls are requested (LiteLLM parallel_tool_calls).

Default:
False
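
A sketch of enabling tool use from Python with the parameters above:
# Enable tool use with a follow-up response and a two-turn budget
agent = op('agent')
agent.par.Usetools = True
agent.par.Toolfollowup = True        # generate a final response after tools run
agent.par.Toolturnbudget = 2         # initial tool turn counts as 1
agent.par.Paralleltoolcalls = True   # request LiteLLM parallel_tool_calls
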
LOP Tools Header
External Op Tools (Tool) op('agent').par.Tool Sequence

Sequence parameter that controls groups of external tools.

Default:
0
Active (Tool0active) op('agent').par.Tool0active Menu

Sets whether this tool is available to the Agent: enabled (optional), disabled (off), or forced (must be used).

Default:
enabled
OP (Tool0op) op('agent').par.Tool0op OP

The operator that provides the tool functionality (must have a compatible extension with GetTool method).

Default:
"" (Empty String)
Context Op (Contextop) op('agent').par.Contextop OP

Optional operator that can provide additional context to include in the prompt (must have GrabOpContextEXT extension).

Default:
"" (Empty String)
Use Audio (Useaudio) op('agent').par.Useaudio Toggle

When enabled, the specified audio file will be included in the prompt (for providers supporting audio input).

Default:
False
Audio File (Audiofile) op('agent').par.Audiofile File

Path to the audio file to include in the prompt.

Default:
"" (Empty String)
Send TOP Image (Sendtopimage) op('agent').par.Sendtopimage Toggle

If enabled, send the TOP specified in Topimage directly with the prompt.

Default:
False
TOP Image (Topimage) op('agent').par.Topimage TOP

Specify a TOP operator to send as an image.

Default:
"" (Empty String)
Enable Prompt Caching (Enablepromptcaching) op('agent').par.Enablepromptcaching Toggle

Enable prompt caching for supported providers to reduce costs and improve performance.

Default:
False
Output Mode (Outputmode) op('agent').par.Outputmode Menu

Determines how the LLM response is processed: conversation (update conversation table), table (output to DAT), parameter (set TD parameters), custom (custom handler).

Default:
conversation
JSON Mode (Jsonmode) op('agent').par.Jsonmode Toggle

When enabled, the agent will format the response as JSON.

Default:
False
Thinking Filter Mode (Thinkingfilter) op('agent').par.Thinkingfilter Menu

Filters out 'thinking' text from the response. 'Filter Conversation & Display' filters both the conversation history and the final output. 'Filter Conversation Only' filters only the conversation history. 'Filter Display (out2)' filters only the final output.

Default:
none
Thinking Replacement Text (Thinkingreplace) op('agent').par.Thinkingreplace Str

The text to replace the 'thinking' text with.

Default:
"" (Empty String)
Thinking Phrases (Thinkingphrases) op('agent').par.Thinkingphrases Str

The start and end phrases that denote 'thinking' text.

Default:
<think>,</think>
Assign Perspective (Perspective) op('agent').par.Perspective Menu

Sets how to interpret roles in the conversation: assistant (normal), user (swap user/assistant roles), or third_party (combine all messages as one).

Default:
user
Conversation Format (Conversationformat) op('agent').par.Conversationformat Menu

Controls how to format the conversation history.

Default:
input_roles
Op Display Header
Icon (Icon) op('agent').par.Icon Menu

Show/hide the icon in the viewer.

Default:
none
Display Text (Displaytext) op('agent').par.Displaytext Toggle

Show/hide the text display in the viewer.

Default:
False
Table (Tableview) op('agent').par.Tableview Toggle

Show/hide the table view in the viewer.

Default:
False
Show Metadata (Showmetadata) op('agent').par.Showmetadata Toggle

Show/hide the metadata table in the viewer.

Default:
False
Tool Format (Toolformat) op('agent').par.Toolformat Menu

Controls how tool calls are displayed in the conversation.

Default:
original
Callbacks Header
Callback DAT (Callbackdat) op('agent').par.Callbackdat DAT

The DAT containing callback functions that respond to Agent events.

Default:
ChatTD_callbacks
Edit Callbacks (Editcallbacksscript) op('agent').par.Editcallbacksscript Pulse

Pulse to open the callback DAT for editing.

Default:
False
Create Callbacks (Createpulse) op('agent').par.Createpulse Pulse

Pulse to create a new callback DAT if one doesn't exist.

Default:
False
onTaskStart (Ontaskstart) op('agent').par.Ontaskstart Toggle

Enable/disable the onTaskStart callback, triggered when a new request begins.

Default:
False
onTaskComplete (Ontaskcomplete) op('agent').par.Ontaskcomplete Toggle

Enable/disable the onTaskComplete callback, triggered when a request completes successfully.

Default:
False
On Tool Call (Ontoolcall) op('agent').par.Ontoolcall Toggle

Enable/disable the onToolCall callback, triggered when a tool is called.

Default:
False
onTaskError (Ontaskerror) op('agent').par.Ontaskerror Toggle

Enable/disable the onTaskError callback, triggered when a request encounters an error.

Default:
False
Textport Debug Callbacks (Debugcallbacks) op('agent').par.Debugcallbacks Menu

Controls the level of callback debugging information printed to the textport.

Default:
Full Details
ChatTD (Chattd) op('agent').par.Chattd OP

Reference to the ChatTD operator for configuration.

Default:
"" (Empty String)
Show Built In Pars (Showbuiltin) op('agent').par.Showbuiltin Toggle

Show built-in TouchDesigner parameters.

Default:
False
Version (Version) op('agent').par.Version Str

Current version of the operator.

Default:
"" (Empty String)
Last Updated (Lastupdated) op('agent').par.Lastupdated Str

Date of last update.

Default:
"" (Empty String)
Website (Website) op('agent').par.Website Str

Related website or documentation.

Default:
"" (Empty String)
Creator (Creator) op('agent').par.Creator Str

Operator creator.

Default:
"" (Empty String)
Show Logs (Showlogs) op('agent').par.Showlogs Menu

Controls the level of logging information printed to the textport.

Default:
Basic
Clear Log (Clearlog) op('agent').par.Clearlog Pulse

Pulse to clear the logs.

Default:
False
Bypass (Bypass) op('agent').par.Bypass Toggle

Bypass the operator.

Default:
False

The Agent operator provides several callbacks that allow you to react to different stages of its operation. Define corresponding functions in the DAT specified by the Callbackdat parameter.

Available Callbacks:
  • onTaskStart
  • onTaskComplete
  • onTaskError
  • onToolCall
Example Callback Structure:
# Callback methods
def onTaskStart(info):
  """Called when a new task begins processing."""
  # Access information about the task, model, etc.
  # Example: model = info.get('model')
  pass
def onTaskComplete(info):
  """Called when a task is finished processing."""
  # Access the response, tokens used, etc.
  # Example: response = info.get('response')
  pass
def onTaskError(info):
  """Called when a task encounters an error."""
  # Access error details
  # Example: error = info.get('error')
  pass
def onToolCall(info):
  """Called when a tool is called by the agent."""
  # Access the tool call information
  # Example: tool_calls = info.get('tool_calls')
  pass
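
For the callbacks to fire, the Agent also needs the DAT assigned and the individual toggles enabled. A minimal sketch, assuming the functions above live in a DAT named 'agent_callbacks':
# Assign the callback DAT and enable the events you want to receive
agent = op('agent')
agent.par.Callbackdat = op('agent_callbacks')  # hypothetical DAT containing the functions above
agent.par.Ontaskstart = True
agent.par.Ontaskcomplete = True
agent.par.Ontoolcall = True
agent.par.Ontaskerror = True
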
  1. Create an agent operator.
  2. Create a tableDAT and connect it to the first input of the agent.
  3. In the tableDAT, add a row with the role “user” and your message in the “message” column.
  4. Pulse the “Call Agent” parameter on the “Agent” page of the agent operator.
  5. The agent’s response will be added to the conversation_dat table inside the agent operator.
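
The same flow can be scripted. A sketch, assuming the code runs from a DAT in the same network as an Agent named 'agent' and that the input table uses 'role' and 'message' columns as described above:
# Build the input table, wire it to the Agent, and trigger a call
agent = op('agent')
table = parent().create(tableDAT, 'chat_input')
table.clear()
table.appendRow(['role', 'message'])
table.appendRow(['user', 'Hello, what can you do?'])
agent.inputConnectors[0].connect(table)   # connect the table to the Agent's first input
agent.par.Call.pulse()
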
  1. Create a context_grabber operator.
  2. Connect the context_grabber to the Context Op parameter on the “Context” page of the agent operator.
  3. Configure the context_grabber to grab the desired context (e.g., an image from a TOP operator).
  4. In your input table, ask the agent a question about the context (e.g., “Describe the image.”).
  5. Pulse the “Call Agent” parameter. The agent will use the context provided by the context_grabber to answer your question.
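
A sketch of assigning the context operator from Python; 'context_grabber1' is a hypothetical name:
# Point the Agent at a context_grabber so its context is included in the prompt
agent = op('agent')
agent.par.Contextop = op('context_grabber1')
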
  1. Enable “Use LOP Tools” on the “Tools” page of the agent operator.
  2. Connect a tool operator (e.g., a tool_dat with a Python script) to the “External Op Tools” parameter.
  3. In your input table, ask the agent to perform a task that requires the tool.
  4. Pulse the “Call Agent” parameter. The agent will execute the tool and use the result to respond.

If an API call is taking too long or was initiated by mistake, you can cancel it by pulsing the Cancel Current parameter on the “Agent” page. This will stop the current task and reset the agent’s status to “Idle”.

Some models may output “thinking” tags (e.g., <think>...</think>) that you don’t want to display to the user. The Thinking Filter Mode parameter on the “I/O” page can be used to remove these tags from the conversation history, the final output, or both.

  1. Set the Thinking Filter Mode to the desired level of filtering.
  2. If your model uses different tags, you can specify them in the Thinking Phrases parameter.
  3. You can also provide replacement text in the Thinking Replacement Text parameter.
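
The operator's internal filtering code is not shown here; conceptually it removes everything between the start and end phrases. A rough Python illustration using the default <think>,</think> phrases:
import re

def strip_thinking(text, start='<think>', end='</think>', replacement=''):
    """Rough illustration of thinking-text filtering (not the operator's actual code)."""
    pattern = re.escape(start) + r'.*?' + re.escape(end)
    return re.sub(pattern, replacement, text, flags=re.DOTALL)

print(strip_thinking('<think>reasoning...</think>The answer is 42.'))
# -> 'The answer is 42.'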