Chat Operator

The Chat LOP operator facilitates the creation, editing, and management of conversations with AI models within TouchDesigner. It allows users to define the roles (user, assistant, system) and content of messages, control the flow of the conversation, and integrate with other LOPs for advanced AI workflows. This operator is particularly useful for prototyping conversational AI agents, setting up example conversations, and enforcing specific patterns for the LLM to follow.

  • Requires the ChatTD LOP to be present in the network, as it handles the actual API calls to the AI model.
  • No specific Python dependencies beyond those required by TouchDesigner and the ChatTD LOP.
  • Input Table (DAT, optional): A table DAT containing pre-existing conversation data with columns for role, message, id, and timestamp. Allows loading conversations from external sources.
  • Conversation DAT: A table DAT named conversation_dat that stores the current state of the conversation, including roles, messages, IDs, and timestamps.
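The Conversation DAT's layout can be pictured as rows of role, message, id, and timestamp. A plain-Python sketch of building such a table follows; the hex id and timestamp formats here are illustrative assumptions, not the operator's exact scheme:

```python
import time
import uuid

def make_conversation_row(role, message):
    # Columns mirror the Conversation DAT description: role, message, id, timestamp.
    # The short hex id and the timestamp format are assumptions for illustration.
    return [role, message, uuid.uuid4().hex[:8], time.strftime('%Y-%m-%d %H:%M:%S')]

table = [
    ['role', 'message', 'id', 'timestamp'],  # header row
    make_conversation_row('system', 'helpful TD assistant'),
    make_conversation_row('user', 'Hello there!'),
]
```

An input table passed to Load from Input is expected to carry these same columns.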
Active (Active) op('chat').par.Active Toggle
Default:
Off
Call Assistant (Callassistant) op('chat').par.Callassistant Pulse
Default:
None
Call User (Calluser) op('chat').par.Calluser Pulse
Default:
None
Input Handling (Inputhandling) op('chat').par.Inputhandling Menu
Default:
prepend
Options:
prepend, append, index, none
Insert Index (Insertindex) op('chat').par.Insertindex Integer
Default:
1
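How Input Handling combines loaded rows with the existing message blocks can be sketched in plain Python. The merge semantics below are assumptions inferred from the menu options; in particular, 'none' is treated as an exact replace, following the "exact load" note in the Load-from-Input scripting example on this page:

```python
def merge_conversation(existing, incoming, mode, insert_index=1):
    # Sketch of the Input Handling modes (assumed semantics):
    # prepend/append place the incoming rows before/after the existing
    # conversation, index splices them at Insert Index, and 'none'
    # loads the input table as-is.
    if mode == 'prepend':
        return incoming + existing
    if mode == 'append':
        return existing + incoming
    if mode == 'index':
        return existing[:insert_index] + incoming + existing[insert_index:]
    return list(incoming)  # 'none': exact load

conv = [('system', 'You are a pirate.'), ('user', 'Hello there!')]
loaded = [('assistant', 'Arr!')]
merged = merge_conversation(conv, loaded, 'index', 1)
# -> [('system', ...), ('assistant', 'Arr!'), ('user', ...)]
```

With mode 'index' and Insert Index 1, the loaded rows land just after the system message.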
Message (Message) op('chat').par.Message Sequence
Default:
None
Role (Msg 0) (Message0role) op('chat').par.Message0role Menu
Default:
system
Options:
user, assistant, system
Text (Msg 0) (Message0text) op('chat').par.Message0text String
Default:
None
Role (Msg 1) (Message1role) op('chat').par.Message1role Menu
Default:
user
Options:
user, assistant, system
Text (Msg 1) (Message1text) op('chat').par.Message1text String
Default:
None

Understanding Model Selection

Operators utilizing LLMs (LOPs) offer flexible ways to configure the AI model used:

  • ChatTD Model (Default): By default, LOPs inherit model settings (API Server and Model) from the central ChatTD component. You can configure ChatTD via the "Controls" section in the Operator Create Dialog or its parameter page.
  • Custom Model: Select this option in "Use Model From" to override the ChatTD settings and specify the API Server and AI Model directly within this operator.
  • Controller Model: Choose this to have the LOP inherit its API Server and AI Model parameters from another operator (like a different Agent or any LOP with model parameters) specified in the Controller [ Model ] parameter. This allows centralizing model control.

The Search toggle filters the AI Model dropdown based on keywords entered in Model Search. The Show Model Info toggle (if available) displays detailed information about the selected model directly in the operator's viewer, including cost and token limits.
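The Search filter can be pictured as a simple case-insensitive substring match over the provider's model list. A plain-Python sketch (the model names beyond the documented default are illustrative, and the exact matching rule is an assumption):

```python
def filter_models(models, pattern):
    # Case-insensitive substring match, as the Search toggle
    # plausibly applies the Model Search pattern to the AI Model dropdown.
    pattern = pattern.lower()
    return [m for m in models if pattern in m.lower()]

models = ['qwen-2.5-32b', 'llama-3.1-8b-instant', 'mixtral-8x7b-32768']
print(filter_models(models, 'qwen'))  # -> ['qwen-2.5-32b']
```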

Output Settings Header
Max Tokens (Maxtokens) op('chat').par.Maxtokens Int

The maximum number of tokens the model should generate.

Default:
256
Temperature (Temperature) op('chat').par.Temperature Float

Controls randomness in the response. Lower values are more deterministic.

Default:
0
Model Selection Header
Use Model From (Modelselection) op('chat').par.Modelselection Menu

Choose where the model configuration comes from.

Default:
custom_model
Options:
chattd_model, custom_model, controller_model
Controller [ Model ] (Modelcontroller) op('chat').par.Modelcontroller OP

Operator providing model settings when 'Use Model From' is set to controller_model.

Default:
None
Select API Server (Apiserver) op('chat').par.Apiserver StrMenu

Select the LiteLLM provider (API server).

Default:
groq
Menu Options:
  • openrouter (openrouter)
  • openai (openai)
  • groq (groq)
  • gemini (gemini)
  • ollama (ollama)
  • lmstudio (lmstudio)
  • custom (custom)
AI Model (Model) op('chat').par.Model StrMenu

Specific model to request. Available options depend on the selected provider.

Default:
qwen-2.5-32b
Menu Options:
  • qwen-2.5-32b (qwen-2.5-32b)
Search (Search) op('chat').par.Search Toggle

Enable dynamic model search based on a pattern.

Default:
off
Options:
off, on
Model Search (Modelsearch) op('chat').par.Modelsearch Str

Pattern to filter models when Search is enabled.

Default:
"" (Empty String)
Use System Message (Usesystemmessage) op('chat').par.Usesystemmessage Toggle
Default:
On
System Message (Systemmessage) op('chat').par.Systemmessage String
Default:
helpful TD assistant
Use User Prompt (Useuserprompt) op('chat').par.Useuserprompt Toggle
Default:
On
User Prompt (Userprompt) op('chat').par.Userprompt String
Default:
pretend to be the user
Clear Conversation (Clearconversation) op('chat').par.Clearconversation Pulse
Default:
None
Load from Input (Loadfrominput) op('chat').par.Loadfrominput Pulse
Default:
None
Conversation ID (Conversationid) op('chat').par.Conversationid String
Default:
chat1
Callbacks Header
Callback DAT (Callbackdat) op('chat').par.Callbackdat DAT
Default:
None
Edit Callbacks (Editcallbacksscript) op('chat').par.Editcallbacksscript Pulse
Default:
None
Create Callbacks (Createpulse) op('chat').par.Createpulse Pulse
Default:
None
onTaskStart (Ontaskstart) op('chat').par.Ontaskstart Toggle
Default:
Off
onTaskComplete (Ontaskcomplete) op('chat').par.Ontaskcomplete Toggle
Default:
Off
Textport Debug Callbacks (Debugcallbacks) op('chat').par.Debugcallbacks Menu
Default:
None
Options:
None, Basic, Detailed
Show Built In Pars (Showbuiltin) op('chat').par.Showbuiltin Toggle
Default:
Off
Bypass (Bypass) op('chat').par.Bypass Toggle
Default:
Off
Chattd (Chattd) op('chat').par.Chattd OP
Default:
/dot_lops/ChatTD
Version (Version) op('chat').par.Version String
Default:
1.0.0
Last Updated (Lastupdated) op('chat').par.Lastupdated String
Default:
2024-11-06
Creator (Creator) op('chat').par.Creator String
Default:
dotsimulate
Website (Website) op('chat').par.Website String
Default:
https://dotsimulate.com
Available Callbacks:
  • onTaskStart
  • onTaskComplete
Example Callback Structure:
def onTaskStart(info):
    # Called when a call to the assistant or user begins
    # info dictionary contains details like op, callType
    pass

def onTaskComplete(info):
    # Called when the AI response is received and processed
    # info dictionary contains details like op, result, conversationID
    pass
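The stubs above can be exercised outside TouchDesigner by calling them with a plain dict, which is the shape the callback DAT receives. The keys used here come from the stub comments; the values are illustrative:

```python
received = []

def onTaskComplete(info):
    # Collect the fields the stub's comments mention
    # (result and conversationID; 'op' is also passed).
    received.append((info.get('conversationID'), info.get('result')))

# Simulated invocation with illustrative values.
onTaskComplete({'op': 'chat1', 'result': 'Arr, matey!', 'conversationID': 'chat1'})
print(received)  # -> [('chat1', 'Arr, matey!')]
```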
  • Long conversations (many message blocks) can increase processing time.
  • Higher temperature settings might slightly increase AI response time.
  • Ensure the linked ChatTD operator is configured correctly.
# Get the operator
chat_op = op('chat1')
# Configure messages (assuming default 5 blocks)
chat_op.par.Message0role = 'system'
chat_op.par.Message0text = 'You are a pirate.'
chat_op.par.Message1role = 'user'
chat_op.par.Message1text = 'Hello there!'
chat_op.par.Message2role = 'assistant'
chat_op.par.Message2text = '' # Assistant will fill this
# Call the assistant
chat_op.par.Callassistant.pulse()
# Check the output DAT
conversation_log = chat_op.op('conversation_dat')
print(conversation_log.rows()[-1]) # Print the last message (assistant's response)
# Assume 'conv_table' DAT exists with role, message columns
# Connect table
chat_op = op('chat1')
chat_op.inputConnectors[0].connect(op('conv_table'))
# Load data
chat_op.par.Inputhandling = 'none' # Important if you want exact load
chat_op.par.Loadfrominput.pulse()
  • Prototyping conversational AI agents.
  • Building example conversations for training or demos.
  • Enforcing specific response patterns or personas.
  • Integrating chat interfaces into installations.
  • Creating dynamic user experiences.