Chat Operator

The Chat LOP operator facilitates the creation, editing, and management of conversations with AI models within TouchDesigner. It allows users to define the roles (user, assistant, system) and content of messages, control the flow of the conversation, and integrate with other LOPs for advanced AI workflows. This operator is particularly useful for prototyping conversational AI agents, setting up example conversations, and enforcing specific patterns for the LLM to follow.

  • Requires the ChatTD LOP to be present in the network, as it handles the actual API calls to the AI model.
  • No specific Python dependencies beyond those required by TouchDesigner and the ChatTD LOP.
  • Input Table (DAT, optional): A table DAT containing pre-existing conversation data with columns for role, message, id, and timestamp. Allows loading conversations from external sources.
  • Conversation DAT: A table DAT named conversation_dat that stores the current state of the conversation, including roles, messages, IDs, and timestamps.
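The input table's expected schema can be sketched in plain Python before wiring up a Table DAT. The column names (role, message, id, timestamp) come from this page; the exact id and timestamp formats are assumptions here — any unique id string should work.

```python
import time
import uuid

# Columns expected by the Chat LOP's optional input table.
COLUMNS = ["role", "message", "id", "timestamp"]

def make_row(role, message):
    """Build one conversation row matching the input-table schema.
    (id/timestamp formats are assumptions; any unique id works.)"""
    if role not in ("user", "assistant", "system"):
        raise ValueError("role must be user, assistant, or system")
    return [role, message, uuid.uuid4().hex, str(int(time.time()))]

rows = [COLUMNS,
        make_row("system", "You are a helpful assistant."),
        make_row("user", "Hello!")]
```

Inside TouchDesigner, these rows would be written into a Table DAT (e.g. with `tableDAT.appendRow(row)`), which is then wired into the Chat LOP's input and loaded with the Load from Input pulse.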
Active (Active) op('chat').par.Active Toggle
Default:
Off
Call Assistant (Callassistant) op('chat').par.Callassistant Pulse
Default:
None
Call User (Calluser) op('chat').par.Calluser Pulse
Default:
None
Input Handling (Inputhandling) op('chat').par.Inputhandling Menu
Default:
prepend
Options:
prepend, append, index, none
Insert Index (Insertindex) op('chat').par.Insertindex Integer
Default:
1
Message (Message) op('chat').par.Message Sequence
Default:
None
Role (Msg 0) (Message0role) op('chat').par.Message0role Menu
Default:
system
Options:
user, assistant, system
Text (Msg 0) (Message0text) op('chat').par.Message0text String
Default:
None
Role (Msg 1) (Message1role) op('chat').par.Message1role Menu
Default:
user
Options:
user, assistant, system
Text (Msg 1) (Message1text) op('chat').par.Message1text String
Default:
None

Understanding Model Selection

Operators utilizing LLMs (LOPs) offer flexible ways to configure the AI model used:

  • ChatTD Model (Default): By default, LOPs inherit model settings (API Server and Model) from the central ChatTD component. You can configure ChatTD via the "Controls" section in the Operator Create Dialog or its parameter page.
  • Custom Model: Select this option in "Use Model From" to override the ChatTD settings and specify the API Server and AI Model directly within this operator.
  • Controller Model: Choose this to have the LOP inherit its API Server and AI Model parameters from another operator (like a different Agent or any LOP with model parameters) specified in the Controller [ Model ] parameter. This allows centralizing model control.

The Search toggle filters the AI Model dropdown based on keywords entered in Model Search. The Show Model Info toggle (if available) displays detailed information about the selected model directly in the operator's viewer, including cost and token limits.
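The keyword filtering that the Search and Model Search parameters perform can be approximated like this. This is a plain-Python sketch; the operator's actual matching logic is an assumption.

```python
def filter_models(models, pattern):
    """Keep models whose names contain every whitespace-separated
    keyword in `pattern`, case-insensitively. An empty pattern
    keeps all models (Search effectively off)."""
    keywords = pattern.lower().split()
    return [m for m in models if all(k in m.lower() for k in keywords)]

models = ["llama-3.2-11b-vision-preview", "gpt-4o-mini", "gemini-1.5-flash"]
filter_models(models, "vision")  # -> ["llama-3.2-11b-vision-preview"]
```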

Output Settings Header
Max Tokens (Maxtokens) op('chat').par.Maxtokens Int

The maximum number of tokens the model should generate.

Default:
256
Temperature (Temperature) op('chat').par.Temperature Float

Controls randomness in the response. Lower values are more deterministic.

Default:
0
Model Selection Header
Use Model From (Modelselection) op('chat').par.Modelselection Menu

Choose where the model configuration comes from.

Default:
chattd_model
Options:
chattd_model, custom_model, controller_model
Controller [ Model ] (Modelcontroller) op('chat').par.Modelcontroller OP

Operator providing model settings when 'Use Model From' is set to controller_model.

Default:
None
Select API Server (Apiserver) op('chat').par.Apiserver StrMenu

Select the LiteLLM provider (API server).

Default:
openrouter
Menu Options:
  • openrouter (openrouter)
  • openai (openai)
  • groq (groq)
  • gemini (gemini)
  • ollama (ollama)
  • lmstudio (lmstudio)
  • custom (custom)
AI Model (Model) op('chat').par.Model StrMenu

Specific model to request. Available options depend on the selected provider.

Default:
llama-3.2-11b-vision-preview
Menu Options:
  • llama-3.2-11b-vision-preview (llama-3.2-11b-vision-preview)
Search (Search) op('chat').par.Search Toggle

Enable dynamic model search based on a pattern.

Default:
off
Options:
off, on
Model Search (Modelsearch) op('chat').par.Modelsearch Str

Pattern to filter models when Search is enabled.

Default:
"" (Empty String)
Use System Message (Usesystemmessage) op('chat').par.Usesystemmessage Toggle
Default:
Off
System Message (Systemmessage) op('chat').par.Systemmessage String
Default:
"" (Empty String)
Use User Prompt (Useuserprompt) op('chat').par.Useuserprompt Toggle
Default:
Off
User Prompt (Userprompt) op('chat').par.Userprompt String
Default:
"" (Empty String)
Clear Conversation (Clearconversation) op('chat').par.Clearconversation Pulse
Default:
None
Load from Input (Loadfrominput) op('chat').par.Loadfrominput Pulse
Default:
None
Conversation ID (Conversationid) op('chat').par.Conversationid String
Default:
"" (Empty String)
Callbacks Header
Callback DAT (Callbackdat) op('chat').par.Callbackdat DAT
Default:
ChatTD_callbacks
Edit Callbacks (Editcallbacksscript) op('chat').par.Editcallbacksscript Pulse
Default:
None
Create Callbacks (Createpulse) op('chat').par.Createpulse Pulse
Default:
None
onTaskStart (Ontaskstart) op('chat').par.Ontaskstart Toggle
Default:
Off
onTaskComplete (Ontaskcomplete) op('chat').par.Ontaskcomplete Toggle
Default:
Off
Textport Debug Callbacks (Debugcallbacks) op('chat').par.Debugcallbacks Menu
Default:
Full Details
Options:
None, Errors Only, Basic Info, Full Details
Show Built In Pars (Showbuiltin) op('chat').par.Showbuiltin Toggle
Default:
Off
Bypass (Bypass) op('chat').par.Bypass Toggle
Default:
Off
Chattd (Chattd) op('chat').par.Chattd OP
Default:
/dot_lops/ChatTD
Version (Version) op('chat').par.Version String
Default:
1.0.0
Last Updated (Lastupdated) op('chat').par.Lastupdated String
Default:
2024-11-06
Creator (Creator) op('chat').par.Creator String
Default:
dotsimulate
Website (Website) op('chat').par.Website String
Default:
https://dotsimulate.com
Available Callbacks:
  • onTaskStart
  • onTaskComplete
Example Callback Structure:
def onTaskStart(info):
    # Called when a call to the assistant or user begins
    # info dictionary contains details like op, callType
    pass

def onTaskComplete(info):
    # Called when the AI response is received and processed
    # info dictionary contains details like op, result, conversationID
    pass
  • Long conversations (many message blocks) can increase processing time.
  • Higher temperature settings might slightly increase AI response time.
  • Ensure the linked ChatTD operator is configured correctly.

The Chat LOP is ideal for creating few-shot prompts, which provide the AI with examples to guide its responses. This is a powerful way to enforce a specific output format or persona.

  1. Add Message Blocks: On the Messages page, use the + button on the Message sequence parameter to add pairs of messages.

  2. Create Examples: For each pair, set up a user message and a corresponding assistant response. This teaches the AI how you want it to behave.

    • Message 0 Role: user
    • Message 0 Text: Translate 'hello' to French.
    • Message 1 Role: assistant
    • Message 1 Text: {"translation": "bonjour"}
    • Message 2 Role: user
    • Message 2 Text: Translate 'goodbye' to Spanish.
    • Message 3 Role: assistant
    • Message 3 Text: {"translation": "adios"}
  3. Connect to Agent: Wire the output of this Chat LOP into the first input of an Agent LOP.

When the Agent receives a new prompt (e.g., Translate 'cat' to German.), it will use the few-shot examples as context and is more likely to respond in the desired JSON format: {"translation": "katze"}.
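The few-shot setup above can also be expressed as plain data: each dict below corresponds to one Message block's Role/Text parameter pair (Message0role/Message0text, Message1role/Message1text, and so on). Building the blocks from a script rather than the parameter UI is just an illustration, and the sequence-resize call shown in the comments is an assumption.

```python
# Paired few-shot examples: each user message is followed by the
# assistant response we want the model to imitate.
few_shot = [
    {"role": "user",      "text": "Translate 'hello' to French."},
    {"role": "assistant", "text": '{"translation": "bonjour"}'},
    {"role": "user",      "text": "Translate 'goodbye' to Spanish."},
    {"role": "assistant", "text": '{"translation": "adios"}'},
]

# Inside TouchDesigner this would map onto the Message sequence, e.g.:
# chat = op('chat')
# chat.seq.Message.numBlocks = len(few_shot)   # resize (assumption)
# for i, msg in enumerate(few_shot):
#     chat.par[f'Message{i}role'] = msg['role']
#     chat.par[f'Message{i}text'] = msg['text']
```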

You can manage the conversation flow using the parameters on the Conversation page without writing any code.

  • Clearing the Conversation: Pulse the Clearconversation parameter to reset the message blocks to a single, empty user message.
  • Loading from a DAT:
    1. Create a Table DAT with role and message columns.
    2. Connect it to the Chat LOP’s input.
    3. Pulse Loadfrominput to populate the message sequence from the DAT’s contents.
  • Input Handling: The Inputhandling menu controls how messages from an input DAT are combined with the messages in the parameter sequence. This is useful for combining a static set of examples with dynamic input.
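Conceptually, Load from Input turns the role/message table into message blocks. This sketch shows that mapping in plain Python; only the column names come from this page, and the operator's internal handling of unknown roles is an assumption.

```python
def table_to_messages(rows):
    """rows[0] is the header row; map the role/message columns into
    message dicts, skipping rows with an unrecognized role."""
    header = rows[0]
    role_i, msg_i = header.index("role"), header.index("message")
    messages = []
    for row in rows[1:]:
        if row[role_i] in ("user", "assistant", "system"):
            messages.append({"role": row[role_i], "text": row[msg_i]})
    return messages

table = [["role", "message"],
         ["system", "Be terse."],
         ["user", "Hi"]]
table_to_messages(table)  # -> two message dicts
```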

You can use the Chat LOP to generate content directly.

  1. Set up a sequence of messages that ends with a user role.
  2. Pulse the Callassistant parameter on the Messages page.
  3. The operator will call the AI model, and the response will be added as a new assistant message block, completing the conversation.
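The steps above can be sketched as follows. The `get_response` callable stands in for the ChatTD API call, which this sketch does not perform; inside TouchDesigner the whole flow is triggered by a single pulse.

```python
def call_assistant(messages, get_response):
    """Stand-in for pulsing Call Assistant: if the conversation ends
    with a user message, append the model's reply as a new assistant
    block. `get_response` stands in for the ChatTD call."""
    if not messages or messages[-1]["role"] != "user":
        raise ValueError("conversation must end with a user message")
    reply = get_response(messages)
    messages.append({"role": "assistant", "text": reply})
    return messages

convo = [{"role": "user", "text": "Translate 'cat' to German."}]
call_assistant(convo, lambda msgs: '{"translation": "katze"}')

# In TouchDesigner the equivalent is simply:
# op('chat').par.Callassistant.pulse()
```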