
ChatTD

v2.1.0

ChatTD is the central system component of LOPs. Every operator in the ecosystem connects to it for API key management, model selection, Python package installation, and LLM API calls. It is automatically discovered by all LOPs operators and does not need to be wired to anything — it is always present at op.LOP.op('ChatTD') when LOPs is installed.
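
Because ChatTD is discovered automatically, any script in a LOPs project can reach it without wiring. A minimal sketch, using the op.LOP shortcut described above and the Apiserver and Model parameter names from the reference further down this page:

chattd = op.LOP.op('ChatTD')   # the shared ChatTD instance

# Read the currently selected provider and model from the Config page.
print('provider:', chattd.par.Apiserver.eval())
print('model:', chattd.par.Model.eval())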

Key features:

  • Multi-provider support: OpenAI, Anthropic, Google Gemini, Groq, OpenRouter, Ollama, LM Studio, Cohere, and custom OpenAI-compatible endpoints
  • Centralized API key storage: Keys are stored per-provider in a local JSON file and persist across projects
  • Dynamic model lists: Fetches available models from each provider’s API, with metadata enrichment from OpenRouter for pricing and context length
  • Custom provider configuration: Add any OpenAI-compatible server (vLLM, LiteLLM proxy, llamafile) via the Config sequence on the Custom URL page
  • Unified API routing: All LOP operators route their LLM calls through ChatTD’s Customapicall method, which handles provider prefixes, model detection, streaming, tool calls, and response normalization
  • Generation defaults: Global temperature, max tokens, penalties, and system message that any operator can inherit
  • Python environment management: Creates and manages a shared virtual environment with LiteLLM and other dependencies
  • Embedded sub-managers: API Key Manager, Embedding Manager, and Sentiment Manager for specialized tasks
  • API call tracing: Writes JSON trace logs for every LLM call to the LOP_history/traces/ folder inside the Python venv directory
  • Responses API support: Automatically detects GPT-5 and O-series reasoning models and routes them through OpenAI’s Responses API instead of Chat Completions
ChatTD has two requirements:

  • Python 3.11: Required for the virtual environment. Use the built-in installer dialog to download it if needed.
  • LiteLLM: The core Python dependency for multi-provider API routing. It is installed automatically during setup.
To set up the Python environment:

  1. Pulse Install / Modify LOPs on the Config page.
  2. In the dialog, choose Setup / Install to select a folder for the Python virtual environment.
  3. The installer creates a venv, installs LiteLLM, and sets the Python Venv Base Folder parameter.
  4. Once complete, the venv path is saved to your system’s AppData and is detected automatically in future projects; the snippet below shows how to verify the link.
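
A small sketch for that check, using the Pythonvenv scripting name from the parameter reference below:

chattd = op.LOP.op('ChatTD')

# An empty string means no venv has been configured in this project yet.
venv_folder = chattd.par.Pythonvenv.eval()
print('LiteLLM venv:', venv_folder or 'not set - run Install / Modify LOPs')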

After installation, add API keys for the providers you want to use:

  1. Select a provider from the API Server menu on the Config page.
  2. Paste your API key into the API Key field. It will be stored securely and the field will show “API KEY STORED”.
  3. Pulse Get API Key to open the provider’s key management page in your browser if you need to create one.
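
Key status can also be checked from a script. This is only a sketch, assuming the Apiserver and Apikey scripting names from the parameter reference below; “API KEY STORED” is the sentinel the field shows once a key has been saved:

chattd = op.LOP.op('ChatTD')
chattd.par.Apiserver = 'openai'   # switch the active provider

if chattd.par.Apikey.eval() != 'API KEY STORED':
    print('No key stored for this provider yet - paste one into the API Key field.')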

Select any built-in provider from the API Server menu: OpenAI, Anthropic, Gemini, Groq, OpenRouter, Ollama, LM Studio, or Custom.

  • Cloud providers (OpenAI, Anthropic, Gemini, Groq, OpenRouter, Cohere) require an API key.
  • Local providers (Ollama, LM Studio) do not require a key and connect to localhost by default.
  • The Custom option uses whatever URL is in the Custom URL field.

The Custom URL page has a Config sequence for registering additional OpenAI-compatible servers.

  1. On the Custom URL page, add a block to the Config sequence.
  2. Enter a Provider name (this appears in the API Server menu).
  3. Enter the server Url (e.g., http://localhost:8000/v1).
  4. Set the Prefix to match the provider format LiteLLM expects, or choose “None (No Prefix)” for direct proxies.
  5. The new provider will appear in the API Server dropdown immediately.
  6. If the server requires authentication, select the custom provider in API Server and paste the key into API Key — it will be stored just like built-in provider keys.
  7. Pulse Refresh Models to fetch the provider’s model list from its /v1/models endpoint.
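
The same registration can be done from a script. A sketch, assuming at least one block already exists in the Config sequence (add one on the Custom URL page first); the provider name and URL here are placeholders:

chattd = op.LOP.op('ChatTD')
chattd.par.Config0provider = 'my_vllm'              # name shown in the API Server menu
chattd.par.Config0url = 'http://localhost:8000/v1'  # OpenAI-compatible base URL
chattd.par.Config0prefix = 'none'                   # direct proxy, no LiteLLM prefix

# Fetch the server's model list from its /v1/models endpoint.
chattd.par.Refreshmodels.pulse()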

Pulse Refresh Models to fetch available models from all providers that have valid API keys. Model lists are stored in internal DAT tables with metadata columns for context length, pricing, and capabilities.

ChatTD enriches model metadata by cross-referencing OpenRouter’s model database. Even if you use Anthropic or Gemini directly, pricing and context length information can be filled in from OpenRouter data.

Pulse Open Model Info to open documentation for the currently selected model in your browser.

The Generation Defaults page provides global settings that any LOP operator can inherit when it does not override them locally:

  • Temperature, Maxtokens, Seed, Stopphrase, Jsonmode
  • Frequency Penalty and Presence Penalty (only applied when Set Penalty is enabled)
  • Image Detail (auto, low, or high) for vision requests
  • Use System Message and Systemmessage for a global system prompt

Individual operators override these defaults by passing explicit values in their API calls.
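
Defaults can also be set from a script; a sketch using the Generation Defaults scripting names from the parameter reference below:

chattd = op.LOP.op('ChatTD')
chattd.par.Temperature = 0.7
chattd.par.Maxtokens = 1024
chattd.par.Usesystemmessage = True
chattd.par.Systemmessage = 'Answer in short, plain sentences.'
chattd.par.Setpenalty = True          # penalties apply only while this is on
chattd.par.Frequencypenalty = 0.3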

LOPs operators do not call provider APIs directly. Instead, they call ChatTD’s Customapicall method, which:

  1. Resolves the provider and model from its own parameters or from values the calling operator passes
  2. Retrieves the API key from the Key Manager
  3. Adds the correct LiteLLM provider prefix to the model name
  4. Constructs the request with images, audio, tools, and all generation parameters
  5. Runs the call asynchronously through TDAsyncIO
  6. Handles streaming chunk delivery back to the calling operator’s callback
  7. Writes a JSON trace file for debugging and cost tracking
  8. Fires callbacks (onCustomGenerate, onCustomDone, onError) for monitoring
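
For orientation, a hedged sketch of a direct call from a script follows. Customapicall’s full signature is not documented on this page, so the conversation and temperature keyword arguments below are hypothetical placeholders; trace_api_call is the one argument confirmed by the v2.1.0 changelog:

chattd = op.LOP.op('ChatTD')

# Illustrative only - keyword names other than trace_api_call are assumptions,
# not ChatTD's confirmed API.
chattd.Customapicall(
    conversation=[{'role': 'user', 'content': 'Say hello in five words.'}],
    temperature=0.7,        # hypothetical per-call override of the global default
    trace_api_call=False,   # skip writing a JSON trace for this call
)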

API Key Manager: Stores and retrieves API keys per provider from a local JSON file. Keys persist across projects and TouchDesigner sessions. Pulse Clear All Stored Keys on the Config page to remove all saved keys.

Embedding Manager: Provides embedding models for operators like RAG Index. Supports OpenAI embeddings, Ollama local embeddings (nomic-embed-text), and HuggingFace sentence-transformers. Models are cached after first load.

Sentiment Manager: Provides sentiment analysis backends (VADER, TextBlob, Transformers, Stanza) for operators that need text sentiment scoring. Dependencies are installed on demand when a backend is first used.

ChatTD fires three operator-specific callbacks beyond the standard LOPs set:

  • onCustomGenerate: Fires when any API call is initiated. Useful for logging or UI updates when a request starts.
  • onCustomDone: Fires when an API call completes successfully. Contains cost and usage data for tracking spend.
  • onError: Fires when an API call fails. Contains the error type, status code, and the model that was used.

Enable or disable each callback individually on the Callbacks page using the toggle next to its name.
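
As a starting point, here is a sketch of an onCustomDone entry for the ChatTD_callbacks DAT that logs spend per call. The cost and usage keys are those listed in the Example Callback Structure at the end of this page; the dict-style access is an assumption about how info is delivered:

def onCustomDone(info):
    # Log provider, model, and cost after every successful call.
    print('model:', info.get('model'), '| provider:', info.get('provider'))
    print('cost:', info.get('cost'), '| usage:', info.get('usage'))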

“LiteLLM Unavailable” errors on every API call

LiteLLM is not installed or not importable. Pulse Install / Modify LOPs on the Config page and choose “Check/Install Requirement” to reinstall it. If the problem persists, check the TouchDesigner textport for detailed import errors — a common cause is conflicting system Python packages. Go to Edit > Preferences > General and uncheck “Add Python Site-Packages”, then restart TouchDesigner.

If an API key will not save: the key must meet a minimum length (10 characters for most providers, 30 for Gemini). If the field reverts to “NO KEY - ENTER HERE”, the key was too short or empty. Paste the full key and press Enter.

If a provider’s models do not appear: make sure the provider has a valid API key stored (cloud providers) or that the local server is running (Ollama, LM Studio). Then pulse Refresh Models and check the textport for fetch errors.

If a custom provider fails to connect or list models: verify the URL is correct and the server is running. The endpoint must respond to a GET request at /v1/models in OpenAI-compatible format. If the server requires authentication, make sure you have stored an API key for the custom provider (select it in API Server and paste the key into API Key). Check that the provider name in the Config sequence does not conflict with a built-in provider name.
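
A quick way to test the endpoint outside of ChatTD is to request /v1/models directly from the textport; the URL below is a placeholder for your server’s base URL:

import json, urllib.request

url = 'http://localhost:8000/v1/models'
with urllib.request.urlopen(url, timeout=5) as resp:
    payload = json.loads(resp.read().decode('utf-8'))

# An OpenAI-compatible server returns {"object": "list", "data": [{"id": ...}, ...]}.
print([m.get('id') for m in payload.get('data', [])])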

ChatTD uses the Google GenAI REST API to fetch Gemini models. If this fails, you may need to install the google-genai package. Pulse Install / Modify LOPs or use the Gemini-specific install dialog that appears automatically when the SDK is missing.

Set Verbose LiteLLM on the Config page to INFO or DEBUG to see detailed request/response logging from LiteLLM in the textport. Set it to OFF to suppress all LiteLLM output.
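
For example, from the textport while tracking down a failing call:

chattd = op.LOP.op('ChatTD')
chattd.par.Verboselitellm = 'DEBUG'   # full request/response logging
# ...reproduce the failing request and read the textport output...
chattd.par.Verboselitellm = 'OFF'     # silence LiteLLM again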

Python Venv Base Folder (Pythonvenv) op('chattd').par.Pythonvenv Folder
Default:
"" (Empty String)
API Key (Apikey) op('chattd').par.Apikey Str
Default:
"" (Empty String)
Get API Key (Openbrowserapi) op('chattd').par.Openbrowserapi Pulse
Default:
False
API Server (Apiserver) op('chattd').par.Apiserver StrMenu
Default:
"" (Empty String)
Menu Options:
  • openrouter (openrouter)
  • openai (openai)
  • groq (groq)
  • ollama (ollama)
  • gemini (gemini)
  • lmstudio (lmstudio)
  • custom (custom)
  • anthropic (anthropic)
  • llamafile (llamafile)
  • vllm (vllm)
  • my_server (my_server)
AI Model (Model) op('chattd').par.Model StrMenu
Default:
"" (Empty String)
Menu Options:
  • gpt-oss-120b (openai/gpt-oss-120b)
  • gpt-oss-20b (openai/gpt-oss-20b)
  • gpt-oss-safeguard-20b (openai/gpt-oss-safeguard-20b)
  • llama-3.1-8b-instant (llama-3.1-8b-instant)
  • llama-3.3-70b-versatile (llama-3.3-70b-versatile)
  • kimi-k2-instruct-0905 (moonshotai/kimi-k2-instruct-0905)
  • qwen3-32b (qwen/qwen3-32b)
  • allam-2-7b (allam-2-7b)
  • orpheus-arabic-saudi (canopylabs/orpheus-arabic-saudi)
  • orpheus-v1-english (canopylabs/orpheus-v1-english)
  • compound (groq/compound)
  • compound-mini (groq/compound-mini)
  • llama-4-scout-17b-16e-instruct (meta-llama/llama-4-scout-17b-16e-instruct)
  • llama-prompt-guard-2-22m (meta-llama/llama-prompt-guard-2-22m)
  • llama-prompt-guard-2-86m (meta-llama/llama-prompt-guard-2-86m)
Refresh Models (Refreshmodels) op('chattd').par.Refreshmodels Pulse

Refreshes the list of available models from all configured providers.

Default:
False
Search Models (Search) op('chattd').par.Search Toggle
Default:
False
Model Search (Modelsearch) op('chattd').par.Modelsearch Str
Default:
"" (Empty String)
Custom URL (Customurl) op('chattd').par.Customurl Str
Default:
"" (Empty String)
Install / Modify LOPs (Installlops) op('chattd').par.Installlops Pulse
Default:
False
Unlink Python Venv (Unlinkpythonvenv) op('chattd').par.Unlinkpythonvenv Pulse
Default:
False
Clear All Stored Keys (Clearkeys) op('chattd').par.Clearkeys Pulse
Default:
False
Verbose LiteLLM (Verboselitellm) op('chattd').par.Verboselitellm Menu
Default:
OFF
Options:
OFF, INFO, DEBUG
Open Model Info (Openmodelinfo) op('chattd').par.Openmodelinfo Pulse

Opens browser to documentation for the currently selected model.

Default:
False
Use System Message (Usesystemmessage) op('chattd').par.Usesystemmessage Toggle
Default:
False
Systemmessage (Systemmessage) op('chattd').par.Systemmessage Str
Default:
"" (Empty String)
Temperature (Temperature) op('chattd').par.Temperature Float
Default:
0.0
Range:
0 to 2
Slider Range:
0 to 1
Maxtokens (Maxtokens) op('chattd').par.Maxtokens Int
Default:
0
Range:
1 to 1
Slider Range:
0 to 1
Seed (Seed) op('chattd').par.Seed Int
Default:
0
Range:
0 to 1
Slider Range:
0 to 1
Stopphrase (Stopphrase) op('chattd').par.Stopphrase Str
Default:
"" (Empty String)
Jsonmode (Jsonmode) op('chattd').par.Jsonmode Toggle
Default:
False
Set Penalty (Setpenalty) op('chattd').par.Setpenalty Toggle
Default:
False
Frequencypenalty (Frequencypenalty) op('chattd').par.Frequencypenalty Float
Default:
0.0
Range:
-2 to 2
Slider Range:
0 to 1
Presencepenalty (Presencepenalty) op('chattd').par.Presencepenalty Float
Default:
0.0
Range:
-2 to 2
Slider Range:
0 to 1
Image Detail (Imagedetail) op('chattd').par.Imagedetail Menu
Default:
auto
Options:
auto, low, high
Config (Config) op('chattd').par.Config Sequence
Default:
0
Provider (Config0provider) op('chattd').par.Config0provider Str
Default:
"" (Empty String)
Url (Config0url) op('chattd').par.Config0url Str
Default:
"" (Empty String)
Prefix (Config0prefix) op('chattd').par.Config0prefix StrMenu
Default:
none
Menu Options:
  • None (No Prefix) (none)
  • OpenAI (openai/)
  • Anthropic (anthropic/)
  • Gemini (gemini/)
  • Groq (groq/)
  • Cohere (cohere/)
  • Mistral (mistral/)
  • Ollama (ollama_chat/)
  • LM Studio (lm_studio/)
  • OpenRouter (openrouter/)
Provider (Config1provider) op('chattd').par.Config1provider Str
Default:
"" (Empty String)
Url (Config1url) op('chattd').par.Config1url Str
Default:
"" (Empty String)
Prefix (Config1prefix) op('chattd').par.Config1prefix StrMenu
Default:
none
Menu Options:
  • None (No Prefix) (none)
  • OpenAI (openai/)
  • Anthropic (anthropic/)
  • Gemini (gemini/)
  • Groq (groq/)
  • Cohere (cohere/)
  • Mistral (mistral/)
  • Ollama (ollama_chat/)
  • LM Studio (lm_studio/)
  • OpenRouter (openrouter/)
Provider (Config2provider) op('chattd').par.Config2provider Str
Default:
"" (Empty String)
Url (Config2url) op('chattd').par.Config2url Str
Default:
"" (Empty String)
Prefix (Config2prefix) op('chattd').par.Config2prefix StrMenu
Default:
none
Menu Options:
  • None (No Prefix) (none)
  • OpenAI (openai/)
  • Anthropic (anthropic/)
  • Gemini (gemini/)
  • Groq (groq/)
  • Cohere (cohere/)
  • Mistral (mistral/)
  • Ollama (ollama_chat/)
  • LM Studio (lm_studio/)
  • OpenRouter (openrouter/)
Provider (Config3provider) op('chattd').par.Config3provider Str
Default:
"" (Empty String)
Url (Config3url) op('chattd').par.Config3url Str
Default:
"" (Empty String)
Prefix (Config3prefix) op('chattd').par.Config3prefix StrMenu
Default:
none
Menu Options:
  • None (No Prefix) (none)
  • OpenAI (openai/)
  • Anthropic (anthropic/)
  • Gemini (gemini/)
  • Groq (groq/)
  • Cohere (cohere/)
  • Mistral (mistral/)
  • Ollama (ollama_chat/)
  • LM Studio (lm_studio/)
  • OpenRouter (openrouter/)
Provider (Config4provider) op('chattd').par.Config4provider Str
Default:
"" (Empty String)
Url (Config4url) op('chattd').par.Config4url Str
Default:
"" (Empty String)
Prefix (Config4prefix) op('chattd').par.Config4prefix StrMenu
Default:
none
Menu Options:
  • None (No Prefix) (none)
  • OpenAI (openai/)
  • Anthropic (anthropic/)
  • Gemini (gemini/)
  • Groq (groq/)
  • Cohere (cohere/)
  • Mistral (mistral/)
  • Ollama (ollama_chat/)
  • LM Studio (lm_studio/)
  • OpenRouter (openrouter/)
Callbacks Header
Callback DAT (Callbackdat) op('chattd').par.Callbackdat DAT
Default:
ChatTD_callbacks
Edit Callbacks (Editcallbacksscript) op('chattd').par.Editcallbacksscript Pulse
Default:
False
Create Callbacks (Createpulse) op('chattd').par.Createpulse Pulse
Default:
False
onError (Onerror) op('chattd').par.Onerror Toggle
Default:
True
onCustomGenerate (Oncustomgenerate) op('chattd').par.Oncustomgenerate Toggle
Default:
False
onCustomDone (Oncustomdone) op('chattd').par.Oncustomdone Toggle
Default:
False
Textport Debug Callbacks (Debugcallbacks) op('chattd').par.Debugcallbacks Menu
Default:
Full Details
Options:
None, Errors Only, Basic Info, Full Details
Available Callbacks:
  • onCustomGenerate
  • onCustomDone
  • onError
Example Callback Structure:
def onCustomGenerate(info):
    # Called when an API call is initiated through ChatTD
    # info contains: call_id, model_used, request_start_time, conversation
    pass

def onCustomDone(info):
    # Called when an API call completes successfully
    # info contains: call_id, response_time, model, provider, cost, usage
    pass

def onError(info):
    # Called when an API call encounters an error
    # info contains: error (dict with message, type, code, response_time, model, call_id, is_streaming)
    pass
v2.1.0 (2026-03-16)
  • Replace sys-python-manager dependency with op-python-manager in extends
    - python_manager is baked into ChatTD, not a separate system submodule
  • Fix system message not surviving callback chain to follow-up calls
    - Remove custom_models_table from init, use models/custom_models DAT directly
    - Add API key auth header for custom provider model fetching
    - Add tool_choice removal for Responses API (both streaming and sync paths)
    - Clean sync_responses_wrapper to strip tool_choice before litellm.responses()
    - Call Verboselitellm() on init
    - Remove stale TODO comments and section delimiter formatting
    - Remove commented-out legacy code blocks
  • Add trace_api_call parameter to Customapicall (default None for backwards compat)
    - Skip trace file creation when trace_api_call is False
    - Wrap all trace writes in conditional checks
    - Avoids deepcopy and file I/O overhead when tracing disabled
  • Apiserver dropdown includes custom providers from Config sequence
    - Users can now select and store API keys for custom providers
  • Fetch models from custom provider /models endpoint
    - Store custom models in shared custom_models table
    - get_models_with_metadata handles custom providers from Config sequence
    - get_dynamic_models_for_provider filters custom_models by provider name
    - Agent dropdown now populates with custom provider models
    - ResetOp clears custom_models table
  • Add custom provider config with prefix and URL from Config sequence
    - Dynamic provider list includes custom providers in dropdown
    - Support 'none' prefix for direct model names (no LiteLLM routing prefix)
    - Add custom_llm_provider parameter for OpenAI-compatible format
    - Auto-set placeholder API key for local servers without auth
  • Initial chattd structure with embedded managers
  • Initial commit