ChatTD
Overview
ChatTD is the central system component of LOPs. Every operator in the ecosystem connects to it for API key management, model selection, Python package installation, and LLM API calls. It is automatically discovered by all LOPs operators and does not need to be wired to anything — it is always present at op.LOP.op('ChatTD') when LOPs is installed.
Key Features
- Multi-provider support: OpenAI, Anthropic, Google Gemini, Groq, OpenRouter, Ollama, LM Studio, Cohere, and custom OpenAI-compatible endpoints
- Centralized API key storage: Keys are stored per-provider in a local JSON file and persist across projects
- Dynamic model lists: Fetches available models from each provider’s API, with metadata enrichment from OpenRouter for pricing and context length
- Custom provider configuration: Add any OpenAI-compatible server (vLLM, LiteLLM proxy, llamafile) via the Config sequence on the Custom URL page
- Unified API routing: All LOPs operators route their LLM calls through ChatTD's Customapicall method, which handles provider prefixes, model detection, streaming, tool calls, and response normalization
- Generation defaults: Global temperature, max tokens, penalties, and system message that any operator can inherit
- Python environment management: Creates and manages a shared virtual environment with LiteLLM and other dependencies
- Embedded sub-managers: API Key Manager, Embedding Manager, and Sentiment Manager for specialized tasks
- API call tracing: Writes JSON trace logs for every LLM call to the LOP_history/traces/ folder inside the Python venv directory
- Responses API support: Automatically detects GPT-5 and O-series reasoning models and routes them through OpenAI's Responses API instead of Chat Completions
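Because each call gets its own JSON trace file, spend can be totaled offline. The sketch below assumes a flat layout with a numeric "cost" field per trace, based on the fields listed for the onCustomDone callback; ChatTD's actual trace schema is not documented here, so treat the file names and keys as illustrative.

```python
import json
import pathlib
import tempfile

# Hypothetical trace files: one JSON document per API call in
# LOP_history/traces/. The "model" and "cost" keys are assumptions
# drawn from the onCustomDone info payload, not a documented schema.
traces = pathlib.Path(tempfile.mkdtemp()) / "LOP_history" / "traces"
traces.mkdir(parents=True)
(traces / "call_001.json").write_text(json.dumps({"model": "gpt-4o", "cost": 0.0021}))
(traces / "call_002.json").write_text(json.dumps({"model": "gpt-4o", "cost": 0.0013}))

# Sum the cost field across every trace file in the folder.
total = sum(json.loads(p.read_text()).get("cost", 0.0) for p in traces.glob("*.json"))
print(f"total spend: ${total:.4f}")
```

In a real project you would point `traces` at the LOP_history/traces/ folder inside your Python venv directory instead of a temp directory.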
Requirements
- Python 3.11: Required for the virtual environment. Use the built-in installer dialog to download it if needed.
- LiteLLM: The core Python dependency for multi-provider API routing. Installed automatically during setup.
First-Time Setup
- Pulse Install / Modify LOPs on the Config page.
- In the dialog, choose Setup / Install to select a folder for the Python virtual environment.
- The installer creates a venv, installs LiteLLM, and sets the Python Venv Base Folder parameter.
- Once complete, the venv path is saved to your system’s AppData and will be detected automatically in future projects.
After installation, add API keys for the providers you want to use:
- Select a provider from the API Server menu on the Config page.
- Paste your API key into the API Key field. It will be stored securely and the field will show “API KEY STORED”.
- Pulse Get API Key to open the provider’s key management page in your browser if you need to create one.
Provider Configuration
Built-in Providers
Select any built-in provider from the API Server menu: OpenAI, Anthropic, Gemini, Groq, OpenRouter, Ollama, LM Studio, or Custom.
- Cloud providers (OpenAI, Anthropic, Gemini, Groq, OpenRouter, Cohere) require an API key.
- Local providers (Ollama, LM Studio) do not require a key and connect to localhost by default.
- The Custom option uses whatever URL is in the Custom URL field.
Adding Custom Providers
The Custom URL page has a Config sequence for registering additional OpenAI-compatible servers.
- On the Custom URL page, add a block to the Config sequence.
- Enter a Provider name (this appears in the API Server menu).
- Enter the server Url (e.g., http://localhost:8000/v1).
- Set the Prefix to match the provider format LiteLLM expects, or choose "None (No Prefix)" for direct proxies.
- The new provider will appear in the API Server dropdown immediately.
- If the server requires authentication, select the custom provider in API Server and paste the key into API Key — it will be stored just like built-in provider keys.
- Pulse Refresh Models to fetch the provider's model list from its /v1/models endpoint.
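The /v1/models endpoint that Refresh Models queries returns the standard OpenAI-compatible list shape. A minimal sketch of extracting model ids from that response, using a canned payload in place of a live GET against the server:

```python
import json

# Example OpenAI-compatible /v1/models response body. A real fetch would
# GET <Custom URL>/models, with an Authorization header if the server
# requires a key; here the payload is hard-coded for illustration.
raw = json.dumps({
    "object": "list",
    "data": [
        {"id": "llama-3.1-8b-instruct", "object": "model"},
        {"id": "qwen2.5-7b-instruct", "object": "model"},
    ],
})

# Model ids live in the "id" field of each entry under "data".
model_ids = [m["id"] for m in json.loads(raw)["data"]]
print(model_ids)
```

Any server that returns this shape (vLLM, LiteLLM proxy, llamafile) should populate the model dropdown after a refresh.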
Model Lists
Pulse Refresh Models to fetch available models from all providers that have valid API keys. Model lists are stored in internal DAT tables with metadata columns for context length, pricing, and capabilities.
ChatTD enriches model metadata by cross-referencing OpenRouter’s model database. Even if you use Anthropic or Gemini directly, pricing and context length information can be filled in from OpenRouter data.
Pulse Open Model Info to open documentation for the currently selected model in your browser.
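The enrichment step can be pictured as a merge keyed on model id: missing fields in the locally fetched list are filled from an OpenRouter-style table. Field names and the suffix-matching rule below are illustrative, not ChatTD's actual schema:

```python
# Locally fetched model with metadata gaps (illustrative field names).
local_models = {"claude-sonnet-4": {"context_length": None, "prompt_price": None}}

# OpenRouter-style table, keyed by "provider/model" ids.
openrouter = {
    "anthropic/claude-sonnet-4": {"context_length": 200000, "prompt_price": 3.0},
}

for model_id, meta in local_models.items():
    for or_id, or_meta in openrouter.items():
        # Match the local id against the part after OpenRouter's provider prefix.
        if or_id.split("/", 1)[-1] == model_id:
            for key, value in or_meta.items():
                if meta.get(key) is None:  # only fill gaps, never overwrite
                    meta[key] = value

print(local_models["claude-sonnet-4"])
```

This is why direct Anthropic or Gemini setups still show pricing columns: the data comes from OpenRouter's database even when the calls do not.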
Generation Defaults
The Generation Defaults page provides global settings that any LOPs operator can inherit when it does not override them locally:
- Temperature, Maxtokens, Seed, Stopphrase, Jsonmode
- Frequency Penalty and Presence Penalty (only applied when Set Penalty is enabled)
- Image Detail (auto, low, or high) for vision requests
- Use System Message and Systemmessage for a global system prompt
Individual operators override these defaults by passing explicit values in their API calls.
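The inheritance rule amounts to a layered dictionary merge: ChatTD's defaults form the base, and any value the operator passes explicitly wins. A minimal sketch, with illustrative parameter names rather than ChatTD's internal ones:

```python
# Global Generation Defaults held by ChatTD (illustrative values).
chattd_defaults = {"temperature": 0.7, "max_tokens": 1024, "seed": None}

# Only the values this particular operator sets locally.
operator_overrides = {"temperature": 0.2}

# Later dict wins in a merge; unset (None) overrides are dropped so they
# fall back to the global default instead of clobbering it.
effective = {
    **chattd_defaults,
    **{k: v for k, v in operator_overrides.items() if v is not None},
}
print(effective)
```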
How Operators Use ChatTD
LOPs operators do not call provider APIs directly. Instead, they call ChatTD's Customapicall method, which:
- Resolves the provider and model from its own parameters or from values the calling operator passes
- Retrieves the API key from the Key Manager
- Adds the correct LiteLLM provider prefix to the model name
- Constructs the request with images, audio, tools, and all generation parameters
- Runs the call asynchronously through TDAsyncIO
- Handles streaming chunk delivery back to the calling operator’s callback
- Writes a JSON trace file for debugging and cost tracking
- Fires callbacks (onCustomGenerate, onCustomDone, onError) for monitoring
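The prefixing step in that pipeline follows LiteLLM's "provider/model" naming convention: cloud and local providers get a routing prefix, while a custom provider configured with "None (No Prefix)" passes the model name through untouched. The resolver below is a sketch of that idea, not ChatTD's actual code:

```python
# Illustrative provider-to-prefix table. The prefix strings follow LiteLLM's
# documented "provider/model" routing convention; None models the
# "None (No Prefix)" option for direct OpenAI-compatible proxies.
PREFIXES = {
    "anthropic": "anthropic",
    "gemini": "gemini",
    "groq": "groq",
    "openrouter": "openrouter",
    "ollama": "ollama",
    "custom_proxy": None,
}

def litellm_model_name(provider: str, model: str) -> str:
    """Prepend the LiteLLM routing prefix, or pass through for direct proxies."""
    prefix = PREFIXES.get(provider)
    return f"{prefix}/{model}" if prefix else model

print(litellm_model_name("anthropic", "claude-sonnet-4"))
print(litellm_model_name("custom_proxy", "llama-3.1-8b-instruct"))
```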
Sub-Managers
API Key Manager
Stores and retrieves API keys per provider from a local JSON file. Keys persist across projects and TouchDesigner sessions. Pulse Clear All Stored Keys on the Config page to remove all saved keys.
Embedding Manager
Provides embedding models for operators like RAG Index. Supports OpenAI embeddings, Ollama local embeddings (nomic-embed-text), and HuggingFace sentence-transformers. Models are cached after first load.
Sentiment Manager
Provides sentiment analysis backends (VADER, TextBlob, Transformers, Stanza) for operators that need text sentiment scoring. Dependencies are installed on demand when a backend is first used.
Callbacks
ChatTD fires three operator-specific callbacks beyond the standard LOPs set:
- onCustomGenerate: Fires when any API call is initiated. Useful for logging or UI updates when a request starts.
- onCustomDone: Fires when an API call completes successfully. Contains cost and usage data for tracking spend.
- onError: Fires when an API call fails. Contains the error type, status code, and the model that was used.
Enable or disable each callback individually on the Callbacks page using the toggle next to its name.
Troubleshooting
"LiteLLM Unavailable" errors on every API call
LiteLLM is not installed or not importable. Pulse Install / Modify LOPs on the Config page and choose "Check/Install Requirement" to reinstall it. If the problem persists, check the TouchDesigner textport for detailed import errors — a common cause is conflicting system Python packages. Go to Edit > Preferences > General and uncheck "Add Python Site-Packages", then restart TouchDesigner.
API key not recognized after entering it
The key must meet a minimum length (10 characters for most providers, 30 for Gemini). If the field reverts to "NO KEY - ENTER HERE", the key was too short or empty. Paste the full key and press Enter.
Model list is empty for a provider
Make sure the provider has a valid API key stored (cloud providers) or that the local server is running (Ollama, LM Studio). Then pulse Refresh Models. Check the textport for fetch errors.
Custom provider models not appearing
Verify the URL is correct and the server is running. The endpoint must respond to a GET request at /v1/models in OpenAI-compatible format. If the server requires authentication, make sure you have stored an API key for the custom provider (select it in API Server and paste the key into API Key). Check that the provider name in the Config sequence does not conflict with a built-in provider name.
Gemini models not loading
ChatTD uses the Google GenAI REST API to fetch Gemini models. If this fails, you may need to install the google-genai package. Pulse Install / Modify LOPs or use the Gemini-specific install dialog that appears automatically when the SDK is missing.
Verbose LiteLLM logging
Set Verbose LiteLLM on the Config page to INFO or DEBUG to see detailed request/response logging from LiteLLM in the textport. Set it to OFF to suppress all LiteLLM output.
Parameters
Config
Section titled “Config”op('chattd').par.Pythonvenv Folder - Default:
"" (Empty String)
op('chattd').par.Apikey Str - Default:
"" (Empty String)
op('chattd').par.Openbrowserapi Pulse - Default:
False
op('chattd').par.Refreshmodels Pulse Refreshes the list of available models from all configured providers.
- Default:
False
op('chattd').par.Search Toggle - Default:
False
op('chattd').par.Modelsearch Str - Default:
"" (Empty String)
op('chattd').par.Customurl Str - Default:
"" (Empty String)
op('chattd').par.Installlops Pulse - Default:
False
op('chattd').par.Unlinkpythonvenv Pulse - Default:
False
op('chattd').par.Clearkeys Pulse - Default:
False
op('chattd').par.Openmodelinfo Pulse Opens browser to documentation for the currently selected model.
- Default:
False
Generation Defaults
op('chattd').par.Usesystemmessage (Toggle) - Default: False
op('chattd').par.Systemmessage (Str) - Default: "" (Empty String)
op('chattd').par.Temperature (Float) - Default: 0.0 - Range: 0 to 2 - Slider Range: 0 to 1
op('chattd').par.Maxtokens (Int) - Default: 0 - Range: 1 to 1 - Slider Range: 0 to 1
op('chattd').par.Seed (Int) - Default: 0 - Range: 0 to 1 - Slider Range: 0 to 1
op('chattd').par.Stopphrase (Str) - Default: "" (Empty String)
op('chattd').par.Jsonmode (Toggle) - Default: False
op('chattd').par.Setpenalty (Toggle) - Default: False
op('chattd').par.Frequencypenalty (Float) - Default: 0.0 - Range: -2 to 2 - Slider Range: 0 to 1
op('chattd').par.Presencepenalty (Float) - Default: 0.0 - Range: -2 to 2 - Slider Range: 0 to 1
Custom URL
op('chattd').par.Config (Sequence) - Default: 0
op('chattd').par.Config0provider (Str) - Default: "" (Empty String)
op('chattd').par.Config0url (Str) - Default: "" (Empty String)
op('chattd').par.Config1provider (Str) - Default: "" (Empty String)
op('chattd').par.Config1url (Str) - Default: "" (Empty String)
op('chattd').par.Config2provider (Str) - Default: "" (Empty String)
op('chattd').par.Config2url (Str) - Default: "" (Empty String)
op('chattd').par.Config3provider (Str) - Default: "" (Empty String)
op('chattd').par.Config3url (Str) - Default: "" (Empty String)
op('chattd').par.Config4provider (Str) - Default: "" (Empty String)
op('chattd').par.Config4url (Str) - Default: "" (Empty String)
Callbacks
op('chattd').par.Callbackdat (DAT) - Default: ChatTD_callbacks
op('chattd').par.Editcallbacksscript (Pulse) - Default: False
op('chattd').par.Createpulse (Pulse) - Default: False
op('chattd').par.Onerror (Toggle) - Default: True
op('chattd').par.Oncustomgenerate (Toggle) - Default: False
op('chattd').par.Oncustomdone (Toggle) - Default: False
Callbacks
def onCustomGenerate(info):
    # Called when an API call is initiated through ChatTD
    # info contains: call_id, model_used, request_start_time, conversation
    pass

def onCustomDone(info):
    # Called when an API call completes successfully
    # info contains: call_id, response_time, model, provider, cost, usage
    pass

def onError(info):
    # Called when an API call encounters an error
    # info contains: error (dict with message, type, code, response_time, model, call_id, is_streaming)
    pass

Changelog
v2.1.0 (2026-03-16)
- Replace sys-python-manager dependency with op-python-manager in extends
- python_manager is baked into ChatTD, not a separate system submodule
- Fix system message not surviving callback chain to follow-up calls
- Remove custom_models_table from init, use models/custom_models DAT directly
- Add API key auth header for custom provider model fetching
- Add tool_choice removal for Responses API (both streaming and sync paths)
- Clean sync_responses_wrapper to strip tool_choice before litellm.responses()
- Call Verboselitellm() on init
- Remove stale TODO comments and section delimiter formatting
- Remove commented-out legacy code blocks
- Add trace_api_call parameter to Customapicall (default None for backwards compat)
- Skip trace file creation when trace_api_call is False
- Wrap all trace writes in conditional checks
- Avoids deepcopy and file I/O overhead when tracing disabled
- Apiserver dropdown includes custom providers from Config sequence
- Users can now select and store API keys for custom providers
- Fetch models from custom provider /models endpoint
- Store custom models in shared custom_models table
- get_models_with_metadata handles custom providers from Config sequence
- get_dynamic_models_for_provider filters custom_models by provider name
- Agent dropdown now populates with custom provider models
- ResetOp clears custom_models table
- Add custom provider config with prefix and URL from Config sequence
- Dynamic provider list includes custom providers in dropdown
- Support 'none' prefix for direct model names (no LiteLLM routing prefix)
- Add custom_llm_provider parameter for OpenAI-compatible format
- Auto-set placeholder API key for local servers without auth
- Initial chattd structure with embedded managers
- Initial commit