
Token Count

The Token Count LOP allows you to quickly estimate the number of tokens in a given text string. This is crucial when working with Large Language Models (LLMs) as most APIs charge based on token usage. By accurately predicting token counts, you can manage costs, optimize prompt lengths, and ensure your applications stay within budget. The operator supports different tokenizers to match various LLM models.
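The counting itself can be sketched in plain Python. This is a hedged sketch of the general technique, not the operator's actual implementation: it assumes the operator wraps the tiktoken library (listed in the requirements below), and falls back to a rough 4-characters-per-token heuristic when tiktoken or its downloaded encoding data is unavailable.

```python
def count_tokens(text: str, encoding_name: str = "cl100k_base") -> int:
    """Estimate the number of tokens in `text`.

    Uses tiktoken when available; otherwise falls back to a common
    rule of thumb of roughly 4 characters per token for English text.
    """
    try:
        import tiktoken  # pip install tiktoken
        return len(tiktoken.get_encoding(encoding_name).encode(text))
    except Exception:
        # Rough heuristic fallback (no tiktoken / no network access).
        return max(1, len(text) // 4) if text else 0

print(count_tokens("Hello, how are you today?"))
```

The exact count depends on the tokenizer chosen, which is why the operator exposes a Tokenizer menu: a cl100k_base count will not match a LLaMA tokenizer's count for the same string.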

  • Python packages:
    • tiktoken (automatically installed by TouchDesigner if missing)
  • No additional setup required
  • in1: (Optional) DAT table containing conversation data, used when ‘Text Input Source’ is set to ‘input_dat’, ‘last_user’, ‘last_assistant’, ‘last_system’, or ‘all_conversation’. The table should have ‘role’ and ‘message’ columns.
  • in2: (Optional) Text DAT, used when ‘Text Input Source’ is set to ‘input_dat’.

Outputs: None (the token count is displayed as a parameter value)

Token Count (Tokencount) op('token_count').par.Tokencount Int

Displays the calculated number of tokens in the input text. This value is updated when the 'Count' pulse parameter is triggered or when the input text changes and 'onChange' is enabled.

Default:
246
onChange (Onchange) op('token_count').par.Onchange Toggle

When enabled, automatically recalculates the token count whenever the input text changes.

Default:
1
Count (Count) op('token_count').par.Count Pulse

Manually triggers the token counting process. This is useful when 'onChange' is disabled or when you want to update the count on demand.

Default:
None
Tokenizer (Tokenizer) op('token_count').par.Tokenizer Menu

Specifies the tokenizer to use for counting tokens. The cl100k_base option is recommended for OpenAI models, while llama is recommended for LLaMA models.

Default:
cl100k_base
Options:
  • cl100k_base: GPT-3.5/4 (cl100k)
  • llama: LLaMA
Text to Count (Text) op('token_count').par.Text String

The text string to be tokenized and counted. This parameter is used when 'Text Input Source' is set to 'Parameter'.

Default:
"" (Empty String)
Text Input Source (Textinput) op('token_count').par.Textinput Menu

Determines the source of the text to be counted. When using conversation options, expects a DAT with 'role' and 'message' columns.

Default:
all_conversation
Options:
  • parameter: Uses the Text parameter
  • input_dat: Text from the connected input DAT
  • last_user: Last user message
  • last_assistant: Last assistant message
  • last_system: Last system message
  • all_conversation: All messages
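The source-selection logic can be sketched as follows. This is an assumption about how the operator resolves its text internally, not its actual code: the conversation DAT is modeled as a list of `(role, message)` rows, and the `input_dat` case is omitted because it simply reads the connected Text DAT's contents.

```python
def resolve_text(source: str, param_text: str, rows) -> str:
    """Pick the text to tokenize based on the 'Text Input Source' menu.

    `rows` models the in1 conversation table as (role, message) pairs.
    """
    if source == "parameter":
        return param_text
    if source == "all_conversation":
        return "\n".join(msg for _role, msg in rows)
    if source.startswith("last_"):
        wanted = source[len("last_"):]  # 'user', 'assistant', or 'system'
        for role, msg in reversed(rows):
            if role == wanted:
                return msg
    return ""
```

For example, with a history ending in a user message, `resolve_text("last_user", "", rows)` returns that final user message.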
Bypass (Bypass) op('token_count').par.Bypass Toggle

When enabled, bypasses the token counting operation.

Default:
0
Show Built-in Parameters (Showbuiltin) op('token_count').par.Showbuiltin Toggle

Toggles the visibility of built-in parameters.

Default:
0
Version (Version) op('token_count').par.Version String

Displays the version of the operator.

Default:
1.0.0
Last Updated (Lastupdated) op('token_count').par.Lastupdated String

Displays the last updated date.

Default:
2025-01-30
Creator (Creator) op('token_count').par.Creator String

Displays the creator of the operator.

Default:
dotsimulate
Website (Website) op('token_count').par.Website String

Displays the website of the creator.

Default:
https://dotsimulate.com
ChatTD Operator (Chattd) op('token_count').par.Chattd OP

References the ChatTD operator.

Default:
/dot_lops/ChatTD
  • The tiktoken library is required for tokenization. Ensure it’s installed in your TouchDesigner environment.
  • Using the ‘onChange’ parameter can impact performance if the input text changes frequently. Consider disabling it and using the ‘Count’ pulse for manual updates in such cases.
  • When using ‘all_conversation’ as the input source, the token count calculation may take longer for large conversation histories.
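The cost noted above grows because `all_conversation` tokenizes every row of the table. A minimal sketch of that mode, with a whitespace split standing in for the real tokenizer (an assumption for illustration only):

```python
def conversation_tokens(rows) -> int:
    """Sum token counts over a (role, message) conversation table.

    The whitespace tokenizer here is a stand-in; the operator uses a
    real tokenizer such as tiktoken's cl100k_base.
    """
    count = lambda text: len(text.split())  # stand-in tokenizer
    return sum(count(msg) for _role, msg in rows)

history = [
    ("system", "You are a helpful assistant."),
    ("user", "Hello, how are you today?"),
    ("assistant", "I'm doing well, thanks for asking!"),
]
total = conversation_tokens(history)
```

Because the work scales with the number of rows, pairing `all_conversation` with a disabled 'onChange' and a manual 'Count' pulse avoids re-tokenizing the full history on every change.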
# Example 1: count tokens from the Text parameter
op('token_count').par.Textinput = 'parameter'
op('token_count').par.Text = 'Hello, how are you today?'
# Enable auto-update...
op('token_count').par.Onchange = 1
# ...or trigger the count manually
op('token_count').par.Count.pulse()
# Read back the result
token_count = op('token_count').par.Tokencount.eval()

# Example 2: count tokens from a Text DAT connected to in2
text_dat = op('text_input')  # an existing Text DAT
text_dat.text = 'Text to analyze'
op('token_count').inputConnectors[1].connect(text_dat)
op('token_count').par.Textinput = 'input_dat'
op('token_count').par.Count.pulse()

# Example 3: count an entire conversation DAT connected to in1
# (a table with 'role' and 'message' columns)
op('token_count').inputConnectors[0].connect(op('conversation_dat'))
op('token_count').par.Textinput = 'all_conversation'
op('token_count').par.Tokenizer = 'cl100k_base'  # for OpenAI models
op('token_count').par.Count.pulse()
  • Estimating the cost of using LLM APIs
  • Limiting prompt lengths to fit within model context windows
  • Monitoring token usage in real-time applications
  • Optimizing prompts for better performance and cost efficiency
  • Analyzing conversation history for token usage patterns
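For the context-window use case above, a token count feeds directly into history trimming. A hedged sketch under the rough 4-characters-per-token heuristic (swap in a real tokenizer such as tiktoken for exact counts):

```python
def estimate_tokens(text: str) -> int:
    """Rough estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4) if text else 0

def trim_to_budget(messages, budget: int):
    """Keep the most recent (role, message) pairs whose combined
    estimated token count fits within `budget`."""
    kept, used = [], 0
    for role, msg in reversed(messages):
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break
        kept.append((role, msg))
        used += cost
    return list(reversed(kept))
```

Walking the history newest-first means the freshest context survives when the budget is tight, which is usually the right trade-off for chat prompts.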