
Token Count

The Token Count LOP allows you to quickly estimate the number of tokens in a given text string. This is crucial when working with Large Language Models (LLMs) as most APIs charge based on token usage. By accurately predicting token counts, you can manage costs, optimize prompt lengths, and ensure your applications stay within budget. The operator supports different tokenizers to match various LLM models.
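As a rough illustration of why token counts matter for budgeting, the sketch below estimates API cost from a token count and a per-token price. The rate used here is a placeholder, not a real provider price:

```python
def estimate_cost(token_count: int, usd_per_1k_tokens: float) -> float:
    """Estimate API cost in USD for a given number of tokens."""
    return token_count / 1000 * usd_per_1k_tokens

# e.g. a 1,500-token prompt at a hypothetical $0.01 per 1K tokens
print(estimate_cost(1500, 0.01))
```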

Requirements

  • Python packages:
    • tiktoken (automatically installed by TouchDesigner if missing)
  • No additional setup required

Inputs

  • in1: (Optional) DAT table containing conversation data, used when ‘Text Input Source’ is set to ‘input_dat’, ‘last_user’, ‘last_assistant’, ‘last_system’, or ‘all_conversation’. The table should have ‘role’ and ‘message’ columns.
  • in2: (Optional) Text DAT, used when ‘Text Input Source’ is set to ‘input_dat’.
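To make the expected shape of the in1 table concrete, here is a stand-alone sketch of a role/message conversation and of how the 'last_user'-style source modes could pick a message from it. The rows are invented examples, and the helper is an illustration, not the operator's internal code:

```python
# Illustrative shape of the in1 conversation table ('role'/'message' columns).
conversation = [
    {"role": "system",    "message": "You are a helpful assistant."},
    {"role": "user",      "message": "What is a token?"},
    {"role": "assistant", "message": "A token is a chunk of text."},
]

def last_message(rows, role):
    """Mimic the 'last_user' / 'last_assistant' / 'last_system' source modes."""
    for row in reversed(rows):
        if row["role"] == role:
            return row["message"]
    return ""

print(last_message(conversation, "user"))  # the most recent user message
```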

Outputs

None (the token count is displayed as a parameter value)

Parameters

Token Count (Tokencount) op('token_count').par.Tokencount Int

Displays the calculated number of tokens in the input text. This value is updated when the 'Count' pulse parameter is triggered or when the input text changes and 'onChange' is enabled.

Default:
None
onChange (Onchange) op('token_count').par.Onchange Toggle

When enabled, automatically recalculates the token count whenever the input text changes.

Default:
Off
Count (Count) op('token_count').par.Count Pulse

Manually triggers the token counting process. This is useful when 'onChange' is disabled or when you want to update the count on demand.

Default:
None
Text Input Source (Textinput) op('token_count').par.Textinput Menu

Determines the source of the text to be counted. When a conversation option ('last_user', 'last_assistant', 'last_system', or 'all_conversation') is selected, the operator expects a DAT with 'role' and 'message' columns connected to in1.

Default:
all_conversation
Options:
all_conversation, input_dat, last_user, last_assistant, last_system, parameter
Text to Count (Text) op('token_count').par.Text String

The text string to be tokenized and counted. This parameter is used when 'Text Input Source' is set to 'Parameter'.

Default:
"" (Empty String)
Tokenizer (Tokenizer) op('token_count').par.Tokenizer Menu

Specifies the tokenizer to use for counting tokens. The cl100k_base option is recommended for OpenAI models, while llama3 is recommended for Llama 3 models.

Default:
cl100k_base
Options:
cl100k_base, llama3
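To preview what the cl100k_base tokenizer does outside of TouchDesigner, here is a minimal sketch using tiktoken, with a rough characters-per-token fallback when the library is unavailable. The 4-characters-per-token figure is only a common rule of thumb for English text, not an exact ratio:

```python
try:
    import tiktoken  # the library this operator installs on demand
    _enc = tiktoken.get_encoding("cl100k_base")

    def count_tokens(text: str) -> int:
        return len(_enc.encode(text))
except ImportError:
    def count_tokens(text: str) -> int:
        # crude fallback: ~4 characters per token on average English text
        return len(text) // 4

print(count_tokens("Counting tokens before you send a prompt saves money."))
```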
Bypass (Bypass) op('token_count').par.Bypass Toggle

When enabled, bypasses the token counting operation.

Default:
Off
Show Built-in Parameters (Showbuiltin) op('token_count').par.Showbuiltin Toggle

Toggles the visibility of built-in parameters.

Default:
Off
Version (Version) op('token_count').par.Version String

Displays the version of the operator.

Default:
None
Last Updated (Lastupdated) op('token_count').par.Lastupdated String

Displays the last updated date.

Default:
None
Creator (Creator) op('token_count').par.Creator String

Displays the creator of the operator.

Default:
None
Website (Website) op('token_count').par.Website String

Displays the website of the creator.

Default:
None
ChatTD Operator (Chattd) op('token_count').par.Chattd OP

References the ChatTD operator.

Default:
None

Tips

  • The tiktoken library is required for tokenization. Ensure it’s installed in your TouchDesigner environment.
  • Using the ‘onChange’ parameter can impact performance if the input text changes frequently. Consider disabling it and using the ‘Count’ pulse for manual updates in such cases.
  • When using ‘all_conversation’ as the input source, the token count calculation may take longer for large conversation histories.
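The performance trade-off behind the 'onChange' tip above can be modeled as a cache that only recounts when the text actually changes. This is a simplified stand-alone sketch with a word-count stand-in tokenizer, not the operator's internal code:

```python
class CachedCounter:
    """Recount only when the input text changes (the 'onChange' idea)."""

    def __init__(self, counter):
        self.counter = counter
        self.calls = 0        # how many real counts were performed
        self._last = None
        self._count = 0

    def count(self, text):
        if text != self._last:
            self._last = text
            self._count = self.counter(text)
            self.calls += 1
        return self._count

cc = CachedCounter(lambda t: len(t.split()))  # stand-in tokenizer
cc.count("hello world")
cc.count("hello world")   # unchanged: no recount
cc.count("hello there")   # changed: recount
print(cc.calls)  # → 2
```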

Usage

Counting text from a parameter:

  1. Create a token_count LOP.
  2. Set the Text Input Source to Parameter.
  3. Enter your text in the Text to Count field.
  4. Enable the onChange toggle to automatically update the token count, or pulse the Count parameter to manually trigger the count.
  5. The token count will be displayed in the Token Count parameter.

Counting text from a Text DAT:

  1. Create a token_count LOP.
  2. Create a Text DAT and enter your text.
  3. Connect the Text DAT to the second input of the token_count LOP.
  4. Set the Text Input Source to Input DAT [ in2 ].
  5. Pulse the Count parameter.

Counting a full conversation:

  1. Create a token_count LOP.
  2. Create a Table DAT with a conversation and connect it to the first input of the token_count LOP.
  3. Set the Text Input Source to Full Conversation.
  4. Select the appropriate tokenizer for your model.
  5. Pulse the Count parameter.
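The full-conversation mode above amounts to counting tokens over every row of the role/message table. A minimal stand-alone sketch, using word count as a stand-in tokenizer (real counts come from tiktoken):

```python
def conversation_tokens(rows, counter):
    """Sum token counts across all messages in a role/message table."""
    return sum(counter(row["message"]) for row in rows)

rows = [
    {"role": "user",      "message": "What is the capital of France?"},
    {"role": "assistant", "message": "Paris."},
]
word_count = lambda text: len(text.split())  # stand-in tokenizer
print(conversation_tokens(rows, word_count))  # → 7
```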

Use Cases

  • Estimating the cost of using LLM APIs
  • Limiting prompt lengths to fit within model context windows
  • Monitoring token usage in real-time applications
  • Optimizing prompts for better performance and cost efficiency
  • Analyzing conversation history for token usage patterns
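For the context-window use case above, a common pattern is to drop the oldest messages until the conversation fits a token budget. A hedged sketch of that idea, where the budget and the word-count tokenizer are placeholders for real model limits and tiktoken counts:

```python
def trim_to_budget(rows, counter, budget):
    """Drop the oldest messages until the total count fits the budget."""
    rows = list(rows)
    while rows and sum(counter(r["message"]) for r in rows) > budget:
        rows.pop(0)  # discard the oldest message first
    return rows

history = [
    {"role": "user",      "message": "first question with several words here"},
    {"role": "assistant", "message": "a long answer spanning many words indeed"},
    {"role": "user",      "message": "latest question"},
]
word_count = lambda text: len(text.split())
trimmed = trim_to_budget(history, word_count, budget=9)
print([r["message"] for r in trimmed])
```

In practice the system message is often pinned rather than dropped; this sketch keeps the logic minimal.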