Handoff Operator

The Handoff operator acts as an intelligent router for conversations. It uses a configured Language Model (LLM) to analyze the ongoing discussion (provided via the input_table) and decide which specialized Agent operator, defined in its Agents sequence, is best suited to handle the next step.

This allows for dynamic and sophisticated workflows where different AI assistants can contribute based on their specific expertise or assigned roles. When handoff is disabled, it simply passes the conversation to a manually selected agent.
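Conceptually, the routing decision is a function from the conversation plus the configured agent descriptions to a single agent name. A minimal, LLM-free sketch of that idea (keyword matching is a hypothetical stand-in for the model's reasoning; the agent names and keywords are illustrative, not part of the operator):

```python
# Minimal stand-in for the Handoff routing decision.
# In the real operator an LLM makes this choice; here a simple
# keyword match is a hypothetical placeholder so the flow is runnable.

AGENTS = {
    "Creative Writer": ["story", "poem", "creative"],
    "Support Bot": ["error", "crash", "install"],
}

def route(last_message: str, agents: dict) -> str:
    """Pick the agent whose keywords best match the last message."""
    text = last_message.lower()
    scores = {
        name: sum(word in text for word in keywords)
        for name, keywords in agents.items()
    }
    best = max(scores, key=scores.get)
    # Fall back to the first agent when nothing matches.
    return best if scores[best] > 0 else next(iter(agents))

print(route("My install keeps crashing with an error", AGENTS))
```

In the operator itself, the LLM sees the agents' names (from Rename) and System Prompts rather than keyword lists, and reports its rationale in the Reason parameter.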

Handoff Operator UI

Parameters are organized by page.

Call Agents (Call) op('handoff').par.Call Pulse
Default:
None
Enable Handoff (Enablehandoff) op('handoff').par.Enablehandoff Toggle
Default:
On
Options:
Off, On
Current Agent (Currentagent) op('handoff').par.Currentagent Menu
Default:
None
Active (Active) op('handoff').par.Active Toggle
Default:
None
Options:
Off, On
Status (Status) op('handoff').par.Status Str
Default:
None
Reason (Reason) op('handoff').par.Reason Str
Default:
None
Agents (Agents) op('handoff').par.Agents Sequence
Default:
None
Display (Display) op('handoff').par.Display Menu
Default:
handoff_history
Options:
conversation_dat, handoff_history

The Model page configures the LLM used by the Handoff operator itself to make routing decisions when Enable Handoff is active. These settings do not affect the models used by the individual Agent operators defined in the sequence.

Understanding Model Selection

Operators utilizing LLMs (LOPs) offer flexible ways to configure the AI model used:

  • ChatTD Model (Default): By default, LOPs inherit model settings (API Server and Model) from the central ChatTD component. You can configure ChatTD via the "Controls" section in the Operator Create Dialog or its parameter page.
  • Custom Model: Select this option in "Use Model From" to override the ChatTD settings and specify the API Server and AI Model directly within this operator.
  • Controller Model: Choose this to have the LOP inherit its API Server and AI Model parameters from another operator (like a different Agent or any LOP with model parameters) specified in the Controller [ Model ] parameter. This allows centralizing model control.

The Search toggle filters the AI Model dropdown based on keywords entered in Model Search. The Show Model Info toggle (if available) displays detailed information about the selected model directly in the operator's viewer, including cost and token limits.
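The keyword filtering behaves roughly like matching every search term against each model name. A hypothetical sketch of that behavior (the real filtering happens inside the operator; the model names below are examples):

```python
# Hypothetical sketch of how a Model Search pattern could filter
# the AI Model dropdown. Not the operator's actual implementation.

def filter_models(models: list, pattern: str) -> list:
    """Keep models whose names contain every whitespace-separated keyword."""
    keywords = pattern.lower().split()
    return [m for m in models if all(k in m.lower() for k in keywords)]

models = ["llama3-8b-8192", "llama3-70b-8192", "gpt-4o", "gpt-4o-mini"]
print(filter_models(models, "llama3"))
print(filter_models(models, "gpt mini"))
```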

Output Settings Header
Max Tokens (Maxtokens) op('handoff').par.Maxtokens Int

The maximum number of tokens the model should generate.

Default:
4096
Temperature (Temperature) op('handoff').par.Temperature Float

Controls randomness in the response. Lower values are more deterministic.

Default:
0.7
Model Selection Header
Use Model From (Modelselection) op('handoff').par.Modelselection Menu

Choose where the model configuration comes from.

Default:
custom_model
Options:
chattd_model, custom_model, controller_model
Controller [ Model ] (Modelcontroller) op('handoff').par.Modelcontroller OP

Operator providing model settings when 'Use Model From' is set to controller_model.

Default:
None
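The three "Use Model From" options can be read as a simple precedence rule for where the API Server and AI Model settings come from. A hypothetical resolver sketch (plain dicts stand in for the operators' parameter pages):

```python
# Hypothetical resolver for the 'Use Model From' menu. The dicts
# stand in for parameter pages; this is a sketch, not the operator's code.

def resolve_model(mode: str, own: dict, chattd: dict, controller=None) -> dict:
    """Return the api_server/model settings the operator will use."""
    if mode == "chattd_model":
        return chattd          # inherit from the central ChatTD component
    if mode == "controller_model":
        if controller is None:
            raise ValueError("controller_model requires a Controller [ Model ] OP")
        return controller      # inherit from another LOP's model parameters
    return own                 # custom_model: use this operator's own settings

own = {"api_server": "openrouter", "model": "llama3-8b-8192"}
chattd = {"api_server": "openai", "model": "gpt-4o"}
print(resolve_model("custom_model", own, chattd))
```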
Select API Server (Apiserver) op('handoff').par.Apiserver StrMenu

Select the LiteLLM provider (API server).

Default:
openrouter
Menu Options:
  • openrouter (openrouter)
  • openai (openai)
  • groq (groq)
  • gemini (gemini)
  • ollama (ollama)
  • lmstudio (lmstudio)
  • custom (custom)
AI Model (Model) op('handoff').par.Model StrMenu

Specific model to request. Available options depend on the selected provider.

Default:
llama3-8b-8192
Menu Options:
  • llama3-8b-8192 (llama3-8b-8192)
Search (Search) op('handoff').par.Search Toggle

Enable dynamic model search based on a pattern.

Default:
Off
Options:
Off, On
Model Search (Modelsearch) op('handoff').par.Modelsearch Str

Pattern to filter models when Search is enabled.

Default:
"" (Empty String)
Bypass (Bypass) op('handoff').par.Bypass Toggle
Default:
Off
Options:
Off, On
Show Built-in Parameters (Showbuiltin) op('handoff').par.Showbuiltin Toggle
Default:
Off
Options:
Off, On
Version (Version) op('handoff').par.Version Str
Default:
None
Last Updated (Lastupdated) op('handoff').par.Lastupdated Str
Default:
None
Creator (Creator) op('handoff').par.Creator Str
Default:
None
Website (Website) op('handoff').par.Website Str
Default:
None
ChatTD Operator (Chattd) op('handoff').par.Chattd OP
Default:
None
To set up automatic handoff:
  1. Add two or more Agent OPs to your network, each configured with a different System Prompt reflecting their specialty (e.g., one for creative writing, one for technical support).
  2. In the Handoff OP’s Agents sequence, add blocks linking to each Agent OP.
  3. Provide clear names in the Rename parameter for each agent (e.g., “Creative Writer”, “Support Bot”). Ensure Include System is On.
  4. Connect your input conversation DAT (e.g., from a Chat OP) to the Handoff OP’s input.
  5. Ensure Enable Handoff is On and the Model page is configured with an LLM capable of function calling (like GPT-4, Claude 3, Gemini).
  6. Pulse Call Agents. The Handoff OP will use its LLM to analyze the last message and the agent descriptions/system prompts, then route the conversation by calling the appropriate Agent.
  7. The Reason parameter will show the LLM’s decision rationale.
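The function-calling requirement in step 5 exists because the routing LLM selects an agent by emitting a tool call. A hypothetical OpenAI-style tool schema it might be given (field names are illustrative, not the operator's actual payload; the enum values come from the Rename parameters):

```python
# Hypothetical tool definition a handoff LLM could be handed so it can
# 'call' one of the configured agents. Field names are illustrative.

handoff_tool = {
    "type": "function",
    "function": {
        "name": "handoff_to_agent",
        "description": "Route the conversation to the best-suited agent.",
        "parameters": {
            "type": "object",
            "properties": {
                "agent": {
                    "type": "string",
                    # Agent names as set in the Rename parameters.
                    "enum": ["Creative Writer", "Support Bot"],
                },
                "reason": {
                    "type": "string",
                    "description": "Why this agent was chosen.",
                },
            },
            "required": ["agent", "reason"],
        },
    },
}

print(handoff_tool["function"]["name"])
```

The `reason` field in such a call is what would surface in the operator's Reason parameter.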
To route manually instead:
  1. Configure the Agents sequence as above.
  2. Turn Enable Handoff to Off.
  3. Select the desired target agent manually using the Current Agent menu.
  4. Pulse Call Agents. The conversation will be sent directly to the selected agent without LLM intervention.
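With Enable Handoff off, routing reduces to a direct lookup on the Current Agent selection; no LLM is consulted. A minimal sketch of that path (agent callables and names are hypothetical):

```python
# With handoff disabled, the conversation goes straight to whichever
# agent 'Current Agent' points at. Hypothetical stand-in, not TD code.

def dispatch(current_agent: str, agents: dict, conversation: list) -> str:
    """Send the conversation directly to the manually selected agent."""
    if current_agent not in agents:
        raise KeyError(f"unknown agent: {current_agent}")
    # The selected agent (a callable here) produces the reply.
    return agents[current_agent](conversation)

agents = {
    "Support Bot": lambda conv: "Support Bot replying to: " + conv[-1]["content"],
}
reply = dispatch("Support Bot", agents, [{"role": "user", "content": "help"}])
print(reply)
```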
  • The quality of the handoff decision heavily depends on the capability of the LLM selected on the Model page and the clarity of the Rename strings and included System Prompts for each agent in the sequence.
  • Ensure the LLM used for handoff supports function calling / tool use, as the routing mechanism relies on this.
  • The Handoff operator manages the conversation flow but relies on the individual Agent operators to generate the actual responses.
  • The conversation_dat viewer shows the state before the final agent response is added. The agent’s response is handled asynchronously via callbacks.