
Agent

v2.0.0 (updated)

The Agent is the primary operator for sending prompts to large language models and receiving responses. It manages the full lifecycle of an LLM interaction: assembling conversations from input data, injecting system messages and context, sending API requests (with optional streaming), executing tool calls when the model requests them, and delivering final responses through multiple output formats.

  • Multi-provider LLM access through LiteLLM (OpenRouter, OpenAI, Groq, Ollama, Gemini, Anthropic, LM Studio, and custom endpoints)
  • Streaming responses with real-time table updates
  • Tool calling with multi-turn budget control and parallel execution
  • Structured output with JSON schema validation
  • Vision support via direct TOP image input or Context Grabber
  • Audio file input for multimodal models
  • Thinking tag filtering for reasoning models
  • Prompt caching for cost reduction on supported providers
  • Reasoning effort control for compatible models
  • CHOP channel outputs for monitoring agent state, tool metrics, and callback events
Tool Caller

The Agent does not expose tools itself. Instead, it discovers and calls tools from other LOP operators connected to its Tool sequence.

The Agent acts as the tool caller, not the tool provider. Any LOP with a GetTool() method can be wired into the Agent’s External Op Tools sequence on the Tools page. The Agent collects all available tools at call time, sends their schemas to the model, and routes tool calls back to the owning operator for execution.

Both single-tool operators (like Tool DAT or Search) and multi-tool providers (like MCP Client, which can expose dozens of tools from a single connection) are supported.

The Agent has one input:

  • Input 1 (DAT): A conversation table with columns role, message, id, timestamp. Each row represents a message in the conversation. The Agent reads this table when Call Agent is pulsed. When Call on in1 Table Change is enabled, the Agent automatically triggers whenever this input table updates.
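The table shape can be sketched in plain Python (illustrative only; the id and timestamp formats here are assumptions, not the operator's exact scheme):

```python
import time
import uuid

# Header row matches the columns the Agent expects in its input DAT.
HEADER = ["role", "message", "id", "timestamp"]

def make_row(role, message):
    """Build one conversation row: [role, message, id, timestamp]."""
    return [role, message, uuid.uuid4().hex[:8], str(int(time.time()))]

conversation = [
    HEADER,
    make_row("system", "You are a concise assistant."),
    make_row("user", "Summarize this network in one sentence."),
]
```

In TouchDesigner, the same rows would be entered into a Table DAT wired to the Agent's first input.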

The Agent has 4 outputs:

  • Output 1: The conversation_dat table — the full conversation including the assistant’s response appended after each call
  • Output 2: The output_dat text DAT — the latest assistant response text (driven by the internal turn table formatter)
  • Output 3: The history_table — a running log of every API call with model, tokens, timing, and response metadata
  • Output 4: The turn_table — per-turn data collection showing the sequence of streaming chunks, tool calls, tool results, and responses within a single agent turn
  1. Place an Agent LOP in your network.
  2. Create a Table DAT with columns role, message, id, timestamp.
  3. Add a row with role user and your prompt in the message column.
  4. Wire the Table DAT into the Agent’s first input.
  5. On the Agent page, pulse Call Agent.
  6. The response appears on output 1 (conversation table) and output 2 (response text).

The Agent supports three model selection modes, configured with Use Model From on the Model page:

  • ChatTD (default): Uses the global model and API server configured in ChatTD. This is the simplest option and shares settings across all operators.
  • Custom Model: Select a specific API Server and AI Model directly on the Agent’s Model page. Use the Search toggle and Model Search field to filter long model lists.
  • Controller: Point the Controller parameter to another operator that provides model selection (useful for centralized model management across multiple agents).
  1. On the Agent page, enable Use Streaming.
  2. Optionally enable Update Table When Streaming to see the conversation table update in real time as chunks arrive.
  3. Pulse Call Agent. The response text on output 2 updates progressively as the model generates tokens.
  1. On the Tools page, enable Use LOP Tools.
  2. In the External Op Tools sequence, add a row and drag a tool-providing operator (such as a Tool DAT, Search LOP, or MCP Client) into the OP field.
  3. Set the tool’s Active state:
    • enabled: The model can choose to use the tool when appropriate.
    • forced: The model must use this tool on the next call.
    • disabled: The tool is ignored for this call.
  4. Set Tool Turn Budget to control how many rounds of tool calling are allowed before the agent must produce a final response. A budget of 1 means one round of tools followed by a response. Higher budgets allow the model to call tools, see results, and call more tools iteratively.
  5. Enable Tool Follow-up Response (on by default) so the agent makes a follow-up API call after tool execution to produce a natural language summary. When disabled, the agent stops after executing tools without generating a response.
  6. Enable Parallel Tool Calls if you want the model to request multiple tools simultaneously (when the provider supports it). Tools execute concurrently for faster results.
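The budget and follow-up behavior in steps 4 and 5 can be sketched with a stand-in model and tool (a hedged illustration of the control flow, not the Agent's internal code):

```python
# Sketch: call the model, execute requested tools, and repeat until the
# model answers directly or the tool turn budget is spent.
def run_agent(model, tools, budget, follow_up=True):
    messages = [{"role": "user", "content": "look something up"}]
    for turn in range(budget):
        reply = model(messages, tools_allowed=True)
        if "tool" not in reply:           # model answered directly
            return reply["text"]
        result = tools[reply["tool"]](reply["args"])
        messages.append({"role": "tool", "content": result})
    # Budget exhausted: optionally force one final call without tools.
    if follow_up:
        return model(messages, tools_allowed=False)["text"]
    return None

def fake_model(messages, tools_allowed):
    # Stand-in model: requests the tool once, then summarizes its result.
    if tools_allowed and not any(m["role"] == "tool" for m in messages):
        return {"tool": "search", "args": "weather"}
    return {"text": "It is sunny."}

answer = run_agent(fake_model, {"search": lambda q: "sunny"}, budget=2)
```

With budget=1 the loop would run one tool round and then rely on the forced follow-up call, which mirrors the Force Final Response behavior described on the Tools page.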

On the Context page:

  • Context Op: Wire a Context Grabber operator to inject additional context (text, images, files) into the conversation before sending to the model.
  • Send TOP Image: Enable this and set the TOP Image parameter to directly send a TouchDesigner TOP as an image with your prompt. The model must support vision.
  • Use Audio: Enable and set Audio File to send an audio file alongside the prompt for multimodal models that accept audio input.

On the Agent page, I/O Preset controls which local inputs become the next user message:

  • All Inputs: Reads the Prompt parameter, input DAT text, and an enclosing Annotate when present.
  • Prompt Parameter: Reads only the Agent’s Prompt parameter.
  • Annotate: Reads the smallest enclosing Annotate and can mirror responses back into that Annotate.
  1. On the I/O page, enable Structured Output.
  2. Create a Text DAT containing a valid JSON schema and wire it to the Schema DAT parameter.
  3. The Agent will instruct the model to return responses conforming to your schema, using strict mode with the OpenAI response format specification.

This is useful for extracting structured data from LLM responses — for example, parsing sentiment scores, extracting entities, or generating configuration objects.
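As a hedged illustration of the OpenAI response-format specification mentioned above, a schema and its strict envelope might look like this (the envelope field names follow that spec; the sentiment schema itself is a made-up example):

```python
import json

# Example JSON schema: the kind of content a Schema DAT might hold.
schema = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string", "enum": ["positive", "negative", "neutral"]},
        "score": {"type": "number"},
    },
    "required": ["sentiment", "score"],
    "additionalProperties": False,
}

# Strict-mode envelope per the OpenAI response_format specification.
response_format = {
    "type": "json_schema",
    "json_schema": {"name": "sentiment_result", "strict": True, "schema": schema},
}

# A conforming model response parses straight into a dict.
parsed = json.loads('{"sentiment": "positive", "score": 0.92}')
```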

Some reasoning models wrap their internal thought process in tags like <think>...</think>. On the I/O page:

  1. Set Thinking Filter Mode to control where filtering applies:
    • Filter Conversation & Display: Removes thinking tags from both the stored conversation and the display output.
    • Filter Conversation Only: Removes from the conversation history but keeps in display.
    • Filter Display (out2): Keeps in conversation but removes from the display output.
  2. Customize Thinking Phrases if your model uses different delimiters (comma-separated start and end tags).
  3. Optionally set Thinking Replacement Text to substitute filtered content.
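A minimal sketch of the filtering itself, assuming the default <think>...</think> delimiters (the Agent's Thinking Phrases parameter lets you swap in others):

```python
import re

def filter_thinking(text, start="<think>", end="</think>", replacement=""):
    """Strip (or replace) everything between the start and end delimiters."""
    pattern = re.escape(start) + r".*?" + re.escape(end)
    return re.sub(pattern, replacement, text, flags=re.DOTALL).strip()

raw = "<think>Compare both options first.</think>Use the second option."
clean = filter_thinking(raw)
```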

On the I/O page, the Output Mode parameter controls how the agent delivers its response:

  • conversation: Standard mode. The response is appended to the conversation table on output 1.
  • table: Response is formatted into a table structure.
  • parameter: Response is written to a parameter.
  • custom: For advanced use cases with custom response handling.

The Assign Perspective setting on the I/O page controls how input message roles are interpreted:

  • user (default): Input roles are passed through as-is.
  • assistant: Swaps user/assistant roles, useful when the agent should continue from the assistant’s perspective.
  • third_party: Concatenates all input messages into a single user message.
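The three modes can be sketched as follows (illustrative, not the operator's actual implementation):

```python
def assign_perspective(messages, mode="user"):
    if mode == "user":
        return list(messages)                       # pass roles through as-is
    if mode == "assistant":
        swap = {"user": "assistant", "assistant": "user"}
        return [{**m, "role": swap.get(m["role"], m["role"])} for m in messages]
    if mode == "third_party":
        joined = "\n".join(m["content"] for m in messages)
        return [{"role": "user", "content": joined}]  # one merged user message
    raise ValueError(mode)

msgs = [{"role": "user", "content": "hi"}, {"role": "assistant", "content": "hello"}]
swapped = assign_perspective(msgs, "assistant")
```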

The Agent exposes its internal state as CHOP channels through an internal Script CHOP. These channels include callback events (on_task_start, on_task_complete, on_tool_call, on_task_error), agent state (agent_active, agent_streaming, task_idle, task_responding, etc.), tool metrics (tool_turns_used, tool_turn_budget, total_available_tools), and token counts (prompt_tokens, completion_tokens, total_tokens). Connect downstream CHOPs to monitor agent activity in real time.
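As a rough illustration, a downstream script might treat a snapshot of these channels as a dict (channel names come from this page; the values are invented for the example):

```python
# Invented snapshot of the Agent's CHOP channels for illustration.
channels = {
    "agent_active": 1.0,
    "agent_streaming": 0.0,
    "tool_turns_used": 2.0,
    "tool_turn_budget": 10.0,
    "prompt_tokens": 512.0,
    "completion_tokens": 128.0,
    "total_tokens": 640.0,
}

def budget_remaining_ratio(ch):
    """Fraction of the tool-turn budget still available."""
    return 1.0 - ch["tool_turns_used"] / ch["tool_turn_budget"]
```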

Beyond the standard onTaskStart and onTaskComplete, the Agent fires two additional callbacks:

onToolCall: Fires when the model requests tool execution, before the tools actually run. The info dict contains a tool_calls list with each tool's id, name, and arguments. Use this to log, filter, or intercept tool calls before execution.

onTaskError: Fires when an API call fails or tool execution encounters an error. The info dict contains an error object with type, message, code, and model fields. The Agent automatically formats common LiteLLM errors (authentication failures, rate limits, context window exceeded, service unavailable) into user-friendly messages.
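A hedged sketch of this kind of error formatting (the error-type names here are examples, not an exact list of LiteLLM's exception classes):

```python
# Example mapping from raw provider error types to friendly messages.
FRIENDLY = {
    "AuthenticationError": "Check your API key for this server.",
    "RateLimitError": "Rate limited. Wait a moment and retry.",
    "ContextWindowExceededError": "Conversation too long. Trim messages.",
    "ServiceUnavailableError": "Provider is down. Try another model.",
}

def format_error(error_type, message, model):
    """Build an error dict like the one delivered in the callback info."""
    hint = FRIENDLY.get(error_type, message)
    return {"type": error_type, "message": hint, "model": model}

info = format_error("RateLimitError", "429", "openrouter/some-model")
```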

  • Start with ChatTD model selection for quick setup, then switch to Custom Model when you need per-agent model control.
  • Set Max Tokens appropriately on the Model page. Low limits can truncate longer responses, so increase the value when you expect longer outputs.
  • Use Tool Turn Budget wisely. A budget of 1 is sufficient for most single-tool workflows. Increase to 3-5 for complex agentic tasks where the model needs to gather information iteratively.
  • Enable Prompt Caching on the I/O page when making repeated calls with similar conversation history. This reduces costs significantly on providers like Anthropic.
  • Use the Chain ID parameter when integrating with orchestration systems. Setting a consistent Chain ID groups related API calls together for tracing and analytics.
  • Monitor with CHOP channels rather than polling parameters. The CHOP output provides reactive state updates at 60fps.
  • “Duplicate tool name detected”: Two operators in the Tool sequence expose tools with the same name. Remove one operator or reconfigure tool names to be unique.
  • Tool calls not executing: Verify Use LOP Tools is enabled on the Tools page and that tool operators are wired into the External Op Tools sequence with their Active state set to enabled or forced.
  • Empty responses: Check that Max Tokens on the Model page is set high enough. Very low values can cause truncated or empty responses.
  • Rate limit errors: The Agent surfaces provider-specific rate limit messages. Wait a moment and retry, or switch to a different provider/model.
  • Model not supporting images: If you see “content must be a string” errors, the selected model does not support multimodal input. Switch to a vision-capable model.
  • Streaming interruptions: Mid-stream errors are automatically reported. Check your network connection and the provider’s service status.
  • Tool budget exhausted: If the model keeps requesting tools but the budget is used up, increase Tool Turn Budget or simplify the task so fewer tool rounds are needed.
Agentstatus (Agentstatus) op('agent').par.Agentstatus Str
Default:
"" (Empty String)
Active (Active) op('agent').par.Active Toggle
Default:
False
Prompt (Prompt) op('agent').par.Prompt Str
Default:
"" (Empty String)
Call Agent (Call) op('agent').par.Call Pulse
Default:
False
Cancel Current (Cancelcall) op('agent').par.Cancelcall Pulse
Default:
False
I/O Preset (Iopreset) op('agent').par.Iopreset Menu
Default:
full
Options:
full, prompt, annotate, custom
Input Header
Context OP (Contextop) op('agent').par.Contextop OP
Default:
"" (Empty String)
Use Prompt (Useprompt) op('agent').par.Useprompt Toggle
Default:
True
Use Inputs (Useinputs) op('agent').par.Useinputs Toggle
Default:
True
Use Annotate (Useannotate) op('agent').par.Useannotate Toggle
Default:
True
Send TOP Image (Sendtopimage) op('agent').par.Sendtopimage Toggle
Default:
False
TOP Image (Topimage) op('agent').par.Topimage TOP
Default:
"" (Empty String)
Output Header
Output to Annotate (Outputtoannotate) op('agent').par.Outputtoannotate Toggle
Default:
False
Structured Output (Structuredoutput) op('agent').par.Structuredoutput Toggle
Default:
False
Schema DAT (Schemadat) op('agent').par.Schemadat DAT
Default:
"" (Empty String)
Schema Name (Schemaname) op('agent').par.Schemaname Str
Default:
"" (Empty String)
Thinking Filter (Thinkingfilter) op('agent').par.Thinkingfilter Menu
Default:
none
Options:
none, filter_both, filter_convo_only, filter_text_only
Think Tags (CSV) (Thinkingphrases) op('agent').par.Thinkingphrases Str
Default:
"" (Empty String)
Session Header
Session Mode (Sessionmode) op('agent').par.Sessionmode Menu
Default:
persistent
Options:
auto, persistent, named
Session ID (Sessionid) op('agent').par.Sessionid Str
Default:
"" (Empty String)
Clear Session (Clearsession) op('agent').par.Clearsession Pulse
Default:
False
Save Session (Savesession) op('agent').par.Savesession Pulse
Default:
False
Load Session (Loadsession) op('agent').par.Loadsession File
Default:
"" (Empty String)
Tracking Header
Chain ID (Chainid) op('agent').par.Chainid Str
Default:
"" (Empty String)
Trace API Call (Traceapicall) op('agent').par.Traceapicall Toggle
Default:
True
Model Selection (Modelselection) op('agent').par.Modelselection Menu
Default:
custom_model
Options:
custom_model, chattd_model, controller_model
API Server (Apiserver) op('agent').par.Apiserver Menu
Default:
openrouter
Options:
openrouter, openai, groq, ollama, gemini, lmstudio, custom, anthropic, llamafile, vllm, my_server
AI Model (Model) op('agent').par.Model StrMenu
Default:
"" (Empty String)
Menu Options:
  • gpt-oss-120b (openai/gpt-oss-120b)
  • gpt-oss-20b (openai/gpt-oss-20b)
  • gpt-oss-safeguard-20b (openai/gpt-oss-safeguard-20b)
  • llama-3.1-8b-instant (llama-3.1-8b-instant)
  • llama-3.3-70b-versatile (llama-3.3-70b-versatile)
  • qwen3-32b (qwen/qwen3-32b)
  • allam-2-7b (allam-2-7b)
  • orpheus-arabic-saudi (canopylabs/orpheus-arabic-saudi)
  • orpheus-v1-english (canopylabs/orpheus-v1-english)
  • compound (groq/compound)
  • compound-mini (groq/compound-mini)
  • llama-4-scout-17b-16e-instruct (meta-llama/llama-4-scout-17b-16e-instruct)
  • llama-prompt-guard-2-22m (meta-llama/llama-prompt-guard-2-22m)
  • llama-prompt-guard-2-86m (meta-llama/llama-prompt-guard-2-86m)
Model Controller (Modelcontroller) op('agent').par.Modelcontroller OP
Default:
"" (Empty String)
Search Models (Search) op('agent').par.Search Toggle
Default:
False
Model Search (Modelsearch) op('agent').par.Modelsearch Str
Default:
"" (Empty String)
System Message Header
Use System Message (Usesystemmessage) op('agent').par.Usesystemmessage Toggle
Default:
True
System Message DAT (Systemmessagedat) op('agent').par.Systemmessagedat DAT
Default:
./system_prompt
Displaysysmess (Displaysysmess) op('agent').par.Displaysysmess Str
Default:
"" (Empty String)
Edit System Message (Editsysmess) op('agent').par.Editsysmess Pulse
Default:
False
Request Settings Header
Streaming (Streaming) op('agent').par.Streaming Toggle
Default:
True
Limit Tokens (Limittokens) op('agent').par.Limittokens Toggle
Default:
False
Max Tokens (Maxtokens) op('agent').par.Maxtokens Int
Default:
4096
Range:
1 to 128000
Slider Range:
1 to 128000
Temperature (Temperature) op('agent').par.Temperature Float
Default:
0.7
Range:
0 to 2
Slider Range:
0 to 2
Reasoning Effort (Reasoningeffort) op('agent').par.Reasoningeffort Menu
Default:
off
Options:
off, low, medium, high
Prompt Caching (Enablepromptcaching) op('agent').par.Enablepromptcaching Toggle
Default:
False
Parallel Tool Calls (Paralleltoolcalls) op('agent').par.Paralleltoolcalls Toggle
Default:
False
Max Messages (Maxmessages) op('agent').par.Maxmessages Int

Trim oldest messages to stay under limit (0 = unlimited)

Default:
0
Range:
0 to 500
Slider Range:
0 to 500
Max Tool Result Chars (Maxresultchars) op('agent').par.Maxresultchars Int

Truncate tool results exceeding this size (0 = unlimited)

Default:
16000
Range:
0 to 100000
Slider Range:
0 to 100000
Tools Header
Use Tools (Usetools) op('agent').par.Usetools Toggle
Default:
True
Tool Result Status (Toolresultstatus) op('agent').par.Toolresultstatus Str
Default:
"" (Empty String)
Tool Turn Budget (Toolturnbudget) op('agent').par.Toolturnbudget Int

Max tool turns per call

Default:
10
Range:
1 to 25
Slider Range:
1 to 25
Tool Follow-up Response (Toolfollowup) op('agent').par.Toolfollowup Toggle

Generate response after tool execution

Default:
True
Expose as Tool Header
Expose This Agent as a Tool (Enablegettool) op('agent').par.Enablegettool Toggle
Default:
False
Tool Name (Toolname) op('agent').par.Toolname Str
Default:
"" (Empty String)
Tool Description (Tooldescription) op('agent').par.Tooldescription Str
Default:
Agent sub-task executor
Tools (Tool) op('agent').par.Tool Sequence
Default:
0
Tool OP (Tool0toolop) op('agent').par.Tool0toolop OP
Default:
"" (Empty String)
Active (Tool0toolactive) op('agent').par.Tool0toolactive Menu
Default:
on
Options:
off, on
Tool Approval Header
Approval Mode (Toolapproval) op('agent').par.Toolapproval Menu

Gate tool execution behind user approval

Default:
off
Options:
off, all, destructive, unknown
Pending (Pendingtools) op('agent').par.Pendingtools Str
Default:
"" (Empty String)
Approve (Approvetools) op('agent').par.Approvetools Pulse
Default:
False
Deny (Denytools) op('agent').par.Denytools Pulse
Default:
False
Approval Timeout (s) (Approvaltimeout) op('agent').par.Approvaltimeout Int

Auto-deny after N seconds (0 = wait forever)

Default:
0
Range:
0 to 600
Slider Range:
0 to 600
Cost Budget ($) (Costbudget) op('agent').par.Costbudget Float

Session cost limit in USD (0 = unlimited)

Default:
1.0
Range:
0 to 10
Slider Range:
0 to 10
Reset Cost Meter (Resetcostmeter) op('agent').par.Resetcostmeter Pulse
Default:
False
Force Final Response (Forcefinalresponse) op('agent').par.Forcefinalresponse Toggle

When tool budget exhausted with no text, send one more call without tools

Default:
True
Budget Exhausted Prompt (Budgetexhaustedprompt) op('agent').par.Budgetexhaustedprompt Str

Injected as user message on final forced call (leave empty to just strip tools)

Default:
Tool turn budget reached. Summarize your findings so far.
Skillscount (Skillscount) op('agent').par.Skillscount Str
Default:
"" (Empty String)
Use Skills (Useskills) op('agent').par.Useskills Toggle
Default:
False
Skills Source (Skillssource) op('agent').par.Skillssource Menu
Default:
none
Options:
none, project, user_project
Skills Folder (Skillsfolder) op('agent').par.Skillsfolder Folder
Default:
"" (Empty String)
Skills COMP (Skillscomp) op('agent').par.Skillscomp OP
Default:
"" (Empty String)
Scan Skills (Scanskills) op('agent').par.Scanskills Pulse
Default:
False
Profilescount (Profilescount) op('agent').par.Profilescount Str
Default:
"" (Empty String)
Use Profiles (Useprofiles) op('agent').par.Useprofiles Toggle
Default:
False
Profiles Source (Profilesource) op('agent').par.Profilesource Menu
Default:
none
Options:
none, project, user_project
Profiles Folder (Profilesfolder) op('agent').par.Profilesfolder Folder
Default:
"" (Empty String)
Scan Profiles (Scanprofiles) op('agent').par.Scanprofiles Pulse
Default:
False
Apply Profile Stack (Applyprofiles) op('agent').par.Applyprofiles Pulse
Default:
False
Profile Stack (Profile) op('agent').par.Profile Sequence
Default:
0
Profile (Profilename) op('agent').par.Profilename StrMenu
Default:
"" (Empty String)
Menu Options:
  • (none) ((none))
Display Name (Displayname) op('agent').par.Displayname Str

Friendly name for UI, dashboards, event sinks, and agent swarm traces. Profiles may set this value.

Default:
"" (Empty String)
Display Color (Displaycolorr) op('agent').par.Displaycolorr RGB

Identity color for the operator tile, compact panels, dashboards, and profile-driven UI.

Default:
0.98
Range:
0 to 1
Slider Range:
0 to 1
Display Color (Displaycolorg) op('agent').par.Displaycolorg RGB

Identity color for the operator tile, compact panels, dashboards, and profile-driven UI.

Default:
0.52
Range:
0 to 1
Slider Range:
0 to 1
Display Color (Displaycolorb) op('agent').par.Displaycolorb RGB

Identity color for the operator tile, compact panels, dashboards, and profile-driven UI.

Default:
0.02
Range:
0 to 1
Slider Range:
0 to 1
UI Behavior (Uibehavior) op('agent').par.Uibehavior Menu

Controls compact UI animation intensity. Profiles may set this value.

Default:
expressive
Options:
off, simple, expressive
UI Start Mode (Uistartmode) op('agent').par.Uistartmode Menu

Initial PanelKit Agent surface. Prompt Focus opens the prompt field and requests text cursor focus.

Default:
prompt_focus
Options:
prompt_focus, face
Expose Chat (Exposechat) op('agent').par.Exposechat Toggle

When On, LOP Studio IO discovers this agent for dashboard chat.

Default:
False
Callbacks Header
Callback DAT (Callbackdat) op('agent').par.Callbackdat DAT
Default:
"" (Empty String)
Print Callbacks (Printcallbacks) op('agent').par.Printcallbacks Toggle
Default:
False
Create Callbacks (Createcallbacks) op('agent').par.Createcallbacks Pulse
Default:
False
Event Toggles Header
On Task Start (Ontaskstart) op('agent').par.Ontaskstart Toggle
Default:
True
On Task Complete (Ontaskcomplete) op('agent').par.Ontaskcomplete Toggle
Default:
True
On Task Error (Ontaskerror) op('agent').par.Ontaskerror Toggle
Default:
True
On Tool Call (Ontoolcall) op('agent').par.Ontoolcall Toggle
Default:
True
On Tool Approval (Ontoolapproval) op('agent').par.Ontoolapproval Toggle
Default:
True
Session Events Header
On Session Clear (Onsessionclear) op('agent').par.Onsessionclear Toggle
Default:
True
On Session Save (Onsessionsave) op('agent').par.Onsessionsave Toggle
Default:
True
On Session Load (Onsessionload) op('agent').par.Onsessionload Toggle
Default:
True
On Cost Budget (Oncostbudget) op('agent').par.Oncostbudget Toggle
Default:
True
v2.0.0 (2026-05-02)
  • added GetState/GetTranscript public API for dashboard/swarm consumption
  • added event sink system (RegisterEventSink, lifecycle events)
  • added ghost callback guard and stale Active par reset on reinit
  • added display name, display color, UI behavior, UI start mode, Exposechat profile pars
  • added Useskills/Skillssource and Useprofiles/Profilesource tier controls with enableExpr
  • added Toolresultstatus readout for tool error/warning tracking
  • added _prepare_for_release cleanup method
  • removed Annotateops/Maxannotateops (speculative annotate context)
  • renamed I/O preset labels to All Inputs / Prompt Parameter
  • updated compose.json refs (rank_fusion to search_merge, agent_session removal)
  • updated category to Controllers
  • Deferred tool-result bridge for agent-as-tool support
  • Sub-agents return __deferred__ sentinel, parent holds follow-up until resolution
  • Sub-agent _finalize delivers real response into parent's tool message via _receive_deferred_result
  • Cancellation propagates error result to parent when sub-agent is cancelled
  • Re-entrancy guard rejects sub-agent invocation when already busy
  • Attach caller reference to outgoing tool_call objects for bridge discovery
  • New awaiting_subagent task state while deferred calls are outstanding
  • Ordered message flushing preserves original tool_call sequence across deferred and sync results
  • Run envelope (_build_run_envelope) with tool history, conversation tail, chain metadata
  • Enriched final callback info with tool_calls, tool_results, run_envelope
  • Tool result normalization and best-effort tool_call_id repair for malformed rows
  • Serialization helpers for compact tool call/result previews in callbacks
  • Context grabber passes additional_files and audio_path through to API calls
  • Clean up _call_audio_path on finalize
  • Reset emptyCallbacks to generic stubs
  • added Gemini 3.x thinking_blocks round-trip support - updated compose.json
  • rename extension class from AgentV2EXT to AgentEXT
  • replace legacy v1 extension with agent_v2 codebase
  • add extends: util-agent-core, util-chained-callbacks
  • add tool approval gate (off/all/destructive/unknown)
  • add skills system with dynamic registry and profile support
  • add task state, token, and cost tracking dependencies
  • add CHOP channels for task state, token data, tool counts
  • add onToolApproval callback to emptyCallbacks template
  • add 5-page parameter layout (Agent, Model, Tools, Callbacks, About)
  • add I/O Preset system with composable input/output toggles
  • add Annotate I/O with persistent conversation mirroring
  • add model search filtering via ChatTD dynamic models
  • add Sendtopimage and Topimage for vision via TOP input
  • add Chain ID and Trace API Call tracking
  • add Limit Tokens toggle and raised MaxTokens ceiling
  • add Tool sequence with Toolop and Toolactive template pars
  • remove legacy tool_history_manager, conversation_processor_callbacks, script_model_info_callbacks, turn_table_format
  • bump manifest to 2.0.0 with refreshed description
  • add docs/compose.json, guide.md, tool_approval.md
v1.3.3 (2026-03-01)
  • Explicitly set tool_choice='auto' for Groq compatibility when tools enabled
  • Add budget enforcement before tool execution to drop calls after budget exhausted
  • Set tool_choice='none' on follow-up calls when budget exhausted for Groq compatibility
  • Persist tags across follow-up calls for trace grouping
  • Improve budget status logging in follow-up calls
  • Add par.Traceapicall toggle to exclude agent from trace generation
  • Pass trace_api_call to ChatTD Customapicall
  • Initial commit
v1.3.2 (2025-09-01)

Cleaned menu; added a force option to tool choice.

Added a chainid parameter; if read-only, it is set automatically.

v1.3.1 (2025-08-17)
  • Added duplicate tool name detection with clear error messages and API call abortion
  • Fixed Claude/Anthropic follow-up call compatibility by providing tools when budget exhausted
  • Enhanced Logger component to handle both 2-parameter and 3-parameter calls flexibly
  • Implemented proper DEBUG/INFO/WARNING/ERROR filtering based on Showlogs parameter
  • Converted 95% of verbose logs from INFO to DEBUG level, keeping only critical information as INFO
  • Improved conversation cleanup to prevent tool call loops and invalid argument propagation
  • Fixed tool call deduplication to prevent infinite loops from LLMs generating identical calls
  • Enhanced streaming tool detection across different providers during responses
  • Improved tool history storage and cleanup to prevent memory leaks
  • Fixed Reset method to properly clear tool_history_unified table
  • Better chain ID generation and tracking across multi-turn conversations
  • Enhanced compatibility with agent session and orchestrator components
  • Streamlined callback execution with consolidated logging to reduce overhead
  • Improved async tool execution with better cancellation and cleanup
  • Enhanced error messages for configuration issues and tool failures
v1.3.0 (2025-08-13)
  • Multi-Turn Tool Calling: Added Toolbudget parameter for multiple LLM calls within single request
  • Parallel Tool Execution: Added Paralleltoolcalls parameter for simultaneous tool execution
  • Turn Table System: New turn_table DAT captures all conversation events (streaming, tool calls, results)
  • Chain ID Tracking: chain_id parameter for tracking related calls across conversations
  • Reasoning Model Support: Added Reasoninglevel parameter for thinking models (o1-preview, o1-mini)

## Improvements

  • Tool Call Detection: Better tool call detection across providers during streaming
  • Tool History Management: Unified tool tracking with proper call/result correlation
  • Streaming Architecture: Turn-based streaming with real-time turn table updates
  • Output System: Decoupled output formatting - external scripts process turn table data
  • Callback System: Enhanced callbacks with tool history and chain context

## Bug Fixes

  • Tool Call Parsing: Fixed tool call detection in various response formats
  • Streaming Integration: Fixed tools not being detected during streaming responses
  • Turn Boundaries: Fixed issues with multi-turn conversation boundaries
  • Memory Management: Better cleanup of tool-related data structures

## Breaking Changes

  • Turn Table Primary: Turn table is now primary source of conversation data (not output_dat)
  • New Parameters: Toolbudget and Paralleltoolcalls parameters added
  • Chain ID Required: Chain IDs now required for proper multi-turn tracking
v1.2.3 (2025-07-24)

🧹 Code Refactoring & Maintenance

  • Improved Tool Loading Robustness: The agent's parse_tools method has been made more resilient. It now gracefully handles GetTool-enabled operators that do not explicitly provide a response_format in their callback info, defaulting to "json". This prevents an entire tool from failing to load and improves backward compatibility with older or non-compliant tool operators.
v1.2.2 (2025-07-22)

🚀 New Features

  • Built-in Thinking Filter: Integrated the functionality of the ThinkingFilter LOP directly into the Agent. This allows for the removal of blocks from conversations and final responses without needing a separate operator.
    • Added Thinkingfilter (Filter Mode), Thinkingreplace (Replacement Text), and Thinkingphrases (Start/End Phrases) parameters to an "I/O" page.
    • The filter correctly processes both outgoing conversation history and incoming model responses, including real-time filtering of the output_dat during streaming.
  • UI Warning for Streaming + Tools: Added a dynamic parameter label system to warn users when both Streaming and Usetools are active, as this combination may be unstable. The parameter labels for both will change to include a warning symbol.

🐛 Critical Bug Fixes

  • Fixed Streaming Callback Flood: Resolved a critical bug where onTaskComplete and other finalization logic would execute on every single data chunk during a streaming response. The agent now correctly identifies the true final chunk, ensuring callbacks fire only once.
  • Restored Cancelcall for Streaming: The fix for the callback flood also resolved an issue where Cancelcall would fail during streaming because the current_api_call_id was being cleared prematurely. Cancellation now works as expected throughout the entire streaming process.
  • Corrected Tool Call Detection Structure: Fixed a structural parsing error where the agent would fail to find tool calls in the final chunk of a streaming response. The logic now correctly checks for tool calls in both response.choices[0].delta.tool_calls and response.choices[0].message.tool_calls, making tool detection more robust for different API response structures.

🧹 Code Refactoring & Maintenance

  • Removed Obsolete Parameters & Methods: Deprecated and removed the Outputmode and Conversationformat parameters, as their logic was superseded by direct handling within the HandleResponse method.
  • Disabled Dead Code: The obsolete methods associated with the old output modes (execute_output_mode, update_conversation_dat) have been neutralized to prevent confusion and improve code clarity.
v1.2.1 (2025-07-20)

🐛 Critical Bug Fix

  • Fixed Group Callback Mechanism: Resolved critical issue where group callbacks (used by orchestrator and other external systems) were not being executed
    • Root cause: Agent was storing callback info in self.group_callback and self.groupOP but looking for it in self.last_group_callback and self.last_groupOP
    • Solution: Added proper transfer of callback info to last_ variables in the Call method
    • Impact: Enables proper communication between Agent and orchestrator systems for multi-step workflows

    🔧 Technical Details

    • Modified Call method to properly transfer group callback information before API calls
    • This fix enables the Agent Orchestrator's autonomous mode to function correctly
    • Maintains backward compatibility with existing callback patterns
v1.2.0 (2025-07-13)

Added the tool manager for tool logging and tool history.

v1.1.3 (2025-07-01)
  • Enhanced Tool Callbacks for Orchestration:
    • The HandleResponse callback method now includes a new, comprehensive agent_tool_history object in its callbackInfo dictionary whenever tools are used.
    • This object preserves the complete context of a tool interaction, including the initial tool_calls generated by the model and the final tool_results from execution.
    • This critically solves an issue where tool call information was being lost on agents that use the "Tool Follow-up" feature, enabling robust, stateful orchestration.
    • Improved State Management:
      • The agent's internal tool history is now cleared after every API call cycle. This ensures the agent remains stateless and prevents tool call information from one operation from leaking into the next.
      • This change reinforces the design pattern where long-term conversation and history management is the responsibility of a higher-level component (like the Agent Orchestrator).
      • Backwards Compatibility:
        • The existing tool_results key is still populated in the callbackInfo to ensure that older components relying on it continue to function without modification.
v1.1.2 (2025-06-30)

This update focuses on a major refactoring of the agent's tool handling system, removing legacy code, improving robustness, and ensuring compatibility with modern, standardized tool providers.

#### ✨ Features & Enhancements

  • Generic Tool Result Handling: The _make_follow_up_call_with_tool_results method was completely overhauled. It no longer contains hardcoded logic for specific tools (like the old knowledge graph). It now intelligently formats any successful tool's dictionary output into a clean JSON string, making the agent compatible with any GetTool-based operator.
  • API-Aware Tool Roles: The agent now dynamically sets the message role for tool results ('function' for Gemini, 'tool' for others). This resolves a critical API incompatibility that was causing empty follow-up responses from the Gemini backend.
  • Streamlined Tool Parsing: Redundant calls to parse tools within the Call method have been eliminated. Tools are now parsed only once, improving efficiency and code readability.

#### 🐛 Bug Fixes

  • Robust Tool Loading: Fixed a KeyError: 'args' in the parse_tools method, allowing the agent to safely load newer tools that don't use the legacy args dictionary.
  • Corrected Follow-up Logic: Resolved a NameError and a subsequent critical logic flaw where processed tool results were being ignored in the final follow-up call to the LLM. The agent now correctly uses the processed content.

#### 🧹 Code Cleanup & Refactoring

  • Removed Legacy MCP Logic: All code related to the old MCP Client Manager, which was handled directly within the agent, has been removed. This aligns the agent with the new architecture where MCP clients are standard tool providers.
  • Removed Legacy Parameter Tool Handling: The specific logic for handling adjust_..._parameters tools within _call_tools_async has been removed, as this functionality is now managed by a dedicated Tool Parameter operator. The agent's tool execution method is now leaner and more focused.
v1.1.1 (2025-05-12)

Added JSON mode.

v1.1.0 (2025-05-03)

Moved to LiteLLM as backend.

Added improved model page + new model info display toggle.

Added image TOP parameter.

Added audio support (for Gemini multimodal, and possibly other providers).

v1.0.0 (2024-11-06)

Initial release