Agent
v2.0.0 (updated)

The Agent is the primary operator for sending prompts to large language models and receiving responses. It manages the full lifecycle of an LLM interaction: assembling conversations from input data, injecting system messages and context, sending API requests (with optional streaming), executing tool calls when the model requests them, and delivering final responses through multiple output formats.
Key Features
- Multi-provider LLM access through LiteLLM (OpenRouter, OpenAI, Groq, Ollama, Gemini, Anthropic, LM Studio, and custom endpoints)
- Streaming responses with real-time table updates
- Tool calling with multi-turn budget control and parallel execution
- Structured output with JSON schema validation
- Vision support via direct TOP image input or Context Grabber
- Audio file input for multimodal models
- Thinking tag filtering for reasoning models
- Prompt caching for cost reduction on supported providers
- Reasoning effort control for compatible models
- CHOP channel outputs for monitoring agent state, tool metrics, and callback events
Agent Tool Integration
The Agent does not expose tools itself. Instead, it discovers and calls tools from other LOP operators connected to its Tool sequence.
Use the Tool Debugger operator to inspect exact tool definitions, schemas, and parameters.
The Agent acts as the tool caller, not the tool provider. Any LOP with a GetTool() method can be wired into the Agent’s External Op Tools sequence on the Tools page. The Agent collects all available tools at call time, sends their schemas to the model, and routes tool calls back to the owning operator for execution.
Both single-tool operators (like Tool DAT or Search) and multi-tool providers (like MCP Client, which can expose dozens of tools from a single connection) are supported.
Input/Output
Inputs

- Input 1 (DAT): A conversation table with columns `role`, `message`, `id`, `timestamp`. Each row represents a message in the conversation. The agent reads this table when `Call Agent` is pulsed. When `Call on in1 Table Change` is enabled, the agent automatically triggers whenever this input table updates.
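The expected table shape can be sketched in plain Python (a sketch only: the `make_row` helper and its id/timestamp formats are my own illustration, not part of the operator):

```python
import time
import uuid

# Columns expected by the Agent's first input, per the docs above.
COLS = ["role", "message", "id", "timestamp"]

def make_row(role, message):
    """Build one conversation-table row (id and timestamp formats are illustrative)."""
    return [role, message, uuid.uuid4().hex[:8], str(time.time())]

# A minimal conversation: header row plus one user prompt.
rows = [COLS, make_row("user", "Describe this network in one sentence.")]
```

In TouchDesigner, `rows` could then be written into the input Table DAT with `clear()` followed by `appendRows(rows)`.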
Outputs
The Agent has 4 outputs:

- Output 1: The `conversation_dat` table — the full conversation including the assistant's response appended after each call
- Output 2: The `output_dat` text DAT — the latest assistant response text (driven by the internal turn table formatter)
- Output 3: The `history_table` — a running log of every API call with model, tokens, timing, and response metadata
- Output 4: The `turn_table` — per-turn data collection showing the sequence of streaming chunks, tool calls, tool results, and responses within a single agent turn
Usage Examples
Basic Conversation

1. Place an Agent LOP in your network.
2. Create a Table DAT with columns `role`, `message`, `id`, `timestamp`.
3. Add a row with role `user` and your prompt in the message column.
4. Wire the Table DAT into the Agent's first input.
5. On the Agent page, pulse `Call Agent`.
6. The response appears on output 1 (conversation table) and output 2 (response text).
Selecting a Model
The Agent supports three model selection modes, configured with `Use Model From` on the Model page:

- ChatTD (default): Uses the global model and API server configured in ChatTD. This is the simplest option and shares settings across all operators.
- Custom Model: Select a specific `API Server` and `AI Model` directly on the Agent's Model page. Use the `Search` toggle and `Model Search` field to filter long model lists.
- Controller: Point the `Controller` parameter to another operator that provides model selection (useful for centralized model management across multiple agents).
Streaming Responses
1. On the Agent page, enable `Use Streaming`.
2. Optionally enable `Update Table When Streaming` to see the conversation table update in real time as chunks arrive.
3. Pulse `Call Agent`. The response text on output 2 updates progressively as the model generates tokens.
Using Tools
1. On the Tools page, enable `Use LOP Tools`.
2. In the External Op Tools sequence, add a row and drag a tool-providing operator (such as a Tool DAT, Search LOP, or MCP Client) into the `OP` field.
3. Set the tool's `Active` state:
   - enabled: The model can choose to use the tool when appropriate.
   - forced: The model must use this tool on the next call.
   - disabled: The tool is ignored for this call.
4. Set `Tool Turn Budget` to control how many rounds of tool calling are allowed before the agent must produce a final response. A budget of 1 means one round of tools followed by a response. Higher budgets allow the model to call tools, see results, and call more tools iteratively.
5. Enable `Tool Follow-up Response` (on by default) so the agent makes a follow-up API call after tool execution to produce a natural language summary. When disabled, the agent stops after executing tools without generating a response.
6. Enable `Parallel Tool Calls` if you want the model to request multiple tools simultaneously (when the provider supports it). Tools execute concurrently for faster results.
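The budget mechanic can be pictured as a loop (a simplified sketch of the behavior described above, not the Agent's actual implementation; `model_step` stands in for one API call):

```python
def run_with_budget(model_step, budget, follow_up=True):
    """Simulate tool-turn budgeting: each turn may request tools;
    once the budget is spent, force a final text-only response."""
    turns_used = 0
    while turns_used < budget:
        reply = model_step(tools_allowed=True)
        if not reply.get("tool_calls"):
            return reply["text"], turns_used   # model answered directly
        turns_used += 1                        # one round of tool calls consumed
        # ...execute tools and append their results to the conversation here...
    if follow_up:
        # Budget exhausted: one more call with tools stripped (Tool Follow-up Response).
        return model_step(tools_allowed=False)["text"], turns_used
    return None, turns_used

# Fake model: requests a tool once, then answers.
calls = {"n": 0}
def fake_model(tools_allowed):
    calls["n"] += 1
    if tools_allowed and calls["n"] == 1:
        return {"tool_calls": [{"name": "search"}], "text": ""}
    return {"tool_calls": [], "text": "done"}

print(run_with_budget(fake_model, budget=3))  # → ('done', 1)
```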
Adding Context
On the Context page:

- Context Op: Wire a Context Grabber operator to inject additional context (text, images, files) into the conversation before sending to the model.
- Send TOP Image: Enable this and set the `TOP Image` parameter to directly send a TouchDesigner TOP as an image with your prompt. The model must support vision.
- Use Audio: Enable and set `Audio File` to send an audio file alongside the prompt for multimodal models that accept audio input.
On the Agent page, I/O Preset controls which local inputs become the next user message:
- All Inputs: Reads the Prompt parameter, input DAT text, and an enclosing Annotate when present.
- Prompt Parameter: Reads only the Agent’s Prompt parameter.
- Annotate: Reads the smallest enclosing Annotate and can mirror responses back into that Annotate.
Structured Output
1. On the I/O page, enable `Structured Output`.
2. Create a Text DAT containing a valid JSON schema and wire it to the `Schema DAT` parameter.
3. The Agent will instruct the model to return responses conforming to your schema, using strict mode with the OpenAI response format specification.
This is useful for extracting structured data from LLM responses — for example, parsing sentiment scores, extracting entities, or generating configuration objects.
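For the sentiment-scoring case, a Schema DAT might contain something like the following (a sketch; the field names are illustrative, not prescribed by the operator):

```python
import json

# Illustrative JSON schema to paste into a Text DAT wired to Schema DAT.
schema = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string", "enum": ["positive", "neutral", "negative"]},
        "score": {"type": "number", "minimum": -1, "maximum": 1},
    },
    "required": ["sentiment", "score"],
    "additionalProperties": False,   # strict mode disallows extra keys
}
schema_text = json.dumps(schema, indent=2)

# A conforming structured response can then be parsed directly:
reply = json.loads('{"sentiment": "positive", "score": 0.8}')
print(reply["score"])  # → 0.8
```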
Filtering Thinking Tags
Some reasoning models wrap their internal thought process in tags like `<think>...</think>`. On the I/O page:

- Set `Thinking Filter Mode` to control where filtering applies:
  - Filter Conversation & Display: Removes thinking tags from both the stored conversation and the display output.
  - Filter Conversation Only: Removes from the conversation history but keeps in display.
  - Filter Display (out2): Keeps in conversation but removes from the display output.
- Customize `Thinking Phrases` if your model uses different delimiters (comma-separated start and end tags).
- Optionally set `Thinking Replacement Text` to substitute filtered content.
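What the filter does can be sketched with a regular expression (an illustration of the behavior, not the operator's internal code):

```python
import re

def filter_thinking(text, start="<think>", end="</think>", replacement=""):
    """Remove start...end blocks, mirroring Thinking Phrases / Replacement Text."""
    pattern = re.escape(start) + r".*?" + re.escape(end)
    return re.sub(pattern, replacement, text, flags=re.DOTALL).strip()

raw = "<think>User wants a haiku about rain.</think>Rain taps the window."
print(filter_thinking(raw))  # → Rain taps the window.
```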
Output Modes
On the I/O page, the Output Mode parameter controls how the agent delivers its response:
- conversation: Standard mode. The response is appended to the conversation table on output 1.
- table: Response is formatted into a table structure.
- parameter: Response is written to a parameter.
- custom: For advanced use cases with custom response handling.
Assign Perspective
The Assign Perspective setting on the I/O page controls how input message roles are interpreted:
- user (default): Input roles are passed through as-is.
- assistant: Swaps user/assistant roles, useful when the agent should continue from the assistant’s perspective.
- third_party: Concatenates all input messages into a single user message.
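The three modes can be pictured as a transform over (role, message) pairs (my own reconstruction of the described behavior, not the operator's code):

```python
def assign_perspective(messages, mode="user"):
    """messages: list of (role, text) pairs."""
    if mode == "user":
        return messages                        # pass roles through as-is
    if mode == "assistant":                    # swap user/assistant roles
        swap = {"user": "assistant", "assistant": "user"}
        return [(swap.get(r, r), t) for r, t in messages]
    if mode == "third_party":                  # merge everything into one user message
        return [("user", "\n".join(t for _, t in messages))]
    raise ValueError(mode)

msgs = [("user", "hi"), ("assistant", "hello")]
print(assign_perspective(msgs, "assistant"))
# → [('assistant', 'hi'), ('user', 'hello')]
```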
Using the CHOP Outputs
The Agent exposes its internal state as CHOP channels through an internal Script CHOP. These channels include callback events (on_task_start, on_task_complete, on_tool_call, on_task_error), agent state (agent_active, agent_streaming, task_idle, task_responding, etc.), tool metrics (tool_turns_used, tool_turn_budget, total_available_tools), and token counts (prompt_tokens, completion_tokens, total_tokens). Connect downstream CHOPs to monitor agent activity in real time.
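In a CHOP Execute DAT watching the Agent's CHOP output, the change-handling logic might look like this (channel names come from the list above; the routing itself is illustrative — in TouchDesigner you would call it from `onValueChange` with `channel.name` and `channel.eval()`):

```python
def on_channel_change(name, value):
    """Map an Agent CHOP channel transition to a human-readable event."""
    if name == "agent_streaming":
        return "streaming started" if value else "streaming finished"
    if name == "on_task_error" and value:
        return "task error!"
    if name.endswith("_tokens"):
        return f"{name} = {int(value)}"
    return None  # channel not of interest here

print(on_channel_change("completion_tokens", 128.0))  # → completion_tokens = 128
```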
Callbacks
Beyond the standard onTaskStart and onTaskComplete, the Agent fires two additional callbacks:
onToolCall
Fires when the model requests tool execution, before the tools actually run. The info dict contains a tool_calls list with each tool's id, name, and arguments. Use this to log, filter, or intercept tool calls before execution.
onTaskError
Fires when an API call fails or tool execution encounters an error. The info dict contains an error object with type, message, code, and model fields. The Agent automatically formats common LiteLLM errors (authentication failures, rate limits, context window exceeded, service unavailable) into user-friendly messages.
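A Callback DAT handling both might look like the following sketch. Only the info fields documented above are assumed; the return values are there purely so the handlers are easy to exercise outside TouchDesigner:

```python
def onToolCall(info):
    """Log each requested tool before execution."""
    lines = [
        f"tool requested: {c['name']} ({c['id']}) args={c['arguments']}"
        for c in info.get("tool_calls", [])
    ]
    for line in lines:
        print(line)
    return lines

def onTaskError(info):
    """Surface the formatted error from the info dict."""
    err = info.get("error", {})
    line = f"[{err.get('type')}] {err.get('message')} (model: {err.get('model')})"
    print(line)
    return line

# Exercising the handlers with fake payloads:
onToolCall({"tool_calls": [{"id": "c1", "name": "search", "arguments": "{}"}]})
onTaskError({"error": {"type": "RateLimitError", "message": "slow down",
                       "code": 429, "model": "gpt-4o"}})
```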
Best Practices
- Start with ChatTD model selection for quick setup, then switch to Custom Model when you need per-agent model control.
- Set Max Tokens appropriately on the Model page. A low limit causes truncated responses, so increase it for longer outputs.
- Use Tool Turn Budget wisely. A budget of 1 is sufficient for most single-tool workflows. Increase to 3-5 for complex agentic tasks where the model needs to gather information iteratively.
- Enable Prompt Caching on the I/O page when making repeated calls with similar conversation history. This reduces costs significantly on providers like Anthropic.
- Use the Chain ID parameter when integrating with orchestration systems. Setting a consistent Chain ID groups related API calls together for tracing and analytics.
- Monitor with CHOP channels rather than polling parameters. The CHOP output provides reactive state updates at 60fps.
Troubleshooting
- “Duplicate tool name detected”: Two operators in the Tool sequence expose tools with the same name. Remove one operator or reconfigure tool names to be unique.
- Tool calls not executing: Verify `Use LOP Tools` is enabled on the Tools page and that tool operators are wired into the `External Op Tools` sequence with their `Active` state set to `enabled` or `forced`.
- Empty responses: Check that `Max Tokens` on the Model page is set high enough. Very low values can cause truncated or empty responses.
- Rate limit errors: The Agent surfaces provider-specific rate limit messages. Wait a moment and retry, or switch to a different provider/model.
- Model not supporting images: If you see “content must be a string” errors, the selected model does not support multimodal input. Switch to a vision-capable model.
- Streaming interruptions: Mid-stream errors are automatically reported. Check your network connection and the provider’s service status.
- Tool budget exhausted: If the model keeps requesting tools but the budget is used up, increase `Tool Turn Budget` or simplify the task so fewer tool rounds are needed.
Parameters
- `op('agent').par.Agentstatus` Str - Default: `""` (empty string)
- `op('agent').par.Active` Toggle - Default: `False`
- `op('agent').par.Prompt` Str - Default: `""` (empty string)
- `op('agent').par.Call` Pulse - Default: `False`
- `op('agent').par.Cancelcall` Pulse - Default: `False`
- `op('agent').par.Contextop` OP - Default: `""` (empty string)
- `op('agent').par.Useprompt` Toggle - Default: `True`
- `op('agent').par.Useinputs` Toggle - Default: `True`
- `op('agent').par.Useannotate` Toggle - Default: `True`
- `op('agent').par.Sendtopimage` Toggle - Default: `False`
- `op('agent').par.Topimage` TOP - Default: `""` (empty string)
- `op('agent').par.Outputtoannotate` Toggle - Default: `False`
- `op('agent').par.Structuredoutput` Toggle - Default: `False`
- `op('agent').par.Schemadat` DAT - Default: `""` (empty string)
- `op('agent').par.Schemaname` Str - Default: `""` (empty string)
- `op('agent').par.Thinkingphrases` Str - Default: `""` (empty string)
- `op('agent').par.Sessionid` Str - Default: `""` (empty string)
- `op('agent').par.Clearsession` Pulse - Default: `False`
- `op('agent').par.Savesession` Pulse - Default: `False`
- `op('agent').par.Loadsession` File - Default: `""` (empty string)
- `op('agent').par.Chainid` Str - Default: `""` (empty string)
- `op('agent').par.Traceapicall` Toggle - Default: `True`
- `op('agent').par.Modelcontroller` OP - Default: `""` (empty string)
- `op('agent').par.Search` Toggle - Default: `False`
- `op('agent').par.Modelsearch` Str - Default: `""` (empty string)
- `op('agent').par.Usesystemmessage` Toggle - Default: `True`
- `op('agent').par.Systemmessagedat` DAT - Default: `./system_prompt`
- `op('agent').par.Displaysysmess` Str - Default: `""` (empty string)
- `op('agent').par.Editsysmess` Pulse - Default: `False`
- `op('agent').par.Streaming` Toggle - Default: `True`
- `op('agent').par.Limittokens` Toggle - Default: `False`
- `op('agent').par.Maxtokens` Int - Default: `4096` - Range: 1 to 128000 (slider: 1 to 128000)
- `op('agent').par.Temperature` Float - Default: `0.7` - Range: 0 to 2 (slider: 0 to 2)
- `op('agent').par.Enablepromptcaching` Toggle - Default: `False`
- `op('agent').par.Paralleltoolcalls` Toggle - Default: `False`
- `op('agent').par.Maxmessages` Int - Trim oldest messages to stay under limit (0 = unlimited) - Default: `0` - Range: 0 to 500
- `op('agent').par.Maxresultchars` Int - Truncate tool results exceeding this size (0 = unlimited) - Default: `16000` - Range: 0 to 100000
Provider Model Documentation
Consult the documentation for your chosen provider to find supported models, API key information, and usage limits.
View LiteLLM Supported Providers →
- `op('agent').par.Usetools` Toggle - Default: `True`
- `op('agent').par.Toolresultstatus` Str - Default: `""` (empty string)
- `op('agent').par.Toolturnbudget` Int - Max tool turns per call - Default: `10` - Range: 1 to 25
- `op('agent').par.Toolfollowup` Toggle - Generate response after tool execution - Default: `True`
- `op('agent').par.Enablegettool` Toggle - Default: `False`
- `op('agent').par.Toolname` Str - Default: `""` (empty string)
- `op('agent').par.Tooldescription` Str - Default: `Agent sub-task executor`
- `op('agent').par.Tool` Sequence - Default: `0`
- `op('agent').par.Tool0toolop` OP - Default: `""` (empty string)
- `op('agent').par.Pendingtools` Str - Default: `""` (empty string)
- `op('agent').par.Approvetools` Pulse - Default: `False`
- `op('agent').par.Denytools` Pulse - Default: `False`
- `op('agent').par.Approvaltimeout` Int - Auto-deny after N seconds (0 = wait forever) - Default: `0` - Range: 0 to 600
- `op('agent').par.Costbudget` Float - Session cost limit in USD (0 = unlimited) - Default: `1.0` - Range: 0 to 10
- `op('agent').par.Resetcostmeter` Pulse - Default: `False`
- `op('agent').par.Forcefinalresponse` Toggle - When tool budget exhausted with no text, send one more call without tools - Default: `True`
- `op('agent').par.Budgetexhaustedprompt` Str - Injected as user message on final forced call (leave empty to just strip tools) - Default: `Tool turn budget reached. Summarize your findings so far.`
Skills
- `op('agent').par.Skillscount` Str - Default: `""` (empty string)
- `op('agent').par.Useskills` Toggle - Default: `False`
- `op('agent').par.Skillsfolder` Folder - Default: `""` (empty string)
- `op('agent').par.Skillscomp` OP - Default: `""` (empty string)
- `op('agent').par.Scanskills` Pulse - Default: `False`
Profiles
- `op('agent').par.Profilescount` Str - Default: `""` (empty string)
- `op('agent').par.Useprofiles` Toggle - Default: `False`
- `op('agent').par.Profilesfolder` Folder - Default: `""` (empty string)
- `op('agent').par.Scanprofiles` Pulse - Default: `False`
- `op('agent').par.Applyprofiles` Pulse - Default: `False`
- `op('agent').par.Profile` Sequence - Default: `0`
- `op('agent').par.Displayname` Str - Friendly name for UI, dashboards, event sinks, and agent swarm traces. Profiles may set this value. - Default: `""` (empty string)
- `op('agent').par.Displaycolorr` / `Displaycolorg` / `Displaycolorb` RGB - Identity color for the operator tile, compact panels, dashboards, and profile-driven UI. - Defaults: `0.98`, `0.52`, `0.02` - Range: 0 to 1
- `op('agent').par.Exposechat` Toggle - When On, LOP Studio IO discovers this agent for dashboard chat. - Default: `False`
Callbacks
- `op('agent').par.Callbackdat` DAT - Default: `""` (empty string)
- `op('agent').par.Printcallbacks` Toggle - Default: `False`
- `op('agent').par.Createcallbacks` Pulse - Default: `False`
- `op('agent').par.Ontaskstart` Toggle - Default: `True`
- `op('agent').par.Ontaskcomplete` Toggle - Default: `True`
- `op('agent').par.Ontaskerror` Toggle - Default: `True`
- `op('agent').par.Ontoolcall` Toggle - Default: `True`
- `op('agent').par.Ontoolapproval` Toggle - Default: `True`
- `op('agent').par.Onsessionclear` Toggle - Default: `True`
- `op('agent').par.Onsessionsave` Toggle - Default: `True`
- `op('agent').par.Onsessionload` Toggle - Default: `True`
- `op('agent').par.Oncostbudget` Toggle - Default: `True`
Changelog
v2.0.0 - 2026-05-02
- added GetState/GetTranscript public API for dashboard/swarm consumption
- added event sink system (RegisterEventSink, lifecycle events)
- added ghost callback guard and stale Active par reset on reinit
- added display name, display color, UI behavior, UI start mode, Exposechat profile pars
- added Useskills/Skillssource and Useprofiles/Profilesource tier controls with enableExpr
- added Toolresultstatus readout for tool error/warning tracking
- added _prepare_for_release cleanup method
- removed Annotateops/Maxannotateops (speculative annotate context)
- renamed I/O preset labels to All Inputs / Prompt Parameter
- updated compose.json refs (rank_fusion to search_merge, agent_session removal)
- updated category to Controllers
- Deferred tool-result bridge for agent-as-tool support
- Sub-agents return __deferred__ sentinel, parent holds follow-up until resolution
- Sub-agent _finalize delivers real response into parent's tool message via _receive_deferred_result
- Cancellation propagates error result to parent when sub-agent is cancelled
- Re-entrancy guard rejects sub-agent invocation when already busy
- Attach caller reference to outgoing tool_call objects for bridge discovery
- New awaiting_subagent task state while deferred calls are outstanding
- Ordered message flushing preserves original tool_call sequence across deferred and sync results
- Run envelope (_build_run_envelope) with tool history, conversation tail, chain metadata
- Enriched final callback info with tool_calls, tool_results, run_envelope
- Tool result normalization and best-effort tool_call_id repair for malformed rows
- Serialization helpers for compact tool call/result previews in callbacks
- Context grabber passes additional_files and audio_path through to API calls
- Clean up _call_audio_path on finalize
- Reset emptyCallbacks to generic stubs
- added Gemini 3.x thinking_blocks round-trip support
- updated compose.json
- rename extension class from AgentV2EXT to AgentEXT
- replace legacy v1 extension with agent_v2 codebase
- add extends: util-agent-core, util-chained-callbacks
- add tool approval gate (off/all/destructive/unknown)
- add skills system with dynamic registry and profile support
- add task state, token, and cost tracking dependencies
- add CHOP channels for task state, token data, tool counts
- add onToolApproval callback to emptyCallbacks template
- add 5-page parameter layout (Agent, Model, Tools, Callbacks, About)
- add I/O Preset system with composable input/output toggles
- add Annotate I/O with persistent conversation mirroring
- add model search filtering via ChatTD dynamic models
- add Sendtopimage and Topimage for vision via TOP input
- add Chain ID and Trace API Call tracking
- add Limit Tokens toggle and raised MaxTokens ceiling
- add Tool sequence with Toolop and Toolactive template pars
- remove legacy tool_history_manager, conversation_processor_callbacks, script_model_info_callbacks, turn_table_format
- bump manifest to 2.0.0 with refreshed description
- add docs/compose.json, guide.md, tool_approval.md
v1.3.3 - 2026-03-01
- Explicitly set tool_choice='auto' for Groq compatibility when tools enabled
- Add budget enforcement before tool execution to drop calls after budget exhausted
- Set tool_choice='none' on follow-up calls when budget exhausted for Groq compatibility
- Persist tags across follow-up calls for trace grouping
- Improve budget status logging in follow-up calls
- Add par.Traceapicall toggle to exclude agent from trace generation
- Pass trace_api_call to ChatTD Customapicall
- Initial commit
v1.3.2 - 2025-09-01
- Cleaned menu; added force option to tool choice.
- Added chainid parameter; if readOnly, it is set automatically.
v1.3.1 - 2025-08-17
- Added duplicate tool name detection with clear error messages and API call abortion
- Fixed Claude/Anthropic follow-up call compatibility by providing tools when budget exhausted
- Enhanced Logger component to handle both 2-parameter and 3-parameter calls flexibly
- Implemented proper DEBUG/INFO/WARNING/ERROR filtering based on Showlogs parameter
- Converted 95% of verbose logs from INFO to DEBUG level, keeping only critical information as INFO
- Improved conversation cleanup to prevent tool call loops and invalid argument propagation
- Fixed tool call deduplication to prevent infinite loops from LLMs generating identical calls
- Enhanced streaming tool detection across different providers during responses
- Improved tool history storage and cleanup to prevent memory leaks
- Fixed Reset method to properly clear tool_history_unified table
- Better chain ID generation and tracking across multi-turn conversations
- Enhanced compatibility with agent session and orchestrator components
- Streamlined callback execution with consolidated logging to reduce overhead
- Improved async tool execution with better cancellation and cleanup
- Enhanced error messages for configuration issues and tool failures
v1.3.0 - 2025-08-13
- Multi-Turn Tool Calling: Added `Toolbudget` parameter for multiple LLM calls within a single request
- Parallel Tool Execution: Added `Paralleltoolcalls` parameter for simultaneous tool execution
- Turn Table System: New `turn_table` DAT captures all conversation events (streaming, tool calls, results)
- Chain ID Tracking: `chain_id` parameter for tracking related calls across conversations
- Reasoning Model Support: Added `Reasoninglevel` parameter for thinking models (o1-preview, o1-mini)
## Improvements
- Tool Call Detection: Better tool call detection across providers during streaming
- Tool History Management: Unified tool tracking with proper call/result correlation
- Streaming Architecture: Turn-based streaming with real-time turn table updates
- Output System: Decoupled output formatting - external scripts process turn table data
- Callback System: Enhanced callbacks with tool history and chain context
## Bug Fixes
- Tool Call Parsing: Fixed tool call detection in various response formats
- Streaming Integration: Fixed tools not being detected during streaming responses
- Turn Boundaries: Fixed issues with multi-turn conversation boundaries
- Memory Management: Better cleanup of tool-related data structures
## Breaking Changes
- Turn Table Primary: Turn table is now primary source of conversation data (not output_dat)
- New Parameters: `Toolbudget` and `Paralleltoolcalls` parameters added
- Chain ID Required: Chain IDs now required for proper multi-turn tracking
v1.2.3 - 2025-07-24
🧹 Code Refactoring & Maintenance
- Improved Tool Loading Robustness: The agent's `parse_tools` method has been made more resilient. It now gracefully handles `GetTool`-enabled operators that do not explicitly provide a `response_format` in their callback info, defaulting to `"json"`. This prevents an entire tool from failing to load and improves backward compatibility with older or non-compliant tool operators.
v1.2.2 - 2025-07-22
🚀 New Features
- Built-in Thinking Filter: Integrated the functionality of the `ThinkingFilter` LOP directly into the Agent. This allows for the removal of `<think>` blocks from conversations and final responses without needing a separate operator.
- Added `Thinkingfilter` (Filter Mode), `Thinkingreplace` (Replacement Text), and `Thinkingphrases` (Start/End Phrases) parameters to an "I/O" page.
- The filter correctly processes both outgoing conversation history and incoming model responses, including real-time filtering of the `output_dat` during streaming.
- UI Warning for Streaming + Tools: Added a dynamic parameter label system to warn users when both `Streaming` and `Usetools` are active, as this combination may be unstable. The parameter labels for both will change to include a warning symbol.
🐛 Critical Bug Fixes
- Fixed Streaming Callback Flood: Resolved a critical bug where `onTaskComplete` and other finalization logic would execute on every single data chunk during a streaming response. The agent now correctly identifies the true final chunk, ensuring callbacks fire only once.
- Restored `Cancelcall` for Streaming: The fix for the callback flood also resolved an issue where `Cancelcall` would fail during streaming because the `current_api_call_id` was being cleared prematurely. Cancellation now works as expected throughout the entire streaming process.
- Corrected Tool Call Detection Structure: Fixed a structural parsing error where the agent would fail to find tool calls in the final chunk of a streaming response. The logic now correctly checks for tool calls in both `response.choices[0].delta.tool_calls` and `response.choices[0].message.tool_calls`, making tool detection more robust for different API response structures.
🧹 Code Refactoring & Maintenance
- Removed Obsolete Parameters & Methods: Deprecated and removed the `Outputmode` and `Conversationformat` parameters, as their logic was superseded by direct handling within the `HandleResponse` method.
- Disabled Dead Code: The obsolete methods associated with the old output modes (`execute_output_mode`, `update_conversation_dat`) have been neutralized to prevent confusion and improve code clarity.
v1.2.1 - 2025-07-20
🐛 Critical Bug Fix
- Fixed Group Callback Mechanism: Resolved critical issue where group callbacks (used by orchestrator and other external systems) were not being executed
- Root cause: Agent was storing callback info in `self.group_callback` and `self.groupOP` but looking for it in `self.last_group_callback` and `self.last_groupOP`
- Solution: Added proper transfer of callback info to `last_` variables in the `Call` method
- Impact: Enables proper communication between Agent and orchestrator systems for multi-step workflows
- Modified `Call` method to properly transfer group callback information before API calls
- This fix enables the Agent Orchestrator's autonomous mode to function correctly
- Maintains backward compatibility with existing callback patterns
🔧 Technical Details
v1.2.0 - 2025-07-13
Added the tool manager for tool logging and tool history.
v1.1.3 - 2025-07-01
- Enhanced Tool Callbacks for Orchestration:
- The `HandleResponse` callback method now includes a new, comprehensive `agent_tool_history` object in its `callbackInfo` dictionary whenever tools are used.
- This object preserves the complete context of a tool interaction, including the initial `tool_calls` generated by the model and the final `tool_results` from execution.
- This critically solves an issue where tool call information was being lost on agents that use the "Tool Follow-up" feature, enabling robust, stateful orchestration.
- Improved State Management:
- The agent's internal tool history is now cleared after every API call cycle. This ensures the agent remains stateless and prevents tool call information from one operation from leaking into the next.
- This change reinforces the design pattern where long-term conversation and history management is the responsibility of a higher-level component (like the `Agent Orchestrator`).
- Backwards Compatibility:
- The existing `tool_results` key is still populated in the `callbackInfo` to ensure that older components relying on it continue to function without modification.
v1.1.2 - 2025-06-30
This update focuses on a major refactoring of the agent's tool handling system, removing legacy code, improving robustness, and ensuring compatibility with modern, standardized tool providers.
#### ✨ Features & Enhancements
- Generic Tool Result Handling: The `_make_follow_up_call_with_tool_results` method was completely overhauled. It no longer contains hardcoded logic for specific tools (like the old knowledge graph). It now intelligently formats any successful tool's dictionary output into a clean JSON string, making the agent compatible with any `GetTool`-based operator.
- API-Aware Tool Roles: The agent now dynamically sets the message role for tool results (`'function'` for Gemini, `'tool'` for others). This resolves a critical API incompatibility that was causing empty follow-up responses from the Gemini backend.
- Streamlined Tool Parsing: Redundant calls to parse tools within the `Call` method have been eliminated. Tools are now parsed only once, improving efficiency and code readability.
#### 🐛 Bug Fixes
- Robust Tool Loading: Fixed a `KeyError: 'args'` in the `parse_tools` method, allowing the agent to safely load newer tools that don't use the legacy `args` dictionary.
- Corrected Follow-up Logic: Resolved a `NameError` and a subsequent critical logic flaw where processed tool results were being ignored in the final follow-up call to the LLM. The agent now correctly uses the processed content.
#### 🧹 Code Cleanup & Refactoring
- Removed Legacy MCP Logic: All code related to the old MCP Client Manager, which was handled directly within the agent, has been removed. This aligns the agent with the new architecture where MCP clients are standard tool providers.
- Removed Legacy Parameter Tool Handling: The specific logic for handling `adjust_..._parameters` tools within `_call_tools_async` has been removed, as this functionality is now managed by a dedicated `Tool Parameter` operator. The agent's tool execution method is now leaner and more focused.
v1.1.1 - 2025-05-12
Added JSON mode.
v1.1.0 - 2025-05-03
Moved to LiteLLM as backend.
Added improved model page + new model info display toggle.
Added image TOP parameter.
Added audio support (for Gemini multimodal, and possibly other providers).
v1.0.0 - 2024-11-06
Initial release