
Fal.ai Operator

The fal.ai operator provides a direct interface to fal.ai’s powerful image and video generation models, allowing you to create, transform, and enhance visual content using state-of-the-art AI like FLUX, Stable Diffusion, and more.

A key feature of this operator is its ability to dynamically generate parameters. When you select an Endpoint ID, the operator fetches the corresponding API schema from fal.ai and automatically creates the necessary input parameters for that specific model on the ‘API Parameters’ page. This ensures the interface always matches the requirements of the chosen AI service, whether it’s for text-to-image, image-to-image, or video generation.

🔧 GetTool Enabled 1 tool

This operator exposes 1 tool that allows Agent and Gemini Live LOPs to generate images and videos using fal.ai's AI models, with parameters dynamically filtered based on the selected endpoint and the parameter filter setting.

The fal.ai operator exposes a fal_ai_generate tool that allows AI agents to generate content using any configured fal.ai endpoint. The operator dynamically filters which parameters are exposed to agents based on the “GetTool Par Filter” setting, providing flexible control over agent access to model parameters.

To use the fal.ai operator, you need the fal-client Python package.

  • Install Dependencies: If you see errors or the operator doesn’t function, click the Install Dependencies pulse parameter on the ‘API’ page to install the required Python packages using the project’s python_manager.

To find the correct endpoint ID for your desired AI model, you have several options:

  1. Official fal.ai Website: Visit fal.ai and browse the model gallery
  2. Model Gallery: Navigate to fal.ai/models to explore available models
  3. Documentation: Check docs.fal.ai for comprehensive API documentation

Once you’ve found a model you want to use:

  1. Log into your fal.ai account and navigate to the desired model page
  2. Look for the API tab or documentation section on the model page
  3. Copy the endpoint ID - it typically follows the format fal-ai/model-name (e.g., fal-ai/flux-dev, fal-ai/kling-video/v2/master/image-to-video)
  4. Paste the endpoint ID into the Endpoint ID parameter in the fal.ai operator

Important: fal.ai pricing is based on the models used within workflows, not per workflow. Costs vary significantly depending on the type of content generated.

  • Image Models: Typically charged per megapixel (e.g., ~$0.003-$0.05 per megapixel)
  • Video Models: Usually charged per second of video or per video (e.g., ~$0.095-$0.5+ per second or per video)
  • Complex Workflows: May use multiple models, with total cost being the sum of each model’s pricing

Popular Image Models:

  • FLUX.1 [schnell]: ~$0.003 per megapixel (fastest, lowest cost)
  • FLUX.1 [dev]: ~$0.025 per megapixel (balanced quality/cost)
  • FLUX.1 [pro]: ~$0.05 per megapixel (highest quality)

Popular Video Models:

  • Kling v1.6: ~$0.095 per video second
  • Kling 2.1 Master: ~$0.28 per video second
  • MiniMax Video-01 Live: ~$0.5 per video
  • Wan-2.1 Pro: ~$0.4 per video
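As a rough sketch, the total cost of a workflow is the sum of each model call's cost. The prices and endpoint IDs below are illustrative only, mirroring the examples above; always check fal.ai/pricing for current rates:

```python
# Rough cost estimator. Prices mirror the examples above and WILL drift;
# endpoint IDs here are illustrative, not authoritative.
PRICE_PER_MEGAPIXEL = {
    "fal-ai/flux/schnell": 0.003,
    "fal-ai/flux/dev": 0.025,
}
PRICE_PER_VIDEO_SECOND = {
    "fal-ai/kling-video/v1.6": 0.095,
}

def image_cost(endpoint, width, height):
    """Cost of one image, charged per megapixel."""
    return PRICE_PER_MEGAPIXEL[endpoint] * (width * height) / 1_000_000

def video_cost(endpoint, seconds):
    """Cost of one video, charged per second."""
    return PRICE_PER_VIDEO_SECOND[endpoint] * seconds

# A workflow with one 1024x1024 FLUX.1 [dev] image and a 5 s Kling 1.6 video:
total = image_cost("fal-ai/flux/dev", 1024, 1024) + video_cost("fal-ai/kling-video/v1.6", 5)
```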

Cost Management Tips:

  1. Monitor Usage: Regularly check your fal.ai dashboard for credit consumption
  2. Start Small: Test with lower-cost models like FLUX.1 [schnell] before using expensive video models
  3. Video Workflows: Be especially cautious with video generation - costs can add up quickly
  4. Check Model Pages: Visit individual model pages for the most current pricing information
  5. Set Budgets: Consider setting usage alerts or budgets in your fal.ai account settings

Credit Policy:

  • Pre-purchase Required: You must buy credits in advance before using any models
  • Credit Expiration: Credits expire after 365 days (or 90 days for promotional credits)
  • No Refunds: Credits are non-refundable and non-transferable

For the most current and detailed pricing information, always refer to:

  • Main Pricing Page: fal.ai/pricing
  • Individual Model Pages: Each model page shows specific pricing details
  • Support: Contact support@fal.ai for pricing on unlisted models

This operator does not use standard wire inputs. Instead, it uses TOPs connected directly to its parameters for image-based models.

  • Dynamic TOP Inputs: For models that require image inputs (e.g., image-to-image, inpainting), the operator will dynamically create TOP type parameters on the ‘API Parameters’ page. You can connect any TOP (like a Movie File In or Ramp) to these parameters. The operator automatically converts the texture data to the required format for the API.

Outputs:

  • out1 (TOP): The generated image or video output as a texture. When Auto Save is enabled, this TOP displays the most recently saved local file.
  • out2 (Text DAT): Contains the full, formatted JSON response from the fal.ai API call.
  • out3 (Table DAT): A log of all executions, showing timestamps, status, duration, and results.
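From a script, the execution log on out3 can be read like any Table DAT. A minimal sketch (column names follow the list above; how you reference the output DAT depends on your network):

```python
def last_execution(log_table):
    """Return the most recent row of the execution-log Table DAT as a dict
    keyed by the header row (e.g. timestamp, status, duration, results)."""
    headers = [cell.val for cell in log_table.row(0)]
    newest = [cell.val for cell in log_table.row(log_table.numRows - 1)]
    return dict(zip(headers, newest))
```

Call it with whatever DAT your network wires from out3, e.g. `last_execution(op('fal_ai_log'))` (the DAT name here is an assumption).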
Status (Status) op('fal_ai').par.Status Str
Default:
"" (Empty String)
Active (Active) op('fal_ai').par.Active Toggle
Default:
Off
Execute / Call (Execute) op('fal_ai').par.Execute Pulse
Default:
None
GetTool Par Filter (Apiparfilter) op('fal_ai').par.Apiparfilter Str
Default:
prompt
API Parameters Header

The following parameters are examples of what appears in the API Parameters section after selecting an endpoint like fal-ai/flux/dev. These parameters are automatically generated from the fal.ai API schema:

Prompt (Dynprompt) op('fal_ai').par.Dynprompt Str
Default:
"" (Empty String)
Image Size (Dynimagesize) op('fal_ai').par.Dynimagesize Menu
Default:
landscape_4_3
Num Inference Steps (Dynnuminfsteps) op('fal_ai').par.Dynnuminfsteps Int
Default:
28
Seed (Dynseed) op('fal_ai').par.Dynseed Int
Default:
0
Guidance Scale (Dynguidance) op('fal_ai').par.Dynguidance Float
Default:
3.5
Enable Safety Checker (Dynenablesafetychecker) op('fal_ai').par.Dynenablesafetychecker Toggle
Default:
On

Note: The exact parameters depend on the selected endpoint. Image-to-image models will include additional parameters for input images, while other models may have different parameter sets entirely.

API Key (Apikey) op('fal_ai').par.Apikey Str
Default:
"" (Empty String)
Endpoint ID (Endpoint) op('fal_ai').par.Endpoint StrMenu
Default:
fal-ai/flux/dev
Refresh Endpoint (Refreshendpoint) op('fal_ai').par.Refreshendpoint Pulse
Default:
None
Discover Endpoints (Discoverendpoints) op('fal_ai').par.Discoverendpoints Pulse
Default:
None
Install Dependencies (Installdependencies) op('fal_ai').par.Installdependencies Pulse
Default:
None
Enable Execution (Enableexecution) op('fal_ai').par.Enableexecution Toggle
Default:
On
Execution Mode (Executionmode) op('fal_ai').par.Executionmode Menu
Default:
sync
Auto Save Images (Autosave) op('fal_ai').par.Autosave Toggle
Default:
Off
Save Path (Savepath) op('fal_ai').par.Savepath Str
Default:
project.folder + "/fal_ai_images/"
Save Format (Saveformat) op('fal_ai').par.Saveformat Menu
Default:
png
Save Quality (Savequality) op('fal_ai').par.Savequality Float
Default:
0.9
Range:
0 to 1
Naming Pattern (Savenamingpattern) op('fal_ai').par.Savenamingpattern Str
Default:
fal_ai_{timestamp}_{endpoint}
Save Base64 in JSON (Savebase64injson) op('fal_ai').par.Savebase64injson Toggle
Default:
Off
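For reference, the {timestamp} and {endpoint} tokens in the Naming Pattern behave like str.format-style substitutions. A sketch of plausible expansion logic (the operator's actual token handling may differ):

```python
from datetime import datetime

def expand_name(pattern, endpoint):
    """Expand naming-pattern tokens into a filesystem-safe file stem."""
    return pattern.format(
        timestamp=datetime.now().strftime("%Y%m%d_%H%M%S"),
        endpoint=endpoint.replace("/", "_"),  # slashes are not valid in filenames
    )
```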
Basic Text-to-Image Generation:

  1. Create a fal.ai operator.
  2. On the API page, select an image generation Endpoint ID, like fal-ai/flux/dev.
  3. Go to the API Parameters page. Wait a moment for the dynamic parameters to appear.
  4. Find the Dynprompt parameter and enter your desired text (e.g., “A futuristic city at sunset, cyberpunk style”).
  5. Click the Execute / Call pulse.
  6. The generated image appears in the out1 TOP.
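The steps above can also be driven from a script using the parameter names documented on the 'API Parameters' page. A minimal sketch (Dynprompt and Dynseed apply to endpoints like fal-ai/flux/dev; other endpoints expose different dynamic parameters):

```python
def run_text_to_image(fal, prompt, seed=None):
    """Set dynamic parameters on a fal.ai operator and trigger a call.

    `fal` is the operator, e.g. op('fal_ai'). Parameter names follow the
    fal-ai/flux/dev example above; adjust them for your endpoint.
    """
    fal.par.Dynprompt = prompt
    if seed is not None:
        fal.par.Dynseed = seed
    fal.par.Execute.pulse()  # same as clicking the Execute / Call pulse
```

For example: `run_text_to_image(op('fal_ai'), 'A futuristic city at sunset, cyberpunk style')`.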

Image-to-Image Transformation:

  1. Select an Endpoint ID that supports image inputs, such as fal-ai/flux/dev/image-to-image.
  2. On the API Parameters page, a new TOP parameter (e.g., Dyninputimage) will appear.
  3. Connect an image TOP (e.g., a Movie File In TOP) to this new parameter.
  4. Adjust other parameters like Dynprompt to guide the transformation.
  5. Click Execute / Call. The transformed image will appear in the out1 TOP.

Video Generation:

  1. Select a video generation Endpoint ID.
  2. Configure the dynamic parameters (e.g., text prompt, seed).
  3. Enable Auto Save on the Auto Save page to automatically save the resulting video file.
  4. Click Execute / Call.
  5. The operator will download the generated video and the out1 TOP will display it once saved. The local file path will appear in the internal resultsTable.

The fal.ai operator uses sophisticated parameter filtering to control which dynamically generated parameters are exposed to AI agents through the GetTool system. This filtering is controlled by the GetTool Par Filter parameter and supports advanced pattern matching:

The parameter filter uses Unix shell-style wildcards with the following features:

  • * - Matches any number of characters (including none)
  • ? - Matches exactly one character
  • ^pattern - Excludes parameters matching the pattern (negation)
  • Multiple patterns - Separate with spaces for OR logic
Filter Pattern        Description              Matches
*                     All parameters           prompt, seed, guidance, image_size, etc.
prompt                Exact match              prompt only
*prompt*              Contains "prompt"        prompt, negative_prompt, style_prompt
seed guidance         Multiple specific        seed OR guidance
*size*                Contains "size"          image_size, canvas_size, output_size
* ^seed               All except seed          all parameters except those matching "seed"
*prompt ^negative*    Prompts but not negative prompt, style_prompt (but not negative_prompt)
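The matching rules can be sketched with Python's fnmatch module, which implements the same Unix shell-style wildcards. This is an approximation of the documented behavior, not the operator's actual code:

```python
import fnmatch

def filter_params(names, pattern_string):
    """Filter parameter names the way the GetTool Par Filter is documented:
    space-separated shell-style patterns are OR'd together, and patterns
    starting with ^ exclude matches."""
    tokens = pattern_string.split()
    includes = [t for t in tokens if not t.startswith("^")]
    excludes = [t[1:] for t in tokens if t.startswith("^")]
    kept = []
    for name in names:
        included = any(fnmatch.fnmatch(name, p) for p in includes)
        excluded = any(fnmatch.fnmatch(name, p) for p in excludes)
        if included and not excluded:
            kept.append(name)
    return kept
```

For example, `filter_params(["prompt", "negative_prompt", "seed"], "*prompt ^negative*")` keeps only `prompt`, matching the table above.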

This filtering system allows you to:

  1. Control Agent Access - Limit which parameters agents can modify
  2. Simplify Tool Interface - Expose only relevant parameters to reduce complexity
  3. Maintain Security - Hide sensitive parameters from agent control
  4. Optimize Performance - Reduce tool schema size for faster agent processing

The default filter prompt exposes only prompt-related parameters, providing a safe starting point for agent integration.

  • API Keys: The operator manages your API key securely. Enter it once in the Apikey parameter and it is stored for future use, preferably in the central ChatTD KeyManager when available.
  • Dynamic Resolution: For image models, the operator can automatically detect the output resolution from the API response and update a get_output operator in the network to match, simplifying workflows.
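As an illustration of what that detection involves, image endpoints typically return width and height alongside each image URL in the JSON response (field names here follow common fal.ai response schemas; verify against your endpoint):

```python
import json

def output_resolution(response_json):
    """Extract (width, height) of the first image in a fal.ai-style response."""
    data = json.loads(response_json)
    image = data["images"][0]
    return image["width"], image["height"]
```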