The Nadoo AI Agent Node supports six execution modes (strategies) that determine how the LLM reasons, uses tools, and refines its output. Choosing the right mode for your task is one of the most impactful decisions you make when building a workflow. This page helps you understand when to use each mode, how they compare, and how to configure them.
Customer greeting bot: The user says hello, and the agent responds with a friendly greeting and a list of things it can help with. No tools or complex reasoning needed.
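For a simple, tool-free agent like this, a minimal configuration might look like the sketch below. The `agent_mode` value `"simple"` is an assumption modeled on the chain-of-thought example later on this page; check your node's schema for the exact mode identifier.

```json
{
  "agent_mode": "simple",
  "model": "gpt-4o",
  "system_prompt": "You are a friendly support assistant. Greet the user and list the things you can help with."
}
```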
Forces the LLM to show its reasoning before answering. The model breaks the problem into explicit steps, works through each one, and then produces a final answer.
```json
{
  "agent_mode": "chain_of_thought",
  "model": "gpt-4o",
  "system_prompt": "You are a math tutor. Always show your work.",
  "cot_config": {
    "strategy": "step_by_step",
    "max_steps": 10,
    "show_reasoning": true
  }
}
```
Financial analysis bot: A user asks “Should I refinance my mortgage?” The agent breaks this into sub-questions (current rate vs. new rate, closing costs, break-even timeline), works through each calculation, and provides a reasoned recommendation.
Implements the Reasoning + Acting loop. The LLM thinks about what it needs to do, selects a tool to gather information, observes the result, and repeats until it can provide a final answer.
Travel research agent: A user asks “What’s the best time to visit Japan for cherry blossoms, and how much would flights cost from New York?” The agent searches for cherry blossom season dates, then searches for flight prices for those dates, and synthesizes the results into a recommendation.
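A ReAct configuration for an agent like this might look as follows. The `"react"` mode identifier, the `tools` list, and the `react_config` field names (`max_iterations`, `stop_on_final_answer`) are assumptions patterned on the chain-of-thought example above; the tool name `web_search` is a placeholder for whatever search tool your workflow exposes.

```json
{
  "agent_mode": "react",
  "model": "gpt-4o",
  "system_prompt": "You are a travel research assistant. Use the search tool to gather current information before answering.",
  "tools": ["web_search"],
  "react_config": {
    "max_iterations": 8,
    "stop_on_final_answer": true
  }
}
```

Capping `max_iterations` matters in this mode: without a limit, a think–act–observe loop can keep issuing tool calls on an ambiguous question.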
Uses the LLM’s native function calling API for structured, type-safe tool execution. The model receives JSON schemas describing available tools and returns structured calls that are executed by the runtime.
Project management agent: A user says “Create a task for the homepage redesign, assign it to Sarah, and set the deadline to next Friday.” The agent calls create_task with structured arguments {"title": "Homepage redesign", "assignee": "Sarah", "deadline": "2026-03-13"}.
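Because this mode hands the model JSON schemas for each tool, the configuration centers on tool definitions. The sketch below is an assumption: the `"function_calling"` mode identifier and the exact tool-definition shape come from common function-calling APIs (a JSON Schema `parameters` object), not from a confirmed Nadoo schema, so adapt the field names to your node.

```json
{
  "agent_mode": "function_calling",
  "model": "gpt-4o",
  "tools": [
    {
      "name": "create_task",
      "description": "Create a task in the project tracker.",
      "parameters": {
        "type": "object",
        "properties": {
          "title": { "type": "string" },
          "assignee": { "type": "string" },
          "deadline": { "type": "string", "format": "date" }
        },
        "required": ["title", "assignee"]
      }
    }
  ]
}
```

The `required` array is what makes this mode type-safe in practice: the model cannot return a `create_task` call that omits the title or assignee.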
The LLM generates a response, evaluates and critiques its own output against a set of criteria, and iteratively improves it. This produces noticeably higher-quality output for tasks where the first draft is rarely good enough.
Documentation writer: A user asks the agent to write API documentation for an endpoint. The agent drafts the docs, then evaluates them for accuracy (does the description match the schema?), clarity (is it easy to understand?), completeness (are all parameters documented?), and tone (is it professional?). It revises weak areas until all criteria pass.
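A configuration for this mode might name the evaluation criteria explicitly and bound the revision loop. This is a sketch under assumptions: the `"self_refine"` mode identifier and the `refine_config` fields (`criteria`, `max_revisions`, `pass_threshold`) mirror the style of the chain-of-thought example above rather than a confirmed schema.

```json
{
  "agent_mode": "self_refine",
  "model": "gpt-4o",
  "system_prompt": "You are a technical writer producing API documentation.",
  "refine_config": {
    "criteria": ["accuracy", "clarity", "completeness", "tone"],
    "max_revisions": 3,
    "pass_threshold": 0.9
  }
}
```

A revision cap is worth setting deliberately: each critique-and-rewrite pass is a full additional model call, so cost grows linearly with `max_revisions`.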
Explores multiple reasoning paths simultaneously, evaluates each path, prunes unpromising ones, and selects the best result. This is the most computationally expensive mode but produces the best outcomes for problems with multiple valid approaches.
Marketing strategy agent: A user asks “How should we launch our new product?” The agent generates three initial strategies (influencer campaign, content marketing, paid ads), evaluates each on feasibility, cost, and expected ROI, prunes the weakest, then expands the surviving strategies with detailed execution plans before selecting the best overall approach.
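The search described above (generate three strategies, score them, prune the weakest, expand the survivors) maps onto a small set of tree parameters. The config below is a hypothetical sketch: the `"tree_of_thoughts"` identifier and the `tot_config` fields (`branching_factor`, `max_depth`, `evaluation_criteria`, `prune_below_score`) are assumptions extrapolated from the chain-of-thought example earlier on this page.

```json
{
  "agent_mode": "tree_of_thoughts",
  "model": "gpt-4o",
  "system_prompt": "You are a marketing strategist.",
  "tot_config": {
    "branching_factor": 3,
    "max_depth": 2,
    "evaluation_criteria": ["feasibility", "cost", "expected_roi"],
    "prune_below_score": 0.4
  }
}
```

Token usage in this mode grows roughly with `branching_factor` raised to `max_depth`, which is why it is the most expensive of the six and best reserved for genuinely open-ended problems.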