Overview
ReAct (Reasoning + Acting) mode implements an iterative loop in which the LLM alternates between reasoning about the problem and taking actions (tool calls) to gather information. After each action, the model observes the result and decides whether to take another action or produce a final answer. This mode is essential for tasks that require real-time data, multi-step research, or adaptive problem solving, where the agent must choose its own strategy based on intermediate results.
How It Works
Thought
The LLM reasons about the current state of the problem: What information is missing? What should it do next? This reasoning is emitted as an `agent_iteration` SSE event.
Action
Based on its reasoning, the LLM selects a tool and specifies its arguments. The tool selection is communicated via an `agent_tool_call` SSE event.
Observation
The tool executes and returns its result. The result is fed back to the LLM as an observation, emitted as an `agent_tool_result` SSE event.
Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| `agent_mode` | string | — | Must be `"react"` |
| `react_config.max_iterations` | number | 10 | Maximum Think-Act-Observe cycles before forced termination |
| `react_config.tools` | string[] | `[]` | List of tool names the agent can use |
| `react_config.early_stop` | boolean | `true` | Allow the agent to stop before `max_iterations` when it has enough information |
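Putting the parameters above together, a minimal node configuration might look like the sketch below. The tool names are illustrative, and the exact placement of these fields in a workflow definition may differ from what is shown here.

```python
# Sketch of a ReAct node configuration using the parameters from the table above.
# Tool names are illustrative; they must match tools registered in the platform.
react_node_config = {
    "agent_mode": "react",
    "react_config": {
        "max_iterations": 5,  # safety limit on Think-Act-Observe cycles
        "tools": ["web_search", "knowledge_search"],
        "early_stop": True,   # let the agent finish once it has enough info
    },
}
```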
The Think-Act-Observe Loop
Each iteration of the ReAct loop follows a strict textual format that the LLM is trained to produce: a Thought, then an Action, then an Observation, repeated until the model emits a Final Answer.
Tools
The `tools` array specifies which tools the agent can use during execution. Tools must be registered in the platform and can include:
| Tool Type | Examples | Description |
|---|---|---|
| Built-in tools | web_search, calculator | Platform-provided utilities |
| Knowledge search | knowledge_search | Search against Nadoo knowledge bases |
| Custom plugins | my_plugin_tool | Tools from installed plugins |
| MCP tools | mcp_server_tool | Tools from connected MCP servers |
SSE Events
ReAct mode emits rich streaming events that enable detailed progress tracking:
| Event | When | Payload |
|---|---|---|
| `node_started` | Node begins | `{ node_id }` |
| `agent_iteration` | Each Think step | `{ iteration, thought, node_id }` |
| `agent_tool_call` | Agent selects a tool | `{ tool_name, arguments, iteration, node_id }` |
| `agent_tool_result` | Tool returns a result | `{ tool_name, result, iteration, node_id }` |
| `llm_token` | Each token generated | `{ token, node_id }` |
| `llm_finished` | Final answer generated | `{ node_id, total_tokens }` |
| `node_finished` | Node completes | `{ node_id, status, iterations_used }` |
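A consumer of this stream can dispatch on the event name. The sketch below uses the payload field names from the table; the dispatch structure itself is an assumption, not a platform-provided client.

```python
def handle_event(event, data):
    """Dispatch a ReAct SSE event (names and payload fields per the table above).

    Returns the final node status for node_finished, otherwise None.
    """
    if event == "agent_iteration":
        print(f"[iter {data['iteration']}] Thought: {data['thought']}")
    elif event == "agent_tool_call":
        print(f"[iter {data['iteration']}] Action: {data['tool_name']}({data['arguments']})")
    elif event == "agent_tool_result":
        print(f"[iter {data['iteration']}] Observation: {data['result']}")
    elif event == "llm_token":
        print(data["token"], end="")  # stream the final answer as it is generated
    elif event == "node_finished":
        return data["status"]
    return None
```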
Max Iterations and Early Stop
max_iterations
The `max_iterations` parameter is a safety limit. When it is reached:
- The agent is forced to produce a final answer with whatever information it has gathered
- The response may include a note that it could not fully complete the research
- The `node_finished` event includes `iterations_used` for monitoring
early_stop
When `early_stop` is `true` (the default), the agent can terminate the loop at any iteration by producing a “Final Answer” instead of another tool call. This is the normal completion path.
When `early_stop` is `false`, the agent always runs for exactly `max_iterations` cycles. This is rarely needed, but it can be useful when you want to ensure thorough research regardless of early confidence.
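The termination rules above can be made concrete with a small driver sketch. Everything here is illustrative — the `llm` and `tools` interfaces, the `Action: tool[arg]` text format, and the parser are assumptions, not the platform's implementation:

```python
import re

def parse_action(step):
    """Extract tool name and argument from an 'Action: tool[arg]' line (illustrative format)."""
    m = re.search(r"Action:\s*(\w+)\[(.*)\]", step)
    return (m.group(1), m.group(2)) if m else (None, None)

def run_react(llm, tools, question, max_iterations=10, early_stop=True):
    """Drive the Think-Act-Observe loop until a Final Answer or the iteration cap."""
    context = f"Question: {question}\n"
    for _ in range(max_iterations):
        step = llm(context)  # a Thought plus either an Action or a Final Answer
        context += step + "\n"
        if early_stop and "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        name, arg = parse_action(step)
        if name is None:
            break
        observation = tools[name](arg)  # Observation fed back into the context
        context += f"Observation: {observation}\n"
    # max_iterations reached: force a final answer from whatever was gathered
    return llm(context + "Final Answer:").split("Final Answer:", 1)[-1].strip()
```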
Example: Research Workflow
A workflow that researches a topic and generates a report:
Example: Multi-Source Data Gathering
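A sketch of what a multi-source gathering node might look like, combining several tool types from the table above so the agent can pull from the web, internal knowledge bases, and a calculator in one loop. All tool names beyond the built-ins are illustrative.

```python
# Hedged sketch: a ReAct node that gathers data from several sources.
multi_source_node = {
    "agent_mode": "react",
    "react_config": {
        "max_iterations": 8,      # more sources usually need more cycles
        "early_stop": True,
        "tools": [
            "web_search",         # live web data
            "knowledge_search",   # internal knowledge bases
            "calculator",         # aggregate the gathered numbers
        ],
    },
}
```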
ReAct vs. Function Calling
| Aspect | ReAct | Function Calling |
|---|---|---|
| Tool selection | Prompt-based (text reasoning) | Native API (structured JSON) |
| Reasoning visibility | Explicit thoughts in output | Implicit (model’s internal reasoning) |
| Parallel tool calls | One at a time | Multiple per turn |
| Reliability | Depends on prompt adherence | Higher (model-native) |
| Flexibility | Can reason about tool strategy | Follows structured schemas |
| Best for | Exploratory, multi-step research | Structured API calls, data ops |
Use Function Calling when your tools have well-defined schemas and you need reliable, structured invocation. Use ReAct when you want the agent to reason explicitly about its tool-use strategy and adapt dynamically.
Performance Characteristics
| Metric | ReAct Mode |
|---|---|
| LLM calls per execution | 2-10+ (one per iteration + final answer) |
| Latency | High (multiple round-trips + tool execution time) |
| Token usage | High (accumulates context across iterations) |
| Quality ceiling | Very high for research and multi-step tasks |
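Because each iteration re-sends the accumulated context, prompt-token usage grows roughly quadratically with iteration count. A rough estimator, under the simplifying assumption that each cycle appends a fixed number of tokens (all numbers illustrative):

```python
def estimate_prompt_tokens(base_tokens, tokens_per_step, iterations):
    """Rough total prompt tokens across a ReAct run.

    Assumes every LLM call re-sends the full context, and each
    Think-Act-Observe cycle appends tokens_per_step tokens to it.
    """
    total = 0
    context = base_tokens
    for _ in range(iterations):
        total += context            # each call consumes the whole context so far
        context += tokens_per_step  # thought + action + observation appended
    return total
```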
Best Practices
Provide clear tool descriptions
The agent selects tools based on their descriptions. Write detailed, unambiguous descriptions that clearly explain what each tool does and when to use it.
Set conservative max_iterations
Start with 3-5 iterations and increase only if needed. Each iteration adds latency and cost. Most tasks can be solved in 3-5 rounds.
Use a capable model
ReAct requires the model to reason about tool use and interpret results. Use a strong model (GPT-4o, Claude Sonnet 4) for best results. Smaller models may struggle with the reasoning format.
Include a knowledge_search tool for RAG
If your workflow has knowledge bases, include `knowledge_search` as a tool so the agent can retrieve information dynamically based on its reasoning.
Monitor iteration counts
Track how many iterations your ReAct agents typically use. If they consistently hit `max_iterations`, the task may be too complex or the tools may need better descriptions.
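One way to monitor this, assuming you collect the `iterations_used` values from `node_finished` events (the aggregation itself is a sketch, not a platform feature):

```python
def iteration_stats(iterations_used, max_iterations):
    """Summarize how often ReAct runs exhaust their iteration budget."""
    hit_cap = sum(1 for n in iterations_used if n >= max_iterations)
    return {
        "runs": len(iterations_used),
        "avg_iterations": sum(iterations_used) / len(iterations_used),
        "hit_cap_ratio": hit_cap / len(iterations_used),  # high ratio = revisit tools/task
    }
```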