
Overview

The AI Agent Node is the most important node in the Nadoo AI workflow engine. It is the primary interface for invoking large language models and orchestrating complex reasoning patterns. Built on the Agent 2.0 architecture, it supports six distinct execution modes that cover everything from simple Q&A to multi-step tool use and self-reflective generation. Every AI Agent Node is configured with a model, a system prompt, and an execution mode. The mode determines how the node interacts with the LLM — whether it makes a single call, chains multiple reasoning steps, loops with tools, or evaluates parallel reasoning paths.

Agent 2.0 Execution Modes

Standard Mode

The simplest mode. The node sends the user's message (plus conversation history and system prompt) to the LLM and returns the response directly. No loops, no tool calls, no extra reasoning steps.

Best for: Simple Q&A, content generation, translation, summarization.

Configuration:
```json
{
  "agent_mode": "standard",
  "model": "gpt-4o",
  "system_prompt": "You are a helpful assistant.",
  "temperature": 0.7,
  "max_tokens": 4096
}
```
How it works:
  1. Assemble the prompt (system + history + user message)
  2. Call the LLM
  3. Return the response
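The three steps above can be sketched as a single function. This is an illustrative sketch, not the Nadoo implementation: `call_llm` is a placeholder for the real model client, and all names here are assumptions.

```python
# Sketch of the three standard-mode steps. call_llm is a stand-in for the
# configured model client; a real node would invoke the LLM API here.

def assemble_prompt(system_prompt, history, user_message):
    """Step 1: combine system prompt, conversation history, and the new message."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history)
    messages.append({"role": "user", "content": user_message})
    return messages

def call_llm(messages):
    """Step 2 placeholder: echoes the last user message instead of a real call."""
    return {"role": "assistant", "content": f"(reply to: {messages[-1]['content']})"}

def run_standard_mode(system_prompt, history, user_message):
    messages = assemble_prompt(system_prompt, history, user_message)
    response = call_llm(messages)   # single call: no loops, no tools
    return response["content"]      # Step 3: return the response directly
```

Because standard mode makes exactly one call, its latency and cost are those of a single LLM request.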

Context Window Management

Long conversations can exceed a model’s context window. The AI Agent Node provides three strategies for handling this:
| Strategy | Behavior |
| --- | --- |
| `truncate` | Remove the oldest messages until the conversation fits within the context window |
| `summarize` | Use the LLM to summarize older messages, replacing them with a condensed version |
| `error` | Fail with an error if the context window is exceeded (useful for debugging) |
```json
{
  "context_window": {
    "strategy": "summarize",
    "max_tokens": 120000,
    "reserve_for_output": 4096
  }
}
```
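As an illustration of how `max_tokens` and `reserve_for_output` interact, here is a minimal sketch of the `truncate` strategy. The token estimator and function names are assumptions for the example; a real node would use the model's actual tokenizer.

```python
# Illustrative "truncate" strategy: drop the oldest non-system messages
# until the estimated token count fits within max_tokens minus the
# tokens reserved for the model's output.

def estimate_tokens(message):
    # Crude chars-to-tokens heuristic; real nodes use the model tokenizer.
    return len(message["content"]) // 4 + 4

def truncate(messages, max_tokens, reserve_for_output):
    budget = max_tokens - reserve_for_output
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and sum(estimate_tokens(m) for m in system + rest) > budget:
        rest.pop(0)  # the oldest message goes first
    return system + rest
```

Note that the system prompt is preserved: only conversation history is dropped.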

Memory Integration

Enable conversational memory so the AI Agent retains context across multiple turns within a session.
```json
{
  "memory": {
    "enabled": true,
    "message_window": 20,
    "include_system_prompt": true
  }
}
```
  • message_window — Number of recent messages to include in each LLM call
  • include_system_prompt — Whether to prepend the system prompt to every call
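A minimal sketch of how these two settings shape each LLM call, assuming a simple in-session message store (the `SessionMemory` class and its methods are illustrative, not the Nadoo API):

```python
# Message-window memory: keep only the last `message_window` messages,
# optionally re-prepending the system prompt on every call.

class SessionMemory:
    def __init__(self, message_window=20, include_system_prompt=True, system_prompt=""):
        self.message_window = message_window
        self.include_system_prompt = include_system_prompt
        self.system_prompt = system_prompt
        self.messages = []

    def add(self, role, content):
        """Record a turn in the session history."""
        self.messages.append({"role": role, "content": content})

    def build_call(self):
        """Assemble the messages sent to the LLM for the next turn."""
        recent = self.messages[-self.message_window:]
        if self.include_system_prompt:
            return [{"role": "system", "content": self.system_prompt}] + recent
        return recent
```

With `message_window: 20`, the 21st-oldest message silently falls out of scope, so pair memory with a context-window strategy if sessions run long.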

Model Settings

Fine-tune the LLM’s behavior with these parameters:
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `temperature` | float | 0.7 | Controls randomness (0 = deterministic, 2 = very random) |
| `max_tokens` | int | 4096 | Maximum number of tokens in the response |
| `top_p` | float | 1.0 | Nucleus sampling threshold |
| `frequency_penalty` | float | 0.0 | Penalize tokens that appear frequently (range -2.0 to 2.0) |
| `presence_penalty` | float | 0.0 | Penalize tokens that have appeared at all (range -2.0 to 2.0) |
| `stop_sequences` | string[] | [] | Sequences that cause the model to stop generating |
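For example, a node tuned for short, deterministic extraction might combine several of these parameters in its configuration (the values below are illustrative, not recommended defaults):

```json
{
  "temperature": 0.0,
  "max_tokens": 256,
  "top_p": 1.0,
  "frequency_penalty": 0.0,
  "presence_penalty": 0.0,
  "stop_sequences": ["\n\n", "END"]
}
```

Here `temperature: 0.0` makes output near-deterministic, and generation halts as soon as the model emits a blank line or the literal string `END`.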

Selecting the Right Mode

Not sure which mode to use? For a detailed comparison and guidance on choosing between execution modes, see the AI Agent Strategies page.

Next Steps