Introduction
The Nadoo AI Workflow Engine is a LangGraph-based visual workflow system that lets you build AI agent pipelines as directed graphs. Each workflow is a graph where nodes process inputs, retrieve information, make decisions, and generate responses, connected by edges that define the execution flow. Whether you are building a simple chatbot or a multi-step RAG pipeline with tool use, the workflow engine provides the runtime, streaming, and observability you need to go from prototype to production.

Node Categories
Nadoo AI ships with 18+ built-in node types organized into five categories.

Input / Output
Nodes that receive user input and deliver final responses.
- Start Node — Entry point for every workflow
- End Node — Terminal node that finalizes output
- Question Node — Prompt the user for additional information
- Form Node — Collect structured data via form fields
- Direct Reply Node — Return an immediate static or templated response
AI / LLM
Nodes that invoke language models and multimodal AI.
- AI Agent Node — Core LLM interaction with 6 execution modes
- Image Generate — Create images from text prompts
- Image Understand — Analyze and describe images
- TTS (Text-to-Speech) — Convert text to audio
- STT (Speech-to-Text) — Transcribe audio to text
Knowledge & Retrieval
Nodes for RAG, search, and data access.
- Search Knowledge Node — Vector / hybrid search over knowledge bases
- Reranker Node — Re-score and reorder retrieved documents
- Document Extract Node — Parse and extract content from files
- Database Node — Execute SQL queries against relational databases
- Database Semantic RAG Node — Natural-language-to-SQL with retrieval
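A typical retrieval pipeline chains the Search Knowledge and Reranker nodes: search casts a wide net, the reranker re-scores and truncates. The following is an illustrative, stdlib-only sketch of that pattern; the function names, `Doc` shape, and overlap-based scoring are stand-ins, not the engine's actual API.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    score: float  # similarity score from the search step

def search_knowledge(query: str, corpus: list[str]) -> list[Doc]:
    """Toy stand-in for the Search Knowledge Node: score by term overlap."""
    terms = set(query.lower().split())
    return [Doc(t, len(terms & set(t.lower().split())) / max(len(terms), 1))
            for t in corpus]

def rerank(docs: list[Doc], top_k: int = 2) -> list[Doc]:
    """Toy stand-in for the Reranker Node: re-sort and keep the top results."""
    return sorted(docs, key=lambda d: d.score, reverse=True)[:top_k]

corpus = [
    "Nadoo workflows are graphs",
    "SSE streams events",
    "Nodes have a lifecycle",
]
hits = rerank(search_knowledge("workflow graphs", corpus))
print([d.text for d in hits])
```

In a real workflow these two stages would be separate graph nodes connected by an edge, with the reranker's output feeding an AI Agent node as context.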
Logic & Control
Nodes for branching, variables, and sub-workflows.
- Condition Node — If/else branching based on expressions
- Variable Node — Set, transform, or compute variables
- Application Node — Invoke another Nadoo application as a sub-workflow
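To make the Condition Node concrete, here is a hedged sketch of if/else branching: an expression is evaluated against the workflow's variables and the result selects the next node. The expression syntax and evaluation rules here are assumptions for illustration, not the engine's documented behavior.

```python
def evaluate_condition(expression: str, variables: dict) -> bool:
    """Evaluate a boolean expression against workflow variables.
    Builtins are stripped so the expression can only see the variables."""
    return bool(eval(expression, {"__builtins__": {}}, variables))

def next_node(expression: str, variables: dict,
              true_branch: str, false_branch: str) -> str:
    """Pick the outgoing edge based on the condition result."""
    return true_branch if evaluate_condition(expression, variables) else false_branch

print(next_node("score > 0.8", {"score": 0.92}, "reply", "escalate"))
```

A production engine would use a restricted expression language rather than raw `eval`; this sketch only shows the branching shape.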
Integration
Nodes for external tools, code, and MCP servers.
- Tool Node — Call a registered tool by name
- Tool Lib Node — Access shared tool libraries
- Python Node — Execute arbitrary Python code
- MCP Node — Connect to Model Context Protocol servers
Execution Model
Every workflow execution is governed by two layers of context:

| Context | Scope | Purpose |
|---|---|---|
| WorkflowContext | Global (entire run) | Holds the conversation history, global variables, streaming channel, and execution metadata |
| NodeContext | Local (single node) | Contains node-specific inputs, configuration, and intermediate results |
Data flows between nodes through the WorkflowContext: each node reads its inputs from the context, performs its operation, and writes its outputs back for downstream nodes to consume.
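The two-layer context can be sketched as a pair of dataclasses. The field names below are illustrative assumptions based on the table above, not the engine's actual schema.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class WorkflowContext:
    """Global, run-scoped state: history, variables, metadata (illustrative)."""
    history: list[dict] = field(default_factory=list)
    variables: dict[str, Any] = field(default_factory=dict)

@dataclass
class NodeContext:
    """Local, node-scoped state: inputs, config, intermediate results."""
    node_id: str
    inputs: dict[str, Any] = field(default_factory=dict)
    outputs: dict[str, Any] = field(default_factory=dict)

# A node reads from the global context, runs, and writes its outputs back.
wf = WorkflowContext()
node = NodeContext(node_id="ai_agent_1", inputs={"prompt": "Hello"})
node.outputs["reply"] = "Hi there"
wf.variables.update(node.outputs)  # downstream nodes read this
```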
Node Lifecycle
Every node passes through a consistent lifecycle during execution:

Pre-execute
Validate inputs, resolve variable references, and prepare the node’s runtime configuration.
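The pre-execute phase can be sketched as a hook on a node base class. The `{{var}}` reference syntax and the class shape below are assumptions for illustration; only the phase's responsibilities (validate inputs, resolve variable references) come from the text above.

```python
class BaseNode:
    """Illustrative node base class; only pre-execute is shown."""

    def pre_execute(self, ctx: dict) -> None:
        # Resolve variable references of the assumed form {{name}}
        # and fail fast when a reference cannot be resolved.
        for key, value in list(ctx.items()):
            if isinstance(value, str) and value.startswith("{{") and value.endswith("}}"):
                ref = value[2:-2].strip()
                if ref not in ctx:
                    raise ValueError(f"unresolved variable: {ref}")
                ctx[key] = ctx[ref]

    def run(self, ctx: dict) -> dict:
        self.pre_execute(ctx)  # validation and resolution happen first
        return ctx

ctx = {"name": "Ada", "greeting": "{{name}}"}
BaseNode().run(ctx)
print(ctx["greeting"])
```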
Node Status States
During execution, each node transitions through the following statuses:

| Status | Description |
|---|---|
| PENDING | Node is queued and waiting to execute |
| RUNNING | Node is currently executing |
| SUCCESS | Node completed successfully |
| FAILED | Node encountered an error |
| SKIPPED | Node was bypassed (e.g., condition evaluated to false) |
| INTERRUPTED | Node was stopped by user intervention or timeout |
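The status set maps naturally onto an enum. The transition map below is a plausible reading of the table (e.g., only a RUNNING node can fail or be interrupted); the engine's actual rules may differ.

```python
from enum import Enum

class NodeStatus(Enum):
    PENDING = "pending"
    RUNNING = "running"
    SUCCESS = "success"
    FAILED = "failed"
    SKIPPED = "skipped"
    INTERRUPTED = "interrupted"

# Assumed legal transitions; terminal states have no outgoing edges.
TRANSITIONS = {
    NodeStatus.PENDING: {NodeStatus.RUNNING, NodeStatus.SKIPPED},
    NodeStatus.RUNNING: {NodeStatus.SUCCESS, NodeStatus.FAILED,
                         NodeStatus.INTERRUPTED},
}

def can_transition(src: NodeStatus, dst: NodeStatus) -> bool:
    """Check whether a status change is allowed under the assumed map."""
    return dst in TRANSITIONS.get(src, set())
```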
SSE Streaming
The workflow engine streams execution events to the client in real time via Server-Sent Events (SSE). This enables live progress indicators, token-by-token LLM output, and detailed debugging. There are 19 event types covering the full execution lifecycle:

- Workflow Events
- Node Events
- LLM Events
- Data Events
Workflow Events:

- workflow_started — Execution begins
- workflow_finished — Execution completed successfully
- workflow_failed — Execution terminated with an error
- workflow_interrupted — Execution stopped by user
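A client consumes these events by parsing the SSE wire format (`event:` and `data:` lines, blank-line delimited). The parser below follows the standard SSE framing; the JSON payload shape (`run_id`) is an assumption, not the engine's documented schema.

```python
import json

def parse_sse(stream: str):
    """Yield (event_name, payload) pairs from a raw SSE stream."""
    event, data = None, []
    for line in stream.splitlines():
        if line.startswith("event:"):
            event = line[6:].strip()
        elif line.startswith("data:"):
            data.append(line[5:].strip())
        elif line == "" and event:  # blank line ends one SSE message
            yield event, json.loads("\n".join(data) or "{}")
            event, data = None, []

raw = (
    'event: workflow_started\ndata: {"run_id": "abc"}\n\n'
    'event: workflow_finished\ndata: {"run_id": "abc"}\n\n'
)
events = list(parse_sse(raw))
print(events)
```

A real client would read the stream incrementally from an HTTP response rather than from a string, but the framing logic is the same.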
Example: Minimal Workflow
A simple question-answering workflow requires only three nodes:

- Start Node receives the user’s message.
- AI Agent Node sends the message to an LLM and streams the response.
- End Node delivers the final answer.
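The engine itself builds this flow on LangGraph, but the three-step pipeline can be illustrated with a self-contained, stdlib-only sketch. The echo "LLM" and the state keys are stand-ins for the real AI Agent call.

```python
def start_node(state: dict) -> dict:
    """Start Node: receive the user's message."""
    return {**state, "input": state["message"]}

def ai_agent_node(state: dict) -> dict:
    """AI Agent Node: stand-in for the LLM call (real runs stream tokens)."""
    return {**state, "answer": f"Echo: {state['input']}"}

def end_node(state: dict) -> dict:
    """End Node: finalize the output."""
    return {**state, "output": state["answer"]}

def run_workflow(message: str) -> str:
    state = {"message": message}
    # Edges define execution order: Start -> AI Agent -> End.
    for node in (start_node, ai_agent_node, end_node):
        state = node(state)
    return state["output"]

print(run_workflow("hello"))
```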