Overview
This page covers two complementary node types that add intelligence and safety to your workflow routing:- Intent Classifier Node — Uses an LLM to detect the user’s intent and route to the appropriate branch, replacing complex condition chains for natural language inputs.
- Guardrail Node — Enforces content safety policies by screening messages for harmful, off-topic, or policy-violating content before they reach downstream nodes.
Intent Classifier Node
The Intent Classifier Node analyzes the user’s message and classifies it into one of several predefined intents. Each intent maps to a separate output handle, enabling clean many-way routing without nested Condition Nodes.Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
model_id | string | "gpt-4o-mini" | LLM model used for classification |
input | string | — | The text to classify (supports variable references) |
intents | array | — | List of intent definitions (required) |
intents[].name | string | — | Intent identifier, used as the output handle name |
intents[].description | string | — | Natural language description of what this intent covers |
confidence_threshold | number | 0.7 | Minimum confidence score (0.0-1.0) to accept a classification |
fallback_intent | string | first intent | Intent to use when confidence is below the threshold |
How It Works
Classify
The user’s message and the intent descriptions are sent to the LLM. The model returns a classification with a confidence score.
Evaluate Confidence
If the confidence score meets or exceeds
confidence_threshold, the classified intent is selected.Dynamic Output Handles
Each intent in theintents array creates a separate output handle on the node. In the visual editor, you connect each handle to the downstream node that should handle that intent.
Output
The node writes classification metadata to the workflow context:Writing Good Intent Descriptions
The quality of classification depends heavily on the intent descriptions. Follow these guidelines:Be specific and comprehensive
Be specific and comprehensive
List the types of messages that should match each intent. Instead of “Billing questions”, write “Questions about invoices, payments, subscriptions, pricing, plan upgrades, refunds, or account charges.”
Make intents mutually exclusive
Make intents mutually exclusive
Avoid overlapping descriptions. If “technical support” and “general inquiry” could both match a question about product features, clarify the boundary in the descriptions.
Include example patterns
Include example patterns
Adding examples to descriptions improves accuracy: “Feature requests — suggestions for new capabilities, phrases like ‘it would be nice if’, ‘can you add’, ‘I wish the product could’.”
Use a catch-all intent
Use a catch-all intent
Always include a general/fallback intent for messages that do not clearly fit any specific category. Set it as the
fallback_intent.Guardrail Node
The Guardrail Node screens messages for content safety violations before they reach downstream AI processing. It acts as a moderation layer that can block, flag, or redirect unsafe content.Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
input | string | — | The text to screen (supports variable references) |
moderation_mode | string | "llm" | Moderation engine: "llm" (LLM-based) or "keyword" (pattern matching) |
model_id | string | "gpt-4o-mini" | LLM model for moderation (when mode is "llm") |
action_on_unsafe | string | "block" | Action when unsafe content is detected: "block", "flag", or "redirect" |
enabled_categories | string[] | all | Content categories to screen for |
custom_rules | array | [] | Additional custom moderation rules |
unsafe_response | string | — | Message to return when content is blocked |
Actions on Unsafe Content
| Action | Behavior |
|---|---|
block | Stop execution and return the unsafe_response to the user |
flag | Continue execution but set a guardrail_flagged variable to true |
redirect | Route to the unsafe output handle instead of the safe handle |
Conditional Output Handles
The Guardrail Node has two output handles:- safe — The message passed all moderation checks
- unsafe — The message was flagged by one or more categories
action_on_unsafe is "block", the unsafe handle is not reached — the node returns the unsafe_response directly. Use "redirect" when you want custom handling of unsafe content.
Built-in Categories
| Category | Description |
|---|---|
harmful_content | Instructions for dangerous or illegal activities |
hate_speech | Discriminatory, hateful, or derogatory language |
sexual_content | Sexually explicit or suggestive content |
personal_information | Requests to share or process PII (SSN, passwords, etc.) |
off_topic | Messages unrelated to the agent’s intended purpose |
self_harm | Content related to self-harm or suicide |
violence | Graphic descriptions of violence or threats |
Custom Rules
Define business-specific moderation rules alongside the built-in categories:Output
The Guardrail Node writes moderation results to the workflow context:Combining Both Nodes
A robust production workflow typically uses both nodes: a Guardrail first for safety, then an Intent Classifier for routing. This pattern ensures that unsafe content is caught before intent classification, preventing the LLM from processing harmful inputs.Intent Classifier vs. Condition Node
| Aspect | Intent Classifier | Condition Node |
|---|---|---|
| Input type | Natural language (fuzzy) | Structured data (exact) |
| Routing logic | LLM-based semantic understanding | Expression-based comparisons |
| Token cost | 1 LLM call per classification | Zero (no LLM involved) |
| Best for | User messages, ambiguous inputs | Numeric thresholds, exact matches, boolean flags |
Use the Intent Classifier when the input is natural language and the routing decision depends on semantic meaning. Use the Condition Node when you are branching on structured values like scores, status codes, or flags.
Best Practices
Use a lightweight model for classification
Use a lightweight model for classification
Intent classification does not require a large model. Models like
gpt-4o-mini perform well at a fraction of the cost and latency of full-size models.Place guardrails before expensive operations
Place guardrails before expensive operations
Screen content with the Guardrail Node before it reaches AI Agent Nodes or tool calls. This saves tokens and prevents the LLM from engaging with harmful content.
Set confidence thresholds based on your domain
Set confidence thresholds based on your domain
A threshold of 0.7 works well for most cases. Lower it (0.5-0.6) if you have many similar intents and accept more fallback routing. Raise it (0.8-0.9) for high-stakes routing decisions.
Add a prompt injection guardrail rule
Add a prompt injection guardrail rule
Always include a custom rule to detect prompt injection attempts, especially in user-facing workflows. This protects your system prompts and agent behavior.