Overview
The Native backend is Flow Core’s default execution engine, providing high-performance workflow execution with full feature support.
Features
Core Capabilities
- Full Async Support: Built on Python’s asyncio for concurrent execution
- Type Safety: Pydantic v2 integration for runtime validation
- Minimal Dependencies: Only Pydantic and typing-extensions required
- Streaming: Real-time event streaming and token-by-token output
- Memory Management: Built-in conversation and entity memory
- Resilience: Retry, fallback, and circuit breaker patterns
- Observability: Comprehensive callbacks and metrics
Installation
The native backend is included with Flow Core; no separate installation step is required.
Basic Usage
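As a rough sketch of basic usage, assuming a `Workflow` that runs async node functions over a shared context (the class and node names here are illustrative, not Flow Core’s actual API):

```python
import asyncio

# Hypothetical sketch of running a workflow on the native backend.
# `Workflow` and the node functions are illustrative assumptions.

async def fetch(ctx):            # a node: receives and extends the context
    ctx["data"] = "raw input"
    return ctx

async def transform(ctx):
    ctx["data"] = ctx["data"].upper()
    return ctx

class Workflow:
    def __init__(self, nodes):
        self.nodes = nodes

    async def run(self, ctx=None):
        ctx = ctx or {}
        for node in self.nodes:  # default sequential strategy
            ctx = await node(ctx)
        return ctx

result = asyncio.run(Workflow([fetch, transform]).run())
print(result["data"])  # RAW INPUT
```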
Configuration
Execution Configuration
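A plausible shape for execution configuration, sketched as a dataclass; the field names are assumptions rather than Flow Core’s documented settings:

```python
from dataclasses import dataclass

# Illustrative execution-configuration shape; field names are assumed.

@dataclass
class ExecutionConfig:
    strategy: str = "sequential"   # "sequential" | "parallel" | "conditional"
    timeout_s: float = 30.0        # per-node timeout
    max_concurrency: int = 8       # cap for parallel execution
    max_retries: int = 3           # resilience: retry budget per node

config = ExecutionConfig(strategy="parallel", max_concurrency=4)
```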
Memory Configuration
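The built-in conversation memory mentioned under Core Capabilities can be sketched as a bounded message buffer; the class and parameter names below are illustrative:

```python
from collections import deque

# Sketch of a bounded conversation memory; names are assumptions.

class ConversationMemory:
    def __init__(self, max_messages: int = 50):
        self.messages = deque(maxlen=max_messages)  # oldest entries evicted

    def add(self, role: str, content: str):
        self.messages.append({"role": role, "content": content})

    def history(self):
        return list(self.messages)

memory = ConversationMemory(max_messages=2)
memory.add("user", "hi")
memory.add("assistant", "hello")
memory.add("user", "bye")       # evicts the oldest message
print(len(memory.history()))    # 2
```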
Execution Strategies
Sequential Execution
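In the sequential strategy each node awaits the previous node’s output. A minimal sketch of the idea (not the backend’s internal code):

```python
import asyncio

# Sequential strategy: nodes run one after another, each consuming the
# previous result. Illustrative sketch only.

async def run_sequential(nodes, value):
    for node in nodes:
        value = await node(value)
    return value

async def double(x):
    return x * 2

async def inc(x):
    return x + 1

result = asyncio.run(run_sequential([double, inc], 3))
print(result)  # (3 * 2) + 1 = 7
```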
Parallel Execution
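In the parallel strategy, independent nodes run concurrently; with asyncio this amounts to `asyncio.gather`, which is where the speedups in the benchmarks below come from. Sketch, not the backend’s actual code:

```python
import asyncio

# Parallel strategy: independent nodes run concurrently via gather.

async def run_parallel(nodes, value):
    return await asyncio.gather(*(node(value) for node in nodes))

async def slow_double(x):
    await asyncio.sleep(0.01)   # simulated I/O
    return x * 2

async def slow_square(x):
    await asyncio.sleep(0.01)
    return x * x

results = asyncio.run(run_parallel([slow_double, slow_square], 4))
print(results)  # [8, 16]
```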
Conditional Execution
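Conditional execution picks a branch based on a predicate over the context. A hedged sketch with illustrative names:

```python
import asyncio

# Conditional strategy: a predicate on the context selects the branch.

async def run_conditional(predicate, if_node, else_node, ctx):
    branch = if_node if predicate(ctx) else else_node
    return await branch(ctx)

async def approve(ctx):
    return {**ctx, "status": "approved"}

async def reject(ctx):
    return {**ctx, "status": "rejected"}

ctx = asyncio.run(run_conditional(
    lambda c: c["score"] > 0.5, approve, reject, {"score": 0.8}))
print(ctx["status"])  # approved
```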
Performance Optimization
Caching
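Caching lets repeated operations skip re-execution by keying results on the node and its input. A minimal sketch of the idea, not Flow Core’s cache implementation:

```python
import asyncio

# Result cache keyed by (node name, input); illustrative only.

class NodeCache:
    def __init__(self):
        self._store = {}
        self.hits = 0

    async def run(self, name, node, arg):
        key = (name, arg)
        if key in self._store:
            self.hits += 1
            return self._store[key]
        result = await node(arg)
        self._store[key] = result
        return result

async def expensive(x):
    await asyncio.sleep(0.01)   # simulated costly work
    return x * 10

async def main():
    cache = NodeCache()
    a = await cache.run("expensive", expensive, 5)
    b = await cache.run("expensive", expensive, 5)  # served from cache
    return cache.hits, a, b

hits, a, b = asyncio.run(main())
print(hits, a, b)  # 1 50 50
```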
Connection Pooling
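Connection pooling can be sketched as a semaphore that caps how many "connections" are in flight at once; real pools also reuse sockets, which this omits:

```python
import asyncio

# Pool sketch: a semaphore bounds concurrent requests.

class Pool:
    def __init__(self, size: int):
        self._sem = asyncio.Semaphore(size)
        self._active = 0
        self.peak = 0   # highest observed concurrency

    async def request(self, i):
        async with self._sem:
            self._active += 1
            self.peak = max(self.peak, self._active)
            await asyncio.sleep(0.005)   # simulated network call
            self._active -= 1
            return i

async def main():
    pool = Pool(size=3)
    results = await asyncio.gather(*(pool.request(i) for i in range(10)))
    return pool.peak, results

peak, results = asyncio.run(main())
print(peak)  # never exceeds 3
```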
Async Optimization
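The key async optimization is keeping the event loop responsive: blocking calls go to a worker thread via `run_in_executor` instead of stalling every coroutine. A small sketch:

```python
import asyncio
import time

# Blocking work is offloaded to a thread so the loop stays responsive.

def blocking_io():
    time.sleep(0.05)   # stands in for a blocking library call
    return "done"

async def main():
    loop = asyncio.get_running_loop()
    # Bad: calling blocking_io() directly here would freeze the loop.
    return await loop.run_in_executor(None, blocking_io)

result = asyncio.run(main())
print(result)  # done
```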
Native Backend Architecture
Component Overview
Execution Flow
- Workflow Parsing: Convert DSL to execution graph
- Context Initialization: Create workflow and node contexts
- Node Execution: Execute nodes based on strategy
- State Management: Update contexts and memory
- Result Aggregation: Combine outputs and metrics
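The five stages above can be sketched end to end; the structure is illustrative, not the backend’s internals:

```python
import asyncio

# End-to-end sketch of the five execution stages; all names assumed.

async def upper(ctx):
    ctx["text"] = ctx["text"].upper()
    return ctx

def parse(dsl):                   # 1. Workflow Parsing: DSL -> node list
    return [upper] * dsl.count("upper")

def init_context(inputs):         # 2. Context Initialization
    return dict(inputs)

async def execute(nodes, ctx):    # 3. Node Execution (sequential here)
    for node in nodes:
        ctx = await node(ctx)     # 4. State Management: ctx is updated
    return ctx

def aggregate(ctx, nodes):        # 5. Result Aggregation
    return {"output": ctx["text"], "nodes_run": len(nodes)}

nodes = parse("upper")
ctx = init_context({"text": "hello"})
final = aggregate(asyncio.run(execute(nodes, ctx)), nodes)
print(final)  # {'output': 'HELLO', 'nodes_run': 1}
```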
Advanced Features
Checkpointing
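Checkpointing persists the context after each node so a failed run can resume from the last completed step. A minimal sketch; the storage format is illustrative:

```python
import asyncio
import json

# Checkpoint sketch: serialize the context after every node.

async def run_with_checkpoints(nodes, ctx, store, start=0):
    for i in range(start, len(nodes)):
        ctx = await nodes[i](ctx)
        store[i] = json.dumps(ctx)   # checkpoint after each node
    return ctx

async def add_one(ctx):
    return {"n": ctx["n"] + 1}

store = {}
asyncio.run(run_with_checkpoints([add_one, add_one], {"n": 0}, store))
# Resume from checkpoint 0 (i.e., after the first node completed):
resumed = asyncio.run(run_with_checkpoints(
    [add_one, add_one], json.loads(store[0]), store, start=1))
print(resumed)  # {'n': 2}
```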
Custom Node Types
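Custom node types typically mean subclassing a base node and overriding its execution method; the base-class name here is an assumption, not Flow Core’s actual class:

```python
import asyncio

# Custom node sketch: subclass a base node and override run().

class BaseNode:
    name = "base"

    async def run(self, ctx: dict) -> dict:
        raise NotImplementedError

class ReverseNode(BaseNode):
    name = "reverse"

    async def run(self, ctx):
        return {**ctx, "text": ctx["text"][::-1]}

out = asyncio.run(ReverseNode().run({"text": "flow"}))
print(out["text"])  # wolf
```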
Execution Hooks
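Execution hooks are callbacks fired around each node, the mechanism the observability features above would build on. Hook names are illustrative:

```python
import asyncio

# Hook sketch: before/after callbacks around every node.

async def run_with_hooks(nodes, ctx, on_start=None, on_end=None):
    for node in nodes:
        if on_start:
            on_start(node.__name__)
        ctx = await node(ctx)
        if on_end:
            on_end(node.__name__, ctx)
    return ctx

async def greet(ctx):
    return {**ctx, "msg": "hi"}

events = []
asyncio.run(run_with_hooks(
    [greet], {},
    on_start=lambda name: events.append(("start", name)),
    on_end=lambda name, ctx: events.append(("end", name)),
))
print(events)  # [('start', 'greet'), ('end', 'greet')]
```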
Monitoring and Debugging
Execution Tracing
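Tracing records a span (node name plus duration) per executed node; real tracing would export spans to a collector, while this sketch just accumulates them in a list:

```python
import asyncio
import time

# Trace sketch: time each node and record a span.

async def traced(name, node, ctx, spans):
    t0 = time.perf_counter()
    result = await node(ctx)
    spans.append({"node": name, "ms": (time.perf_counter() - t0) * 1000})
    return result

async def work(ctx):
    await asyncio.sleep(0.01)   # simulated node work
    return ctx

spans = []
asyncio.run(traced("work", work, {}, spans))
print(spans[0]["node"])  # work
```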
Metrics Collection
Debug Mode
Error Handling
Error Recovery
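A recovery sketch combining retry and fallback, two of the resilience patterns listed under Core Capabilities; the parameters and helper names are illustrative:

```python
import asyncio

# Recovery sketch: retry a node, then fall back when retries are spent.

async def run_with_recovery(node, ctx, fallback, retries=2):
    for attempt in range(retries + 1):
        try:
            return await node(ctx)
        except Exception:
            if attempt == retries:
                return await fallback(ctx)   # all retries exhausted

calls = {"n": 0}

async def flaky(ctx):
    calls["n"] += 1
    raise RuntimeError("transient failure")

async def fallback(ctx):
    return {**ctx, "source": "fallback"}

out = asyncio.run(run_with_recovery(flaky, {}, fallback, retries=2))
print(calls["n"], out["source"])  # 3 fallback
```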
Error Reporting
Performance Benchmarks
Execution Speed
| Workflow Size | Sequential (ms) | Parallel (ms) | Speedup |
|---|---|---|---|
| 10 nodes | 100 | 30 | 3.3x |
| 50 nodes | 500 | 120 | 4.2x |
| 100 nodes | 1000 | 200 | 5.0x |
| 500 nodes | 5000 | 800 | 6.3x |
Memory Usage
| Feature | Memory Overhead |
|---|---|
| Base executor | ~10MB |
| Per node | ~100KB |
| With caching | +20MB |
| With tracing | +5MB |
| With metrics | +2MB |
Comparison with Other Backends
| Feature | Native | LangGraph | CrewAI |
|---|---|---|---|
| Performance | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐ |
| Memory Usage | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐ |
| Feature Set | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐ |
| Ecosystem | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ |
| Learning Curve | ⭐⭐⭐⭐ | ⭐⭐ | ⭐⭐⭐ |
Best Practices
Use Async Properly
Always use await with async operations. Don’t block the event loop.
Configure Timeouts
Set appropriate timeouts to prevent hanging workflows.
Enable Monitoring
Always enable metrics and tracing in production.
Handle Errors Gracefully
Implement proper error handling and recovery strategies.
Optimize for Your Use Case
Tune configuration based on your specific workflow patterns.
Troubleshooting
Common Issues
High Memory Usage
- Enable memory limits
- Use streaming for large data
- Clear contexts after execution
Slow Execution
- Enable parallel execution
- Use caching for repeated operations
- Optimize node implementations
Connection Errors
- Configure connection pools
- Implement retry logic
- Use circuit breakers