## Overview
The LLM API allows plugins to invoke the AI models (GPT-4, Claude, Gemini, etc.) configured in your Nadoo workspace. It requires the `llm_access` permission.
## Basic Usage
### invoke()
Invoke an LLM model:
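A minimal usage sketch. The import path and `llm` handle are assumptions for illustration; adjust to however your plugin runtime actually exposes the API:

```python
# Import path assumed for illustration; use your runtime's actual entry point.
from nadoo.plugin import llm

response = llm.invoke(
    messages=[{"role": "user", "content": "Summarize this ticket in one sentence."}]
)
print(response.content)  # generated text; attribute name assumed, see LLMResponse below
```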
#### Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `messages` | List[Dict] | Yes | - | Chat messages |
| `model_uuid` | str | No | None | Specific model UUID (None = app default) |
| `temperature` | float | No | 0.7 | Sampling temperature (0-2) |
| `max_tokens` | int | No | None | Max tokens to generate |
| `top_p` | float | No | None | Nucleus sampling (0-1) |
| `stop` | List[str] | No | None | Stop sequences |
### Message Format
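Messages are passed as a list of dicts. The schema is not restated here, but based on the system-message and multi-turn examples below it follows the common chat format with `role` and `content` keys:

```python
messages = [
    {"role": "system", "content": "You are a helpful assistant."},        # optional instructions
    {"role": "user", "content": "What is the capital of France?"},        # user turn
    {"role": "assistant", "content": "The capital of France is Paris."},  # prior model turn
]
```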
### LLMResponse
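invoke() returns an `LLMResponse`. Only the `usage` attribute is confirmed on this page; the `content` attribute for the generated text is an assumption used throughout the examples below, so check your runtime for the exact field names:

```python
response = llm.invoke(messages=[{"role": "user", "content": "Hi"}])

text = response.content  # generated text (attribute name assumed)
usage = response.usage   # token accounting; see Usage Object below
```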
### Usage Object
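A sketch of reading token counts from `response.usage`, assuming the conventional field names (`prompt_tokens`, `completion_tokens`, `total_tokens`):

```python
usage = response.usage
print(usage.prompt_tokens)      # tokens consumed by the input messages (name assumed)
print(usage.completion_tokens)  # tokens generated by the model (name assumed)
print(usage.total_tokens)       # prompt + completion (name assumed)
```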
## Examples
### Simple Question Answering
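A single-turn call using only the required `messages` parameter (the `llm` handle and `content` attribute as assumed above):

```python
response = llm.invoke(
    messages=[{"role": "user", "content": "In what year was Python first released?"}]
)
print(response.content)  # e.g. "Python was first released in 1991."
```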
### With System Message
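Prepend a `system` message to steer tone and behavior:

```python
response = llm.invoke(
    messages=[
        {"role": "system", "content": "You are a terse assistant. Answer in one sentence."},
        {"role": "user", "content": "Explain what a UUID is."},
    ]
)
print(response.content)
```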
### Multi-turn Conversation
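Pass the full conversation history on each call; the model only sees what is in `messages`:

```python
history = [
    {"role": "user", "content": "My name is Ada."},
    {"role": "assistant", "content": "Nice to meet you, Ada!"},
    {"role": "user", "content": "What did I say my name was?"},
]
response = llm.invoke(messages=history)
print(response.content)  # should recall "Ada" from the first turn
```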
### Structured Output
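invoke() has no dedicated structured-output parameter in the table above, so one common approach is to request JSON in the system message, keep `temperature` at 0 for reproducibility, and parse the reply:

```python
import json

response = llm.invoke(
    messages=[
        {
            "role": "system",
            "content": 'Reply with JSON only, e.g. {"sentiment": "positive"}.',
        },
        {"role": "user", "content": "I love this plugin!"},
    ],
    temperature=0.0,
)
data = json.loads(response.content)  # raises ValueError if the model strays from JSON
print(data["sentiment"])
```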
### With Temperature Control
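Lower temperatures give deterministic answers, higher ones more variety (range 0-2, default 0.7); see the guidance under Best Practices:

```python
# Near-deterministic output for factual tasks.
factual = llm.invoke(
    messages=[{"role": "user", "content": "List the planets of the solar system."}],
    temperature=0.1,
)

# More varied output for creative tasks.
creative = llm.invoke(
    messages=[{"role": "user", "content": "Write a haiku about databases."}],
    temperature=1.2,
)
```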
### Token-limited Responses
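Cap the response length with `max_tokens`, optionally combined with `stop` sequences:

```python
response = llm.invoke(
    messages=[{"role": "user", "content": "Describe the plugin architecture."}],
    max_tokens=150,  # hard cap on generated tokens
    stop=["\n\n"],   # also stop at the first blank line
)
```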
## Error Handling
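Wrap calls in a try/except for `LLMInvocationError`, per the Best Practices below. The exception's import path is an assumption; adjust to wherever your runtime exposes it:

```python
from nadoo.errors import LLMInvocationError  # import path assumed

def answer(question: str) -> str:
    try:
        response = llm.invoke(messages=[{"role": "user", "content": question}])
    except LLMInvocationError:
        # Return a user-friendly message instead of surfacing the raw error.
        return "Sorry, the AI request failed. Please try again later."
    return response.content
```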
## Best Practices
### Use Appropriate Temperature
- 0.0-0.3: Factual, deterministic tasks
- 0.5-0.8: Balanced responses
- 0.9-1.5: Creative writing
### Limit Token Usage
Set `max_tokens` to prevent excessive costs and long responses.

### Clear System Messages

Provide clear instructions in system messages for better results.
### Handle Errors Gracefully
Always catch `LLMInvocationError` and return user-friendly error messages.

### Log Token Usage
Track token usage via `response.usage` for monitoring costs.
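A sketch of logging usage after each call (field names on `usage` as assumed in the Usage Object section above):

```python
import logging

logger = logging.getLogger(__name__)

response = llm.invoke(messages=[{"role": "user", "content": "Hello"}])
logger.info(
    "LLM call used %d prompt + %d completion tokens",
    response.usage.prompt_tokens,
    response.usage.completion_tokens,
)
```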