Overview
The LLM API allows plugins to invoke AI models configured in the Nadoo workspace. Plugins can call LLMs without managing API keys or provider configuration. Permission required: `llm_access`
Classes
LLMResponse
Response object from an LLM invocation.
Attributes
| Attribute | Type | Description |
|---|---|---|
| content | str | Generated text response |
| model_uuid | str | Model UUID |
| model_name | str | Model name (e.g., "gpt-4") |
| model_id | str | Model identifier |
| provider | str | Provider (e.g., "openai", "anthropic") |
| usage | dict[str, int] | Token usage: prompt_tokens, completion_tokens, total_tokens |
| finish_reason | str \| None | Why generation stopped ("stop", "length", etc.) |
| tool_calls | list[dict] \| None | Tool/function calls (if applicable) |
Methods
invoke
Invoke an LLM model with messages.
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| messages | list[dict] | Required | List of message dicts with role and content keys |
| model_uuid | str \| None | None | Model UUID (None = workspace default) |
| temperature | float | 0.7 | Sampling temperature (0.0-2.0) |
| max_tokens | int \| None | None | Maximum number of tokens to generate |
| top_p | float \| None | None | Nucleus sampling parameter (0.0-1.0) |
| stop | list[str] \| None | None | Stop sequences |
Message Format
- `system` - System instructions
- `user` - User messages
- `assistant` - AI responses (for conversation history)
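Each message is a dict with `role` and `content` keys. For example:

```python
messages = [
    {"role": "system", "content": "You are a concise technical writer."},
    {"role": "user", "content": "Explain what a UUID is."},
    # Include prior assistant turns to carry conversation history:
    # {"role": "assistant", "content": "..."},
]
```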
Returns
LLMResponse object with generated content and metadata.
Raises
- `PluginPermissionError` - If the `llm_access` permission is not granted
- `LLMInvocationError` - If the invocation fails
Usage Examples
Basic Invocation
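A minimal sketch. The `llm` handle below is assumed for illustration; how a plugin obtains the LLM API depends on your plugin's runtime context.

```python
# `llm` is an assumed handle to the plugin LLM API (see note above).
response = llm.invoke(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the benefits of unit testing."},
    ]
)
print(response.content)
```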
With Conversation History
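Prior turns are passed as alternating user/assistant messages (same assumed `llm` handle as above):

```python
# Earlier turns give the model context for the follow-up question.
response = llm.invoke(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is a decorator in Python?"},
        {"role": "assistant", "content": "A decorator is a callable that wraps another function."},
        {"role": "user", "content": "Show me a short example."},
    ]
)
```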
Custom Temperature
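Lowering the temperature makes output more deterministic; raising it makes output more varied:

```python
# Low temperature for a factual, repeatable answer (default is 0.7).
response = llm.invoke(
    messages=[{"role": "user", "content": "List the planets in order from the sun."}],
    temperature=0.2,
)
```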
With Max Tokens
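max_tokens caps the completion length; check finish_reason to detect truncation:

```python
# Cap the length of the generated response.
response = llm.invoke(
    messages=[{"role": "user", "content": "Explain HTTP caching."}],
    max_tokens=256,
)
# finish_reason is "length" when the cap was hit before a natural stop.
if response.finish_reason == "length":
    print("Response was truncated at the token limit.")
```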
Using Specific Model
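Pass model_uuid to target a specific workspace model instead of the default. The UUID below is a placeholder:

```python
# Placeholder UUID; substitute a real model UUID from your workspace.
response = llm.invoke(
    messages=[{"role": "user", "content": "Translate 'hello' to French."}],
    model_uuid="00000000-0000-0000-0000-000000000000",
)
print(response.model_name, response.provider)
```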
With Stop Sequences
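Generation halts as soon as any of the listed sequences is produced:

```python
# Stop after three list items, or whenever the model emits "END".
response = llm.invoke(
    messages=[{"role": "user", "content": "Write a numbered list of three tips."}],
    stop=["\n4.", "END"],
)
```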
Token Usage Tracking
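The usage dict on the response reports token counts per call:

```python
response = llm.invoke(
    messages=[{"role": "user", "content": "Give me a haiku about autumn."}]
)
usage = response.usage
print(f"prompt={usage['prompt_tokens']} "
      f"completion={usage['completion_tokens']} "
      f"total={usage['total_tokens']}")
```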
Best Practices
Use System Messages
Always include a system message for better results:
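For example (same assumed `llm` handle as in the usage examples):

```python
# A system message anchors behavior; a user-only prompt is less reliable.
response = llm.invoke(
    messages=[
        {"role": "system", "content": "You are a code reviewer. Be terse and specific."},
        {"role": "user", "content": "Review this function for bugs: ..."},
    ]
)
```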
Set Appropriate Temperature
- Creative tasks: 0.8-1.5
- Balanced: 0.7 (default)
- Factual/deterministic: 0.0-0.3
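For instance, a factual extraction task benefits from a temperature at or near zero:

```python
# Near-deterministic settings for structured extraction.
response = llm.invoke(
    messages=[{"role": "user", "content": "Extract all dates from: 'Shipped 2024-01-05, returned 2024-02-10.'"}],
    temperature=0.0,
)
```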
Limit Max Tokens
Prevent excessive token usage:
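```python
# A hard cap keeps a runaway completion from consuming the token budget.
response = llm.invoke(
    messages=[{"role": "user", "content": "Summarize this article: ..."}],
    max_tokens=150,
)
```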
Handle Errors
Wrap in try/except:
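The exception classes come from the Raises section above; their import path is not shown in this reference, so adjust to your plugin SDK:

```python
try:
    response = llm.invoke(
        messages=[{"role": "user", "content": "Hello!"}]
    )
except PluginPermissionError:
    # llm_access was not granted to this plugin.
    print("Request the llm_access permission in the plugin manifest.")
except LLMInvocationError as exc:
    # The model call itself failed (provider error, timeout, etc.).
    print(f"LLM invocation failed: {exc}")
```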
Track Token Usage
Monitor costs:
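A simple running total across calls:

```python
# Accumulate total_tokens across invocations to keep an eye on spend.
total_tokens = 0
for prompt in ["First question", "Second question"]:
    response = llm.invoke(messages=[{"role": "user", "content": prompt}])
    total_tokens += response.usage["total_tokens"]
print(f"Tokens used this session: {total_tokens}")
```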