Overview

OpenRouter provides access to 200+ AI models from multiple providers through a single, unified OpenAI-compatible API. This makes it an excellent choice for teams that want to experiment with different models without managing multiple provider credentials.
OpenRouter uses the same API format as OpenAI, so switching between OpenAI and OpenRouter models requires minimal configuration changes.
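As a minimal sketch of that claim (model names are illustrative): the only things that change when moving a workload between the two providers are the endpoint, the credential, and the model identifier; the request body stays in the same OpenAI chat-completion format.

```python
# Sketch: switching from OpenAI to OpenRouter changes only the endpoint
# and model identifier -- the payload format itself is identical.

OPENAI = {"base_url": "https://api.openai.com/v1", "model": "gpt-4o"}
OPENROUTER = {"base_url": "https://openrouter.ai/api/v1", "model": "openai/gpt-4o"}

def build_chat_request(config: dict, messages: list) -> dict:
    """Build an OpenAI-format chat completion payload for either provider."""
    return {"model": config["model"], "messages": messages}

msgs = [{"role": "user", "content": "Hello"}]
# The two payloads differ only in the model identifier:
print(build_chat_request(OPENAI, msgs)["model"])      # gpt-4o
print(build_chat_request(OPENROUTER, msgs)["model"])  # openai/gpt-4o
```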

Supported Capabilities

Capability               Supported
-----------------------  ---------
LLM (Chat Completion)    Yes
Embedding                No
Image Generation         No
Reranking                No

Configuration

1. Get an API Key

Sign up at openrouter.ai and generate an API key from the dashboard.
2. Configure in Nadoo AI

Navigate to Workspace Settings > AI Model Providers and add OpenRouter:
Field       Value
----------  -----
Provider    OpenRouter
API Key     sk-or-... (your OpenRouter API key)
Site URL    (optional) Your site URL for request attribution
Site Name   (optional) Your site name for the OpenRouter dashboard
3. Select Models

Once configured, OpenRouter models become available in the AI Agent node’s model selector. Popular models include:
  • openai/gpt-4o
  • anthropic/claude-3.5-sonnet
  • google/gemini-pro-1.5
  • meta-llama/llama-3.1-405b-instruct
  • mistralai/mistral-large

Environment Variables

If configuring via environment variables:
OPENROUTER_API_KEY=sk-or-your-api-key
OPENROUTER_SITE_URL=https://your-site.com      # optional
OPENROUTER_SITE_NAME=YourAppName                # optional
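One way to consume these variables (a sketch; the function name is illustrative, not part of Nadoo AI) is to build the client settings directly from the environment, attaching the attribution headers only when the optional variables are set:

```python
import os

def openrouter_settings() -> dict:
    """Read OpenRouter connection settings from the environment.

    OPENROUTER_API_KEY is required; the attribution headers are only
    included when the corresponding optional variables are set.
    """
    settings = {
        "api_key": os.environ["OPENROUTER_API_KEY"],
        "base_url": "https://openrouter.ai/api/v1",
        "default_headers": {},
    }
    site_url = os.environ.get("OPENROUTER_SITE_URL")
    site_name = os.environ.get("OPENROUTER_SITE_NAME")
    if site_url:
        settings["default_headers"]["HTTP-Referer"] = site_url
    if site_name:
        settings["default_headers"]["X-Title"] = site_name
    return settings
```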

How It Works

The OpenRouter provider uses the OpenAI-compatible API format, connecting to https://openrouter.ai/api/v1:
from openai import AsyncOpenAI

client = AsyncOpenAI(
    api_key="sk-or-your-api-key",
    base_url="https://openrouter.ai/api/v1",
    default_headers={
        "HTTP-Referer": "https://your-site.com",
        "X-Title": "YourAppName"
    }
)
This means all standard OpenAI chat completion parameters are supported: temperature, max_tokens, top_p, stop, stream, tools, and more.
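For instance, a request body using several of those parameters (the prompt and values here are illustrative) looks no different from one sent directly to OpenAI, and the same dict could be passed as keyword arguments to the client's create call:

```python
# Illustrative payload using standard OpenAI chat-completion parameters.
# OpenRouter accepts these unchanged; only the model ID is OpenRouter-style.
payload = {
    "model": "anthropic/claude-3.5-sonnet",
    "messages": [{"role": "user", "content": "Summarize this in one line."}],
    "temperature": 0.2,   # lower = more deterministic
    "max_tokens": 256,    # cap on generated tokens
    "top_p": 0.9,         # nucleus sampling cutoff
    "stream": False,      # set True to stream tokens
}
# With the client above: await client.chat.completions.create(**payload)
```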

Model Selection

OpenRouter provides access to models across many providers. The model ID format is provider/model-name (for example, anthropic/claude-3.5-sonnet).
OpenRouter’s model comparison page shows real-time pricing, latency, and availability for all models.
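Since every ID follows the provider/model-name pattern, splitting on the first slash recovers the two parts (a small stdlib sketch):

```python
def split_model_id(model_id: str) -> tuple:
    """Split an OpenRouter model ID into (provider, model-name)."""
    provider, sep, name = model_id.partition("/")
    if not sep or not name:
        raise ValueError(f"expected 'provider/model-name', got {model_id!r}")
    return provider, name

print(split_model_id("meta-llama/llama-3.1-405b-instruct"))
# ('meta-llama', 'llama-3.1-405b-instruct')
```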

Request Attribution

The HTTP-Referer and X-Title headers are sent with every request to OpenRouter. These are used for:
  • Attribution in the OpenRouter dashboard (so you can track usage by site)
  • Ranking on the OpenRouter leaderboard
  • Discounts that some model providers offer for attributed requests

Use Cases

Model Comparison

Test the same workflow against GPT-4o, Claude, and Gemini to find the best model for your use case — without configuring three separate providers.
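One way to sketch this (the candidate list and helper are illustrative): build an identical request per candidate model, then send each through the single OpenRouter client and compare the answers.

```python
# Illustrative candidate list -- any OpenRouter model IDs work here.
CANDIDATES = [
    "openai/gpt-4o",
    "anthropic/claude-3.5-sonnet",
    "google/gemini-pro-1.5",
]

def comparison_requests(prompt: str) -> list:
    """Build one identical chat request per candidate model."""
    return [
        {"model": model, "messages": [{"role": "user", "content": prompt}]}
        for model in CANDIDATES
    ]

for req in comparison_requests("Classify this ticket: 'app crashes on login'"):
    print(req["model"])
```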

Fallback Routing

OpenRouter automatically falls back to alternative providers if the primary is unavailable, improving reliability for production deployments.
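Beyond provider-level failover, OpenRouter's API also accepts an optional models list in the request body naming fallback models to try in order; this is a hedged sketch — check OpenRouter's routing documentation for the current field semantics. With the OpenAI SDK, such extra fields can be passed via extra_body on the create call.

```python
# Hedged sketch (assumed OpenRouter routing field): an ordered "models"
# list gives OpenRouter fallbacks to try if the first model is unavailable.
payload = {
    "model": "openai/gpt-4o",
    "models": ["openai/gpt-4o", "anthropic/claude-3.5-sonnet"],
    "messages": [{"role": "user", "content": "Hello"}],
}
# With the OpenAI SDK, the non-standard "models" field would go in
# extra_body: client.chat.completions.create(..., extra_body={"models": [...]})
```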

Next Steps