Provider-Based Model System

Nadoo AI uses a capability-based provider system that lets you connect to 12+ AI model providers through a unified interface. Each provider exposes one or more capabilities — such as LLM chat, embeddings, speech-to-text, or image generation — and your applications automatically route requests to the right model based on what they need.
You can configure multiple providers simultaneously. Nadoo AI will use the provider assigned to each application, giving you full control over cost, latency, and capability tradeoffs.
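As an illustration of the idea (not Nadoo AI's actual internals), capability-based routing can be sketched as a registry mapping each provider to the capabilities it exposes, with a per-application provider assignment and a fallback search. Every name below — the provider keys, the app names, and the `route` function — is hypothetical:

```python
# Hypothetical sketch of capability-based routing: each provider
# declares the capabilities it supports, and a request is routed to
# the application's assigned provider if it offers the capability,
# otherwise to any provider that does.

PROVIDERS = {
    "openai":    {"LLM", "Embedding", "TTS", "TTI", "Vision"},
    "anthropic": {"LLM"},
    "ollama":    {"LLM", "Embedding"},
}

# Per-application assignment: which provider each app prefers.
APP_PROVIDER = {"support-bot": "anthropic", "search": "ollama"}

def route(app: str, capability: str) -> str:
    """Return the provider that should handle this request."""
    preferred = APP_PROVIDER.get(app)
    if preferred and capability in PROVIDERS[preferred]:
        return preferred
    # Fall back to any configured provider exposing the capability.
    for name, caps in PROVIDERS.items():
        if capability in caps:
            return name
    raise LookupError(f"no provider supports {capability!r}")

print(route("support-bot", "LLM"))       # anthropic (assigned provider)
print(route("support-bot", "Embedding")) # openai (fallback: anthropic has no Embedding)
```

The fallback step is the key point: an application assigned to an LLM-only provider can still issue embedding requests, because routing is by capability rather than by a single hard-wired model.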

Supported Providers

| Provider | Models | Capabilities |
| --- | --- | --- |
| OpenAI | GPT-4o, GPT-4, GPT-3.5-turbo | LLM, Embedding, TTS, TTI, Vision |
| Anthropic | Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku | LLM |
| Azure OpenAI | GPT-4, GPT-3.5 deployments | LLM, Embedding |
| AWS Bedrock | Claude, Llama, Titan | LLM, Embedding |
| Google Gemini | Gemini Pro, Gemini Flash | LLM, Vision |
| Google Vertex AI | PaLM, Gemini | LLM, Embedding |
| Ollama | Llama, Mistral, CodeLlama | LLM, Embedding |
| vLLM | Any open-source model | LLM |
| OpenRouter | 100+ models | LLM |
| OpenAI-Compatible | Any OpenAI-compatible API | LLM |
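The last row covers servers such as vLLM and Ollama that speak the OpenAI chat-completions wire format, so any OpenAI-compatible client can target them by changing the base URL. A minimal request body looks like the sketch below; the model name is a placeholder for whatever model your server has loaded:

```python
import json

# The OpenAI-compatible chat-completions request body. Servers like
# vLLM and Ollama accept this same shape at
# POST <base_url>/v1/chat/completions. "llama3" is a placeholder.
payload = {
    "model": "llama3",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this ticket."},
    ],
    "stream": False,  # set True to receive incremental chunks
}

body = json.dumps(payload)
print(body)
```

Because the shape is shared, swapping a hosted provider for a self-hosted one is a configuration change (base URL and model name), not a code change.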

Provider Capabilities

Each provider can support one or more of the following capabilities:
| Capability | Description |
| --- | --- |
| LLM | Chat completion and text generation with streaming support |
| Embedding | Convert text into vector representations for semantic search |
| STT | Speech-to-text transcription from audio input |
| TTS | Text-to-speech audio generation |
| TTI | Text-to-image generation (e.g., DALL-E) |
| Image Understanding | Vision capabilities for analyzing images |
| Reranking | Re-score and reorder search results for relevance |
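To make the Embedding row concrete: semantic search compares vectors by similarity, most commonly cosine similarity. The tiny 3-dimensional vectors below are made up for illustration — real embedding models return hundreds or thousands of dimensions:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" of two indexed documents.
docs = {
    "reset your password": [0.9, 0.1, 0.0],
    "billing and invoices": [0.1, 0.9, 0.2],
}
# Toy embedding of the query "how do I change my password?".
query = [0.8, 0.2, 0.1]

best = max(docs, key=lambda d: cosine(query, docs[d]))
print(best)  # reset your password
```

In a real pipeline the embedding provider converts documents and queries into vectors; the similarity math is the same regardless of which provider produced them.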

How to Configure a Provider

1. **Open Admin Settings.** Navigate to Admin in the left sidebar of your workspace.
2. **Go to Model Providers.** Select Model Providers from the admin menu.
3. **Add Your Provider.** Click the provider you want to configure and enter your API key. Some providers require additional fields such as an endpoint URL or deployment name.
4. **Test the Connection.** Click Test to verify the API key is valid and the provider is reachable.
5. **Select Models.** Once connected, enable the specific models you want to make available in your workspace.

Choosing the Right Provider

- **Best overall quality:** OpenAI GPT-4o or Anthropic Claude 3.5 Sonnet. Both deliver strong performance across a wide range of tasks; GPT-4o offers lower latency, while Claude 3.5 Sonnet excels at nuanced reasoning.
- **High-volume, cost-sensitive workloads:** GPT-4o-mini, Claude 3 Haiku, or Ollama (self-hosted). These models offer excellent quality-to-cost ratios.
- **Privacy and data residency:** Ollama or vLLM. Run open-source models entirely on your own infrastructure with no data leaving your network.
- **Embeddings:** OpenAI text-embedding-3-small/large or Azure OpenAI embeddings. For self-hosted options, Ollama supports embedding models as well.
Provider availability may vary depending on your deployment type. Self-hosted deployments can connect to any provider with network access. Cloud-hosted deployments on Nadoo Cloud come with OpenAI and Anthropic pre-configured.