
Overview

Google Gemini is Google’s family of multimodal AI models that can process text, images, audio, and video. Nadoo AI supports Gemini through two access paths: Google AI Studio (API key-based, for experimentation and smaller workloads) and Google Vertex AI (service account-based, for production and enterprise use). Key strengths:
  • Multimodal natively — Process text, images, audio, and video in a single model call
  • Large context windows — Gemini Pro supports up to 1M tokens for massive document processing
  • Two access paths — Simple API key setup (AI Studio) or enterprise-grade deployment (Vertex AI)
  • Embedding support — Dedicated embedding models for knowledge base and RAG

Access Paths

Best for: Development, experimentation, and smaller production workloads.

Google AI Studio provides simple API key-based access to Gemini models. No Google Cloud project setup is required.

Feature         Detail
Authentication  API Key
Pricing         Free tier available; pay-as-you-go
Data residency  No regional control
SLA             No enterprise SLA

Setup — Google AI Studio

1. Get an API Key
   Go to aistudio.google.com and click Get API Key. Create a new key or select an existing one.

2. Configure in Nadoo
   Go to Admin > Model Providers > Google Gemini and enter:

   Field        Required  Description
   API Key      Yes       Your Google AI Studio API key
   Access Path  Yes       Select AI Studio

3. Test Connection
   Click Test to verify the key and enable available models.
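Outside of the admin UI, you can sanity-check an AI Studio key with a direct REST call before configuring Nadoo. A minimal sketch using only the standard library (the `models` list endpoint is part of the public Generative Language API; a 200 response means the key works, while a 400/403 indicates an invalid or revoked key):

```python
import json
import urllib.error
import urllib.request

API_BASE = "https://generativelanguage.googleapis.com/v1beta"


def build_list_models_request(api_key: str) -> urllib.request.Request:
    """Build a GET request for the model-list endpoint."""
    return urllib.request.Request(f"{API_BASE}/models?key={api_key}")


def check_key(api_key: str) -> bool:
    """Return True if the key can list models (requires network access)."""
    try:
        with urllib.request.urlopen(build_list_models_request(api_key)) as resp:
            return "models" in json.load(resp)
    except urllib.error.URLError:
        return False
```

This is the same validity check the Test button performs; it is useful when debugging whether a failure comes from the key itself or from the Nadoo configuration.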

Setup — Google Vertex AI

1. Create a Google Cloud Project
   Set up a project in the Google Cloud Console and enable the Vertex AI API.

2. Create a Service Account
   Create a service account with the Vertex AI User role (roles/aiplatform.user) and download the JSON key file.

3. Configure in Nadoo
   Go to Admin > Model Providers > Google Gemini and enter:

   Field                 Required  Description
   Access Path           Yes       Select Vertex AI
   Project ID            Yes       Your Google Cloud project ID
   Region                Yes       The Google Cloud region (e.g., us-central1)
   Service Account JSON  Yes       Upload or paste the service account key JSON

4. Test Connection
   Click Test to verify the credentials and enable available models.
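Before pasting the key JSON into the admin UI, it can help to confirm the file is actually a service-account key with the fields Vertex AI clients expect. A small sketch (the field names follow the standard Google service-account key file format):

```python
import json

# Fields present in every Google service-account key file.
REQUIRED_FIELDS = {"type", "project_id", "private_key", "client_email"}


def validate_service_account(path: str) -> str:
    """Load a service-account key file and return its project_id.

    Raises ValueError if the file is missing required fields or is not
    a service-account credential (e.g., an OAuth client file by mistake).
    """
    with open(path) as f:
        key = json.load(f)
    missing = REQUIRED_FIELDS - key.keys()
    if missing:
        raise ValueError(f"key file missing fields: {sorted(missing)}")
    if key["type"] != "service_account":
        raise ValueError(f"unexpected credential type: {key['type']}")
    return key["project_id"]
```

The returned project_id should match the Project ID field you enter in the admin UI; a mismatch is a common cause of failed Test Connection attempts.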

Available Models

Chat / LLM

Model             Context Window  Best For
gemini-2.0-flash  1M tokens       Fast, efficient multimodal tasks
gemini-2.0-pro    1M tokens       Complex reasoning with massive context
gemini-1.5-pro    1M tokens       Long document analysis, multimodal processing
gemini-1.5-flash  1M tokens       Cost-efficient, fast responses

Embedding

Model                            Dimensions  Best For
text-embedding-004               768         General-purpose embeddings for RAG
text-multilingual-embedding-002  768         Multilingual embeddings
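For knowledge-base search, the 768-dimensional vectors from either embedding model are typically compared with cosine similarity. A minimal pure-Python sketch, with no vector database (the short vectors in the usage example stand in for real model output):

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def top_match(query: list[float], docs: dict[str, list[float]]) -> str:
    """Return the document id whose embedding is closest to the query."""
    return max(docs, key=lambda doc_id: cosine_similarity(query, docs[doc_id]))
```

In production you would store the document vectors in an index rather than scanning a dict, but the ranking logic is the same.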

Capabilities

Chat Completion

Conversational AI with streaming and function calling support.

Vision

Analyze images alongside text — charts, documents, screenshots, and photos.

Embeddings

Generate vector embeddings for semantic search and knowledge base indexing.

Long Context

Process up to 1 million tokens — entire codebases, books, or document collections in one request.

Multimodal

Process text, images, audio, and video in a single model call.

Function Calling

Invoke tools with structured arguments for agentic workflows.
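For function calling, a tool is exposed to the model as a declaration with an OpenAPI-style parameter schema, and the model responds with the tool name plus structured arguments. A sketch of the shape of such a declaration and a dispatcher (the weather tool here is illustrative, not part of Nadoo):

```python
# Hypothetical tool for an agentic workflow: the model can request a
# weather lookup with structured arguments instead of free text.
weather_tool = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}


def handle_tool_call(name: str, args: dict) -> str:
    """Dispatch a structured tool call returned by the model."""
    if name == "get_weather":
        # A real handler would call an actual weather API here.
        return f"Weather for {args['city']} requested in {args.get('unit', 'celsius')}."
    raise ValueError(f"unknown tool: {name}")
```

The handler's return value is sent back to the model as the tool result, letting it compose a final natural-language answer.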

When to Use Google Gemini

Gemini’s 1M token context window is among the largest available. Use it when you need to process entire books, large codebases, or extensive document collections without chunking.
When your application needs to analyze images, process audio, or understand video alongside text, Gemini’s native multimodal support avoids the need for separate processing pipelines.
If your infrastructure is on Google Cloud, Vertex AI integrates with IAM, VPC, Cloud Logging, and other GCP services for a seamless enterprise experience.
Google AI Studio offers a generous free tier for experimentation and development, making it easy to prototype before committing to production costs.
Use Case               Recommended Model                Reason
General chatbot        gemini-2.0-flash                 Fast, multimodal, cost-efficient
Complex analysis       gemini-2.0-pro                   Highest reasoning capability
Long document Q&A      gemini-1.5-pro                   1M token context for massive documents
High-volume workloads  gemini-1.5-flash                 Fastest Gemini model
Knowledge base search  text-embedding-004               Good quality embeddings
Multilingual RAG       text-multilingual-embedding-002  Cross-language embedding support
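If you route requests programmatically, the recommendations above reduce to a simple lookup (the use-case labels are this page's categories, not an API field):

```python
# Recommended model per use case, mirroring the table above.
MODEL_FOR_USE_CASE = {
    "general_chatbot": "gemini-2.0-flash",
    "complex_analysis": "gemini-2.0-pro",
    "long_document_qa": "gemini-1.5-pro",
    "high_volume": "gemini-1.5-flash",
    "knowledge_base_search": "text-embedding-004",
    "multilingual_rag": "text-multilingual-embedding-002",
}


def pick_model(use_case: str) -> str:
    """Return the recommended model, defaulting to the fast general model."""
    return MODEL_FOR_USE_CASE.get(use_case, "gemini-2.0-flash")
```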

Environment Variables

When self-hosting, configure Google Gemini via environment variables:

AI Studio

GOOGLE_API_KEY=your-api-key-here

Vertex AI

GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
GOOGLE_CLOUD_PROJECT=your-project-id
GOOGLE_CLOUD_REGION=us-central1
If both environment variables and the admin UI configuration are set, the admin UI values take precedence.

Rate Limits and Pricing

AI Studio

Google AI Studio applies rate limits per API key with a free tier and pay-as-you-go pricing above the free quota.

Vertex AI

Vertex AI uses per-project quotas that can be adjusted in the Google Cloud Console. Pricing is per 1,000 characters for input and output.
Monitor your usage in the Google Cloud Console to avoid unexpected charges. Set budget alerts for production workloads, especially when processing large documents with the 1M token context window.
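Because Vertex AI bills per 1,000 characters, a rough pre-flight estimate is easy to compute. A sketch with placeholder rates (the numbers in the example are assumptions for illustration, not published prices; take the current values from the Vertex AI pricing page):

```python
def estimate_cost(input_chars: int, output_chars: int,
                  input_rate: float, output_rate: float) -> float:
    """Estimate a request's cost in USD.

    Rates are USD per 1,000 characters; pass the current values from
    the Vertex AI pricing page.
    """
    return (input_chars / 1000) * input_rate + (output_chars / 1000) * output_rate


# Example with hypothetical rates of $0.000125 in / $0.000375 out:
# a 100k-character document summarized into 2k characters.
cost = estimate_cost(100_000, 2_000, 0.000125, 0.000375)
```

Running this kind of estimate before submitting very large documents makes the budget alerts mentioned above far less likely to fire.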

Troubleshooting

Invalid credentials
AI Studio: Your API key is invalid or revoked. Generate a new key at aistudio.google.com.
Vertex AI: Your service account lacks the Vertex AI User role. Check IAM permissions in the Google Cloud Console.

Rate limits exceeded
You have exceeded rate limits. Wait and retry, or request a quota increase in the Google Cloud Console (Vertex AI only).

Model not available
The model may not be available in your selected region (Vertex AI) or may not be enabled for your API key (AI Studio). Check the model availability documentation.