Getting Started with Google AI Studio (Gemini API)
This guide shows how to set up a minimal deployment to use the TensorZero Gateway with Google AI Studio (Gemini API).
Simple Setup
You can use the shorthand `google_ai_studio_gemini::model_name` to use a Google AI Studio (Gemini API) model with TensorZero, unless you need advanced features like fallbacks or custom credentials.
You can use Google AI Studio (Gemini API) models in your TensorZero variants by setting the `model` field to `google_ai_studio_gemini::model_name`.
For example:
```toml
[functions.my_function_name.variants.my_variant_name]
type = "chat_completion"
model = "google_ai_studio_gemini::gemini-1.5-flash-8b"
```
Additionally, you can set `model_name` in the inference request to use a specific Google AI Studio (Gemini API) model, without having to configure a function and variant in TensorZero.
```bash
curl -X POST http://localhost:3000/inference \
  -H "Content-Type: application/json" \
  -d '{
    "model_name": "google_ai_studio_gemini::gemini-1.5-flash-8b",
    "input": {
      "messages": [
        {
          "role": "user",
          "content": "What is the capital of Japan?"
        }
      ]
    }
  }'
```
Advanced Setup
In more complex scenarios (e.g. fallbacks, custom credentials), you can configure your own model and Google AI Studio (Gemini API) provider in TensorZero.
For this minimal setup, you’ll need just two files in your project directory:
- `config/`
  - `tensorzero.toml`
- `docker-compose.yml`
For production deployments, see our Deployment Guide.
Configuration
Create a minimal configuration file that defines a model and a simple chat function:
```toml
[models.gemini_1_5_flash_8b]
routing = ["google_ai_studio_gemini"]

[models.gemini_1_5_flash_8b.providers.google_ai_studio_gemini]
type = "google_ai_studio_gemini"
model_name = "gemini-1.5-flash-8b"

[functions.my_function_name]
type = "chat"

[functions.my_function_name.variants.my_variant_name]
type = "chat_completion"
model = "gemini_1_5_flash_8b"
```
See the list of models available on Google AI Studio (Gemini API).
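The `routing` field is also what enables fallbacks: the gateway tries each listed provider in order until one succeeds. As a sketch, assuming you also had a GCP Vertex AI Gemini provider to fall back to (the `gcp_vertex_gemini` provider below and its fields are illustrative; adapt them to your setup):

```toml
[models.gemini_1_5_flash_8b]
routing = ["google_ai_studio_gemini", "gcp_vertex_gemini"]

[models.gemini_1_5_flash_8b.providers.google_ai_studio_gemini]
type = "google_ai_studio_gemini"
model_name = "gemini-1.5-flash-8b"

# Illustrative fallback provider: tried only if the provider above fails
[models.gemini_1_5_flash_8b.providers.gcp_vertex_gemini]
type = "gcp_vertex_gemini"
model_id = "gemini-1.5-flash-8b"
location = "us-central1"
project_id = "your-gcp-project-id"
```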
Credentials
You must set the `GOOGLE_AI_STUDIO_API_KEY` environment variable before running the gateway.

You can customize the credential location by setting `api_key_location` to `env::YOUR_ENVIRONMENT_VARIABLE` or `dynamic::ARGUMENT_NAME`.
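For example, to supply the API key per request rather than through the environment, you could point the provider at a dynamic credential (a minimal sketch; the argument name `gemini_api_key` is arbitrary):

```toml
[models.gemini_1_5_flash_8b.providers.google_ai_studio_gemini]
type = "google_ai_studio_gemini"
model_name = "gemini-1.5-flash-8b"
# The gateway will expect this key in the `credentials` field of each inference request
api_key_location = "dynamic::gemini_api_key"
```

Inference requests would then carry the key in the `credentials` field of the request body, e.g. `"credentials": {"gemini_api_key": "YOUR_API_KEY"}`.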
See the Credential Management guide and Configuration Reference for more information.
Deployment (Docker Compose)
Create a minimal Docker Compose configuration:
```yaml
# This is a simplified example for learning purposes. Do not use this in production.
# For production-ready deployments, see: https://www.tensorzero.com/docs/gateway/deployment

services:
  gateway:
    image: tensorzero/gateway
    volumes:
      - ./config:/app/config:ro
    environment:
      - GOOGLE_AI_STUDIO_API_KEY=${GOOGLE_AI_STUDIO_API_KEY:?Environment variable GOOGLE_AI_STUDIO_API_KEY must be set.}
    ports:
      - "3000:3000"
    extra_hosts:
      - "host.docker.internal:host-gateway"
```
You can start the gateway with `docker compose up`.
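For example, from the project directory:

```bash
# Replace the placeholder with your Google AI Studio API key
export GOOGLE_AI_STUDIO_API_KEY="YOUR_GOOGLE_AI_STUDIO_API_KEY"
docker compose up
```

Once the container is up, a request to the gateway's health endpoint (`curl http://localhost:3000/health`) should confirm it's ready.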
Inference
Make an inference request to the gateway:
```bash
curl -X POST http://localhost:3000/inference \
  -H "Content-Type: application/json" \
  -d '{
    "function_name": "my_function_name",
    "input": {
      "messages": [
        {
          "role": "user",
          "content": "What is the capital of Japan?"
        }
      ]
    }
  }'
```
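If everything is wired up correctly, the gateway returns a JSON response shaped roughly like the following (abridged and illustrative; the exact fields, IDs, and token counts will vary):

```json
{
  "inference_id": "00000000-0000-0000-0000-000000000000",
  "variant_name": "my_variant_name",
  "content": [
    {
      "type": "text",
      "text": "The capital of Japan is Tokyo."
    }
  ],
  "usage": {
    "input_tokens": 10,
    "output_tokens": 8
  }
}
```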