Credential Management (API Key Management)
This guide explains how to manage credentials (API keys) in the TensorZero Gateway.
Typically, the TensorZero Gateway will look for credentials like API keys using standard environment variables. The gateway will load credentials from the environment variables on startup, and your application doesn’t need to have access to the credentials.
That said, you can customize this behavior by setting alternative credential locations for each provider. For example, you can provide credentials dynamically at inference time, or set alternative static credentials for each provider (e.g. to use multiple API keys for the same provider).
Default Behavior
By default, the TensorZero Gateway will look for credentials in the following environment variables:
| Model Provider | Default Credential |
| --- | --- |
| Anthropic | ANTHROPIC_API_KEY |
| AWS Bedrock | Uses AWS SDK credentials |
| Azure | AZURE_OPENAI_API_KEY |
| DeepSeek | DEEPSEEK_API_KEY |
| Fireworks | FIREWORKS_API_KEY |
| GCP Vertex AI (Anthropic) | GCP_VERTEX_CREDENTIALS_PATH |
| GCP Vertex AI (Gemini) | GCP_VERTEX_CREDENTIALS_PATH |
| Google AI Studio (Gemini) | GOOGLE_API_KEY |
| Hyperbolic | HYPERBOLIC_API_KEY |
| Mistral | MISTRAL_API_KEY |
| OpenAI | OPENAI_API_KEY |
| OpenAI-Compatible | OPENAI_API_KEY |
| SGLang | SGLANG_API_KEY |
| Text Generation Inference (TGI) | None |
| Together | TOGETHER_API_KEY |
| vLLM | None |
| xAI | XAI_API_KEY |
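Since the gateway reads these variables at startup, it can help to verify they are set before launching it. A minimal sketch in Python (the helper and the variable list are illustrative, not part of TensorZero):

```python
import os

# Illustrative helper: check that the environment variables for the
# providers you use are set before starting the TensorZero Gateway.
def missing_credentials(required_vars):
    """Return the subset of required_vars not present in the environment."""
    return [var for var in required_vars if var not in os.environ]

# Example: a deployment that uses OpenAI and Anthropic.
# The variable names match the table above; adjust them to your providers.
required = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY"]
missing = missing_credentials(required)
if missing:
    print(f"Missing credentials: {missing}")
```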
Customizing Credential Management
You can customize the source of credentials for each provider.
See the Configuration Reference (e.g. api_key_location) for more information on the different ways to configure credentials for each provider.
Also see the relevant provider guides for more information on how to configure credentials for each provider.
Static Credentials
You can set alternative static credentials for each provider.
For example, let’s say we want to use a different environment variable for an OpenAI provider.
We can customize the variable name by setting api_key_location to env::MY_OTHER_OPENAI_API_KEY.
```toml
[models.gpt_4o_mini.providers.my_other_openai]
type = "openai"
api_key_location = "env::MY_OTHER_OPENAI_API_KEY"
# ...
```
At startup, the TensorZero Gateway will look for the MY_OTHER_OPENAI_API_KEY environment variable and use that value for the API key.
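The env:: prefix tells the gateway which environment variable to read. Purely as an illustration (this is not TensorZero's implementation, and resolve_api_key is a made-up helper), the resolution rule described above can be sketched as:

```python
import os

# Illustrative only: how an "env::VAR_NAME" credential location maps to
# an environment variable lookup, mimicking the behavior described above.
def resolve_api_key(api_key_location):
    prefix, _, name = api_key_location.partition("::")
    if prefix == "env":
        # Raises KeyError if the variable is unset, analogous to the
        # gateway failing to find the credential at startup.
        return os.environ[name]
    raise ValueError(f"unsupported credential location: {api_key_location}")

os.environ["MY_OTHER_OPENAI_API_KEY"] = "sk-example"
print(resolve_api_key("env::MY_OTHER_OPENAI_API_KEY"))
```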
Dynamic Credentials
You can provide API keys dynamically at inference time.
To do this, use the dynamic:: prefix in the relevant credential field in the provider configuration.
For example, let’s say we want to provide dynamic API keys for the OpenAI provider.
```toml
[models.user_gpt_4o_mini]
routing = ["openai"]

[models.user_gpt_4o_mini.providers.openai]
type = "openai"
model_name = "gpt-4o-mini"
api_key_location = "dynamic::customer_openai_api_key"
```
At inference time, you can provide the API key in the credentials argument.
```python
from tensorzero import TensorZeroGateway

with TensorZeroGateway("http://localhost:3000") as client:
    response = client.inference(
        function_name="generate_haiku",
        input={
            "messages": [
                {
                    "role": "user",
                    "content": "Write a haiku about artificial intelligence.",
                }
            ]
        },
        credentials={
            "customer_openai_api_key": "sk-..."
        },
    )

print(response)
```
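In a multi-tenant setup, you would typically look up each customer's key before the inference call. A hedged sketch (the key store and helper below are hypothetical; only the shape of the credentials dictionary comes from the example above):

```python
# Hypothetical per-customer key store. In production this would be a
# secrets manager or database, not an in-memory dict.
CUSTOMER_KEYS = {
    "customer_a": "sk-aaa...",
    "customer_b": "sk-bbb...",
}

def credentials_for(customer_id):
    """Build the credentials dict to pass to client.inference(...).

    The dictionary key must match the name declared in the provider
    config (here, dynamic::customer_openai_api_key).
    """
    return {"customer_openai_api_key": CUSTOMER_KEYS[customer_id]}
```

You would then call `client.inference(..., credentials=credentials_for(customer_id))`, so each request is billed against that customer's own OpenAI account.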