Credential Management (API Key Management)

This guide explains how to manage credentials (API keys) in the TensorZero Gateway.

Typically, the TensorZero Gateway will look for credentials like API keys using standard environment variables. The gateway will load credentials from the environment variables on startup, and your application doesn’t need to have access to the credentials.

That said, you can customize this behavior by setting alternative credential locations for each provider. For example, you can provide credentials dynamically at inference time, or set alternative static credentials for each provider (e.g. to use multiple API keys for the same provider).

Default Behavior

By default, the TensorZero Gateway will look for credentials in the following environment variables:

| Model Provider | Default Credential |
| --- | --- |
| Anthropic | `ANTHROPIC_API_KEY` |
| AWS Bedrock | Uses AWS SDK credentials |
| Azure | `AZURE_OPENAI_API_KEY` |
| DeepSeek | `DEEPSEEK_API_KEY` |
| Fireworks | `FIREWORKS_API_KEY` |
| GCP Vertex AI (Anthropic) | `GCP_VERTEX_CREDENTIALS_PATH` |
| GCP Vertex AI (Gemini) | `GCP_VERTEX_CREDENTIALS_PATH` |
| Google AI Studio (Gemini) | `GOOGLE_API_KEY` |
| Hyperbolic | `HYPERBOLIC_API_KEY` |
| Mistral | `MISTRAL_API_KEY` |
| OpenAI | `OPENAI_API_KEY` |
| OpenAI-Compatible | `OPENAI_API_KEY` |
| SGLang | `SGLANG_API_KEY` |
| Text Generation Inference (TGI) | None |
| Together | `TOGETHER_API_KEY` |
| vLLM | None |
| xAI | `XAI_API_KEY` |
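For example, if your configuration uses the OpenAI and Anthropic providers with their default settings, you can export the corresponding variables in the environment where the gateway runs. This is a minimal sketch with placeholder key values; the actual start command depends on how you deploy the gateway (e.g. Docker), so it is shown only as a comment.

```shell
# Export the default credentials for the providers you use.
# The key values below are placeholders.
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."

# Then start the gateway in the same environment using your usual
# deployment command, e.g.:
# docker compose up gateway
```

The gateway reads these variables once at startup, so they must be set before the process launches.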

Customizing Credential Management

You can customize the source of credentials for each provider.

See the Configuration Reference (e.g. `api_key_location`) for the different ways to configure credentials, and the relevant provider guides for provider-specific details.

Static Credentials

You can set alternative static credentials for each provider.

For example, let’s say we want to use a different environment variable for an OpenAI provider. We can customize the variable name by setting `api_key_location` to `env::MY_OTHER_OPENAI_API_KEY`:

```toml
[models.gpt_4o_mini.providers.my_other_openai]
type = "openai"
api_key_location = "env::MY_OTHER_OPENAI_API_KEY"
# ...
```

At startup, the TensorZero Gateway will look for the `MY_OTHER_OPENAI_API_KEY` environment variable and use that value for the API key.
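This is one way to use multiple API keys for the same provider type: each provider block reads its own variable. A minimal sketch (key values are placeholders) might export both the default variable and the alternative one before starting the gateway:

```shell
# Default OpenAI credential, used by providers without a custom api_key_location
export OPENAI_API_KEY="sk-..."

# Alternative credential for the my_other_openai provider above
export MY_OTHER_OPENAI_API_KEY="sk-..."
```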

Dynamic Credentials

You can provide API keys dynamically at inference time.

To do this, you can use the `dynamic::` prefix in the relevant credential field in the provider configuration.

For example, let’s say we want to provide dynamic API keys for the OpenAI provider.

```toml
[models.user_gpt_4o_mini]
routing = ["openai"]

[models.user_gpt_4o_mini.providers.openai]
type = "openai"
model_name = "gpt-4o-mini"
api_key_location = "dynamic::customer_openai_api_key"
```

At inference time, you can provide the API key in the `credentials` argument:

```python
from tensorzero import TensorZeroGateway

with TensorZeroGateway("http://localhost:3000") as client:
    response = client.inference(
        function_name="generate_haiku",
        input={
            "messages": [
                {
                    "role": "user",
                    "content": "Write a haiku about artificial intelligence.",
                }
            ]
        },
        credentials={"customer_openai_api_key": "sk-..."},
    )
    print(response)
```
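If you call the gateway over HTTP rather than through the Python client, the same credentials map can be included in the inference request body. The sketch below only builds and validates the JSON payload locally; the commented `curl` command assumes a gateway listening on `localhost:3000` with an `/inference` endpoint that accepts a top-level `credentials` field (field names mirrored from the Python example above — verify against the API reference for your gateway version).

```shell
# Build the request body (field names mirror the Python client call above).
cat > /tmp/request.json <<'EOF'
{
  "function_name": "generate_haiku",
  "input": {
    "messages": [
      {"role": "user", "content": "Write a haiku about artificial intelligence."}
    ]
  },
  "credentials": {"customer_openai_api_key": "sk-..."}
}
EOF

# Validate the payload locally before sending it.
python3 -m json.tool /tmp/request.json > /dev/null && echo "payload OK"

# Send it to a running gateway (assumed endpoint and port):
# curl -X POST http://localhost:3000/inference \
#   -H "Content-Type: application/json" \
#   -d @/tmp/request.json
```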