Simple Setup
You can use the short-hand `deepseek::model_name` to use a DeepSeek model with TensorZero, unless you need advanced features like fallbacks or custom credentials.
You can use DeepSeek models in your TensorZero variants by setting the model field to `deepseek::model_name`. Additionally, you can set `model_name` in the inference request to use a specific DeepSeek model, without having to configure a function and variant in TensorZero.
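As a minimal sketch of the variant-based approach (the function and variant names below are hypothetical placeholders, not part of the original guide):

```toml
# config/tensorzero.toml (hypothetical names for illustration)
[functions.my_function.variants.my_variant]
type = "chat_completion"
model = "deepseek::deepseek-chat"
```

Alternatively, for the request-based approach, you could pass `"model_name": "deepseek::deepseek-chat"` directly in the body of an inference request instead of referencing a configured function.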
Advanced Setup
In more complex scenarios (e.g. fallbacks, custom credentials), you can configure your own model and DeepSeek provider in TensorZero. For this minimal setup, you'll need just two files in your project directory: a configuration file and a Docker Compose file. You can also find the complete code for this example on GitHub.
Configuration
Create a minimal configuration file that defines a model and a simple chat function in `config/tensorzero.toml`.
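A sketch of what such a configuration might look like, assuming hypothetical function and variant names (`my_function_name`, `my_variant_name`) and the conventional TensorZero layout of a model block with a provider sub-table:

```toml
# config/tensorzero.toml — minimal sketch; names are illustrative
[models.deepseek-chat]
routing = ["deepseek"]

[models.deepseek-chat.providers.deepseek]
type = "deepseek"
model_name = "deepseek-chat"

[functions.my_function_name]
type = "chat"

[functions.my_function_name.variants.my_variant_name]
type = "chat_completion"
model = "deepseek-chat"
```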
The DeepSeek provider supports the models `deepseek-chat` (DeepSeek-V3) and `deepseek-reasoner` (DeepSeek-R1). DeepSeek only supports JSON mode for `deepseek-chat`, and neither model supports tool use yet. We include `thought` content blocks in the response and data model for reasoning models like `deepseek-reasoner`.
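For illustration, a response from a reasoning model might interleave a `thought` content block with the regular text block, roughly like the following (a sketch of the shape only, not the authoritative response schema):

```json
{
  "content": [
    { "type": "thought", "text": "Let me reason through this step by step..." },
    { "type": "text", "text": "The answer is 42." }
  ]
}
```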
Credentials
You must set the `DEEPSEEK_API_KEY` environment variable before running the gateway.
You can customize the credential location by setting `api_key_location` to `env::YOUR_ENVIRONMENT_VARIABLE` or `dynamic::ARGUMENT_NAME`.
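For example, a provider block might point at a non-default environment variable like this (a sketch; `MY_DEEPSEEK_KEY` is an arbitrary variable name chosen for illustration):

```toml
# Read the API key from a custom environment variable
[models.deepseek-chat.providers.deepseek]
type = "deepseek"
model_name = "deepseek-chat"
api_key_location = "env::MY_DEEPSEEK_KEY"
```

With `dynamic::ARGUMENT_NAME`, the credential would instead be supplied at inference time rather than read from the environment.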
See the Credential Management guide and Configuration Reference for more information.
Deployment (Docker Compose)
Create a minimal Docker Compose configuration in `docker-compose.yml`.
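A minimal sketch of what this file might contain, assuming the gateway reads its configuration from a mounted `./config` directory and uses ClickHouse for observability (image names, ports, and the ClickHouse URL variable are illustrative assumptions):

```yaml
# docker-compose.yml — minimal sketch; details are illustrative
services:
  clickhouse:
    image: clickhouse/clickhouse-server
    environment:
      CLICKHOUSE_USER: chuser
      CLICKHOUSE_PASSWORD: chpassword

  gateway:
    image: tensorzero/gateway
    volumes:
      - ./config:/app/config:ro
    environment:
      TENSORZERO_CLICKHOUSE_URL: http://chuser:chpassword@clickhouse:8123/tensorzero
      DEEPSEEK_API_KEY: ${DEEPSEEK_API_KEY:?You must set DEEPSEEK_API_KEY.}
    ports:
      - "3000:3000"
    depends_on:
      - clickhouse
```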
You can start the gateway by running `docker compose up`.