Python
TensorZero Client
The TensorZero client offers the most flexibility. It can be used with a built-in embedded (in-memory) gateway or a standalone HTTP gateway. Additionally, it can be used synchronously or asynchronously. You can install the TensorZero Python client with pip install tensorzero.
Embedded Gateway
The TensorZero Client includes a built-in embedded (in-memory) gateway, so you don’t need to run a separate service.

Synchronous
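Here’s a minimal sketch of synchronous usage; the config path, ClickHouse URL, and the generate_haiku function are illustrative placeholders for your own setup:

```python
from tensorzero import TensorZeroGateway

# Build an embedded (in-memory) gateway (no separate service required).
# The config path and ClickHouse URL below are placeholders.
with TensorZeroGateway.build_embedded(
    config_file="config/tensorzero.toml",
    clickhouse_url="http://chuser:chpassword@localhost:8123/tensorzero",
) as client:
    response = client.inference(
        function_name="generate_haiku",
        input={
            "messages": [
                {"role": "user", "content": "Write a haiku about artificial intelligence."}
            ]
        },
    )
    print(response)
```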
Asynchronous
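And a sketch of the asynchronous equivalent, using the same placeholder values:

```python
import asyncio

from tensorzero import AsyncTensorZeroGateway

async def main():
    # With the default async_setup=True, build_embedded returns an awaitable.
    async with await AsyncTensorZeroGateway.build_embedded(
        config_file="config/tensorzero.toml",
        clickhouse_url="http://chuser:chpassword@localhost:8123/tensorzero",
    ) as client:
        response = await client.inference(
            function_name="generate_haiku",
            input={
                "messages": [
                    {"role": "user", "content": "Write a haiku about artificial intelligence."}
                ]
            },
        )
        print(response)

asyncio.run(main())
```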
You can avoid the await in build_embedded by setting async_setup=False. This is useful for synchronous contexts like __init__ methods, where await cannot be used. However, avoid async_setup=False in asynchronous contexts, as it blocks the event loop; in async contexts, use the default async_setup=True with await. For example, it’s safe to use async_setup=False when initializing a FastAPI server, but not while the server is actively handling requests.

Standalone HTTP Gateway
The TensorZero Client can optionally be used with a standalone HTTP Gateway instead.

Synchronous
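A minimal synchronous sketch, assuming a gateway running at http://localhost:3000 and the same placeholder generate_haiku function:

```python
from tensorzero import TensorZeroGateway

# Connect to a standalone gateway instead of embedding one in-process.
with TensorZeroGateway.build_http(gateway_url="http://localhost:3000") as client:
    response = client.inference(
        function_name="generate_haiku",
        input={
            "messages": [
                {"role": "user", "content": "Write a haiku about artificial intelligence."}
            ]
        },
    )
    print(response)
```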
Asynchronous
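And the asynchronous equivalent under the same assumptions:

```python
import asyncio

from tensorzero import AsyncTensorZeroGateway

async def main():
    # As with build_embedded, the default async_setup=True makes this awaitable.
    async with await AsyncTensorZeroGateway.build_http(
        gateway_url="http://localhost:3000"
    ) as client:
        response = await client.inference(
            function_name="generate_haiku",
            input={
                "messages": [
                    {"role": "user", "content": "Write a haiku about artificial intelligence."}
                ]
            },
        )
        print(response)

asyncio.run(main())
```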
You can avoid the await in build_http by setting async_setup=False. See above for more details.

OpenAI Python Client
You can use the OpenAI Python client to run inference requests with TensorZero. You need to use the TensorZero Client for feedback requests.

Embedded Gateway
You can run an embedded (in-memory) TensorZero Gateway with the OpenAI Python client, which doesn’t require a separate service.
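Here’s a sketch of patching an async OpenAI client with patch_openai_client; the config_file and clickhouse_url keyword arguments are assumptions based on the embedded-gateway setup above, and generate_haiku is a placeholder function:

```python
import asyncio

from openai import AsyncOpenAI
from tensorzero import patch_openai_client

async def main():
    # The api_key is a dummy value; the gateway manages provider credentials.
    # With the default async_setup=True, patch_openai_client returns an awaitable.
    client = await patch_openai_client(
        AsyncOpenAI(api_key="not-used"),
        config_file="config/tensorzero.toml",  # placeholder path
        clickhouse_url="http://chuser:chpassword@localhost:8123/tensorzero",  # placeholder
    )
    response = await client.chat.completions.create(
        model="tensorzero::function_name::generate_haiku",
        messages=[
            {"role": "user", "content": "Write a haiku about artificial intelligence."}
        ],
    )
    print(response)

asyncio.run(main())
```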
You can avoid the await in patch_openai_client by setting async_setup=False. See above for more details.

Standalone HTTP Gateway
You can deploy the TensorZero Gateway as a separate service and configure the OpenAI client to talk to it. See Deployment for instructions on how to deploy the TensorZero Gateway.
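For instance, a sketch that points the OpenAI client at a gateway assumed to be running at http://localhost:3000, with its OpenAI-compatible API under /openai/v1:

```python
from openai import OpenAI

# The api_key is a dummy value here; the gateway manages provider credentials.
client = OpenAI(base_url="http://localhost:3000/openai/v1", api_key="not-used")

response = client.chat.completions.create(
    model="tensorzero::function_name::generate_haiku",  # placeholder function
    messages=[
        {"role": "user", "content": "Write a haiku about artificial intelligence."}
    ],
)
print(response)
```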
Usage Details

model

In the OpenAI client, the model parameter should be one of the following:
- tensorzero::function_name::<your_function_name>: For example, if you have a function named generate_haiku, you can use tensorzero::function_name::generate_haiku.
- tensorzero::model_name::<your_model_name>: For example, if you have a model named my_model in the config file, you can use tensorzero::model_name::my_model. Alternatively, you can use default models like tensorzero::model_name::openai::gpt-4o-mini.
TensorZero Parameters

You can include optional TensorZero parameters (e.g. episode_id and variant_name) by prefixing them with tensorzero:: in the extra_body field in OpenAI client requests.
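For example, a request that pins a variant and continues an episode might look like the following sketch (the variant name and episode ID are placeholders):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:3000/openai/v1", api_key="not-used")

response = client.chat.completions.create(
    model="tensorzero::function_name::generate_haiku",
    messages=[
        {"role": "user", "content": "Write a haiku about artificial intelligence."}
    ],
    extra_body={
        # TensorZero-specific parameters, prefixed with tensorzero::
        "tensorzero::variant_name": "my_variant",
        "tensorzero::episode_id": "00000000-0000-0000-0000-000000000000",
    },
)
print(response)
```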
JavaScript / TypeScript / Node
OpenAI Node Client
You can use the OpenAI Node client to run inference requests with TensorZero. You can deploy the TensorZero Gateway as a separate service and configure the OpenAI client to talk to it. See Deployment for instructions on how to deploy the TensorZero Gateway.

See OpenAI Python Client » Usage Details above for instructions on how to use the model parameter and other technical details. You can include optional TensorZero parameters (e.g. episode_id and variant_name) by prefixing them with tensorzero:: in the body in OpenAI client requests.
Other Languages and Platforms
The TensorZero Gateway exposes every feature via its HTTP API. You can deploy the TensorZero Gateway as a standalone service and interact with it from any programming language by making HTTP requests. See Deployment for instructions on how to deploy the TensorZero Gateway.

TensorZero HTTP API
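As a sketch, here’s an inference request from Python using only an HTTP library, assuming the gateway runs at http://localhost:3000 and exposes its inference endpoint at POST /inference:

```python
import requests

response = requests.post(
    "http://localhost:3000/inference",
    json={
        "function_name": "generate_haiku",  # placeholder function from above
        "input": {
            "messages": [
                {"role": "user", "content": "Write a haiku about artificial intelligence."}
            ]
        },
    },
)
print(response.json())
```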
OpenAI HTTP API
You can make OpenAI-compatible requests to the TensorZero Gateway.

See OpenAI Python Client » Usage Details above for instructions on how to use the model parameter and other technical details. You can include optional TensorZero parameters (e.g. episode_id and variant_name) by prefixing them with tensorzero:: in the request body.
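For example, a sketch of an OpenAI-compatible request made directly over HTTP, assuming the gateway exposes the endpoint POST /openai/v1/chat/completions:

```python
import requests

response = requests.post(
    "http://localhost:3000/openai/v1/chat/completions",
    json={
        "model": "tensorzero::function_name::generate_haiku",
        "messages": [
            {"role": "user", "content": "Write a haiku about artificial intelligence."}
        ],
        # Optional TensorZero parameters go in the body with the tensorzero:: prefix.
        "tensorzero::variant_name": "my_variant",  # placeholder variant
    },
)
print(response.json())
```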