Integrations

The TensorZero Gateway integrates with the major LLM providers.

Model Providers

Provider                       Chat Functions  JSON Functions  Streaming  Tool Use
Anthropic                      ✓               ✓               ✓          ✓
AWS Bedrock                    ✓               ✓               ✓          ✓
Azure OpenAI Service           ✓               ✓               ✓          ✓
Fireworks AI                   ✓               ✓               ✓          ✓
GCP Vertex AI                  ✓               ✓               ✓          ✓
Mistral                        ✓               ✓               ✓          ✓
OpenAI (& OpenAI-Compatible)   ✓               ✓               ✓          ✓
Together AI                    ✓               ✓               ✓          ✓
vLLM                           ✓               ✓               ✓          ✓
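
For a concrete starting point, here is a minimal sketch of an inference request against the gateway's HTTP API. It assumes the gateway is listening on localhost:3000 and that a chat function named my_chat_function is defined in the gateway's configuration; both names are placeholders.

    import requests

    # Minimal sketch: call the gateway's POST /inference endpoint.
    # Assumes the gateway listens on localhost:3000 and "my_chat_function"
    # is a chat function defined in the gateway's configuration.
    response = requests.post(
        "http://localhost:3000/inference",
        json={
            "function_name": "my_chat_function",
            "input": {
                "messages": [
                    {"role": "user", "content": "What is the capital of Japan?"}
                ]
            },
        },
    )
    response.raise_for_status()
    print(response.json())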

Limitations

The TensorZero Gateway makes a best effort to normalize configuration across providers. For example, certain providers don’t support tool_choice: required; in these cases, the gateway coerces the request to tool_choice: auto under the hood.
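
As a hedged sketch of what that looks like in practice, the request below asks for tool_choice: required with a dynamically defined tool. The top-level additional_tools and tool_choice fields, the get_weather tool, and my_chat_function are assumptions for illustration. Against a provider that lacks tool_choice: required, the gateway would coerce the request to tool_choice: auto as described above.

    import requests

    # Hypothetical request: ask for tool_choice "required" with a tool defined
    # inline. For providers without tool_choice: required support, the gateway
    # coerces this to tool_choice: auto before dispatching the request.
    response = requests.post(
        "http://localhost:3000/inference",
        json={
            "function_name": "my_chat_function",
            "input": {
                "messages": [
                    {"role": "user", "content": "What's the weather in Tokyo?"}
                ]
            },
            # The tool definition below (name, description, JSON Schema
            # parameters) is illustrative, not from the original docs.
            "additional_tools": [
                {
                    "name": "get_weather",
                    "description": "Look up the current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                }
            ],
            "tool_choice": "required",
        },
    )
    print(response.json())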

Currently, Fireworks AI and OpenAI are the only providers that support parallel_tool_calls. Additionally, the TensorZero Gateway only supports strict for OpenAI (Structured Outputs) and vLLM (Guided Decoding).
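
A similar sketch with parallel tool calls enabled; per the note above, only Fireworks AI and OpenAI currently honor this flag. The exact name and placement of the parallel_tool_calls field in the request body is an assumption here.

    import requests

    # Sketch: enable parallel tool calls for a single inference request.
    # Per the note above, only Fireworks AI and OpenAI support this; the
    # top-level placement of parallel_tool_calls is an assumption.
    response = requests.post(
        "http://localhost:3000/inference",
        json={
            "function_name": "my_chat_function",
            "input": {
                "messages": [
                    {
                        "role": "user",
                        "content": "Check the weather in Tokyo and in Osaka.",
                    }
                ]
            },
            "parallel_tool_calls": True,
        },
    )
    print(response.json())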

Below are the known limitations for each supported model provider. (A request sketch illustrating the recurring seed limitation follows the list.)

  • Anthropic
    • The Anthropic API doesn’t support consecutive messages from the same role.
    • The Anthropic API doesn’t support tool_choice: none.
    • The Anthropic API doesn’t support seed.
  • AWS Bedrock
    • The TensorZero Gateway currently doesn’t support AWS Bedrock guardrails and traces.
    • The TensorZero Gateway uses a non-standard structure for storing ModelInference.raw_response for AWS Bedrock inference requests.
    • The AWS Bedrock API doesn’t support tool_choice: none.
    • The AWS Bedrock API doesn’t support seed.
  • Azure OpenAI Service
    • The Azure OpenAI Service API doesn’t provide usage information when streaming.
    • The Azure OpenAI Service API doesn’t support tool_choice: required.
  • Fireworks AI
    • The Fireworks API doesn’t support seed.
  • GCP Vertex AI
    • The TensorZero Gateway currently only supports the Gemini family of models.
    • The GCP Vertex AI API doesn’t support tool_choice: required for Gemini Flash models.
  • Mistral
    • The Mistral API doesn’t support seed.
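
To make the seed limitation concrete, here is a sketch that sets seed at inference time; the nesting under params.chat_completion and the function name are assumptions. Per the list above, providers such as Anthropic, AWS Bedrock, Fireworks AI, and Mistral don't support this parameter.

    import requests

    # Sketch: request a fixed seed for more reproducible sampling. The nesting
    # under params.chat_completion is an assumption; per the list above,
    # Anthropic, AWS Bedrock, Fireworks AI, and Mistral don't support seed.
    response = requests.post(
        "http://localhost:3000/inference",
        json={
            "function_name": "my_chat_function",
            "input": {
                "messages": [{"role": "user", "content": "Pick a random number."}]
            },
            "params": {"chat_completion": {"seed": 42}},
        },
    )
    print(response.json())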