Azure
Instructions for using Azure OpenAI models
To use a language model hosted on Azure OpenAI, specify the `azure` path in the `from` field and provide the following parameters from the Azure OpenAI Model Deployment page:
| Parameter | Description | Default |
| --- | --- | --- |
| `azure_api_key` | The Azure OpenAI API key from the model deployment page. | - |
| `azure_api_version` | The API version used for the Azure OpenAI service. | - |
| `azure_deployment_name` | The name of the model deployment. | Model name |
| `endpoint` | The Azure OpenAI resource endpoint, e.g., `https://resource-name.openai.azure.com`. | - |
| `azure_entra_token` | The Azure Entra token for authentication. | - |
| `responses_api` | `enabled` or `disabled`. Whether this model can be invoked from the `/v1/responses` HTTP endpoint. | `disabled` |
| `azure_openai_responses_tools` | Comma-separated list of OpenAI-hosted tools exposed via the Responses API for this model. These hosted tools are not available from the `/v1/chat/completions` HTTP endpoint. Supported tools: `code_interpreter`, `web_search`. | - |
Only one of `azure_api_key` or `azure_entra_token` can be provided for a model configuration.
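For instance, a minimal sketch of a configuration that authenticates with an Entra token instead of an API key, assuming the token is stored in a secret named `SPICE_AZURE_ENTRA_TOKEN` (a placeholder name); the other values mirror the API key example below:

```yaml
models:
  - from: azure:gpt-4o-mini
    name: gpt-4o-mini
    params:
      endpoint: ${ secrets:SPICE_AZURE_AI_ENDPOINT }
      azure_api_version: 2024-08-01-preview
      azure_deployment_name: gpt-4o-mini
      # Entra token replaces azure_api_key; do not set both.
      azure_entra_token: ${ secrets:SPICE_AZURE_ENTRA_TOKEN }
```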
Example using an API key:

```yaml
models:
  - from: azure:gpt-4o-mini
    name: gpt-4o-mini
    params:
      endpoint: ${ secrets:SPICE_AZURE_AI_ENDPOINT }
      azure_api_version: 2024-08-01-preview
      azure_deployment_name: gpt-4o-mini
      azure_api_key: ${ secrets:SPICE_AZURE_API_KEY }

      # Responses API configuration
      responses_api: enabled
      azure_openai_responses_tools: web_search
```

Refer to the Azure OpenAI Service models documentation for more details on available models and configurations.
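Both supported hosted tools can be enabled at once by listing them comma-separated in `azure_openai_responses_tools`. A minimal sketch, reusing the placeholder endpoint, deployment, and secret names from the example above:

```yaml
models:
  - from: azure:gpt-4o-mini
    name: gpt-4o-mini
    params:
      endpoint: ${ secrets:SPICE_AZURE_AI_ENDPOINT }
      azure_api_version: 2024-08-01-preview
      azure_deployment_name: gpt-4o-mini
      azure_api_key: ${ secrets:SPICE_AZURE_API_KEY }
      responses_api: enabled
      # Expose both OpenAI-hosted tools via the Responses API
      azure_openai_responses_tools: code_interpreter,web_search
```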
Follow the Azure OpenAI Models Cookbook to try Azure OpenAI models for vector-based search and chat functionality with structured (taxi trips) and unstructured (GitHub files) data.