Fix: Model Not Found and Deployment Errors in Azure OpenAI .NET SDK

From GitHub Issue · .NET 9 · Azure.AI.OpenAI 2.1.0
By Rajesh Mishra · Feb 28, 2026 · Verified: Feb 28, 2026 · 6 min read

The Errors

Several related error messages point to deployment and model configuration problems:

Error 1: Model does not exist

Azure.RequestFailedException: HTTP 404 (Not Found)

{
  "error": {
    "message": "The model 'gpt-4o' does not exist or there is no deployment for it.",
    "type": "invalid_request_error",
    "code": "model_not_found"
  }
}

Error 2: DeploymentNotFound

Azure.RequestFailedException: HTTP 404 (Not Found)

{
  "error": {
    "code": "DeploymentNotFound",
    "message": "The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again."
  }
}

Error 3: Endpoint resolution failure

System.UriFormatException: Invalid URI: The hostname could not be parsed.

All three errors prevent your application from reaching the model. The request is well-formed but aimed at the wrong target.

Root Causes

1. Deployment Name vs. Model Name Confusion

This is by far the most common cause. When you deploy a model in Azure OpenAI, you give it a deployment name — a custom label like my-gpt4o-deployment. The model name is the underlying model identifier like gpt-4o.

The Azure OpenAI SDK expects the deployment name. If you pass the model name, the service cannot find a matching deployment and returns 404.

This confusion is made worse because the standard OpenAI SDK (non-Azure) uses model names. Developers switching from OpenAI to Azure OpenAI carry this assumption with them.
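The difference shows up in the raw REST routes. The standard OpenAI API accepts the model name in the request body at a fixed path, while Azure OpenAI puts the deployment name in the URL path itself. A minimal sketch with placeholder resource and deployment names (substitute your own):

```shell
# Placeholder values — replace with your own resource and deployment names.
RESOURCE="my-resource"
DEPLOYMENT="my-gpt4o-deployment"
API_VERSION="2024-10-21"

# Standard OpenAI: one fixed path; the model name goes in the JSON body.
OPENAI_URL="https://api.openai.com/v1/chat/completions"

# Azure OpenAI: the deployment name is part of the path itself.
AZURE_URL="https://${RESOURCE}.openai.azure.com/openai/deployments/${DEPLOYMENT}/chat/completions?api-version=${API_VERSION}"

echo "$AZURE_URL"
```

Because the deployment name is baked into the URL, passing a model name simply routes the request to a path that does not exist on your resource, which is why the service answers with a 404.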

2. Wrong Azure Region in Endpoint

Each Azure OpenAI resource is region-specific. If you created your resource in East US but your endpoint says https://my-resource.swedencentral.api.cognitive.microsoft.com/, the resource will not be found. The correct endpoint format for Azure OpenAI is:

https://{resource-name}.openai.azure.com/

Some older documentation or portal references may show the Cognitive Services endpoint format. Ensure you are using the OpenAI-specific URL.
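A quick sanity check on the endpoint string can catch the wrong format before any request is sent. This is a sketch using a bash regex that accepts only the `*.openai.azure.com` form:

```shell
#!/usr/bin/env bash
# Sketch: validate that an endpoint string matches the Azure OpenAI format,
# i.e. https://{resource}.openai.azure.com/ (trailing slash optional).
check_endpoint() {
  [[ "$1" =~ ^https://[a-z0-9][a-z0-9-]*\.openai\.azure\.com/?$ ]]
}

check_endpoint "https://my-resource.openai.azure.com/" && echo "ok"
check_endpoint "https://my-resource.cognitiveservices.azure.com/" || echo "wrong format"
```

A check like this belongs in startup validation or a CI step, so a mis-pasted Cognitive Services URL fails loudly instead of surfacing later as a 404.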

3. API Version Mismatch

Azure OpenAI’s REST API is versioned. Certain models and features are only available in newer API versions. If you specify an older version that does not support the model you deployed, the request may fail.

4. Deployment Not Yet Provisioned

After creating a deployment, it takes a short time to provision (usually 1-5 minutes). Requests during this window return a 404 with the “created within the last 5 minutes” hint.

Diagnosing the Problem

Step 1: List your deployments. This is the fastest way to confirm what exists and what state it is in:

az cognitiveservices account deployment list \
  --name your-openai-resource \
  --resource-group your-rg \
  --output table

This outputs a table like:

Name                  Model           ModelVersion  ProvisioningState
--------------------  --------------  ------------  -----------------
my-gpt4o-deployment   gpt-4o          2024-11-20    Succeeded
my-embedding-model    text-embedding  3             Succeeded

If your deployment is not in this list, it does not exist on this resource. If ProvisioningState is not Succeeded, it is still being set up.

Step 2: Verify endpoint. In the Azure portal, open your Azure OpenAI resource and look at the Keys and Endpoint blade. The endpoint should match what your application uses.

Step 3: Check the API version. Look at the api-version query parameter in your request. The Azure OpenAI API reference lists supported versions.
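To take the SDK out of the picture entirely, you can call the deployment directly over REST. The sketch below builds the request with placeholder values and prints it as a dry run (remove the `echo` and export a real `AZURE_OPENAI_KEY` to actually send it). A 404 here confirms the problem is the deployment name, endpoint, or api-version — not your .NET code:

```shell
# Placeholder values — replace with your own resource and deployment names.
RESOURCE="my-resource"
DEPLOYMENT="my-gpt4o-deployment"
API_VERSION="2024-10-21"

URL="https://${RESOURCE}.openai.azure.com/openai/deployments/${DEPLOYMENT}/chat/completions?api-version=${API_VERSION}"

# Dry run: print the curl command instead of sending it.
echo curl -sS "$URL" \
  -H "api-key: \$AZURE_OPENAI_KEY" \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"ping"}]}'
```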

Fix 1: Use the Correct Deployment Name

This is the most important fix: always pass the deployment name to GetChatClient, GetEmbeddingClient, and similar methods:

using Azure;
using Azure.AI.OpenAI;
using OpenAI.Chat;

var client = new AzureOpenAIClient(
    new Uri("https://my-resource.openai.azure.com/"),
    new AzureKeyCredential(apiKey));

// WRONG — this is the model name
// ChatClient chatClient = client.GetChatClient("gpt-4o");

// CORRECT — this is the deployment name you created
ChatClient chatClient = client.GetChatClient("my-gpt4o-deployment");

ChatCompletion completion = await chatClient.CompleteChatAsync("Hello!");

Store deployment names in configuration so they are easy to change without recompiling:

{
  "AzureOpenAI": {
    "Endpoint": "https://my-resource.openai.azure.com/",
    "DeploymentName": "my-gpt4o-deployment",
    "EmbeddingDeployment": "my-embedding-model"
  }
}

var deploymentName = config["AzureOpenAI:DeploymentName"]!;
ChatClient chatClient = client.GetChatClient(deploymentName);

Fix 2: Correct the Endpoint URL

Make sure the endpoint uses the right format and matches your resource:

// WRONG — Cognitive Services format (not the OpenAI-specific URL)
// var endpoint = new Uri("https://my-resource.cognitiveservices.azure.com/");

// WRONG — missing https scheme; throws UriFormatException (Error 3 above)
// var endpoint = new Uri("my-resource.openai.azure.com/");

// WRONG — different resource name than the one hosting the deployment
// var endpoint = new Uri("https://some-other-resource.openai.azure.com/");

// CORRECT
var endpoint = new Uri("https://my-resource.openai.azure.com/");

The resource name in the URL must match the Azure OpenAI resource that owns the deployment. You can confirm this in the portal.

Fix 3: Specify a Compatible API Version

If you need to override the default API version (the SDK usually sets a reasonable default), make sure it supports your model:

var options = new AzureOpenAIClientOptions(
    AzureOpenAIClientOptions.ServiceVersion.V2024_10_21);

var client = new AzureOpenAIClient(
    new Uri(config["AzureOpenAI:Endpoint"]!),
    new AzureKeyCredential(apiKey),
    options);

Check the API reference for which versions support which models. When in doubt, use the latest stable version.

Fix 4: Wait for Deployment Provisioning

If you just created the deployment, wait for provisioning to complete. You can check the provisioning state from the command line:

az cognitiveservices account deployment show \
  --name your-openai-resource \
  --resource-group your-rg \
  --deployment-name my-gpt4o-deployment \
  --query provisioningState \
  --output tsv

Wait until the output is Succeeded before sending requests. In CI/CD pipelines, add a polling loop:

while [ "$(az cognitiveservices account deployment show \
  --name your-openai-resource \
  --resource-group your-rg \
  --deployment-name my-gpt4o-deployment \
  --query provisioningState -o tsv)" != "Succeeded" ]; do
  echo "Waiting for deployment..."
  sleep 10
done
echo "Deployment ready."

A Complete Working Example

Putting it all together — a properly configured Azure OpenAI client in a .NET application:

using Azure;
using Azure.AI.OpenAI;
using Microsoft.Extensions.Configuration;
using OpenAI.Chat;

var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .AddUserSecrets<Program>()
    .Build();

var endpoint = new Uri(config["AzureOpenAI:Endpoint"]
    ?? throw new InvalidOperationException("AzureOpenAI:Endpoint not configured"));

var apiKey = config["AzureOpenAI:ApiKey"]
    ?? throw new InvalidOperationException("AzureOpenAI:ApiKey not configured");

var deploymentName = config["AzureOpenAI:DeploymentName"]
    ?? throw new InvalidOperationException("AzureOpenAI:DeploymentName not configured");

var client = new AzureOpenAIClient(endpoint, new AzureKeyCredential(apiKey));
ChatClient chatClient = client.GetChatClient(deploymentName);

ChatCompletion completion = await chatClient.CompleteChatAsync(
    "What is the difference between a deployment and a model in Azure OpenAI?");

Console.WriteLine(completion.Content[0].Text);

Use explicit null checks for configuration values. A clear error message at startup — “DeploymentName not configured” — is far easier to debug than a cryptic 404 at runtime.

Prevention Checklist

  1. Always read deployment names from configuration. Never hardcode them. Different environments may use different deployment names.
  2. Validate configuration at startup. Fail fast with a clear error if endpoint, key, or deployment name is missing.
  3. Name deployments consistently. A naming convention like {model}-{environment} (e.g., gpt4o-prod, gpt4o-dev) reduces confusion.
  4. Use the Azure CLI in CI/CD. Verify deployments exist and are provisioned before running integration tests.
  5. Document the endpoint-deployment mapping. Maintain a table of which deployments live on which resources, especially when using multiple regions.

⚠ Production Considerations

  • Deployment names are case-sensitive. 'My-GPT4o' and 'my-gpt4o' are treated as different deployments.
  • Deleting and recreating a deployment with the same name can cause brief windows where the deployment returns not-found errors during provisioning.

🧠 Architect’s Note

Store deployment names in configuration alongside their endpoint URLs as a pair. When using multiple regions or resources for failover, maintain a mapping table rather than constructing deployment names dynamically.
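As a sketch (all names illustrative), such a mapping could live in appsettings.json alongside the rest of the configuration, keeping each endpoint paired with the deployments it actually hosts:

```json
{
  "AzureOpenAI": {
    "Regions": {
      "eastus": {
        "Endpoint": "https://my-resource-eastus.openai.azure.com/",
        "ChatDeployment": "gpt4o-prod"
      },
      "swedencentral": {
        "Endpoint": "https://my-resource-swec.openai.azure.com/",
        "ChatDeployment": "gpt4o-prod-swe"
      }
    }
  }
}
```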

AI-Friendly Summary

Model not found and deployment errors in Azure OpenAI occur when the deployment name is confused with the model name, the endpoint points to the wrong resource, the API version is incompatible, or the deployment has not finished provisioning. Fix by verifying the exact deployment name from Azure OpenAI Studio or CLI, confirming the endpoint matches the resource that hosts the deployment, and using a supported API version.

Key Takeaways

  • Always use the deployment name (your custom label) — not the model name — when calling Azure OpenAI
  • Endpoint URL and deployment name must belong to the same Azure OpenAI resource
  • Use 'az cognitiveservices account deployment list' to verify deployment names and provisioning state
  • API version mismatches can cause model routing failures — use a supported version for your model

Implementation Checklist

  • Verify deployment name in Azure OpenAI Studio Deployments page
  • Confirm endpoint URL matches the resource hosting the deployment
  • Check deployment provisioning state is 'Succeeded'
  • Validate API version compatibility with the model
  • Use Azure CLI to list deployments and confirm configuration

Frequently Asked Questions

What is the difference between model name and deployment name in Azure OpenAI?

The model name is the base model identifier like 'gpt-4o' or 'gpt-4o-mini'. The deployment name is the custom label you chose when deploying that model in your Azure OpenAI resource — like 'my-gpt4o-prod'. The Azure OpenAI SDK requires the deployment name, not the model name.

How do I find my Azure OpenAI deployment name?

Open Azure OpenAI Studio, click Deployments in the left menu, and look at the Name column. You can also list deployments via the Azure CLI: az cognitiveservices account deployment list --name your-resource --resource-group your-rg --output table.

Why does my Azure OpenAI call fail with 'model not found'?

This usually means you passed the model name (like 'gpt-4o') instead of the deployment name, the deployment has not finished provisioning, the deployment exists in a different Azure OpenAI resource than your endpoint points to, or the API version you specified does not support the model.


#Azure OpenAI #Model Not Found #Deployment Errors #Configuration #.NET AI