
Fix Embedding Dimension Mismatch in Azure AI Search with .NET

From a GitHub issue · .NET 9 · Azure.Search.Documents 11.7.0 · Azure.AI.OpenAI 2.1.0
By Rajesh Mishra · Mar 21, 2026 · 9 min read
Verified Mar 2026 · .NET 9 · Azure.Search.Documents 11.7.0
In 30 Seconds

Embedding dimension mismatch in Azure AI Search occurs when the vector field's configured dimension differs from the query embedding's dimension. Fix it by identifying which embedding model and dimension setting you're using, deleting and recreating the index with the correct dimension, and re-indexing all documents. Use index aliases for zero-downtime migration. Always validate embedding dimensions before indexing with embeddings.Value[0].ToFloats().Length.

⚠️
Error Fix Guide

Root cause analysis and verified fix. Code examples use Azure.Search.Documents 11.7.0.

✓ SOLVED

The Error

Your .NET application performs a vector search against Azure AI Search and receives:

Azure.RequestFailedException: The dimensions of the vector field 'embedding' do not match the dimensions of the query vector.
Status: 400 (Bad Request)
ErrorCode: InvalidRequestParameter

This is a configuration mismatch, not a transient failure. The index schema defines a specific vector dimension when the index is created. Azure AI Search rejects any query vector whose dimension differs from that configured value — and it rejects any document vector during indexing too.

Root Cause: Embedding Model Dimensions

Different Azure OpenAI embedding models produce different-length vectors, and some models allow you to reduce dimensions at request time. If the dimension you use at query time does not match the dimension stored in the index, you get this error.

Model                    Default Dimensions   Adjustable?   Range
text-embedding-ada-002   1536                 No            Fixed
text-embedding-3-small   1536                 Yes           256–1536
text-embedding-3-large   3072                 Yes           256–3072

The dimension mismatch usually happens in one of these scenarios:

  • You indexed documents with text-embedding-ada-002 (1536 dims) but then switched to text-embedding-3-large (3072 dims) for queries without recreating the index
  • You used EmbeddingGenerationOptions { Dimensions = 512 } during indexing but forgot that option at query time, getting 1536 dims instead
  • You created the index schema manually with the wrong dimension integer (e.g., 768 instead of 1536)

For a complete guide to building a vector search pipeline, the Vector Database Comparison for .NET Developers article covers dimension tradeoffs across different vector stores.

Fix 1: Identify and Correct Field Dimensions

Check your current index definition first. If you built the index with FieldBuilder, the dimension comes from the [VectorSearchField] attribute on your model class. If you built it manually, check the VectorSearchField constructor call.
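If you are not sure what the live index actually contains, you can read the dimension back from the service instead of guessing. A minimal sketch — it assumes `config` is your app configuration and reuses the same configuration keys as the examples below:

```csharp
using Azure;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

// Fetch the live index definition and print the configured dimension
// of every vector field, so you can compare against your model's output.
var inspectClient = new SearchIndexClient(
    new Uri(config["AzureSearch:Endpoint"]!),
    new AzureKeyCredential(config["AzureSearch:AdminKey"]!));

SearchIndex currentIndex = await inspectClient.GetIndexAsync("articles");
foreach (SearchField field in currentIndex.Fields)
{
    // VectorSearchDimensions is null for non-vector fields
    if (field.VectorSearchDimensions is int dims)
    {
        Console.WriteLine($"Vector field '{field.Name}' is configured for {dims} dimensions.");
    }
}
```

Comparing this output against the model's actual output dimension (Fix 3) tells you immediately which side of the mismatch to correct.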

Here is how to create an index with explicitly correct dimensions:

using Azure;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

var index = new SearchIndex("articles")
{
    Fields =
    {
        new SimpleField("id", SearchFieldDataType.String) { IsKey = true },
        new SearchableField("title"),
        new SearchableField("content"),
        new VectorSearchField("embedding", dimensions: 1536, vectorSearchProfileName: "default-profile")
        //                                              ^^^^ Must match your embedding model's output dimension
    },
    VectorSearch = new VectorSearch
    {
        Profiles =
        {
            new VectorSearchProfile("default-profile", "default-algorithm")
        },
        Algorithms =
        {
            new HnswAlgorithmConfiguration("default-algorithm")
        }
    }
};

var indexClient = new SearchIndexClient(
    new Uri(config["AzureSearch:Endpoint"]!),
    new AzureKeyCredential(config["AzureSearch:AdminKey"]!));

await indexClient.CreateOrUpdateIndexAsync(index);

If you use model-attribute-based field building, apply the dimension to the attribute:

using Azure.Search.Documents.Indexes;

public class ArticleDocument
{
    [SimpleField(IsKey = true)]
    public string Id { get; set; } = "";

    [SearchableField]
    public string Title { get; set; } = "";

    [SearchableField]
    public string Content { get; set; } = "";

    [VectorSearchField(VectorSearchDimensions = 1536, VectorSearchProfileName = "default-profile")]
    public IReadOnlyList<float>? Embedding { get; set; }
}

Fix 2: Use the Correct Field Type — Collection(Edm.Single)

Azure AI Search vector fields must use single-precision floating point, which maps to Collection(Edm.Single). Using double-precision (Collection(Edm.Double)) causes a type mismatch error that looks similar to the dimension mismatch.

When you use the VectorSearchField class from Azure.Search.Documents, the correct type is applied automatically. Problems arise when developers define the field as a raw SearchField or when reading index definitions from JSON:

// ❌ Wrong — manually specifying the wrong type
new SearchField("embedding", SearchFieldDataType.Collection(SearchFieldDataType.Double))

// ✅ Correct — VectorSearchField always uses Collection(Edm.Single) automatically
new VectorSearchField("embedding", dimensions: 1536, vectorSearchProfileName: "default-profile")

If you defined your index in the Azure portal or via REST API JSON, verify the field type is Collection(Edm.Single) and not Collection(Edm.Double).
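For reference, the vector field in a REST JSON index definition should look roughly like this — a sketch, with the profile name and the retrievable setting chosen for illustration:

```json
{
  "name": "embedding",
  "type": "Collection(Edm.Single)",
  "dimensions": 1536,
  "vectorSearchProfile": "default-profile",
  "searchable": true,
  "retrievable": false
}
```

The two values to check are "type" (must be Collection(Edm.Single)) and "dimensions" (must match your embedding model's output).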

Fix 3: Validate Embedding Dimensions Before Indexing

Add a dimension check to your indexing pipeline so mismatches fail fast at startup rather than during production queries. Calling ToFloats() on an embedding result returns the vector as a ReadOnlyMemory&lt;float&gt;, whose Length is the actual output dimension:

using Azure.AI.OpenAI;
using OpenAI.Embeddings;

var embeddingClient = azureOpenAIClient.GetEmbeddingClient("text-embedding-3-small");

// Generate a single test embedding to verify the output dimension
var testResult = await embeddingClient.GenerateEmbeddingAsync("dimension check");
var actualDimension = testResult.Value.ToFloats().Length;

const int expectedDimension = 1536; // Must match your index definition
if (actualDimension != expectedDimension)
{
    throw new InvalidOperationException(
        $"Embedding dimension mismatch: index expects {expectedDimension}, " +
        $"model produces {actualDimension}. Recreate the index or adjust EmbeddingGenerationOptions.Dimensions.");
}

For models that support dimension reduction, pass the same EmbeddingGenerationOptions at both indexing and query time:

// Use reduced dimensions — pass this options object everywhere
var embeddingOptions = new EmbeddingGenerationOptions { Dimensions = 512 };

// At indexing time
var indexEmbedding = await embeddingClient.GenerateEmbeddingAsync(document.Content, embeddingOptions);

// At query time — same options, same dimension output
var queryEmbedding = await embeddingClient.GenerateEmbeddingAsync(userQuery, embeddingOptions);

Console.WriteLine($"Index embedding dimension: {indexEmbedding.Value.ToFloats().Length}"); // 512
Console.WriteLine($"Query embedding dimension: {queryEmbedding.Value.ToFloats().Length}"); // 512

Your index’s VectorSearchField must be created with dimensions: 512 to match.

Fix 4: Delete and Recreate the Index

Azure AI Search does not support in-place changes to vector field dimensions. If your current index has the wrong dimension, you must delete it and recreate it with the correct value, then re-index all documents.

// Step 1: Delete the index with incorrect dimensions
await indexClient.DeleteIndexAsync("articles");

// Step 2: Recreate with the corrected dimension
var correctedIndex = new SearchIndex("articles")
{
    Fields =
    {
        new SimpleField("id", SearchFieldDataType.String) { IsKey = true },
        new SearchableField("title"),
        new SearchableField("content"),
        new VectorSearchField("embedding", dimensions: 1536, vectorSearchProfileName: "default-profile")
    },
    VectorSearch = new VectorSearch
    {
        Profiles = { new VectorSearchProfile("default-profile", "default-algorithm") },
        Algorithms = { new HnswAlgorithmConfiguration("default-algorithm") }
    }
};

await indexClient.CreateOrUpdateIndexAsync(correctedIndex);

// Step 3: Re-embed and re-index all documents
var searchClient = indexClient.GetSearchClient("articles");
var embeddingClient = azureOpenAIClient.GetEmbeddingClient("text-embedding-3-small");

var allDocuments = await GetAllDocumentsFromStorageAsync(); // Your retrieval logic
var embeddings = await embeddingClient.GenerateEmbeddingsAsync(
    allDocuments.Select(d => d.Content).ToList());

var indexBatch = allDocuments.Zip(embeddings.Value, (doc, emb) =>
{
    doc.Embedding = emb.ToFloats().ToArray();
    return doc;
}).ToList();

await searchClient.IndexDocumentsAsync(
    IndexDocumentsBatch<ArticleDocument>.Upload(indexBatch));

For production systems where deleting the live index causes downtime, use index aliases instead (Fix 5).

Fix 5: Zero-Downtime Migration with Index Aliases

Azure AI Search index aliases let your application point to a logical name (the alias) while the physical index can be swapped underneath — giving you zero-downtime re-indexing. This is the recommended approach when your search index is serving live traffic.

The full workflow for this is also covered in the Build a Semantic Search API workshop, which shows how to structure a re-indexing pipeline end to end.

// Step 1: Your application currently uses alias "articles-current" → articles-v1

// Step 2: Create the new index with corrected dimensions
var newIndex = new SearchIndex("articles-v2")
{
    Fields =
    {
        new SimpleField("id", SearchFieldDataType.String) { IsKey = true },
        new SearchableField("title"),
        new SearchableField("content"),
        new VectorSearchField("embedding", dimensions: 3072, vectorSearchProfileName: "default-profile")
        // New dimension for text-embedding-3-large
    },
    VectorSearch = new VectorSearch
    {
        Profiles = { new VectorSearchProfile("default-profile", "default-algorithm") },
        Algorithms = { new HnswAlgorithmConfiguration("default-algorithm") }
    }
};

await indexClient.CreateOrUpdateIndexAsync(newIndex);

// Step 3: Re-index all documents into articles-v2
// (articles-v1 is still live via the alias during this time)
var searchClientV2 = indexClient.GetSearchClient("articles-v2");
await searchClientV2.IndexDocumentsAsync(
    IndexDocumentsBatch<ArticleDocument>.Upload(reembeddedDocuments));

// Step 4: Atomically point the alias to the new index
var alias = new SearchAlias("articles-current", "articles-v2");
await indexClient.CreateOrUpdateAliasAsync(alias);
// From this moment, all queries go to articles-v2

// Step 5: Delete the old index (now safe — no traffic)
await indexClient.DeleteIndexAsync("articles-v1");

Your application’s SearchClient should always be constructed against the alias name, never the versioned index name:

// ✅ Use alias — survives index recreation
var searchClient = new SearchClient(
    new Uri(config["AzureSearch:Endpoint"]!),
    "articles-current",   // alias name
    new AzureKeyCredential(config["AzureSearch:AdminKey"]!));

// ❌ Hardcoded index name — breaks on re-index
var searchClient = new SearchClient(..., "articles-v1", ...);

Embedding Model Migration Reference

When switching embedding models, every document in the index must be re-embedded with the new model. Mixing embeddings from different models in the same index produces silently wrong search results — the cosine similarity calculation is meaningless when comparing vectors from different model spaces.

Scenario                                                        Action Required
ada-002 → text-embedding-3-small (same 1536 dims)               Delete index, re-embed all docs, recreate
text-embedding-3-small → text-embedding-3-large (1536 → 3072)   Delete index, re-embed all docs, recreate
text-embedding-3-small at 1536 → same model at 512              Delete index, re-embed all docs with Dimensions = 512, recreate
Same model, same dimensions, same options                       No re-indexing needed

Further Reading

⚠ Production Considerations

  • Reducing embedding dimensions (e.g., from 1536 to 512) improves query speed but reduces semantic accuracy. Test retrieval quality with your specific documents and queries at the reduced dimension before committing. A 20% dimension reduction may cause a 3-5% accuracy drop, but a 70% reduction can significantly degrade recall.
  • If you regenerate embeddings with a different model version or dimension, all existing embeddings become incompatible. Document the exact model name, API version, and dimensions parameter used for each index in your runbook. This prevents dimension drift when rotating models.


🧠 Architect’s Note

Version your embedding model as you version your schema. Store the model name and dimensions used to generate embeddings alongside the indexed document. When you need to switch models, you know exactly which documents to re-embed and which indexes to recreate.
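One way to apply this, sticking with the ArticleDocument model from Fix 1 — the metadata field names here are illustrative, not part of any SDK convention:

```csharp
using Azure.Search.Documents.Indexes;

public class ArticleDocument
{
    [SimpleField(IsKey = true)]
    public string Id { get; set; } = "";

    [SearchableField]
    public string Content { get; set; } = "";

    [VectorSearchField(VectorSearchDimensions = 1536, VectorSearchProfileName = "default-profile")]
    public IReadOnlyList<float>? Embedding { get; set; }

    // Provenance metadata stored alongside the vector — hypothetical field names.
    // Filterable, so stale documents can be found after a model switch.
    [SimpleField(IsFilterable = true)]
    public string? EmbeddingModel { get; set; }      // e.g. "text-embedding-3-small"

    [SimpleField(IsFilterable = true)]
    public int? EmbeddingDimensions { get; set; }    // e.g. 1536
}
```

After a model switch, a filter such as $filter=EmbeddingModel ne 'text-embedding-3-large' identifies exactly which documents still need re-embedding.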

AI-Friendly Summary

Embedding dimension mismatch in Azure AI Search occurs when the vector field's configured dimension differs from the query embedding's dimension. Fix it by identifying which embedding model and dimension setting you're using, deleting and recreating the index with the correct dimension, and re-indexing all documents. Use index aliases for zero-downtime migration. Always validate embedding dimensions before indexing with embeddings.Value[0].ToFloats().Length.

Key Takeaways

  • ada-002 = fixed 1536 dims; text-embedding-3-small = default 1536 (adjustable); text-embedding-3-large = default 3072 (adjustable)
  • Vector field type must be Collection(Edm.Single) — NOT Collection(Edm.Double)
  • Index field dimensions cannot be changed in place — must delete and recreate the index
  • Use index aliases for zero-downtime re-indexing
  • Validate with embeddings.Value[0].ToFloats().Length before indexing

Implementation Checklist

  • Identify your embedding model and its configured dimension
  • Check VectorSearchField dimensions parameter matches your embedding dimension
  • Verify vector field type is Collection(Edm.Single) not Collection(Edm.Double)
  • Delete and recreate the index if dimensions are wrong
  • Re-index all documents with the correct embedding model and dimension
  • Add dimension validation to your indexing pipeline

Frequently Asked Questions

What error does a vector field dimension mismatch cause in Azure AI Search?

Azure AI Search returns: 'Azure.RequestFailedException: The dimensions of the vector field X do not match the dimensions of the query vector.' This occurs when the embedding dimension used at query time differs from the dimension configured in the index field definition.

What are the embedding dimensions for Azure OpenAI embedding models?

text-embedding-ada-002: fixed 1536 dimensions, cannot be changed. text-embedding-3-small: default 1536 dimensions, adjustable to 256-1536 via the dimensions parameter. text-embedding-3-large: default 3072 dimensions, adjustable to 256-3072 via the dimensions parameter.

Can I change the dimensions of an existing Azure AI Search vector field?

No. Azure AI Search does not support in-place modification of vector field dimensions. You must delete and recreate the index with the correct dimension value, then re-index all documents. Use index aliases to implement zero-downtime re-indexing.

What is the correct field type for vector fields in Azure AI Search?

Use Collection(Edm.Single) — single-precision floating point. Using Collection(Edm.Double) (double-precision) causes a type mismatch error. This is the most common configuration mistake when migrating from other vector databases.

How do I implement zero-downtime re-indexing in Azure AI Search?

Use Azure AI Search index aliases. Create a new index with corrected dimensions (e.g., articles-v2). Re-index all documents into articles-v2 while articles-v1 continues serving traffic. Update the alias (articles-current) to point to articles-v2. Delete articles-v1. Your application always queries the alias, never a specific version.

How do I validate embedding dimensions before indexing in C#?

After generating embeddings, check the length of the first embedding's vector: var embeddings = await embeddingClient.GenerateEmbeddingsAsync(texts); var dims = embeddings.Value[0].ToFloats().Length; Then assert that dims equals your index's configured dimension, and run this assertion at indexing pipeline startup.

Can I reduce embedding dimensions to save storage and query cost?

Yes, for text-embedding-3-small and text-embedding-3-large. Set EmbeddingGenerationOptions { Dimensions = 512 } to reduce the output to 512 dimensions, and match this value exactly in your Azure AI Search VectorSearchField(dimensions: 512). Lower dimensions reduce storage and speed up queries but may slightly reduce semantic accuracy.


#Azure AI Search #Embedding Dimension #Error Fix #RAG #.NET AI