Most vector database comparisons are written for Python developers — they focus on Python client libraries, Jupyter notebooks, and pip install commands. The .NET perspective is different. SDK maturity, dependency injection integration, Semantic Kernel connector stability, and NuGet package quality all factor into which vector database actually works well in a production C# application.
This article compares five options — Azure AI Search, Cosmos DB NoSQL, Qdrant, pgvector, and SQL Server 2025 — through the lens of a .NET team building retrieval-augmented generation (RAG) applications. Each section shows the DI registration, the record model, and honest assessments of what works and what does not.
1. Why .NET Developers Need a Different Comparison
The Python ecosystem has first-class bindings for virtually every vector database. LangChain alone wraps over 50 vector stores. The .NET ecosystem is more selective, and that selectivity matters when you are choosing an infrastructure component that will live in your application for years.
Three things define “good .NET support” for a vector database:
SDK quality — does the .NET client wrap the full API or just a subset? Is it async-first? Does it integrate with Microsoft.Extensions.Http and standard .NET patterns?
Semantic Kernel connector status — SK connectors let you swap providers without rewriting search logic. But connector quality varies enormously. Some are “Excellent” (Azure AI Search), some are “Stable” (Qdrant, Cosmos DB), and some are “Experimental” (Milvus) or have known bugs (Weaviate).
Microsoft.Extensions.VectorData compatibility — this abstraction layer, introduced alongside SK’s vector store unification effort, lets you write provider-agnostic code. If your chosen database has a VectorData provider, your application code survives a database migration without a rewrite.
The good news: for the five databases covered here, .NET support ranges from solid to excellent. The bad news: the two most popular Python-first vector databases (Weaviate and Milvus) have SK connectors with known issues that make them risky choices for production .NET applications.
2. The Microsoft.Extensions.VectorData Abstraction
Microsoft.Extensions.VectorData provides IVectorStore and IVectorStoreRecordCollection<TKey, TRecord> abstractions. The goal is to let you write vector storage code once and swap the underlying provider through DI configuration alone.
Install the abstraction package:
dotnet add package Microsoft.Extensions.VectorData.Abstractions --version 9.3.0
Define your record model using three attribute types:
using Microsoft.Extensions.VectorData;
[VectorStoreRecordCollection("articles")]
public class ArticleRecord
{
[VectorStoreRecordKey]
public string Id { get; set; } = string.Empty;
[VectorStoreRecordData(IsFilterable = true)]
public string Title { get; set; } = string.Empty;
[VectorStoreRecordData]
public string Content { get; set; } = string.Empty;
[VectorStoreRecordData(IsFilterable = true)]
public string Category { get; set; } = string.Empty;
[VectorStoreRecordVector(Dimensions: 1536, DistanceFunction: DistanceFunction.CosineSimilarity)]
public ReadOnlyMemory<float> ContentEmbedding { get; set; }
}
[VectorStoreRecordKey] marks the primary key. [VectorStoreRecordData] marks regular data fields — set IsFilterable = true for any field you want to filter on during search. [VectorStoreRecordVector] marks the embedding field and must specify the exact dimension count that matches your embedding model.
Write your search logic once against the abstraction:
public async Task<IList<ArticleRecord>> SearchArticlesAsync(
IVectorStoreRecordCollection<string, ArticleRecord> collection,
ReadOnlyMemory<float> queryEmbedding,
int topK = 5)
{
var searchOptions = new VectorSearchOptions<ArticleRecord>
{
Top = topK,
VectorPropertyName = nameof(ArticleRecord.ContentEmbedding)
};
var results = await collection.VectorizedSearchAsync(queryEmbedding, searchOptions);
var records = new List<ArticleRecord>();
await foreach (var result in results.Results)
{
records.Add(result.Record);
}
return records;
}
This method works identically whether the backing store is Qdrant locally or Azure AI Search in production. The only thing that changes is the DI registration.
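For example, the swap can be a single environment check in Program.cs. This is a sketch using the connector registration methods covered in the provider sections of this article; the configuration keys are illustrative:

```csharp
// Program.cs — pick the provider per environment; the search code that
// targets IVectorStoreRecordCollection<,> never changes.
if (builder.Environment.IsDevelopment())
{
    // Local Qdrant in Docker (gRPC on port 6334).
    builder.Services.AddQdrantVectorStore("localhost", port: 6334, https: false);
}
else
{
    // Managed Azure AI Search in production.
    builder.Services.AddAzureAISearchVectorStore(
        new Uri(builder.Configuration["AzureSearch:Endpoint"]!),
        new AzureKeyCredential(builder.Configuration["AzureSearch:Key"]!));
}
```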
3. Azure Cosmos DB NoSQL
Cosmos DB NoSQL added vector search support in 2024, and the Semantic Kernel connector reached stable status shortly after. If your application already stores its domain data in Cosmos, adding vector search without introducing a second database is compelling.
Install the connector:
dotnet add package Microsoft.SemanticKernel.Connectors.AzureCosmosDBNoSQL --version 1.54.0
Register in DI:
using Microsoft.SemanticKernel.Connectors.AzureCosmosDBNoSQL;
builder.Services.AddAzureCosmosDBNoSQLVectorStore(
new Uri(builder.Configuration["CosmosDB:Endpoint"]!),
new DefaultAzureCredential());
Strengths: Vector search lives inside the same Cosmos DB container as your application data. Hierarchical partitioning means you can scope vector searches to a partition key (tenant, user, document set) without filtering across unrelated data. Global distribution is built-in — you get multi-region replication without any application-layer configuration. Cosmos DB scales to millions of RU/s, so it handles aggressive read workloads that would require careful sharding in a purpose-built vector store.
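That scoping can be expressed through the abstraction's search options. A sketch only — the filter API has changed between Microsoft.Extensions.VectorData previews (older previews use `VectorSearchFilter`, newer ones take a LINQ expression), so check the version you have installed:

```csharp
// Restrict the vector search to one slice of data — the same pattern
// scopes by tenant or partition value via any filterable field.
var options = new VectorSearchOptions<ArticleRecord>
{
    Top = 5,
    Filter = new VectorSearchFilter()
        .EqualTo(nameof(ArticleRecord.Category), "dotnet")
};
var results = await collection.VectorizedSearchAsync(queryEmbedding, options);
```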
When to choose: Your application already uses Cosmos DB for its primary data store. Or you need both global distribution and vector search in a single managed service, and you want to avoid the operational complexity of running two databases.
SK connector status: Stable. The Cosmos DB NoSQL connector is actively maintained and works reliably for production workloads.
Limitations: Cosmos DB vector search does not support hybrid search (BM25 + vector combined scoring). It is pure vector similarity only. If your documents need keyword matching alongside semantic similarity, you will need a second service or a different primary choice.
4. Azure AI Search
Azure AI Search is the best-supported vector database for .NET, full stop. The Azure.Search.Documents NuGet package has full Microsoft backing, comprehensive async APIs, and years of production battle-testing as Azure's primary search service — long before vector search existed.
Install the connector:
dotnet add package Microsoft.SemanticKernel.Connectors.AzureAISearch --version 1.54.0
dotnet add package Azure.Search.Documents
Register in DI:
using Microsoft.SemanticKernel.Connectors.AzureAISearch;
builder.Services.AddAzureAISearchVectorStore(
new Uri(builder.Configuration["AzureSearch:Endpoint"]!),
new AzureKeyCredential(builder.Configuration["AzureSearch:Key"]!));
Strengths: Azure AI Search is the only option in this comparison that supports hybrid search — combining BM25 keyword scoring with vector similarity scoring, then optionally applying semantic reranking on top. For production RAG applications, hybrid search outperforms pure vector search on most real-world queries. Users rarely phrase queries in perfect semantic-match language; keyword fallback catches edge cases that pure embedding search misses.
Security filters let you apply document-level access control at query time — critical for multi-tenant applications. Facets enable structured navigation alongside vector search. For a complete end-to-end implementation using this connector, see Build a Semantic Search API with .NET and Azure AI Search.
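As a sketch of a hybrid query with Azure.Search.Documents directly — the index and field names are illustrative, and queryEmbedding is assumed to come from your embedding model:

```csharp
using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;

var searchClient = new SearchClient(
    new Uri(endpoint), "articles", new AzureKeyCredential(apiKey));

var options = new SearchOptions
{
    Size = 5,
    VectorSearch = new()
    {
        Queries =
        {
            // Vector leg of the hybrid query.
            new VectorizedQuery(queryEmbedding)
            {
                KNearestNeighborsCount = 50,
                Fields = { "contentEmbedding" }
            }
        }
    }
};

// Supplying both search text and a vector query makes the service fuse
// BM25 and vector scores via Reciprocal Rank Fusion.
var response = await searchClient.SearchAsync<SearchDocument>(
    "dependency injection setup", options);
```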
When to choose: You need hybrid search (usually the right default for production RAG), you need document-level security filtering, or your documents already live in Azure and you want a fully managed search service.
SK connector status: Excellent. The Azure AI Search SK connector is the most actively maintained vector connector in the Semantic Kernel repository. It receives updates with every SK release.
Limitations: Azure AI Search charges per search unit per hour, even when idle. This makes it expensive for development and low-traffic staging environments.
5. Qdrant
Qdrant is a purpose-built vector database with the best local development story of any option in this comparison. A single Docker command gets you a running vector store with a web dashboard at http://localhost:6333/dashboard.
docker run -p 6333:6333 -p 6334:6334 qdrant/qdrant
Install the connector:
dotnet add package Microsoft.SemanticKernel.Connectors.Qdrant --version 1.54.0
Register in DI — with environment-aware configuration:
using Microsoft.SemanticKernel.Connectors.Qdrant;
// Local development (Docker)
builder.Services.AddQdrantVectorStore("localhost", port: 6334, https: false);
// Production (Qdrant Cloud)
builder.Services.AddQdrantVectorStore(
builder.Configuration["Qdrant:Host"]!,
apiKey: builder.Configuration["Qdrant:ApiKey"]);
Strengths: The local development experience is unmatched. No Azure subscription required, no emulator setup, no environment variables to configure — just run the Docker image and your application connects. Qdrant’s filter API is the richest of any option here: you can filter on multiple fields simultaneously using must, should, and must_not conditions, which enables complex access-control and scoping patterns without index-per-tenant workarounds. Qdrant’s gRPC transport (port 6334) is fast even at small scale, and the REST API (port 6333) works for tooling and dashboards.
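As a sketch of those filter conditions with the native Qdrant.Client package — the collection name and payload fields are illustrative, and queryEmbedding is assumed to be a float[] from your embedding model:

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;

var client = new QdrantClient("localhost", 6334);

// must: all listed conditions are required; should and must_not
// compose the same way on the Filter object.
var filter = new Filter
{
    Must =
    {
        MatchKeyword("category", "dotnet"),
        Range("year", new Qdrant.Client.Grpc.Range { Gte = 2024 })
    }
};

var hits = await client.SearchAsync(
    "articles", queryEmbedding, filter: filter, limit: 5);
```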
Qdrant Cloud provides a fully managed hosted option with a generous free tier for small collections, making the dev-to-production path straightforward.
When to choose: You want local AI development with zero Azure dependencies. Or you prefer a purpose-built vector database where the .NET client is a first-class supported client (not an afterthought). Or you need rich filter expressions that other providers do not support.
SK connector status: Stable, actively maintained. The Qdrant connector receives updates alongside the main SK releases.
6. pgvector (PostgreSQL)
pgvector is a PostgreSQL extension that adds a native vector data type and approximate nearest neighbor search algorithms (HNSW and IVFFlat) directly to PostgreSQL. If your team already operates PostgreSQL, CREATE EXTENSION vector is the lowest-cost path to adding vector search to your application.
Install packages:
dotnet add package Npgsql.EntityFrameworkCore.PostgreSQL
dotnet add package Pgvector
dotnet add package Pgvector.EntityFrameworkCore
Define your EF Core model and configure the extension:
using Microsoft.EntityFrameworkCore;
using Pgvector;
using Pgvector.EntityFrameworkCore;
using System.ComponentModel.DataAnnotations.Schema;
// Register the vector type mapping when configuring the provider:
// options.UseNpgsql(connectionString, o => o.UseVector());
public class AppDbContext : DbContext
{
public DbSet<ArticleDocument> Articles { get; set; }
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder.HasPostgresExtension("vector");
modelBuilder.Entity<ArticleDocument>()
.HasIndex(a => a.Embedding)
.HasMethod("hnsw")
.HasOperators("vector_cosine_ops");
}
}
public class ArticleDocument
{
public int Id { get; set; }
public string Title { get; set; } = string.Empty;
public string Content { get; set; } = string.Empty;
[Column(TypeName = "vector(1536)")]
public Vector Embedding { get; set; } = null!;
}
Perform a similarity search with LINQ — CosineDistance translates to pgvector's cosine distance operator:
public async Task<List<ArticleDocument>> SearchAsync(
AppDbContext db,
float[] queryEmbedding,
int topK = 5)
{
var vectorParam = new Vector(queryEmbedding);
return await db.Articles
.OrderBy(a => a.Embedding.CosineDistance(vectorParam))
.Take(topK)
.ToListAsync();
}
Strengths: No new infrastructure component. If your team already runs PostgreSQL (on-premises, AWS RDS, Azure Database for PostgreSQL, or Supabase), you get vector search by enabling an extension. EF Core integration means your vector searches participate in the same transactions as your relational operations. Adding an article and its embedding in the same transaction eliminates consistency gaps that appear when writing to two separate stores.
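A sketch of that same-transaction write, using the AppDbContext and ArticleDocument types defined above (the embedding float[] is assumed to come from your embedding client):

```csharp
// The article row and its embedding commit (or roll back) together.
await using var tx = await db.Database.BeginTransactionAsync();

db.Articles.Add(new ArticleDocument
{
    Title = "Vector search in .NET",
    Content = articleText,
    Embedding = new Vector(embedding) // Pgvector.Vector wraps the float[]
});
await db.SaveChangesAsync();

// Additional relational writes could join the same transaction here.
await tx.CommitAsync();
```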
When to choose: Your team runs PostgreSQL already and vector search is a secondary feature rather than the application’s primary differentiator. Or you need transactional consistency between relational data and vector data in the same database commit.
SK connector status: The pgvector SK connector is experimental as of early 2026. Use Npgsql directly or through EF Core for production workloads.
Limitations: For very high query volumes — millions of vectors with sub-100ms latency requirements — purpose-built vector databases like Qdrant outperform pgvector. HNSW indexing in PostgreSQL works well for most workloads but does not reach the query throughput of dedicated vector engines under sustained load.
7. SQL Server 2025
SQL Server 2025 introduces a native VECTOR(n) column type and built-in VECTOR_DISTANCE function, making it the only option in this comparison that requires no extension or add-on. It is the natural choice for .NET teams with existing SQL Server infrastructure who want vector search without adding a new service to their operations runbook.
Define the schema in T-SQL:
CREATE TABLE Articles (
Id INT PRIMARY KEY,
Title NVARCHAR(500),
Content NVARCHAR(MAX),
Embedding VECTOR(1536) -- native vector column type, up to 1998 dimensions
);
Query with VECTOR_DISTANCE — the embedding parameter is sent as a JSON array string and cast to the vector type inside the statement (QueryAsync here comes from Dapper):
using Microsoft.Data.SqlClient;
using System.Text.Json;
using Dapper;
using var connection = new SqlConnection(connectionString);
var query = """
    SELECT TOP 5 Id, Title,
        VECTOR_DISTANCE('cosine', Embedding, CAST(@queryEmbedding AS VECTOR(1536))) AS Distance
    FROM Articles
    ORDER BY Distance ASC
    """;
// Serialize the query embedding as a JSON array for the parameter
var embeddingJson = JsonSerializer.Serialize(queryEmbedding);
var results = await connection.QueryAsync<ArticleResult>(
    query, new { queryEmbedding = embeddingJson });
Strengths: No new service to deploy or operate. Existing SQL Server licenses apply. T-SQL querying of vectors means DBAs can write ad hoc similarity queries using familiar tools. Existing SQL Server tooling (SSMS, Azure Data Studio, SQL Server Profiler) works without modification. For teams already on SQL Server, the operational overhead of adopting vector search is minimal.
When to choose: Your team has deep SQL Server expertise, you are already running SQL Server 2025, and you have no budget or appetite for adding new infrastructure. Or you need relational and vector queries in the same T-SQL statement.
SK connector status: Preview as of early 2026. Use raw Microsoft.Data.SqlClient for production reliability rather than the SK connector until the connector stabilizes.
Limitations: SQL Server 2025 supports up to 1998 dimensions per vector column. That covers most standard embedding models: text-embedding-3-small (1536 dimensions) fits, but text-embedding-3-large (3072 dimensions) exceeds the limit — use the embedding API's dimensionality reduction parameter to bring the output down to 1536. ANN index support is limited compared to purpose-built vector databases.
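That reduction can be requested at embedding time. A sketch with the official OpenAI .NET client (the OpenAI NuGet package — an assumption, since this article does not prescribe an embedding client):

```csharp
using OpenAI.Embeddings;

var client = new EmbeddingClient("text-embedding-3-large", apiKey);

// Ask the API to project the 3072-dim output down to 1536 dimensions,
// within SQL Server 2025's 1998-dimension VECTOR limit.
var options = new EmbeddingGenerationOptions { Dimensions = 1536 };
var result = await client.GenerateEmbeddingAsync("article text to embed", options);

ReadOnlyMemory<float> vector = result.Value.ToFloats();
```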
8. Decision Matrix
| Criteria | Azure AI Search | Cosmos DB | Qdrant | pgvector | SQL Server 2025 |
|---|---|---|---|---|---|
| Hybrid search | Best | No | No | No | No |
| Local dev | Cloud only | Emulator only | Best (Docker) | Good (Docker) | Good (LocalDB) |
| Existing team skill | Azure | Azure + NoSQL | New tooling | PostgreSQL | SQL Server |
| SK connector | Excellent | Stable | Stable | Experimental | Preview |
| Hosted/managed | Yes | Yes | Yes (Cloud) | Yes | Yes |
| Cost at small scale | $$$ | $$ | $ | $ | $ |
| Hybrid/relational data | No | Partial | No | Yes (same TX) | Yes (same TX) |
| Max dimensions | 3072 | 4096 | 65,536 | 2,000 (indexed) | 1998 |
The default recommendation for production RAG: Azure AI Search for its hybrid search and SK connector quality, with Qdrant locally during development. The IVectorStore abstraction makes the switch between environments a DI configuration change.
The default recommendation for new PostgreSQL teams: pgvector, no contest. Extend what you have.
The default recommendation for SQL Server shops: SQL Server 2025 if you are already on it. No new infrastructure, familiar tooling, and the native vector type covers all standard embedding model dimensions with the reduction parameter.
9. Known Issues
Not every vector database with a Semantic Kernel connector is production-ready in .NET. Three connectors warrant explicit warnings:
Weaviate SK connector: Has known stability issues with multi-vector queries (SK Issue #10230 as of early 2026). Single-vector basic search works, but applications that run multiple vector searches simultaneously or use complex multi-hop retrieval patterns can encounter errors. Not recommended for production .NET workloads without thorough integration testing against your specific query patterns.
Milvus SK connector: The Milvus connector carries experimental status, which means breaking API changes can arrive in minor SK version updates. It works for basic scenarios in development but is not suitable for production code that needs connector API stability across SK upgrades.
Pinecone SK connector: The connector works and is more stable than Weaviate and Milvus. However, Pinecone’s filtering capabilities are more limited than Qdrant’s when working through the SK abstraction. If you need rich field filtering (multiple simultaneous conditions), Qdrant is the better fit.
For all three: if your team has a strong operational reason to use one of these databases (existing Weaviate or Milvus infrastructure, Pinecone contracts), the connectors will function. But factor in the additional testing burden and the risk of connector changes in future SK releases.
Pulling It Together
The right choice is almost always the vector database your team can operate without adding significant new expertise:
- Azure team with existing Azure infrastructure → Azure AI Search for production, Qdrant locally
- PostgreSQL team → pgvector immediately, no new infrastructure
- SQL Server team on 2025 → SQL Server 2025 native vector, use SqlClient until the SK connector matures
- Cosmos DB-first application → Cosmos DB NoSQL vector search, accept no hybrid search
- Starting fresh, want the best local dev story → Qdrant in Docker, graduate to Qdrant Cloud or Azure AI Search when scale demands
Whatever you choose, use Microsoft.Extensions.VectorData from day one. The IVectorStore abstraction costs nothing and preserves your ability to migrate providers as your application’s scale and requirements evolve.
Further Reading
- Microsoft.Extensions.VectorData on NuGet
- Azure AI Search vector search documentation
- Qdrant .NET client on NuGet
- pgvector on GitHub