
Microsoft.Extensions.AI vs Semantic Kernel vs Agent Framework

Intermediate · .NET 9 · Microsoft.Extensions.AI 10.3.0 · Microsoft.SemanticKernel 1.54.0 · Microsoft.SemanticKernel.Agents.Core 1.54.0
By Rajesh Mishra · Mar 21, 2026 · 12 min read · Verified Mar 2026
In 30 Seconds

Microsoft.Extensions.AI (MEAI), Semantic Kernel (SK), and Agent Framework (MAF) form a three-layer stack — not competing frameworks. MEAI provides the IChatClient and IEmbeddingGenerator abstractions; SK adds plugins, filters, prompt templates, and chat history; MAF adds ChatCompletionAgent and multi-agent orchestration via AgentGroupChat. Choose the layer that matches your complexity: MEAI for simple AI calls or library code, SK for orchestrated features, MAF for autonomous agent workflows.

The Confusion — Three Frameworks, Overlapping Names

If you search for how to build an AI feature in .NET, you’ll find three Microsoft packages staring back at you: Microsoft.Extensions.AI, Microsoft.SemanticKernel, and Microsoft.SemanticKernel.Agents.Core. The official docs cover each in isolation. The names overlap — “AI”, “Kernel”, “Agents” — and it’s not immediately obvious whether you need one, two, or all three.

This confusion is well-documented by the community. SK GitHub discussion #12779 captures it precisely: developers can’t tell whether MEAI replaces SK, whether the Agent Framework replaces SK, or whether they’re supposed to use all three simultaneously. The uncertainty causes teams to either over-engineer (pulling in Agent Framework for a simple chatbot) or under-engineer (writing raw HTTP calls when they should be using SK’s plugin system).

The core insight that resolves all of this: these are not competing frameworks. They are a layered stack. Each layer builds on the one below it. You adopt layers progressively as your complexity grows — and you only adopt the next layer when you genuinely need what it provides.

Understanding this layering is the entire point of this guide. Once you see the stack clearly, every “which framework should I use?” question has an obvious answer.

The Three-Layer Stack

The mental model is straightforward:

  • Microsoft.Extensions.AI is the foundation — provider abstractions that the rest of the stack builds on
  • Semantic Kernel is orchestration — plugins, filters, templates, auto function calling
  • Agent Framework is agentic automation — autonomous loops, multi-agent coordination, checkpointing
Layer         | Package                              | What It Provides
------------- | ------------------------------------ | ----------------
Foundation    | Microsoft.Extensions.AI              | IChatClient, IEmbeddingGenerator, provider DI
Orchestration | Microsoft.SemanticKernel             | Plugins, filters, prompt templates, auto function calling
Agents        | Microsoft.SemanticKernel.Agents.Core | ChatCompletionAgent, AgentGroupChat, checkpointing

[Diagram: Azure OpenAI / Ollama / OpenAI implement Microsoft.Extensions.AI (IChatClient · IEmbeddingGenerator); Semantic Kernel (Plugins · Filters · Chat History) builds on MEAI; Agent Framework (ChatCompletionAgent · AgentGroupChat) orchestrates SK.]
The three-layer Microsoft AI stack for .NET. Each layer is additive — you include the next layer only when you need what it provides.

The arrows in the diagram tell the whole story. Providers implement MEAI’s interfaces. SK builds on MEAI. MAF orchestrates SK. You start at the bottom and add layers only when the layer below stops being sufficient for your use case.

When to Use Microsoft.Extensions.AI Directly

Microsoft.Extensions.AI is the right starting point for most .NET AI integration work. Use it directly — without SK or MAF on top — when:

You are building a library. If you’re shipping a NuGet package that makes AI calls, don’t force consumers to take a dependency on Semantic Kernel. Program against IChatClient and let the consumer decide which provider and orchestration framework to use. MEAI is designed for this — it’s the stable, minimal interface layer that library authors should depend on.

You need simple chat or embeddings. If your feature sends a message and gets a response, you don’t need the orchestration overhead of SK. A direct IChatClient call is faster to write, easier to test, and has fewer moving parts.

You need provider switching without rewrites. MEAI’s DI registration means you swap Azure OpenAI for Ollama by changing one line. No call sites change. No interfaces change. This is valuable for teams that need to support multiple deployment environments (Azure for production, Ollama for local dev).

Your team is new to AI integration. MEAI’s surface area is small — two interfaces, two methods each. It’s the safest starting point for teams building their first AI feature. You can always add SK on top later without breaking existing code.

Here is the minimal MEAI setup in an ASP.NET Core application:

using Azure;
using Azure.AI.OpenAI;
using Microsoft.Extensions.AI;

// In Program.cs / DI setup — adapt the Azure OpenAI chat client to IChatClient
builder.Services.AddChatClient(_ =>
    new AzureOpenAIClient(
            new Uri(builder.Configuration["AzureOpenAI:Endpoint"]!),
            new AzureKeyCredential(builder.Configuration["AzureOpenAI:ApiKey"]!))
        .GetChatClient(builder.Configuration["AzureOpenAI:Deployment"]!)
        .AsIChatClient());

// In your service class
public class SupportService(IChatClient chatClient)
{
    public async Task<string> AnswerAsync(string question, CancellationToken ct = default)
    {
        var messages = new List<ChatMessage>
        {
            new(ChatRole.System, "You are a helpful support agent."),
            new(ChatRole.User, question)
        };

        var response = await chatClient.GetResponseAsync(messages, cancellationToken: ct);
        return response.Text;
    }
}

Notice what is absent: no Kernel, no plugins, no planning. The service depends on a single IChatClient interface that any MEAI-compatible provider satisfies. Swap the DI registration to change providers — SupportService never changes.
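To make the provider swap concrete, here is one way the registration could look for local development, assuming the community OllamaSharp package (its OllamaApiClient implements MEAI's IChatClient); the model name and port are illustrative:

```csharp
using Microsoft.Extensions.AI;
using OllamaSharp;

// Local-dev swap: an Ollama-backed IChatClient replaces the Azure OpenAI one.
// SupportService and every other call site remain untouched.
builder.Services.AddChatClient(
    new OllamaApiClient(new Uri("http://localhost:11434"), "llama3.1"));
```

Only the registration line changes; everything injected as IChatClient keeps working.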

This is also the pattern for library code. Ship your NuGet package with IChatClient as a constructor parameter, and your consumers can inject whatever provider configuration they prefer.

For a practical, end-to-end example of MEAI used directly in a .NET 9 Minimal API — including streaming SSE, structured output, embeddings, and Container Apps deployment — see Build an AI-Powered Minimal API with .NET 9 and Azure OpenAI.

When to Add Semantic Kernel

Add Semantic Kernel on top of MEAI when your requirements outgrow what IChatClient alone can satisfy. SK is the right layer when you need:

Plugins. The moment you want the LLM to call your C# code — look up an order status, query a database, trigger a workflow — you need SK’s plugin system. Methods decorated with [KernelFunction] are exposed to the model with their type signatures and descriptions. The model decides when to call them based on user intent.

Auto function calling. SK’s FunctionChoiceBehavior.Auto() implements the tool-use loop automatically: send the prompt, receive a tool call response, invoke the function, send the result back, repeat until the model returns a final answer. Without SK, you implement this loop manually.

Prompt templates. KernelPromptTemplateFactory with Handlebars or SK’s own template syntax lets you define reusable prompt patterns with variable substitution, partial templates, and helper functions. For applications with many prompt variations, this is significantly cleaner than string interpolation.

Filters and middleware. IFunctionInvocationFilter gives you a hook around every function call — useful for caching expensive tool calls, logging function inputs and outputs, enforcing rate limits at the function level, or rejecting unsafe tool calls before they execute.

Chat history management. ChatHistory with SK handles serialization, role management, and context window management across multi-turn conversations. For anything beyond a single-turn interaction, this saves significant boilerplate.

Here is SK layered on top of MEAI, adding auto function calling:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var kernelBuilder = builder.Services.AddKernel();

kernelBuilder.AddAzureOpenAIChatCompletion(
    deploymentName: builder.Configuration["AzureOpenAI:Deployment"]!,
    endpoint: builder.Configuration["AzureOpenAI:Endpoint"]!,
    apiKey: builder.Configuration["AzureOpenAI:ApiKey"]!);

// Register the plugin so the kernel exposes its [KernelFunction] methods
kernelBuilder.Plugins.AddFromType<WeatherPlugin>();

// In your service, inject Kernel
public class AssistantService(Kernel kernel)
{
    public async Task<string> ChatAsync(string userMessage, CancellationToken ct = default)
    {
        var settings = new OpenAIPromptExecutionSettings
        {
            FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
        };

        var result = await kernel.InvokePromptAsync(
            userMessage,
            new KernelArguments(settings),
            cancellationToken: ct);

        return result.GetValue<string>() ?? string.Empty;
    }
}

AddKernel() registers the Kernel in DI alongside the AI connectors. WeatherPlugin is a class whose methods are decorated with [KernelFunction] — the model can call them automatically when the user asks about weather. The FunctionChoiceBehavior.Auto() setting enables the tool-use loop without any additional code.
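For completeness, a hypothetical WeatherPlugin might look like this (the method, descriptions, and canned response are invented; a real plugin would call a weather API):

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

public sealed class WeatherPlugin
{
    [KernelFunction, Description("Gets the current weather for a city.")]
    public async Task<string> GetWeatherAsync(
        [Description("The city name, e.g. 'Seattle'")] string city)
    {
        // Placeholder for a real weather API call
        await Task.Delay(10);
        return $"Sunny, 22°C in {city}";
    }
}
```

The [Description] attributes matter: they are sent to the model as part of the tool schema, and they are what the model uses to decide when and how to call the function.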

SK builds on MEAI’s interfaces internally. You don’t lose the provider-agnostic benefits — you gain orchestration on top of them.

When to Adopt Agent Framework

Add the Agent Framework (MAF) when Semantic Kernel’s InvokePromptAsync loop is no longer sufficient. MAF is the right layer when you need:

Autonomous agent loops. A ChatCompletionAgent runs its own reasoning loop — it decides what to do next, calls tools, evaluates results, and continues until it reaches a conclusion or hits a stopping condition. This is qualitatively different from SK’s single-invocation model, where you drive the loop externally.

Multi-agent coordination. AgentGroupChat routes tasks between multiple specialized agents. You might have a ResearchAgent, a WritingAgent, and a ReviewAgent — each with different instructions and plugin sets — coordinating to produce a final output. MAF manages the message routing and termination conditions between them.

Long-running workflows with checkpointing. MAF’s checkpointing serializes agent state so workflows can survive restarts. For workflows that run for minutes or hours, this is essential for production reliability.

Named agents with personas. ChatCompletionAgent has explicit Name and Instructions properties. This makes agent identities explicit and traceable, which matters when debugging multi-agent conversations or when different agents need to address each other by name.

Here is the Agent Framework pattern for a single autonomous agent:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.ChatCompletion;

// Wrap an existing Kernel in an agent
var agent = new ChatCompletionAgent
{
    Kernel = kernel,
    Name = "SupportAgent",
    Instructions = "You are a technical support agent for .NET developers. Use the available tools to diagnose issues."
};

// The agent maintains its own chat thread
var thread = new ChatHistoryAgentThread();

await foreach (var response in agent.InvokeAsync(
    new ChatMessageContent(AuthorRole.User, userQuery),
    thread))
{
    Console.Write(response.Message.Content);
}

The key difference from SK’s InvokePromptAsync is the thread object. The agent manages its own conversation state — including multi-turn tool use — across the invocation. You don’t maintain the chat history externally; the agent does.

Notice also that ChatCompletionAgent wraps a Kernel. This means all your existing SK plugins work immediately inside an Agent Framework agent. The migration from SK to MAF preserves your plugin investment entirely.

Decision Table

Use this table when evaluating which layer to adopt:

Criterion                 | Use MEAI | Add SK     | Add MAF
------------------------- | -------- | ---------- | --------
Simple chat/embed         | ✓        |            |
Auto function calling     |          | ✓          |
Plugin ecosystem          |          | ✓          |
Multi-agent coordination  |          |            | ✓
Autonomous reasoning loop |          | Partial    | ✓
Library/SDK authoring     | ✓        |            |
Team new to AI            | ✓        | After MEAI | After SK

The “partial” for SK’s autonomous reasoning loop means: SK can simulate a basic agent loop with FunctionChoiceBehavior.Auto() and enough tool calls, but it doesn’t provide an explicit agent abstraction with identity, thread management, or termination conditions. For production agentic systems, MAF’s explicit model is more robust.

The library authoring row deserves emphasis: if you’re writing a reusable package, stop at MEAI. Forcing consumers to take SK as a transitive dependency is poor library design. IChatClient is the stable, minimal surface your library should expose.

Migration Paths

From SK Planner to Auto Function Calling

SK’s Handlebars Planner and Sequential Planner were deprecated in SK 1.x. If you have existing code using planners, migrate to FunctionChoiceBehavior.Auto(). For a complete migration walkthrough, see Semantic Kernel Planners Are Deprecated — How to Migrate to Function Calling.

The mechanical change is straightforward:

// Before (deprecated HandlebarsPlanner)
// var planner = new HandlebarsPlanner(kernel);
// var plan = await planner.CreatePlanAsync(kernel, goal);
// await plan.InvokeAsync(kernel);

// After (Auto Function Calling)
var settings = new OpenAIPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};
var result = await kernel.InvokePromptAsync(goal, new KernelArguments(settings));

Auto function calling is more reliable than planners in practice because it delegates the tool selection logic to the model rather than trying to synthesize a plan in advance. Models like GPT-4o and o-series are well-optimized for tool-use loops in a way that the older planner pattern couldn’t match.

From SK Single-Agent Loop to Agent Framework

When you need an explicit agent identity, thread management, or you’re building toward multi-agent coordination, migrate your manual SK loop to a ChatCompletionAgent:

// Before (manual SK loop)
// var chatHistory = new ChatHistory(systemPrompt);
// chatHistory.AddUserMessage(userQuery);
// var result = await kernel.InvokePromptAsync(userQuery, args);

// After (Agent Framework)
var agent = new ChatCompletionAgent { Kernel = kernel, Instructions = systemPrompt };
var thread = new ChatHistoryAgentThread();
await foreach (var msg in agent.InvokeAsync(new ChatMessageContent(AuthorRole.User, userQuery), thread))
{
    // process msg
}

The key benefit of this migration isn’t just syntax — it’s the thread abstraction. The ChatHistoryAgentThread maintains the conversation history, tool call records, and agent state in a way that supports checkpointing and multi-agent coordination. Your existing Kernel and all its plugins carry over unchanged.

Once you’ve wrapped your SK logic in a ChatCompletionAgent, composing it with other agents in an AgentGroupChat becomes straightforward. Each agent brings its own Kernel, instructions, and plugin set — MAF handles the coordination.
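As a sketch of that composition (the agent names and task are illustrative; termination defaults to AgentGroupChat's built-in strategy unless you configure one):

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.ChatCompletion;

// Two ChatCompletionAgent instances, each with its own Kernel and instructions
var chat = new AgentGroupChat(researchAgent, reviewAgent);

chat.AddChatMessage(new ChatMessageContent(AuthorRole.User,
    "Summarize the trade-offs of MEAI vs SK for a team blog post."));

// MAF routes messages between the agents until the termination strategy fires
await foreach (var message in chat.InvokeAsync())
{
    Console.WriteLine($"[{message.AuthorName}]: {message.Content}");
}
```

In production you would also configure ExecutionSettings with an explicit TerminationStrategy so the conversation cannot loop indefinitely.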

Long-Term Commitment Signals

A practical concern when adopting a framework: will it still be here in two years? For the Microsoft AI stack in .NET, the commitment signals are strong across all three layers:

Framework       | Status      | Commitment Signal
--------------- | ----------- | -----------------
MEAI            | GA (stable) | Part of .NET 10 BCL — here forever
Semantic Kernel | 1.x stable  | Microsoft’s primary AI SDK, broad investment
Agent Framework | RC → GA     | On SK 1.x roadmap, production deployments at Microsoft

MEAI being part of the .NET 10 Base Class Library is the strongest possible signal. It is not going away and it is not going to be deprecated in favor of something else. It’s in the BCL. The interfaces (IChatClient, IEmbeddingGenerator) are the stable surface you can build on with confidence.

Semantic Kernel at 1.x stable means the API surface is locked against breaking changes. The Microsoft.SemanticKernel package has been in active development since 2023 and is now used in production by Microsoft’s own products. The investment level is high.

Agent Framework reached Release Candidate status in early 2026. Microsoft has committed to a GA release on the SK 1.x roadmap. Production deployments exist at Microsoft before the GA announcement — that’s a meaningful signal that the RC is stable enough for real workloads. Check the SK GitHub repository releases page for the current status before adopting in a new production project.

The practical implication: building on MEAI interfaces today means your code is building on BCL-level infrastructure. Adding SK gives you Microsoft’s primary AI SDK investment. Adding MAF gives you the agentic layer that Microsoft itself uses internally. None of these are experimental bets — they are the Microsoft-backed path for .NET AI development in 2026.


⚠ Production Considerations

  • A common mistake is adopting Agent Framework for simple chatbot scenarios that only need SK's InvokePromptAsync loop. MAF's overhead (agent loop, message thread management) adds latency without benefit when you don't need autonomous multi-step reasoning.
  • MEAI's IChatClient is not the same as the raw Azure SDK's ChatClient. If you wire up IChatClient with UseOpenTelemetry() or UseFunctionInvocation() middleware in ChatClientBuilder, those layers sit between MEAI and SK — doubling telemetry or function invocation if SK also has filters registered. Configure telemetry at one layer only.
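The middleware-doubling pitfall can be made concrete. MEAI layers cross-cutting concerns via ChatClientBuilder; a sketch (innerClient stands in for your configured provider client):

```csharp
using Microsoft.Extensions.AI;

// Configure cross-cutting middleware once, at the MEAI layer
builder.Services.AddChatClient(_ => innerClient)
    .UseFunctionInvocation()  // tool-use loop handled here
    .UseOpenTelemetry();      // telemetry handled here

// If SK sits on top with its own function filters, omit UseFunctionInvocation()
// at this layer so tool calls are not executed twice.
```

The rule of thumb: pick one layer for each concern (telemetry, function invocation, caching) and keep the other layer's equivalent feature disabled.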


🧠 Architect’s Note

Register MEAI's IChatClient at the host level and let both your direct service code and Semantic Kernel share the same configured client — including middleware like telemetry and resilience. This ensures consistent behavior across all AI calls in the application, whether they go through SK or directly through IChatClient.


Key Takeaways

  • MEAI, SK, and MAF are a three-layer stack: foundation → orchestration → agents
  • MEAI's IChatClient is the provider abstraction — SK and MAF build on top of it
  • Add SK when you need plugins, auto function calling, or prompt templates
  • Add MAF when you need autonomous agent loops, multi-agent coordination, or checkpointing
  • Migration path: SK Planner → Auto Function Calling → ChatCompletionAgent

Implementation Checklist

  • Determine if you need simple AI calls (MEAI), orchestrated workflows (SK), or autonomous agents (MAF)
  • Register IChatClient via AddChatClient() in DI, adapting your provider’s client (Azure OpenAI, OpenAI, Ollama) to the IChatClient interface
  • Add AddKernel() and connectors if you need plugins or prompt templates
  • Wrap Kernel in ChatCompletionAgent if you need an agent loop
  • Use AgentGroupChat for multi-agent coordination with termination conditions
  • Check SK GitHub release notes for Agent Framework GA status before adopting in production

Frequently Asked Questions

Is Microsoft.Extensions.AI replacing Semantic Kernel?

No. MEAI is the foundation layer that SK builds on. Semantic Kernel uses MEAI's IChatClient and IEmbeddingGenerator abstractions internally. They are complementary, not competing — MEAI handles provider abstraction, SK adds orchestration, plugins, and chat history on top.

When should I use Microsoft.Extensions.AI directly instead of Semantic Kernel?

Use MEAI directly when you need simple chat or embedding calls, when you are building a library that should not impose an orchestration framework on consumers, or when you need maximum flexibility to swap AI providers. For anything requiring plugins, prompt templates, or multi-step workflows, add Semantic Kernel.

What is Microsoft Agent Framework and how does it relate to Semantic Kernel?

Microsoft Agent Framework (MAF) is the agent orchestration layer built on top of Semantic Kernel. It provides ChatCompletionAgent, AgentGroupChat for multi-agent coordination, and checkpointing for long-running agent workflows. MAF is part of the Microsoft.SemanticKernel.Agents.Core package.

Can I migrate from SK Planners to the Agent Framework?

Yes, and Microsoft recommends it. Planners were deprecated in SK 1.x in favor of Auto Function Calling and, for complex orchestration, the Agent Framework's ChatCompletionAgent with AgentGroupChat. The Agent Framework gives you explicit control over the agent loop that planners tried to automate.

Is Microsoft Agent Framework production-ready?

As of early 2026, Agent Framework has reached Release Candidate status and is stable for most production workloads. Microsoft has committed to a GA release on the Semantic Kernel 1.x timeline. Check the SK GitHub repository for the latest release notes.

How does MEAI's IChatClient work with dependency injection in ASP.NET Core?

Register IChatClient using builder.Services.AddChatClient(...), passing an adapter for your provider (for example, an Azure OpenAI chat client converted via AsIChatClient()). Any class in your DI graph can then inject IChatClient directly. Semantic Kernel's AddKernel() also registers IChatClient-based connectors automatically.

What is the migration path from a single Semantic Kernel agent to Agent Framework?

Wrap your existing Kernel in a ChatCompletionAgent: new ChatCompletionAgent { Kernel = kernel, Name = "MyAgent", Instructions = systemPrompt }. Then replace your manual InvokePromptAsync loop with agent.InvokeAsync(message, thread), where thread is a ChatHistoryAgentThread. For multi-agent coordination, compose multiple agents in an AgentGroupChat.


#Microsoft.Extensions.AI #Semantic Kernel #Agent Framework #.NET AI #Architecture