What Semantic Kernel Actually Is
Semantic Kernel is Microsoft’s open-source SDK for building AI-powered applications in .NET. It provides the glue between your C# code and AI models — handling the orchestration, tool invocation, and memory management that every AI application needs.
In practical terms, when you want your .NET application to chat with a current production model such as GPT-5, call functions based on user intent, remember context across conversations, or orchestrate multi-step AI workflows, Semantic Kernel is the SDK that makes it happen.
Microsoft built it because .NET developers needed a first-class AI orchestration library that works with their existing tools — Visual Studio, NuGet, dependency injection, async/await. Before SK, integrating AI into .NET meant either using Python libraries through interop or writing raw HTTP calls to OpenAI APIs. Neither approach was acceptable for production C# codebases.
Where Semantic Kernel Fits in 2026
The Microsoft AI stack has a clear hierarchy, and understanding where SK sits prevents confusion about which tool to use when:
Microsoft.Extensions.AI defines the interfaces — IChatClient, IEmbeddingGenerator. Every AI provider implements these. You can swap Azure OpenAI for Ollama in one line because both implement the same interface.
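A sketch of what that one-line swap looks like in practice (the exact adapter types and extension method names vary by Microsoft.Extensions.AI version and provider package, so treat these as illustrative):

```csharp
using Microsoft.Extensions.AI;

// Both providers surface the same IChatClient abstraction, so the consuming
// code below never changes when you swap the implementation.
// Hypothetical construction — actual client types come from the provider package you install.
IChatClient client = useLocalModel
    ? new OllamaChatClient(new Uri("http://localhost:11434"), "llama3")  // local Ollama model
    : azureChatClient;                                                    // an Azure OpenAI-backed IChatClient

var reply = await client.GetResponseAsync("Say hello in one word.");
Console.WriteLine(reply.Text);
```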
Semantic Kernel uses those interfaces and adds orchestration on top — plugins that expose your C# methods to the LLM, memory that persists context across conversations, and planning that chains multiple steps together.
Microsoft Agent Framework extends SK for autonomous agents — multi-agent orchestration, checkpointing, human-in-the-loop, MCP integration. If you need agents that operate independently, delegate tasks to other agents, or manage long-running workflows, Agent Framework is the layer to add.
The key insight: SK is not being replaced by Agent Framework. They serve different purposes. Most .NET AI applications — chat features, RAG implementations, function calling, content generation — work perfectly with SK alone. Agent Framework is specifically for agentic patterns. You only need it when you need agents.
The Four Core Concepts
1. The Kernel
The kernel is the central hub. It holds your AI model connections, registered plugins, and configuration. Every AI operation flows through the kernel:
using Microsoft.SemanticKernel;
var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(
deploymentName: "chat-deployment",
endpoint: "https://your-resource.openai.azure.com/",
apiKey: "your-api-key");
var kernel = builder.Build();
Once built, you can invoke the kernel for chat completions, function calling, or multi-step plans. Think of it as the DI container for your AI capabilities.
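That DI analogy is literal in ASP.NET Core: the kernel registers into the regular service container. A minimal sketch, assuming the Microsoft.SemanticKernel package and an Azure OpenAI deployment named "chat-deployment":

```csharp
using Microsoft.SemanticKernel;

var builder = WebApplication.CreateBuilder(args);

// AddKernel wires a Kernel (and its supporting services) into the DI container
builder.Services.AddKernel()
    .AddAzureOpenAIChatCompletion(
        deploymentName: "chat-deployment",
        endpoint: builder.Configuration["AzureOpenAI:Endpoint"]!,
        apiKey: builder.Configuration["AzureOpenAI:ApiKey"]!);

var app = builder.Build();

// Kernel can now be constructor-injected anywhere, e.g. a minimal API endpoint
app.MapGet("/ask", async (Kernel kernel, string q) =>
    (await kernel.InvokePromptAsync(q)).ToString());

app.Run();
```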
2. Plugins
Plugins are how you give the LLM access to your code. Any C# method can become a plugin by adding the [KernelFunction] attribute:
using System.ComponentModel;
using Microsoft.SemanticKernel;
public class OrderPlugin
{
private readonly IOrderService _orderService;
public OrderPlugin(IOrderService orderService)
{
_orderService = orderService;
}
[KernelFunction("get_order_status")]
[Description("Get the current status of a customer order by order ID")]
public async Task<string> GetOrderStatusAsync(int orderId)
{
var order = await _orderService.GetAsync(orderId);
return $"Order {orderId}: {order.Status}, shipped {order.ShipDate:d}";
}
[KernelFunction("cancel_order")]
[Description("Cancel an existing order if it hasn't shipped yet")]
public async Task<string> CancelOrderAsync(int orderId)
{
var result = await _orderService.CancelAsync(orderId);
return result ? $"Order {orderId} cancelled." : $"Order {orderId} cannot be cancelled — already shipped.";
}
}
Register the plugin with the kernel, enable automatic function calling, and the LLM will invoke your methods when the user’s intent matches:
// OrderPlugin's IOrderService dependency must be registered with the kernel's services
builder.Plugins.AddFromType<OrderPlugin>();
var kernel = builder.Build();
using Microsoft.SemanticKernel.Connectors.OpenAI;
var settings = new OpenAIPromptExecutionSettings
{
FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};
var result = await kernel.InvokePromptAsync(
"What's the status of order 12345?", new(settings));
The LLM sees the function descriptions, determines that get_order_status is relevant, calls it with orderId: 12345, and incorporates the result into its response. Your users get answers grounded in real data, not hallucinated responses.
3. Memory
Memory lets your AI application remember things across conversations. SK supports both short-term (chat history) and long-term (vector store) memory:
Chat History — Keep recent conversation context:
using Microsoft.SemanticKernel.ChatCompletion;
var chatHistory = new ChatHistory();
chatHistory.AddSystemMessage("You are a helpful .NET development assistant.");
chatHistory.AddUserMessage("What's new in .NET 9?");
// ... response
chatHistory.AddUserMessage("What about performance improvements?");
// The AI remembers the previous question
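To actually get those responses, pass the history to the chat completion service resolved from the kernel. A sketch, assuming the kernel built earlier in this article:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var chatService = kernel.GetRequiredService<IChatCompletionService>();

var chatHistory = new ChatHistory();
chatHistory.AddSystemMessage("You are a helpful .NET development assistant.");
chatHistory.AddUserMessage("What's new in .NET 9?");

// Get the assistant's reply and append it so the follow-up question has context
var reply = await chatService.GetChatMessageContentAsync(chatHistory);
chatHistory.AddAssistantMessage(reply.Content ?? string.Empty);

chatHistory.AddUserMessage("What about performance improvements?");
var followUp = await chatService.GetChatMessageContentAsync(chatHistory);
```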
Vector Store Memory — Query your own documents:
// Note: the SemanticTextMemory/MemoryBuilder API is marked experimental in current SK releases
var memoryBuilder = new MemoryBuilder();
memoryBuilder.WithAzureOpenAITextEmbeddingGeneration("text-embedding-3-small", endpoint, apiKey);
memoryBuilder.WithMemoryStore(new AzureAISearchMemoryStore(searchEndpoint, searchKey));
var memory = memoryBuilder.Build();
// Store a document
await memory.SaveInformationAsync("docs", "Semantic Kernel supports .NET 8+", "doc-1");
// Query later — SearchAsync returns an IAsyncEnumerable, so iterate with await foreach
await foreach (var result in memory.SearchAsync("docs", "what .NET versions work?"))
{
    Console.WriteLine(result.Metadata.Text);
}
4. Functions and Planning
Functions are the callable units in SK. They come in two types:
- Native Functions — Your C# code wrapped as plugins (shown above)
- Prompt Functions — Templated prompts that SK executes against the LLM
// Prompt function defined inline
var summarize = kernel.CreateFunctionFromPrompt(
"Summarize the following text in 3 bullet points:\n\n{{$input}}");
var summary = await kernel.InvokeAsync(summarize,
new() { ["input"] = longArticleText });
Planning combines multiple functions into a sequence. Modern SK does this through automatic function calling rather than the older dedicated planners: the LLM analyzes the user's goal, identifies which functions to call, determines the order, and executes them step by step:
var settings = new OpenAIPromptExecutionSettings
{
FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};
// The LLM may call multiple plugins in sequence to answer
var result = await kernel.InvokePromptAsync(
"Find order 12345, check if it's shippable, and generate a shipping label.",
new(settings));
Semantic Kernel vs Other Options
SK vs LangChain
LangChain is Python-first. If your team writes C# and deploys to Azure, LangChain means adding Python infrastructure, interop complexity, and a different deployment model. Semantic Kernel gives you the same capabilities — plugins, memory, chains — with native .NET support, NuGet distribution, and DI integration.
For a detailed comparison, see Semantic Kernel vs LangChain: The .NET Developer’s Guide.
SK vs Direct API Calls
You can call OpenAI’s API directly with HttpClient. For simple chat completion, that’s fine. But the moment you need function calling with automatic tool invocation, conversation memory, or multi-step workflows, you’re rebuilding what SK already provides. SK handles the orchestration loop — tool schema generation, response parsing, function dispatch, result injection — so you focus on your business logic.
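For a sense of what SK takes off your plate, here is roughly the loop you would otherwise write by hand against a raw chat completions API. This is a pseudocode-level sketch: the helper names (`CallChatApiAsync`, `DispatchAsync`, and so on) are illustrative, not a real library:

```csharp
// Hand-rolled tool-calling loop — each numbered step below is handled
// automatically by SK's FunctionChoiceBehavior.Auto() orchestration.
while (true)
{
    var response = await CallChatApiAsync(messages, toolSchemas);  // 1. raw HTTP call (hypothetical helper)

    if (response.ToolCalls is not { Count: > 0 })
        return response.Text;                                      // model answered directly — done

    foreach (var call in response.ToolCalls)
    {
        var args = ParseArguments(call.ArgumentsJson);             // 2. parse the model's JSON arguments
        var result = await DispatchAsync(call.Name, args);         // 3. find and invoke the matching C# method
        messages.Add(ToolResultMessage(call.Id, result));          // 4. inject the result back into the transcript
    }
    // loop: the model sees the tool results and decides what to do next
}
```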
SK vs Microsoft.Extensions.AI
They’re complementary, not competing. Microsoft.Extensions.AI defines the provider-agnostic interfaces (IChatClient, IEmbeddingGenerator). SK builds on top of those interfaces to add plugins, memory, and planning. Use Microsoft.Extensions.AI if you only need raw chat completion with DI support. Use SK when you need orchestration.
Getting Started
Install the SDK:
dotnet add package Microsoft.SemanticKernel
For Azure OpenAI connectivity:
dotnet add package Microsoft.SemanticKernel.Connectors.AzureOpenAI
Build your first kernel and make a chat completion call:
using Microsoft.SemanticKernel;
var kernel = Kernel.CreateBuilder()
.AddAzureOpenAIChatCompletion(
deploymentName: "chat-deployment",
endpoint: Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!,
apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!)
.Build();
var response = await kernel.InvokePromptAsync("Explain dependency injection in 2 sentences.");
Console.WriteLine(response);
That’s it. From here, you add plugins for your domain, connect memory for persistent context, and build the AI features your application needs.
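From there, streaming the response token-by-token is a small change (assuming the same kernel as above):

```csharp
// Stream the response as it is generated instead of waiting for the full text
await foreach (var chunk in kernel.InvokePromptStreamingAsync(
    "Explain dependency injection in 2 sentences."))
{
    Console.Write(chunk);
}
```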
Next Steps
- Getting Started with Semantic Kernel in .NET: Step-by-Step Setup — Hands-on workshop from NuGet install to first streaming response
- Semantic Kernel Architecture Deep Dive — Internal architecture, pipeline model, and advanced patterns
- Semantic Kernel Plugins: Build Reusable AI Tools in C# — Master the plugin system
- Migrate from Semantic Kernel to Microsoft Agent Framework — When and how to add Agent Framework
- University: Microsoft.Extensions.AI vs Semantic Kernel vs Agent Framework — Decision guide for choosing the right layer
- University: Dependency Injection for AI Services in ASP.NET Core — Multiple providers, keyed services, and unit test mocking