What is Semantic Kernel?
Semantic Kernel (SK) is Microsoft’s open-source SDK that enables .NET developers to build AI-powered applications using large language models. Unlike wrapper libraries that simply call API endpoints, SK provides a structured orchestration layer — plugins, planners, memory, and a pipeline — that lets you build reliable, maintainable AI agents.
SK is the same technology foundation behind Microsoft 365 Copilot, which means it has been hardened in one of the largest production AI deployments on the planet.
The Kernel Object
The Kernel is the central orchestrator. Think of it as the IServiceProvider of the AI world — it holds references to AI services, plugins, and configuration, and it coordinates execution.
using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "gpt-4o",
    endpoint: "https://your-resource.openai.azure.com/",
    apiKey: "your-api-key");

Kernel kernel = builder.Build();
Key design decisions:
- The Kernel's services are fixed once it is built — you configure them via the builder pattern (plugins can still be added to a built kernel)
- It integrates with .NET dependency injection natively
- You can register multiple AI service connectors and select at runtime
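For example, here is a sketch of registering two chat services and resolving one by ID at runtime. The service IDs, deployment names, and keys are placeholders:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var builder = Kernel.CreateBuilder();

// Register two connectors under distinct service IDs (placeholder values).
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "gpt-4o",
    endpoint: "https://your-resource.openai.azure.com/",
    apiKey: "your-api-key",
    serviceId: "azure");
builder.AddOpenAIChatCompletion(
    modelId: "gpt-4o-mini",
    apiKey: "your-openai-key",
    serviceId: "openai");

Kernel kernel = builder.Build();

// Resolve a specific registered service by its ID at runtime.
var chat = kernel.GetRequiredService<IChatCompletionService>("azure");
```

You can also steer selection per call by setting `ServiceId` on the prompt execution settings, which keeps the choice out of your business logic.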
Plugin Architecture
Plugins are the core abstraction for giving AI models access to your application’s capabilities. Each plugin contains one or more functions that the AI can discover and invoke.
Native Functions
Native functions are regular C# methods decorated with attributes:
using Microsoft.SemanticKernel;
using System.ComponentModel;
public class WeatherPlugin
{
    [KernelFunction("get_weather")]
    [Description("Gets the current weather for a given city")]
    public async Task<string> GetWeatherAsync(
        [Description("The city name")] string city)
    {
        // Your weather API integration here
        return $"The weather in {city} is 72°F and sunny.";
    }
}
Register it with the kernel:
kernel.Plugins.AddFromType<WeatherPlugin>();
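Once registered, the function can also be invoked directly from code by plugin and function name; the argument value here is illustrative:

```csharp
var forecast = await kernel.InvokeAsync(
    "WeatherPlugin", "get_weather",
    new KernelArguments { ["city"] = "Seattle" });

Console.WriteLine(forecast.GetValue<string>());
```

This is the same invocation path the AI takes when it calls the function via function calling, which makes plugins easy to unit test in isolation.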
Semantic Functions
Semantic functions are prompt templates that the kernel can execute like regular functions. They’re defined inline or from files:
using Microsoft.SemanticKernel.Connectors.OpenAI; // for OpenAIPromptExecutionSettings

var summarize = kernel.CreateFunctionFromPrompt(
    "Summarize the following text in 3 bullet points: {{$input}}",
    new OpenAIPromptExecutionSettings { MaxTokens = 200 });
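The resulting KernelFunction is invoked like any other, with the `{{$input}}` template variable supplied as a kernel argument:

```csharp
string article = "..."; // text to summarize
var summary = await kernel.InvokeAsync(summarize, new KernelArguments
{
    ["input"] = article
});

Console.WriteLine(summary.GetValue<string>());
```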
Planning Strategies
Planners let the AI create multi-step execution plans from natural language goals. SK provides several planning strategies:
- Function Calling — The recommended approach. Uses the AI model's native function-calling capability to decide which plugins to invoke and in what order.
- Handlebars Planner — Generates Handlebars templates as plans. Good for complex branching logic.
- Stepwise Planner — Iteratively reasons through steps. Better for exploratory tasks.
// Using automatic function calling (recommended)
OpenAIPromptExecutionSettings settings = new()
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

var result = await kernel.InvokePromptAsync(
    "What's the weather in Seattle and should I bring an umbrella?",
    new KernelArguments(settings));
Memory and RAG Patterns
SK’s memory system enables Retrieval-Augmented Generation by connecting to vector stores:
using Microsoft.SemanticKernel.Connectors.AzureAISearch;
using Microsoft.SemanticKernel.Memory;

// Build a semantic memory backed by Azure AI Search.
// Note: the ISemanticTextMemory/MemoryBuilder APIs are marked experimental,
// and exact package and method names vary across SK versions; the embedding
// deployment name below is a placeholder.
var memory = new MemoryBuilder()
    .WithAzureOpenAITextEmbeddingGeneration(
        deploymentName: "text-embedding-3-small",
        endpoint: "https://your-resource.openai.azure.com/",
        apiKey: "your-api-key")
    .WithMemoryStore(new AzureAISearchMemoryStore(
        "https://your-search.search.windows.net", "your-key"))
    .Build();

// Store and recall information
await memory.SaveInformationAsync("docs",
    id: "doc1",
    text: "Semantic Kernel supports multiple AI connectors...");

// SearchAsync returns IAsyncEnumerable; ToListAsync comes from System.Linq.Async.
var results = await memory.SearchAsync("docs", "What AI models are supported?")
    .ToListAsync();
Pipeline Filters
SK supports middleware-like filters for cross-cutting concerns:
public class LoggingFilter : IFunctionInvocationFilter
{
    public async Task OnFunctionInvocationAsync(
        FunctionInvocationContext context,
        Func<FunctionInvocationContext, Task> next)
    {
        Console.WriteLine($"Calling: {context.Function.Name}");
        await next(context);
        Console.WriteLine($"Result: {context.Result}");
    }
}

builder.Services.AddSingleton<IFunctionInvocationFilter, LoggingFilter>();
Architecture Summary
The SK architecture follows a clean layered pattern:
- AI Services Layer — Connectors to Azure OpenAI, OpenAI, HuggingFace, local models
- Plugin Layer — Your business logic exposed as functions
- Planning Layer — Orchestration of functions into multi-step workflows
- Memory Layer — Vector storage and retrieval for RAG
- Pipeline Layer — Filters, logging, telemetry, error handling
This layered design means you can swap AI providers, add new plugins, or change planning strategies without rewriting your application logic. That’s the key insight — SK separates AI orchestration concerns from your business logic.
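As a concrete illustration of that separation, switching providers is in principle a change confined to kernel construction; plugins, filters, and planning settings are untouched. Keys and model names here are placeholders:

```csharp
// Azure OpenAI today...
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "gpt-4o",
    endpoint: "https://your-resource.openai.azure.com/",
    apiKey: "your-api-key");

// ...or OpenAI directly tomorrow; the rest of the application is unchanged.
// builder.AddOpenAIChatCompletion(modelId: "gpt-4o", apiKey: "your-openai-key");
```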