What You’ll Build
By the end of this workshop, you’ll have a .NET console application that:
- Connects to Azure OpenAI (or OpenAI) through Semantic Kernel
- Sends chat completion requests and gets AI responses
- Has custom plugins that expose your C# code to the LLM
- Uses automatic function calling — the AI invokes your code when relevant
- Streams responses in real-time
Total time: about 20 minutes if you already have an Azure OpenAI resource.
Prerequisites
- .NET 10 SDK installed (or .NET 8/.NET 9 for older projects)
- Azure OpenAI resource with a current chat-model deployment — or an OpenAI API key
- Visual Studio 2022, VS Code, or Rider
If you need to create an Azure OpenAI resource, do that first in the Azure Portal. Deploy your preferred chat model, then note the deployment name, endpoint URL, and API key.
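If you prefer the command line, the Azure CLI can do the provisioning too. The resource name, resource group, region, and model name/version below are placeholders you would replace with your own choices:

```shell
# Create the Azure OpenAI resource (names and region are placeholders)
az cognitiveservices account create \
  --name my-openai --resource-group my-rg \
  --kind OpenAI --sku S0 --location eastus2

# Deploy a chat model under a deployment name of your choosing
az cognitiveservices account deployment create \
  --name my-openai --resource-group my-rg \
  --deployment-name chat-deployment \
  --model-name gpt-4o --model-version "2024-08-06" \
  --model-format OpenAI --sku-name Standard --sku-capacity 1

# Retrieve the endpoint and keys referenced later in this workshop
az cognitiveservices account show --name my-openai --resource-group my-rg \
  --query properties.endpoint -o tsv
az cognitiveservices account keys list --name my-openai --resource-group my-rg
```

The `--deployment-name` value here is what you will pass to Semantic Kernel, not the model name.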
Step 1: Create the Project
dotnet new console -n SemanticKernelDemo
cd SemanticKernelDemo
Step 2: Install the SDK
dotnet add package Microsoft.SemanticKernel
This pulls in the core SDK plus the Azure OpenAI connector. If you’re using OpenAI directly (not Azure), also add:
dotnet add package Microsoft.SemanticKernel.Connectors.OpenAI
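After installation, your .csproj should contain package references along these lines (the version numbers are illustrative; use whatever `dotnet add package` resolved for you):

```xml
<ItemGroup>
  <!-- Core SDK; also brings in the Azure OpenAI connector -->
  <PackageReference Include="Microsoft.SemanticKernel" Version="1.71.0" />
  <!-- Only needed if you call OpenAI directly rather than Azure OpenAI -->
  <PackageReference Include="Microsoft.SemanticKernel.Connectors.OpenAI" Version="1.71.0" />
</ItemGroup>
```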
Step 3: Your First Chat Completion
Replace the contents of Program.cs:
using Microsoft.SemanticKernel;
// Build the kernel with your AI model
var kernel = Kernel.CreateBuilder()
.AddAzureOpenAIChatCompletion(
deploymentName: "chat-deployment",
endpoint: Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!,
apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!)
.Build();
// Ask a question
var response = await kernel.InvokePromptAsync(
"What are the three most important things to know about dependency injection in .NET?");
Console.WriteLine(response);
Set your environment variables and run:
# PowerShell
$env:AZURE_OPENAI_ENDPOINT = "https://your-resource.openai.azure.com/"
$env:AZURE_OPENAI_KEY = "your-api-key"
dotnet run
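On macOS or Linux (bash/zsh), the equivalent uses `export` with the same placeholder values:

```shell
# bash / zsh — same placeholders as the PowerShell example
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export AZURE_OPENAI_KEY="your-api-key"
```

Then run `dotnet run` as above.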
You should see a detailed response about dependency injection. The kernel handled the HTTP communication, request formatting, and response parsing — all from roughly a dozen lines of code.
Using OpenAI Instead of Azure OpenAI
If you’re using OpenAI directly:
var kernel = Kernel.CreateBuilder()
.AddOpenAIChatCompletion(
modelId: "gpt-5",
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!)
.Build();
Everything else in this workshop works the same regardless of provider.
Step 4: Chat History — Multi-Turn Conversations
A single prompt is useful, but real applications need multi-turn conversations. Semantic Kernel (SK) provides ChatHistory for this:
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
var kernel = Kernel.CreateBuilder()
.AddAzureOpenAIChatCompletion("chat-deployment",
Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!,
Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!)
.Build();
var chatService = kernel.GetRequiredService<IChatCompletionService>();
var history = new ChatHistory();
history.AddSystemMessage(
"You are a .NET development expert. Answer concisely with code examples when relevant.");
Console.WriteLine("Chat with the AI (.NET expert). Type 'exit' to quit.\n");
while (true)
{
Console.Write("You: ");
var input = Console.ReadLine();
if (string.IsNullOrWhiteSpace(input) || input.Equals("exit", StringComparison.OrdinalIgnoreCase))
break;
history.AddUserMessage(input);
var response = await chatService.GetChatMessageContentAsync(history);
history.Add(response);
Console.WriteLine($"\nAssistant: {response.Content}\n");
}
Now you have a persistent conversation — the AI remembers previous messages in the session.
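One caveat: ChatHistory grows without bound, and the entire history is re-sent to the model on every turn, so long sessions get expensive. A minimal trimming sketch (the MaxMessages threshold is an arbitrary illustrative value, not an SK setting):

```csharp
// Keep the system message plus the most recent turns; drop the oldest.
const int MaxMessages = 20; // arbitrary threshold for illustration
while (history.Count > MaxMessages)
{
    // Index 0 is the system message — preserve it, remove the next-oldest entry
    history.RemoveAt(1);
}
```

Recent SK versions also ship dedicated chat-history reducers if you need smarter summarization-based trimming.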
Step 5: Build Your First Plugin
Plugins are what make Semantic Kernel powerful. They expose your C# code as tools the AI can call. Create a new file WeatherPlugin.cs:
using System.ComponentModel;
using Microsoft.SemanticKernel;
public class WeatherPlugin
{
[KernelFunction("get_current_weather")]
[Description("Get the current weather for a city")]
public string GetCurrentWeather(
[Description("The city name, e.g. 'Seattle'")] string city)
{
// In production, this would call a real weather API
var weatherData = new Dictionary<string, (int Temp, string Condition)>
{
["Seattle"] = (62, "Cloudy"),
["London"] = (55, "Rainy"),
["Tokyo"] = (78, "Sunny"),
["Sydney"] = (71, "Partly cloudy")
};
if (weatherData.TryGetValue(city, out var data))
return $"{city}: {data.Temp}°F, {data.Condition}";
return $"Weather data not available for {city}";
}
[KernelFunction("get_forecast")]
[Description("Get the 3-day weather forecast for a city")]
public string GetForecast(
[Description("The city name")] string city)
{
return $"{city} 3-day forecast: Tomorrow 65°F Sunny, Day 2: 58°F Cloudy, Day 3: 61°F Partly cloudy";
}
}
Two things to notice:
- [KernelFunction] — Marks the method as callable by the AI. The string parameter is the function name the LLM sees.
- [Description] — Tells the LLM what the function does and what each parameter means. These descriptions are critical — the AI uses them to decide when and how to call your function.
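To see why the wording matters, compare a weak description with a useful one. The plugin below is hypothetical, purely for illustration:

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

public class OrderPlugin // hypothetical plugin, for illustration only
{
    // Weak: [Description("Looks up data")] gives the model nothing
    // to decide with — it can't tell when this function applies.

    // Better: says what it does, when to use it, and the input format:
    [KernelFunction("get_order_status")]
    [Description("Get the shipping status of a customer order. Use when the user asks where an order is.")]
    public string GetOrderStatus(
        [Description("The order number, e.g. 'ORD-1042'")] string orderNumber)
        => $"Order {orderNumber}: shipped";
}
```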
Step 6: Enable Automatic Function Calling
Update Program.cs to register the plugin and enable auto function calling:
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.AzureOpenAI;
// Build kernel with the weather plugin
var kernel = Kernel.CreateBuilder()
.AddAzureOpenAIChatCompletion("chat-deployment",
Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!,
Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!)
.Build();
kernel.Plugins.AddFromType<WeatherPlugin>();
// Enable automatic function calling
var settings = new AzureOpenAIPromptExecutionSettings
{
FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};
// Ask about weather — the AI will call our plugin automatically
var response = await kernel.InvokePromptAsync(
"What's the weather like in Seattle and Tokyo right now? " +
"Compare them and recommend which is better for outdoor activities.",
new(settings));
Console.WriteLine(response);
When you run this, the following happens behind the scenes:
- SK sends your prompt to your configured chat model along with the tool definitions (generated from your [KernelFunction] and [Description] attributes)
- The model responds with tool call requests: “I want to call get_current_weather for Seattle and Tokyo”
- SK automatically invokes your GetCurrentWeather method for both cities
- SK sends the results back to the model
- The model creates a natural language response using the real data
The AI doesn’t hallucinate weather data — it uses the actual output from your C# methods.
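For intuition, the tool definition SK generates for get_current_weather looks roughly like this (a hand-written approximation of the OpenAI tool schema, not SK's exact output):

```json
{
  "type": "function",
  "function": {
    "name": "WeatherPlugin-get_current_weather",
    "description": "Get the current weather for a city",
    "parameters": {
      "type": "object",
      "properties": {
        "city": {
          "type": "string",
          "description": "The city name, e.g. 'Seattle'"
        }
      },
      "required": ["city"]
    }
  }
}
```

Note how the name combines the plugin and function names, and how every string here comes straight from your attributes — which is why vague descriptions lead to poor tool selection.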
Step 7: Streaming Responses
For user-facing applications, streaming provides a much better experience. Replace InvokePromptAsync with InvokePromptStreamingAsync:
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.AzureOpenAI;
var kernel = Kernel.CreateBuilder()
.AddAzureOpenAIChatCompletion("chat-deployment",
Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!,
Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!)
.Build();
kernel.Plugins.AddFromType<WeatherPlugin>();
var settings = new AzureOpenAIPromptExecutionSettings
{
FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};
Console.Write("Assistant: ");
await foreach (var chunk in kernel.InvokePromptStreamingAsync(
"Give me a detailed weather report for Seattle with recommendations.",
new(settings)))
{
Console.Write(chunk);
}
Console.WriteLine();
Tokens arrive as they’re generated — the user sees the response build in real-time instead of waiting for the entire completion.
Step 8: Putting It All Together
Here’s a complete interactive chat with plugins and streaming:
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.AzureOpenAI;
var kernel = Kernel.CreateBuilder()
.AddAzureOpenAIChatCompletion("chat-deployment",
Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!,
Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!)
.Build();
kernel.Plugins.AddFromType<WeatherPlugin>();
var chatService = kernel.GetRequiredService<IChatCompletionService>();
var history = new ChatHistory("You are a helpful assistant with access to weather data.");
var settings = new AzureOpenAIPromptExecutionSettings
{
FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};
Console.WriteLine("Weather Assistant (type 'exit' to quit)\n");
while (true)
{
Console.Write("You: ");
var input = Console.ReadLine();
if (string.IsNullOrWhiteSpace(input) || input.Equals("exit", StringComparison.OrdinalIgnoreCase))
break;
history.AddUserMessage(input);
Console.Write("Assistant: ");
var fullResponse = "";
await foreach (var chunk in chatService.GetStreamingChatMessageContentsAsync(
history, settings, kernel))
{
Console.Write(chunk.Content);
fullResponse += chunk.Content;
}
history.AddAssistantMessage(fullResponse);
Console.WriteLine("\n");
}
This gives you a streaming, multi-turn, tool-enabled AI chat — the pattern that powers most production SK applications.
Common Setup Issues
“Could not load type ‘Azure.AI.OpenAI.AzureOpenAIClient’” — Version mismatch. Make sure you’re using Microsoft.SemanticKernel 1.71+ which aligns with Azure.AI.OpenAI 2.1+. Run dotnet restore after updating.
“401 Unauthorized” — Your API key is wrong or expired. Double-check the key in the Azure Portal under your resource’s Keys and Endpoint section. See Fix 401 Unauthorized Azure OpenAI Errors.
“DeploymentNotFound” — You’re using the model name instead of the deployment name. In Azure OpenAI, these are different. The deployment name is what you specified when creating the deployment, not the underlying model name. See Fix Model Not Found Errors.
“FunctionChoiceBehavior has no effect” — Make sure you’re passing the settings to the invocation. The settings object must be provided as new KernelArguments(settings) to InvokePromptAsync, or as the executionSettings parameter to GetChatMessageContentsAsync.
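A common version of this slip, shown against the Step 6 code:

```csharp
// Wrong: settings created earlier but never passed —
// the request goes out with no tool definitions attached
var response = await kernel.InvokePromptAsync(
    "What's the weather in Seattle?");

// Right: wrap the settings in KernelArguments so they reach the request
var response2 = await kernel.InvokePromptAsync(
    "What's the weather in Seattle?",
    new KernelArguments(settings));
```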
What’s Next
You now have a working Semantic Kernel application with chat, plugins, and streaming. From here:
- What is Semantic Kernel? — Understand the architecture and where SK fits in the Microsoft AI stack
- Semantic Kernel Plugins: Build Reusable AI Tools — Go deeper on the plugin system
- Semantic Kernel Memory and Vector Stores — Add persistent memory to your application
- Build a RAG Chatbot with Semantic Kernel — Full RAG implementation workshop