Semantic Kernel .NET Setup: NuGet to Streaming Chat in 30 Minutes

Verified Apr 2026 · Beginner · .NET 10 · Microsoft.SemanticKernel 1.71.0 · Azure.AI.OpenAI 2.x
By Rajesh Mishra · Mar 10, 2026 · 14 min read
In 30 Seconds

Step-by-step workshop for setting up Semantic Kernel in .NET. Covers: creating a console project, installing NuGet packages, configuring the kernel with Azure OpenAI, making chat completions, building plugins with KernelFunction, enabling automatic function calling, and adding streaming responses.

What You’ll Build

By the end of this workshop, you’ll have a .NET console application that:

  1. Connects to Azure OpenAI (or OpenAI) through Semantic Kernel
  2. Sends chat completion requests and gets AI responses
  3. Has custom plugins that expose your C# code to the LLM
  4. Uses automatic function calling — the AI invokes your code when relevant
  5. Streams responses in real-time

Total time: about 20 minutes if you already have an Azure OpenAI resource.

Prerequisites

  • .NET 10 SDK installed (or .NET 8/.NET 9 for older projects)
  • Azure OpenAI resource with a current chat-model deployment — or an OpenAI API key
  • Visual Studio 2022, VS Code, or Rider

If you need to create an Azure OpenAI resource, do that first in the Azure Portal. Deploy your preferred chat model, then note the deployment name, endpoint URL, and API key.
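If you prefer the command line, the Azure CLI can do the same setup. A minimal sketch, assuming you are already signed in with az login; the resource name, resource group, region, and model name/version here are placeholders to substitute with your own values:

```shell
# Create the Azure OpenAI resource (names and region are placeholders)
az cognitiveservices account create \
  --name my-openai-resource \
  --resource-group my-rg \
  --kind OpenAI \
  --sku S0 \
  --location eastus

# Deploy a chat model under the deployment name the code samples use
az cognitiveservices account deployment create \
  --name my-openai-resource \
  --resource-group my-rg \
  --deployment-name chat-deployment \
  --model-name gpt-4o \
  --model-version "2024-08-06" \
  --model-format OpenAI \
  --sku-name Standard \
  --sku-capacity 1

# Look up the endpoint and key for the environment variables
az cognitiveservices account show --name my-openai-resource --resource-group my-rg --query properties.endpoint
az cognitiveservices account keys list --name my-openai-resource --resource-group my-rg
```

The deployment name ("chat-deployment" above) is what you pass to the SDK — not the underlying model name.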

Step 1: Create the Project

dotnet new console -n SemanticKernelDemo
cd SemanticKernelDemo

Step 2: Install the SDK

dotnet add package Microsoft.SemanticKernel

This pulls in the core SDK plus the Azure OpenAI connector. If you’re using OpenAI directly (not Azure), also add:

dotnet add package Microsoft.SemanticKernel.Connectors.OpenAI

Step 3: Your First Chat Completion

Replace the contents of Program.cs:

using Microsoft.SemanticKernel;

// Build the kernel with your AI model
var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName: "chat-deployment",
        endpoint: Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!,
        apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!)
    .Build();

// Ask a question
var response = await kernel.InvokePromptAsync(
    "What are the three most important things to know about dependency injection in .NET?");

Console.WriteLine(response);

Set your environment variables and run:

# PowerShell
$env:AZURE_OPENAI_ENDPOINT = "https://your-resource.openai.azure.com/"
$env:AZURE_OPENAI_KEY = "your-api-key"

dotnet run

You should see a detailed response about dependency injection. The kernel handled the HTTP communication, request formatting, and response parsing — your AI-specific code is just the builder call and a single prompt invocation.

Using OpenAI Instead of Azure OpenAI

If you’re using OpenAI directly:

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(
        modelId: "gpt-5",
        apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!)
    .Build();

Everything else in this workshop works the same regardless of provider.

Step 4: Chat History — Multi-Turn Conversations

A single prompt is useful, but real applications need multi-turn conversations. SK provides ChatHistory for this:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion("chat-deployment",
        Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!,
        Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!)
    .Build();

var chatService = kernel.GetRequiredService<IChatCompletionService>();

var history = new ChatHistory();
history.AddSystemMessage(
    "You are a .NET development expert. Answer concisely with code examples when relevant.");

Console.WriteLine("Chat with the AI (.NET expert). Type 'exit' to quit.\n");

while (true)
{
    Console.Write("You: ");
    var input = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(input) || input.Equals("exit", StringComparison.OrdinalIgnoreCase))
        break;

    history.AddUserMessage(input);

    var response = await chatService.GetChatMessageContentAsync(history);
    history.Add(response);

    Console.WriteLine($"\nAssistant: {response.Content}\n");
}

Now you have a persistent conversation — the AI remembers previous messages in the session.

Step 5: Build Your First Plugin

Plugins are what make Semantic Kernel powerful. They expose your C# code as tools the AI can call. Create a new file WeatherPlugin.cs:

using System.ComponentModel;
using Microsoft.SemanticKernel;

public class WeatherPlugin
{
    [KernelFunction("get_current_weather")]
    [Description("Get the current weather for a city")]
    public string GetCurrentWeather(
        [Description("The city name, e.g. 'Seattle'")] string city)
    {
        // In production, this would call a real weather API
        var weatherData = new Dictionary<string, (int Temp, string Condition)>
        {
            ["Seattle"] = (62, "Cloudy"),
            ["London"] = (55, "Rainy"),
            ["Tokyo"] = (78, "Sunny"),
            ["Sydney"] = (71, "Partly cloudy")
        };

        if (weatherData.TryGetValue(city, out var data))
            return $"{city}: {data.Temp}°F, {data.Condition}";

        return $"Weather data not available for {city}";
    }

    [KernelFunction("get_forecast")]
    [Description("Get the 3-day weather forecast for a city")]
    public string GetForecast(
        [Description("The city name")] string city)
    {
        return $"{city} 3-day forecast: Day 1: 65°F Sunny, Day 2: 58°F Cloudy, Day 3: 61°F Partly cloudy";
    }
}

Two things to notice:

  1. [KernelFunction] — Marks the method as callable by the AI. The string parameter is the function name the LLM sees.
  2. [Description] — Tells the LLM what the function does and what each parameter means. These descriptions are critical — the AI uses them to decide when and how to call your function.

Step 6: Enable Automatic Function Calling

Update Program.cs to register the plugin and enable auto function calling:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.AzureOpenAI;

// Build kernel with the weather plugin
var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion("chat-deployment",
        Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!,
        Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!)
    .Build();

kernel.Plugins.AddFromType<WeatherPlugin>();

// Enable automatic function calling
var settings = new AzureOpenAIPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

// Ask about weather — the AI will call our plugin automatically
var response = await kernel.InvokePromptAsync(
    "What's the weather like in Seattle and Tokyo right now? " +
    "Compare them and recommend which is better for outdoor activities.",
    new(settings));

Console.WriteLine(response);

When you run this, the following happens behind the scenes:

  1. SK sends your prompt to your configured chat model along with the tool definitions (generated from your [KernelFunction] and [Description] attributes)
  2. The model responds with tool call requests: “I want to call get_current_weather for Seattle and Tokyo”
  3. SK automatically invokes your GetCurrentWeather method for both cities
  4. SK sends the results back to the model
  5. The model creates a natural language response using the real data

The AI doesn’t hallucinate weather data — it uses the actual output from your C# methods.
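To make step 1 concrete, here is roughly what SK generates from your attributes — an illustrative sketch in the OpenAI tool-definition format (the exact JSON SK emits may differ in detail):

```json
{
  "type": "function",
  "function": {
    "name": "get_current_weather",
    "description": "Get the current weather for a city",
    "parameters": {
      "type": "object",
      "properties": {
        "city": {
          "type": "string",
          "description": "The city name, e.g. 'Seattle'"
        }
      },
      "required": ["city"]
    }
  }
}
```

This is why the [Description] attributes matter so much: they become the only documentation the model has about your function.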

Step 7: Streaming Responses

For user-facing applications, streaming provides a much better experience. Replace InvokePromptAsync with InvokePromptStreamingAsync:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.AzureOpenAI;

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion("chat-deployment",
        Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!,
        Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!)
    .Build();

kernel.Plugins.AddFromType<WeatherPlugin>();

var settings = new AzureOpenAIPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

Console.Write("Assistant: ");

await foreach (var chunk in kernel.InvokePromptStreamingAsync(
    "Give me a detailed weather report for Seattle with recommendations.",
    new(settings)))
{
    Console.Write(chunk);
}

Console.WriteLine();

Tokens arrive as they’re generated — the user sees the response build in real-time instead of waiting for the entire completion.

Step 8: Putting It All Together

Here’s a complete interactive chat with plugins and streaming:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.AzureOpenAI;

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion("chat-deployment",
        Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!,
        Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!)
    .Build();

kernel.Plugins.AddFromType<WeatherPlugin>();

var chatService = kernel.GetRequiredService<IChatCompletionService>();
var history = new ChatHistory("You are a helpful assistant with access to weather data.");
var settings = new AzureOpenAIPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

Console.WriteLine("Weather Assistant (type 'exit' to quit)\n");

while (true)
{
    Console.Write("You: ");
    var input = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(input) || input.Equals("exit", StringComparison.OrdinalIgnoreCase))
        break;

    history.AddUserMessage(input);

    Console.Write("Assistant: ");

    var fullResponse = "";
    await foreach (var chunk in chatService.GetStreamingChatMessageContentsAsync(
        history, settings, kernel))
    {
        Console.Write(chunk.Content);
        fullResponse += chunk.Content;
    }

    history.AddAssistantMessage(fullResponse);
    Console.WriteLine("\n");
}

This gives you a streaming, multi-turn, tool-enabled AI chat — the pattern that powers most production SK applications.

Common Setup Issues

“Could not load type ‘Azure.AI.OpenAI.AzureOpenAIClient’” — Version mismatch. Make sure you’re using Microsoft.SemanticKernel 1.71+ which aligns with Azure.AI.OpenAI 2.1+. Run dotnet restore after updating.

“401 Unauthorized” — Your API key is wrong or expired. Double-check the key in the Azure Portal under your resource’s Keys and Endpoint section. See Fix 401 Unauthorized Azure OpenAI Errors.

“DeploymentNotFound” — You’re using the model name instead of the deployment name. In Azure OpenAI, these are different. The deployment name is what you specified when creating the deployment, not the underlying model name. See Fix Model Not Found Errors.

“FunctionChoiceBehavior has no effect” — Make sure you’re passing the settings to the invocation. The settings object must be provided as new KernelArguments(settings) to InvokePromptAsync, or as the executionSettings parameter to GetChatMessageContentsAsync.
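For example, both of these pass the settings correctly — a sketch reusing the kernel, chatService, history, and settings objects from the steps above:

```csharp
// Prompt-style invocation: wrap the settings in KernelArguments
var result = await kernel.InvokePromptAsync(
    "What's the weather in Seattle?",
    new KernelArguments(settings));

// Chat-style invocation: pass the settings and the kernel
// (the kernel is needed so SK can resolve and invoke plugins)
var message = await chatService.GetChatMessageContentAsync(
    history, settings, kernel);
```

A common mistake is building the settings object and never passing it anywhere — the compiler won't warn you, the AI just silently stops calling tools.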

What’s Next

You now have a working Semantic Kernel application with chat, plugins, and streaming. From here:

⚠ Production Considerations

  • Never hardcode API keys in source code. Use environment variables, user secrets (development), or Azure Key Vault (production). The workshop uses environment variables for clarity.
  • Don't skip the Description attribute on KernelFunction methods. Without it, the LLM can't determine when to call your function — it relies on the description text for tool selection.
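For local development, .NET user secrets keep keys out of the repo. A minimal sketch using the standard dotnet user-secrets tooling; the configuration key names here are just a suggested convention:

```shell
# One-time: adds a UserSecretsId to the project file
dotnet user-secrets init

# Store the values outside the source tree
dotnet user-secrets set "AzureOpenAI:Endpoint" "https://your-resource.openai.azure.com/"
dotnet user-secrets set "AzureOpenAI:Key" "your-api-key"
```

Read them back at startup with Microsoft.Extensions.Configuration (for example, ConfigurationBuilder with AddUserSecrets) instead of Environment.GetEnvironmentVariable.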


🧠 Architect’s Note

This workshop produces a console app. For production, register the kernel builder in DI and inject it into your services. The coding patterns here — plugins, function calling, streaming — transfer directly to ASP.NET Core, Blazor, and Worker Service applications.
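As a sketch of that DI registration in ASP.NET Core — assuming the same environment variables as the workshop; AddKernel registers a transient Kernel plus its underlying services in the container:

```csharp
// Program.cs of an ASP.NET Core app
var builder = WebApplication.CreateBuilder(args);

// Register Semantic Kernel in the DI container
builder.Services.AddKernel()
    .AddAzureOpenAIChatCompletion(
        deploymentName: "chat-deployment",
        endpoint: Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!,
        apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!);

var app = builder.Build();

// Kernel can now be injected anywhere — e.g. a minimal API endpoint
app.MapGet("/ask", async (Kernel kernel, string q) =>
    (await kernel.InvokePromptAsync(q)).ToString());

app.Run();
```

Plugins registered on the builder travel with every injected Kernel instance, so the console patterns above carry over unchanged.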

AI-Friendly Summary

Key Takeaways

  • Install Microsoft.SemanticKernel NuGet package to get started
  • Create a Kernel with AddAzureOpenAIChatCompletion and build it
  • Use [KernelFunction] and [Description] attributes to create plugins
  • Enable FunctionChoiceBehavior.Auto() for automatic tool invocation
  • Use InvokePromptStreamingAsync for real-time streaming responses

Implementation Checklist

  • Create a new .NET console project
  • Install Microsoft.SemanticKernel NuGet package
  • Configure kernel with Azure OpenAI or OpenAI credentials
  • Make a basic chat completion call
  • Create a plugin class with KernelFunction methods
  • Register the plugin and enable auto function calling
  • Add streaming for real-time output

Frequently Asked Questions

What do I need to start using Semantic Kernel?

.NET 8 or later, an Azure OpenAI resource (or OpenAI API key), and the Microsoft.SemanticKernel NuGet package. That's it — no Python, no separate runtime, no complex toolchain.

Can I use Semantic Kernel with a local model instead of Azure OpenAI?

Yes. Semantic Kernel supports Ollama and any provider implementing IChatClient from Microsoft.Extensions.AI. For local development, you can use Ollama with llama3 or phi-3. Replace AddAzureOpenAIChatCompletion with AddOllamaChatCompletion.
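A sketch of that swap, assuming the Microsoft.SemanticKernel.Connectors.Ollama package is installed (it has shipped as prerelease in some versions) and Ollama is serving llama3 on its default port:

```csharp
using Microsoft.SemanticKernel;

// Point the kernel at a local Ollama server instead of Azure OpenAI
var kernel = Kernel.CreateBuilder()
    .AddOllamaChatCompletion(
        modelId: "llama3",
        endpoint: new Uri("http://localhost:11434"))
    .Build();

var response = await kernel.InvokePromptAsync("Say hello from a local model.");
Console.WriteLine(response);
```

No API key is needed for a local server; the rest of the workshop code (plugins, streaming) works the same, though smaller local models are less reliable at function calling.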

Does Semantic Kernel work with .NET 8?

Yes. Semantic Kernel supports .NET 8, .NET 9, and .NET 10. The examples in this workshop use .NET 10, but the core APIs remain compatible with earlier supported runtimes.

How do I add streaming to Semantic Kernel?

Use InvokePromptStreamingAsync instead of InvokePromptAsync. The method returns an IAsyncEnumerable<StreamingKernelContent> that you can iterate with await foreach.


#Semantic Kernel #.NET AI #Azure OpenAI #Workshop #Getting Started