
Build a Multi-Agent System in .NET with Microsoft Agent Framework

Verified Apr 2026 · Advanced · Original · .NET 10 · Microsoft.SemanticKernel.Agents.Core 1.x · Microsoft.SemanticKernel 1.54.0
By Rajesh Mishra · Mar 12, 2026 · 16 min read
In 30 Seconds

Workshop for building a multi-agent pipeline in .NET using Microsoft Agent Framework. Implements a three-agent system (Researcher, Analyst, Writer) with sequential execution, group chat, and handoff patterns. Includes OpenTelemetry tracing for production observability.


What You’ll Build

A three-agent research pipeline:

  1. Researcher — Searches knowledge bases and returns raw findings
  2. Analyst — Processes research data, extracts key metrics, identifies patterns
  3. Writer — Takes analyzed findings and produces a polished report

You’ll wire these together in three patterns: sequential (fixed order), group chat (collaborative), and handoff (dynamic delegation).

Prerequisites

  • .NET 10 SDK
  • Azure OpenAI resource with at least one chat deployment and one lower-cost worker deployment
  • Basic familiarity with Semantic Kernel and AI agent concepts

Step 1: Project Setup

dotnet new console -n MultiAgentPipeline
cd MultiAgentPipeline
dotnet add package Microsoft.SemanticKernel
dotnet add package Microsoft.SemanticKernel.Agents.Core
dotnet add package OpenTelemetry.Exporter.Console
dotnet add package OpenTelemetry.Extensions.Hosting

Step 2: Define Agent Tools

Each agent gets an exclusive set of tools; keeping tool domains separate lets the orchestrator route work unambiguously.

ResearchTools.cs

using System.ComponentModel;
using Microsoft.SemanticKernel;

public class ResearchTools
{
    private readonly Dictionary<string, string> _knowledgeBase = new()
    {
        ["cloud-spending"] = "Global cloud infrastructure spending reached $84B in Q1 2026, " +
            "up 22% year-over-year. AWS leads at 31%, Azure at 25%, GCP at 12%.",
        ["ai-adoption"] = "Enterprise AI adoption hit 72% in 2026. Top use cases: " +
            "customer service (45%), code generation (38%), data analysis (35%).",
        ["dotnet-market"] = ".NET developer population reached 8.2M in 2026. " +
            "C# ranks #5 on TIOBE. ASP.NET Core serves 14% of web traffic globally.",
        ["ai-infrastructure"] = "GPU spending by hyperscalers up 64% in 2026. " +
            "Inference costs dropped 40% due to model distillation and quantization."
    };

    [KernelFunction("search_knowledge")]
    [Description("Search the knowledge base for information on a topic. Returns raw research text.")]
    public string SearchKnowledge(
        [Description("Topic keyword: cloud-spending, ai-adoption, dotnet-market, or ai-infrastructure")]
        string topic)
    {
        return _knowledgeBase.TryGetValue(topic.ToLowerInvariant(), out var content)
            ? content
            : $"No data found for topic '{topic}'. Available: {string.Join(", ", _knowledgeBase.Keys)}";
    }

    [KernelFunction("list_topics")]
    [Description("List all available research topics in the knowledge base")]
    public string ListTopics() =>
        $"Available topics: {string.Join(", ", _knowledgeBase.Keys)}";
}

AnalysisTools.cs

using System.ComponentModel;
using System.Text.RegularExpressions;
using Microsoft.SemanticKernel;

public partial class AnalysisTools
{
    [KernelFunction("extract_metrics")]
    [Description("Extract numerical metrics from research text. Returns a structured list of metric-value pairs.")]
    public string ExtractMetrics(
        [Description("Research text to analyze for metrics")] string text)
    {
        var numbers = NumberPattern().Matches(text);
        if (numbers.Count == 0)
            return "No numerical metrics found in the provided text.";

        var metrics = numbers.Select((m, i) =>
        {
            // Get surrounding context (30 chars each side)
            var start = Math.Max(0, m.Index - 30);
            var end = Math.Min(text.Length, m.Index + m.Length + 30);
            var context = text[start..end].Replace("\n", " ").Trim();
            return $"  {i + 1}. {m.Value} — \"{context}\"";
        });

        return $"Extracted {numbers.Count} metrics:\n{string.Join("\n", metrics)}";
    }

    [KernelFunction("compare_metrics")]
    [Description("Compare two values and compute percentage difference")]
    public string CompareMetrics(
        [Description("First value")] double value1,
        [Description("Second value")] double value2,
        [Description("Label for this comparison")] string label)
    {
        var diff = value2 - value1;
        var pctChange = value1 != 0 ? (diff / value1) * 100 : 0;
        var direction = pctChange >= 0 ? "increase" : "decrease";

        return $"{label}: {value1} → {value2} ({Math.Abs(pctChange):F1}% {direction})";
    }

    [GeneratedRegex(@"\d+\.?\d*[%BMK]?", RegexOptions.Compiled)]
    private static partial Regex NumberPattern();
}
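
Before attaching these tools to an agent, it is worth exercising them directly in the console app; a quick smoke test using the `AnalysisTools` class above (the sample text is illustrative):

```csharp
var tools = new AnalysisTools();

// The regex should surface tokens like "84B" and "22%" with surrounding context.
Console.WriteLine(tools.ExtractMetrics(
    "Cloud spending reached $84B in Q1 2026, up 22% year-over-year."));

// Percentage-change helper: (103 - 84) / 84 ≈ 22.6% increase.
Console.WriteLine(tools.CompareMetrics(84, 103, "Quarterly cloud spend ($B)"));
```

If the output looks wrong here, it will look wrong inside the agent too, but will be much harder to debug through an LLM round trip.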

Step 3: Create the Agents

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.Connectors.AzureOpenAI;

var endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")
    ?? throw new InvalidOperationException("Set AZURE_OPENAI_ENDPOINT");
var key = Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")
    ?? throw new InvalidOperationException("Set AZURE_OPENAI_KEY");

// Researcher — has search tools, uses a lower-cost deployment for cost efficiency
var researchKernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion("research-mini", endpoint, key)
    .Build();
researchKernel.Plugins.AddFromObject(new ResearchTools(), "research");

var researcher = new ChatCompletionAgent
{
    Name = "Researcher",
    Instructions = """
        You are a research specialist. Your job is to find relevant data.
        
        When given a research question:
        1. List available topics
        2. Search all relevant topics
        3. Present raw findings — do NOT analyze or interpret
        
        Always cite which topic each piece of data came from.
        When done, say "RESEARCH COMPLETE" at the end of your response.
        """,
    Kernel = researchKernel
};

// Analyst — has analysis tools, uses the primary chat deployment for reasoning
var analysisKernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion("analysis-prod", endpoint, key)
    .Build();
analysisKernel.Plugins.AddFromObject(new AnalysisTools(), "analysis");

var analyst = new ChatCompletionAgent
{
    Name = "Analyst",
    Instructions = """
        You are a data analyst. You work with research findings.
        
        When given research data:
        1. Extract key metrics using your tools
        2. Identify trends and patterns
        3. Highlight the 3 most significant findings
        
        Always ground your analysis in specific numbers.
        Do NOT write a report — just provide structured analysis.
        When done, say "ANALYSIS COMPLETE" at the end of your response.
        """,
    Kernel = analysisKernel
};

// Writer — no special tools, uses a writing-optimized deployment
var writerKernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion("writer-prod", endpoint, key)
    .Build();

var writer = new ChatCompletionAgent
{
    Name = "Writer",
    Instructions = """
        You are a technical writer who creates executive-ready reports.
        
        When given analyzed research:
        1. Write a clear, concise report with an executive summary
        2. Use specific numbers and comparisons from the analysis
        3. Include 2-3 forward-looking recommendations
        4. Keep it under 500 words
        
        Make it professional but readable.
        When done, say "REPORT COMPLETE" at the end of your response.
        """,
    Kernel = writerKernel
};

Step 4: Sequential Execution Pattern

The simplest multi-agent pattern. Each agent takes one turn in order:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.Agents.Chat;
using Microsoft.SemanticKernel.ChatCompletion; // AuthorRole lives here

// Sequential: Researcher → Analyst → Writer
var sequentialChat = new AgentGroupChat(researcher, analyst, writer)
{
    ExecutionSettings = new()
    {
        SelectionStrategy = new SequentialSelectionStrategy(),
        // Writer ends the run with "REPORT COMPLETE"; MaximumIterations caps total agent turns.
        TerminationStrategy = new RegexTerminationStrategy("REPORT COMPLETE") { MaximumIterations = 12 }
    }
};

// Kick off the pipeline
sequentialChat.AddChatMessage(new ChatMessageContent(
    AuthorRole.User,
    "Create a brief report on the current state of AI adoption in enterprise and " +
    "the .NET ecosystem's position in this landscape."));

Console.WriteLine("=== Sequential Pipeline ===\n");

await foreach (var message in sequentialChat.InvokeAsync())
{
    Console.WriteLine($"--- [{message.AuthorName}] ---");
    Console.WriteLine(message.Content);
    Console.WriteLine();
}

How it flows:

  1. Researcher searches all relevant topics, returns raw data
  2. Analyst extracts metrics, finds patterns
  3. Writer produces the final report

Each agent sees the messages from all previous agents. The Analyst sees Researcher’s findings. The Writer sees both.
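
Because the transcript is shared, you can dump it after the run to see exactly what each agent received. A minimal sketch, assuming `GetChatMessagesAsync` on `AgentGroupChat` (available in recent Semantic Kernel Agents builds, which yield the newest messages first):

```csharp
// Inspect the shared transcript after the pipeline finishes.
await foreach (var msg in sequentialChat.GetChatMessagesAsync())
{
    var preview = msg.Content is { Length: > 80 } ? msg.Content[..80] + "..." : msg.Content;
    Console.WriteLine($"{msg.AuthorName ?? "user"}: {preview}");
}
```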

Step 5: Group Chat with Custom Routing

For more dynamic collaboration, agents take turns based on content analysis:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.Agents.Chat;

public class ContentBasedSelectionStrategy : SelectionStrategy
{
    protected override Task<Agent> SelectAgentAsync(
        IReadOnlyList<Agent> agents,
        IReadOnlyList<ChatMessageContent> history,
        CancellationToken cancellationToken = default)
    {
        var lastMessage = history.LastOrDefault()?.Content ?? "";

        Agent selected;
        if (lastMessage.Contains("RESEARCH COMPLETE", StringComparison.OrdinalIgnoreCase))
        {
            selected = agents.First(a => a.Name == "Analyst");
        }
        else if (lastMessage.Contains("ANALYSIS COMPLETE", StringComparison.OrdinalIgnoreCase))
        {
            selected = agents.First(a => a.Name == "Writer");
        }
        else if (lastMessage.Contains("REPORT COMPLETE", StringComparison.OrdinalIgnoreCase))
        {
            // Pipeline is done — but termination strategy handles stopping
            selected = agents.First(a => a.Name == "Writer");
        }
        else
        {
            // Default: Researcher goes first
            selected = agents.First(a => a.Name == "Researcher");
        }

        return Task.FromResult(selected);
    }
}

// Keyword-based termination
public class KeywordTerminationStrategy : TerminationStrategy
{
    private readonly string _keyword;

    public KeywordTerminationStrategy(string keyword) => _keyword = keyword;

    protected override Task<bool> ShouldAgentTerminateAsync(
        Agent agent,
        IReadOnlyList<ChatMessageContent> history,
        CancellationToken cancellationToken = default)
    {
        var last = history.LastOrDefault()?.Content ?? "";
        return Task.FromResult(last.Contains(_keyword, StringComparison.OrdinalIgnoreCase));
    }
}

Wire it up:

var groupChat = new AgentGroupChat(researcher, analyst, writer)
{
    ExecutionSettings = new()
    {
        SelectionStrategy = new ContentBasedSelectionStrategy(),
        TerminationStrategy = new KeywordTerminationStrategy("REPORT COMPLETE")
    }
};
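
The custom strategies plug into the same invocation loop as Step 4; a minimal run sketch:

```csharp
groupChat.AddChatMessage(new ChatMessageContent(
    AuthorRole.User,
    "Create a brief report on enterprise AI adoption and .NET's position."));

await foreach (var message in groupChat.InvokeAsync())
{
    Console.WriteLine($"--- [{message.AuthorName}] ---");
    Console.WriteLine(message.Content);
}
```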

Step 6: Add OpenTelemetry Tracing

Production multi-agent systems need tracing. You must know which agent said what, which tools were called, and how long each step took.

using OpenTelemetry;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

// Semantic Kernel's GenAI telemetry is experimental; opt in before building the
// provider (switch name current as of SK 1.5x — verify against your version).
AppContext.SetSwitch("Microsoft.SemanticKernel.Experimental.GenAI.EnableOTelDiagnostics", true);

var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .AddSource("Microsoft.SemanticKernel*")
    .SetResourceBuilder(ResourceBuilder.CreateDefault()
        .AddService("MultiAgentPipeline"))
    .AddConsoleExporter()
    .Build();

For production, replace AddConsoleExporter() with AddOtlpExporter() pointing to your collector (Aspire Dashboard, Jaeger, or Azure Monitor):

.AddOtlpExporter(opts =>
{
    opts.Endpoint = new Uri("http://localhost:4317");
})

Each trace shows:

  • Agent selection decisions
  • LLM calls per agent (model, tokens, latency)
  • Tool invocations (arguments, results)
  • Total pipeline duration

Step 7: Error Handling and Resilience

Multi-agent pipelines need guardrails:

try
{
    var messageCount = 0;
    await foreach (var message in groupChat.InvokeAsync())
    {
        messageCount++;
        Console.WriteLine($"[{message.AuthorName}]: {message.Content?[..Math.Min(200, message.Content.Length)]}...");

        // Safety valve — cost ceiling
        if (messageCount > 20)
        {
            Console.WriteLine("WARNING: Pipeline exceeded 20 messages. Stopping.");
            break;
        }
    }
}
catch (HttpRequestException ex) when (ex.StatusCode == System.Net.HttpStatusCode.TooManyRequests)
{
    Console.WriteLine("Rate limited by Azure OpenAI. Implement retry with backoff.");
}
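
The catch above only reports the 429. A small self-contained helper (illustrative names, not a framework API) shows what retry with exponential backoff and jitter can look like:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static class Resilience
{
    // Illustrative helper (not a framework API): retry an async operation on
    // HTTP 429, doubling the wait each attempt and adding a little jitter.
    public static async Task<T> RetryWithBackoffAsync<T>(
        Func<Task<T>> operation,
        int maxAttempts = 4,
        TimeSpan? baseDelay = null)
    {
        var delay = baseDelay ?? TimeSpan.FromSeconds(1);
        var jitter = new Random();

        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (HttpRequestException ex)
                when (ex.StatusCode == HttpStatusCode.TooManyRequests && attempt < maxAttempts)
            {
                // Backoff schedule: baseDelay * 2^(attempt - 1) + up to 250 ms jitter.
                var wait = TimeSpan.FromMilliseconds(
                    delay.TotalMilliseconds * Math.Pow(2, attempt - 1) + jitter.Next(0, 250));
                Console.WriteLine($"429 received; attempt {attempt}/{maxAttempts}, retrying in {wait.TotalSeconds:F1}s");
                await Task.Delay(wait);
            }
        }
    }
}
```

Wrap a whole pipeline run rather than individual streamed messages, e.g. `await Resilience.RetryWithBackoffAsync(() => RunPipelineAsync(groupChat))`, where `RunPipelineAsync` is your own method that drains `InvokeAsync` and returns the transcript.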

Running the Pipeline

export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export AZURE_OPENAI_KEY="your-key"
dotnet run

Sample output:

=== Sequential Pipeline ===

--- [Researcher] ---
I searched all available topics. Here are the raw findings:

From ai-adoption: Enterprise AI adoption hit 72% in 2026. Top use cases: 
customer service (45%), code generation (38%), data analysis (35%).

From dotnet-market: .NET developer population reached 8.2M in 2026.
C# ranks #5 on TIOBE. ASP.NET Core serves 14% of web traffic globally.
...
RESEARCH COMPLETE

--- [Analyst] ---
Key metrics extracted:
  1. 72% — Enterprise AI adoption rate
  2. 8.2M — .NET developer population
  3. 45% — Customer service AI usage
...
ANALYSIS COMPLETE

--- [Writer] ---
# AI Enterprise Adoption & .NET Position — Q1 2026

## Executive Summary
Enterprise AI adoption has reached 72%, establishing artificial intelligence as...
...
REPORT COMPLETE

What to Try Next

  • Add a Reviewer agent that checks the Writer’s output for accuracy
  • Implement parallel execution — Researcher searches multiple topics simultaneously
  • Store intermediate results in Azure Cosmos DB for checkpointing

⚠ Production Considerations

  • Multi-agent loops are the biggest risk. Always set MaxMessages termination (start at 20) and add a cost ceiling. One runaway loop can burn through your entire monthly LLM budget in minutes.
  • Agent role overlap causes confusion. If both Researcher and Analyst can search, the orchestrator can't route correctly. Each agent should have exclusive tool domains.
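
As a sketch of the first guardrail, the keyword strategy from Step 5 can carry a hard turn cap via `MaximumIterations`, a property on Semantic Kernel's `TerminationStrategy` base class (the default is low; verify the exact behavior against your installed package version):

```csharp
// Stop when the Writer signals completion, but never exceed 20 agent turns.
var guardedTermination = new KeywordTerminationStrategy("REPORT COMPLETE")
{
    MaximumIterations = 20
};
```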


🧠 Architect’s Note

Multi-agent systems are distributed systems that happen to use LLMs. Apply the same rigor: tracing, timeouts, circuit breakers, and idempotent operations. The LLM is the least reliable component — build for its failure modes.

AI-Friendly Summary

Workshop for building a multi-agent pipeline in .NET using Microsoft Agent Framework. Implements a three-agent system (Researcher, Analyst, Writer) with sequential execution, group chat, and handoff patterns. Includes OpenTelemetry tracing for production observability.

Key Takeaways

  • Three agents: Researcher (search tools), Analyst (data tools), Writer (output formatting)
  • Sequential pattern: agents take turns in defined order
  • Group chat pattern: agents collaborate with selection strategy
  • Handoff pattern: agents delegate to each other dynamically
  • OpenTelemetry traces every agent turn and tool call

Implementation Checklist

  • Create separate Kernel instances per agent with specialized tools
  • Define system prompts that prevent agents from exceeding their role
  • Choose execution pattern (sequential, group chat, handoff)
  • Set termination strategy to prevent infinite loops
  • Add OpenTelemetry tracing before deploying

Frequently Asked Questions

When should I use multi-agent instead of a single agent?

Multi-agent systems are warranted when tasks require fundamentally different capabilities (searching vs. coding vs. reviewing), when agents need to check each other's work, or when tool count per agent exceeds 15. If a single agent can handle it, don't add complexity.

How do agents communicate with each other?

In Microsoft Agent Framework, agents communicate through messages in a shared chat context. The runtime manages message routing, turn-taking, and termination. You don't need to build custom communication logic — the framework handles it.

What's the cost of multi-agent vs single-agent?

Multi-agent systems make 3-10x more LLM calls per user request. Each agent turn is an LLM invocation. For three agents averaging 2 turns each, that's 6 LLM calls minimum. Use a lower-cost mini deployment for workers that don't need deep reasoning to manage costs.


#Multi-Agent #Microsoft Agent Framework #.NET AI #Workshop #OpenTelemetry