Semantic Kernel Planners Deprecated: Migrate to Function Calling

By Rajesh Mishra · Mar 21, 2026 · 13 min read
Verified Mar 2026 .NET 9 Microsoft.SemanticKernel 1.54.0
In 30 Seconds

SK's HandlebarsPlanner and FunctionCallingStepwisePlanner are deprecated in SK 1.x. Migrate to Auto Function Calling by setting FunctionChoiceBehavior.Auto() in OpenAIPromptExecutionSettings. The model iteratively calls registered [KernelFunction] plugins without an upfront plan. Use IAutoFunctionInvocationFilter to observe which functions are called. Adopt Agent Framework (ChatCompletionAgent) only when you need autonomous multi-step loops, not as a direct planner replacement.

The Deprecation — What It Means

If you are using HandlebarsPlanner or FunctionCallingStepwisePlanner in your Semantic Kernel applications, you are sitting on deprecated code. As of SK 1.54.0, both planners carry experimental deprecation markers that require explicit compiler warning suppression:

  • HandlebarsPlanner (SKEXP0060)
  • FunctionCallingStepwisePlanner (SKEXP0061)

These warnings are not noise — they are signals that the planners will be removed in a future SK 2.x release. The SK team announced this direction in GitHub release notes as function calling matured to the point where planners became redundant. The underlying reason is architectural.

Planners introduced two failure points. First, the model had to generate a complete execution plan — a sequence of function calls with arguments — before executing anything. Second, the model had to execute that pre-generated plan faithfully. Each LLM call added latency, cost, and an opportunity for the plan to become stale or invalid by the time execution reached it. If the plan contained a template error, a wrong argument, or a function call that returned an unexpected shape, the entire workflow failed. There was no mid-plan adaptation.

Auto function calling eliminates the plan generation step entirely. The model receives your goal and the list of available functions. It decides which function to call first, executes it, sees the result, then decides the next step. The model self-corrects in real time. Fewer LLM calls, fewer failure modes, simpler code.

This is not a superficial API change. It is a fundamentally better execution model. The migration is worth doing now rather than being forced into it by a breaking change.

Before: HandlebarsPlanner Pattern (Deprecated)

This is the old pattern shown for migration reference. Do not use it in new code.

// ⚠️ DEPRECATED — do not use this pattern in new code
#pragma warning disable SKEXP0060

using Microsoft.SemanticKernel.Planning.Handlebars;

var planner = new HandlebarsPlanner(new HandlebarsPlannerOptions
{
    AllowLoops = true,
    MaxTokens = 4096
});

// Planner generates a full execution plan upfront
var plan = await planner.CreatePlanAsync(kernel,
    "Research the topic, summarize it, then create a formatted report.");

// Execute the pre-generated plan
var result = await plan.InvokeAsync(kernel);
Console.WriteLine(result);

#pragma warning restore SKEXP0060

The problems with this approach are compounded at scale:

  • Upfront plan may not match actual results. If ResearchPlugin.Search returns fewer results than expected, the pre-generated template cannot adapt its subsequent steps.
  • Minimum two LLM calls. One call to generate the Handlebars template plan, another to execute each step in it. On multi-step workflows, this multiplies.
  • Template syntax errors break everything. The model generates Handlebars syntax ({{plugin.function arg1=... arg2=...}}). A malformed template fails at execution time, not at generation time.
  • No mid-plan adaptation. The plan is static once generated. If step 3 depends on step 2’s output being a certain shape and it isn’t, the workflow has no mechanism to recover.

After: Auto Function Calling (The Direct Replacement)

The migration removes the planner and replaces it with FunctionChoiceBehavior.Auto() in the execution settings. Your plugins remain unchanged.

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Same plugins work without changes
kernel.Plugins.AddFromType<ResearchPlugin>();
kernel.Plugins.AddFromType<SummaryPlugin>();
kernel.Plugins.AddFromType<ReportPlugin>();

var settings = new OpenAIPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
    // No MaxAutoInvokeAttempts needed for most cases — default 128 is sufficient
};

var result = await kernel.InvokePromptAsync(
    "Research the topic, summarize it, then create a formatted report.",
    new KernelArguments(settings));

Console.WriteLine(result.GetValue<string>());

The advantages are immediate:

  • One application call; the model iterates internally. SK handles the function calling loop automatically. The model calls functions, receives results, and continues until it can produce a final answer, with no separate plan generation call.
  • Adapts to intermediate results. If ResearchPlugin.Search returns three results instead of ten, the model adjusts its summarization request accordingly. No upfront plan to invalidate.
  • Simpler code. No planner class, no options object, no template generation step. Set a behavior flag and pass the goal.
  • Your plugins with [KernelFunction] work identically. This is the key migration benefit: you change zero plugin code. The [KernelFunction] and [Description] attributes that SK uses for function registration are the same ones that Auto Function Calling uses to build the tool schema sent to the model.
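As a concrete sketch, here is what such a plugin might look like. The ResearchPlugin class, its SearchAsync method, and the stubbed backend call are illustrative assumptions, not code from the original article or any real library:

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

// Hypothetical plugin sketch: the [KernelFunction] and [Description]
// attributes below are what SK serializes into the tool schema, whether
// the caller is a deprecated planner or Auto Function Calling.
public class ResearchPlugin
{
    [KernelFunction, Description("Searches for articles about a topic.")]
    public async Task<string> SearchAsync(
        [Description("The topic to search for")] string topic,
        [Description("Maximum number of results to return")] int maxResults = 10)
    {
        // Replace with a call to a real search backend.
        await Task.CompletedTask;
        return $"Top {maxResults} results for '{topic}'";
    }
}
```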

For a deep dive into how function calling works under the hood — including how SK serializes your [KernelFunction] attributes into the OpenAI tool schema — see Function Calling and Tool Use with Semantic Kernel.

Before: FunctionCallingStepwisePlanner (Deprecated)

The FunctionCallingStepwisePlanner took a different approach, built around a ReAct-style (Reason + Act) loop in which the model reasoned explicitly about each step before acting. It avoided the upfront-plan problem but carried its own overhead: the model had to articulate a reasoning chain at every iteration rather than simply calling the right function.

// ⚠️ DEPRECATED — do not use this pattern
#pragma warning disable SKEXP0061

using Microsoft.SemanticKernel.Planning;

var plannerConfig = new FunctionCallingStepwisePlannerOptions
{
    MaxIterations = 15,
    MaxTokens = 4096
};
var planner = new FunctionCallingStepwisePlanner(plannerConfig);

var result = await planner.ExecuteAsync(kernel, "Book a meeting room and send invites.");
Console.WriteLine(result.FinalAnswer);

#pragma warning restore SKEXP0061

Beyond the deprecation issue, this planner pattern spent tokens on explicit reasoning text at every iteration. Modern GPT-4o and GPT-4 Turbo models reason internally when deciding which function to call — that explicit token-heavy reasoning loop is unnecessary overhead. Auto Function Calling eliminates it.

After: Agent Framework (For True Agent Loops)

When migrating from FunctionCallingStepwisePlanner, the natural question is whether to use the Agent Framework instead of plain Auto Function Calling. The answer depends on your specific requirements.

Use Auto Function Calling first. For the vast majority of planner use cases — sequential task automation, chatbots with tools, RAG with tool augmentation — FunctionChoiceBehavior.Auto() is sufficient. It has less overhead, simpler code, and no additional dependencies.

Use Agent Framework only when you need capabilities that go beyond a single call. The ChatCompletionAgent adds value when you need persistent thread state across multiple user turns, named agent personas with distinct instructions, or multi-agent coordination via AgentGroupChat.

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.ChatCompletion;

var agent = new ChatCompletionAgent
{
    Kernel = kernel,
    Name = "WorkflowAgent",
    Instructions = "You are an assistant that books meeting rooms and sends calendar invites. Use the available tools to complete tasks autonomously."
};

var thread = new ChatHistoryAgentThread();
var message = new ChatMessageContent(AuthorRole.User, "Book a meeting room for tomorrow at 2pm and send invites to the team.");

await foreach (var response in agent.InvokeAsync(message, thread))
{
    Console.WriteLine(response.Content);
}

The ChatHistoryAgentThread maintains conversation state across invocations. If you call agent.InvokeAsync again with the same thread, it carries forward the history of what was booked, what invites were sent, and what the user confirmed. That persistent state is what FunctionCallingStepwisePlanner could not provide.
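A short sketch of that follow-up turn, reusing the agent and thread variables from the snippet above (the follow-up message text is an invented example):

```csharp
// Second turn on the same thread: the agent retains the booking context
// from the first invocation and can act on it.
var followUp = new ChatMessageContent(AuthorRole.User,
    "Actually, move that meeting to 3pm and add the finance team.");

await foreach (var response in agent.InvokeAsync(followUp, thread))
{
    Console.WriteLine(response.Content);
}
```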

For a comparison of when to choose Agent Framework versus plain SK versus Microsoft.Extensions.AI, see Microsoft.Extensions.AI vs Semantic Kernel vs Agent Framework.

Agent Framework use cases:

  • Persistent thread state across multiple turns — The agent remembers what happened in previous calls. ChatHistoryAgentThread maintains full conversation context.
  • Agent-to-agent communication — AgentGroupChat coordinates multiple specialized agents (a researcher, a writer, a reviewer) on a shared task.
  • Named agent personas with distinct instructions — Each ChatCompletionAgent has its own Instructions field, allowing specialized behavior per agent role.

For most single-workflow planner replacements, Auto Function Calling is simpler and sufficient.

Observing Function Calls with IAutoFunctionInvocationFilter

One concern with removing planners is losing visibility into what the model is doing. Planners made function selection explicit — you could inspect the generated plan. With Auto Function Calling, function selection happens inside the LLM call loop.

IAutoFunctionInvocationFilter restores that visibility. Implement it to intercept every auto-invoked function call before and after execution.

using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;

public class FunctionCallLoggingFilter : IAutoFunctionInvocationFilter
{
    private readonly ILogger<FunctionCallLoggingFilter> _logger;

    public FunctionCallLoggingFilter(ILogger<FunctionCallLoggingFilter> logger)
    {
        _logger = logger;
    }

    public async Task OnAutoFunctionInvocationAsync(
        AutoFunctionInvocationContext context,
        Func<AutoFunctionInvocationContext, Task> next)
    {
        _logger.LogInformation(
            "Auto-invoking {Plugin}.{Function} with args: {Args}",
            context.Function.PluginName,
            context.Function.Name,
            string.Join(", ", context.Arguments.Select(a => $"{a.Key}={a.Value}")));

        await next(context);

        var resultText = context.Result?.GetValue<string>();
        _logger.LogInformation(
            "{Function} returned: {Result}",
            context.Function.Name,
            resultText?.Substring(0, Math.Min(100, resultText.Length)));
    }
}

Register the filter via dependency injection or directly on the kernel:

// Via DI
builder.Services.AddSingleton<IAutoFunctionInvocationFilter, FunctionCallLoggingFilter>();

// OR via kernel directly
kernel.AutoFunctionInvocationFilters.Add(new FunctionCallLoggingFilter(logger));

The filter gives you exactly what inspecting a planner’s generated plan gave you — the sequence of function calls and their arguments — but in real time as execution happens rather than as a static preview. You can also use filters to implement circuit breaking: if a function has been called more than N times in a single chain, cancel further invocations by not calling next(context).
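That circuit-breaking idea can be sketched as a second filter. This is an illustrative sketch, not a library feature: the per-function limit of 5 and the fallback message are arbitrary choices, and because a filter registered on the kernel lives as long as the kernel, the counts below accumulate across chains unless you reset them.

```csharp
using Microsoft.SemanticKernel;

public class CircuitBreakerFilter : IAutoFunctionInvocationFilter
{
    private const int MaxCallsPerFunction = 5; // illustrative limit
    private readonly Dictionary<string, int> _callCounts = new();

    public async Task OnAutoFunctionInvocationAsync(
        AutoFunctionInvocationContext context,
        Func<AutoFunctionInvocationContext, Task> next)
    {
        var key = $"{context.Function.PluginName}.{context.Function.Name}";
        _callCounts[key] = _callCounts.GetValueOrDefault(key) + 1;

        if (_callCounts[key] > MaxCallsPerFunction)
        {
            // Not calling next(context) skips the actual invocation; the
            // model sees this string as the function's result instead.
            context.Result = new FunctionResult(context.Function,
                $"Call limit reached for {key}. Stop calling this function.");
            return;
        }

        await next(context);
    }
}
```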

Migration Decision Table

  Scenario                          Recommended Approach
  Sequential task automation        Auto Function Calling
  Simple chatbot with tools         Auto Function Calling
  RAG with tool augmentation        Auto Function Calling
  Multi-agent coordination          Agent Framework
  Long-running stateful workflows   Agent Framework
  Strict execution order required   Manual C# orchestration

The “Strict execution order required” row is important. If your business logic demands that validation always runs before processing, and that notification always runs after processing, do not rely on the model’s discretion to enforce that order. Write explicit C# code that calls each [KernelFunction] in sequence. Auto Function Calling is powerful, but it is still a model making decisions. Critical business rules belong in code, not in model behavior.
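That explicit orchestration can look like the following sketch. The plugin and function names (OrderPlugin.Validate and so on) are hypothetical, and an orderId variable is assumed to be in scope; the point is that the sequence lives in C#, not in the model:

```csharp
// Code-enforced order: validate, then process, then notify. The model is
// never asked to decide the sequence.
var args = new KernelArguments { ["orderId"] = orderId };

var validation = await kernel.InvokeAsync("OrderPlugin", "Validate", args);
if (validation.GetValue<bool>() is not true)
{
    throw new InvalidOperationException("Order failed validation.");
}

var processing = await kernel.InvokeAsync("OrderPlugin", "Process", args);

await kernel.InvokeAsync("NotificationPlugin", "NotifyCustomer",
    new KernelArguments
    {
        ["orderId"] = orderId,
        ["status"] = processing.GetValue<string>()
    });
```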

Complete Migration Checklist

Work through these steps for each planner usage in your codebase:

  1. Find all usages of HandlebarsPlanner and FunctionCallingStepwisePlanner. Search for both the class names and the #pragma warning disable SKEXP0060 / SKEXP0061 suppressions — those suppressions are reliable markers of planner code.

  2. Remove the #pragma warning disable suppressions. You will not need them once the planner code is gone. Leaving them in creates false safety — they suppress warnings without indicating what was suppressed or why.

  3. Replace each planner invocation with kernel.InvokePromptAsync(goal, new KernelArguments(settings)) where settings has FunctionChoiceBehavior.Auto(). The goal string is the same task description you were passing to CreatePlanAsync or ExecuteAsync.

  4. Verify your plugins still have [KernelFunction] attributes. They should — these attributes work identically with Auto Function Calling. No plugin code changes are needed.

  5. Add IAutoFunctionInvocationFilter for observability. This is the equivalent of inspecting a planner’s generated plan. Without it, you have no visibility into which functions the model is choosing.

  6. Test with representative goals from your production workload. Auto Function Calling may select functions in a different order than your planner templates specified. Verify the outputs match expectations for your real use cases, not just synthetic test cases.


⚠ Production Considerations

  • FunctionChoiceBehavior.Auto() may call functions in a different order than your HandlebarsPlanner template specified. If your workflow requires a strict execution order (e.g., always validate before processing), use FunctionChoiceBehavior.Required to force specific function calls, or implement the orchestration logic yourself rather than relying on the model's discretion.
  • Auto function calling can call functions in unexpected ways or invoke them more times than expected. Set MaxAutoInvokeAttempts on your execution settings (default is 128) to cap runaway function call loops. Monitor with IAutoFunctionInvocationFilter and add circuit-breaking logic for production workloads.
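To force a specific call, FunctionChoiceBehavior.Required can be scoped to a subset of functions. A hedged sketch with hypothetical plugin and function names; note that SK advertises Required functions only in the first request of the chain, precisely to avoid forcing an endless loop:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Hypothetical names: ValidationPlugin.Validate must be called first.
var validate = kernel.Plugins.GetFunction("ValidationPlugin", "Validate");

var settings = new OpenAIPromptExecutionSettings
{
    // The model must call Validate on the first request of the chain;
    // subsequent requests proceed as normal completions.
    FunctionChoiceBehavior = FunctionChoiceBehavior.Required(functions: [validate])
};
```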


🧠 Architect’s Note

Treat the planner deprecation as an opportunity to simplify your AI orchestration. Planners added complexity to solve a problem that function calling solves more elegantly. If your planner use case is complex enough to need strict orchestration control, write explicit C# orchestration code — not AI-generated plans — for reliability.


Key Takeaways

  • Planners are deprecated — migrate now to avoid a future breaking change in SK 2.x
  • FunctionChoiceBehavior.Auto() is the direct replacement for both planner types
  • Plugins decorated with [KernelFunction] work identically — no changes needed
  • IAutoFunctionInvocationFilter provides observability into which functions run
  • Agent Framework is NOT the direct planner replacement — it adds autonomous loops and multi-agent

Implementation Checklist

  • Remove HandlebarsPlanner and FunctionCallingStepwisePlanner usages
  • Set FunctionChoiceBehavior.Auto() in OpenAIPromptExecutionSettings
  • Keep all [KernelFunction] plugins unchanged — they work with Auto Function Calling
  • Add IAutoFunctionInvocationFilter for observability
  • Test with representative queries to ensure function selection matches expected behavior
  • Consider ChatCompletionAgent only if you need persistent agent state or multi-agent

Frequently Asked Questions

Are Semantic Kernel planners completely removed or just deprecated?

Planners are deprecated but not yet removed as of SK 1.54.0. They require suppressing compiler warnings with #pragma warning disable SKEXP0060 (HandlebarsPlanner) and SKEXP0061 (FunctionCallingStepwisePlanner). They will likely be removed in a future SK 2.x release. Migrate now to avoid a breaking change later.

Why did Microsoft deprecate SK planners?

Planners had two failure points: the LLM generates a plan, then the LLM executes the plan — double the LLM calls and double the error surface. Auto function calling via FunctionChoiceBehavior.Auto() is more reliable because the model generates and executes one step at a time, self-correcting as it goes.

What is FunctionChoiceBehavior.Auto() and how does it replace planners?

FunctionChoiceBehavior.Auto() tells SK to give the model all registered plugin functions and let it decide which to call, in what order, with what arguments. The model executes function calls iteratively — similar to how planners worked, but without generating an upfront plan that can become stale or invalid.

How do I observe which functions are being called during Auto function calling?

Implement IAutoFunctionInvocationFilter. Register it via kernel.AutoFunctionInvocationFilters.Add(filter) or via DI. In OnAutoFunctionInvocationAsync, read context.Function.Name and context.Arguments to log which function is being called with what parameters before invoking next(context).

Should I migrate directly to Auto Function Calling or to Agent Framework?

For most scenarios — replacing a planner with equivalent functionality — migrate to Auto Function Calling. It is simpler, has less overhead, and is sufficient for linear workflows. Only adopt Agent Framework (ChatCompletionAgent) when you need true autonomous looping, multi-agent coordination, or persistent state across sessions.

Does Auto Function Calling work with all Azure OpenAI models?

Auto Function Calling requires models that support function calling / tool use. GPT-4o, GPT-4o-mini, and GPT-4 Turbo all support it. Older models (GPT-3.5-Turbo with older API versions) may have limitations. Check the Azure OpenAI model capabilities documentation for your specific deployment.

How do I migrate HandlebarsPlanner templates to prompt engineering?

HandlebarsPlanner used templates like {{plugin.function arg1=... arg2=...}}. In Auto Function Calling, you don't write templates — you write a natural language goal in your prompt. The model determines which functions to call. Replace template-based orchestration with clear task descriptions that guide the model's function selection.



#Semantic Kernel #Planners Deprecated #Function Calling #Migration #.NET AI