The Deprecation — What It Means
If you are using HandlebarsPlanner or FunctionCallingStepwisePlanner in your Semantic Kernel applications, you are sitting on deprecated code. As of SK 1.54.0, both planners carry experimental deprecation markers that require explicit compiler warning suppression:
- HandlebarsPlanner → SKEXP0060
- FunctionCallingStepwisePlanner → SKEXP0061
These warnings are not noise — they are signals that the planners will be removed in a future SK 2.x release. The SK team announced this direction in GitHub release notes as function calling matured to the point where planners became redundant. The underlying reason is architectural.
Planners introduced two failure points. First, the model had to generate a complete execution plan — a sequence of function calls with arguments — before executing anything. Second, the model had to execute that pre-generated plan faithfully. Each LLM call added latency, cost, and an opportunity for the plan to become stale or invalid by the time execution reached it. If the plan contained a template error, a wrong argument, or a function call that returned an unexpected shape, the entire workflow failed. There was no mid-plan adaptation.
Auto function calling eliminates the plan generation step entirely. The model receives your goal and the list of available functions. It decides which function to call first, executes it, sees the result, then decides the next step. The model self-corrects in real time. Fewer LLM calls, fewer failure modes, simpler code.
This is not a superficial API change. It is a fundamentally better execution model. The migration is worth doing now rather than being forced into it by a breaking change.
Before: HandlebarsPlanner Pattern (Deprecated)
This is the old pattern shown for migration reference. Do not use it in new code.
```csharp
// ⚠️ DEPRECATED — do not use this pattern in new code
#pragma warning disable SKEXP0060
using Microsoft.SemanticKernel.Planning.Handlebars;

var planner = new HandlebarsPlanner(new HandlebarsPlannerOptions
{
    AllowLoops = true,
    MaxTokens = 4096
});

// Planner generates a full execution plan upfront
var plan = await planner.CreatePlanAsync(kernel,
    "Research the topic, summarize it, then create a formatted report.");

// Execute the pre-generated plan
var result = await plan.InvokeAsync(kernel);
Console.WriteLine(result);
#pragma warning restore SKEXP0060
```
The problems with this approach compound at scale:
- Upfront plan may not match actual results. If ResearchPlugin.Search returns fewer results than expected, the pre-generated template cannot adapt its subsequent steps.
- Minimum two LLM calls. One call to generate the Handlebars template plan, plus at least one more to execute it. On multi-step workflows, this multiplies.
- Template syntax errors break everything. The model generates Handlebars syntax ({{plugin.function arg1=... arg2=...}}). A malformed template fails at execution time, not at generation time.
- No mid-plan adaptation. The plan is static once generated. If step 3 depends on step 2's output being a certain shape and it isn't, the workflow has no mechanism to recover.
After: Auto Function Calling (Recommended)
The migration removes the planner and replaces it with FunctionChoiceBehavior.Auto() in the execution settings. Your plugins remain unchanged.
```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Same plugins work without changes
kernel.Plugins.AddFromType<ResearchPlugin>();
kernel.Plugins.AddFromType<SummaryPlugin>();
kernel.Plugins.AddFromType<ReportPlugin>();

var settings = new OpenAIPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
    // No MaxAutoInvokeAttempts needed for most cases — default 128 is sufficient
};

var result = await kernel.InvokePromptAsync(
    "Research the topic, summarize it, then create a formatted report.",
    new KernelArguments(settings));

Console.WriteLine(result.GetValue<string>());
```
The advantages are immediate:
- Single call — the model iterates internally. SK handles the function calling loop automatically. The model calls functions, receives results, and continues until it can produce a final answer.
- Adapts to intermediate results. If ResearchPlugin.Search returns three results instead of ten, the model adjusts its summarization request accordingly. No upfront plan to invalidate.
- Simpler code. No planner class, no options object, no template generation step. Set a behavior flag and pass the goal.
- Your plugins with [KernelFunction] work identically. This is the key migration benefit: you change zero plugin code. The [KernelFunction] and [Description] attributes that SK uses for function registration are the same ones that Auto Function Calling uses to build the tool schema sent to the model.
For a deep dive into how function calling works under the hood — including how SK serializes your [KernelFunction] attributes into the OpenAI tool schema — see Function Calling and Tool Use with Semantic Kernel.
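To make that concrete, here is a minimal sketch of what such a plugin looks like. The ResearchPlugin name and its Search method are assumed from the earlier examples, and the method body is hypothetical; the point is that the attribute metadata below serves both the old planners and Auto Function Calling unchanged.

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

// Hypothetical plugin sketch. The [KernelFunction] and [Description]
// attributes are the metadata SK serializes into the tool schema.
public class ResearchPlugin
{
    [KernelFunction]
    [Description("Searches for source material on a topic.")]
    public string Search(
        [Description("The topic to research")] string topic)
    {
        // A real implementation would call a search API here.
        return $"Results for: {topic}";
    }
}
```

Nothing in this class mentions planners or function calling; the orchestration strategy is decided entirely by the caller.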
Before: FunctionCallingStepwisePlanner (Deprecated)
The FunctionCallingStepwisePlanner used a similar approach but framed around a ReAct-style (Reason + Act) loop where the model reasoned about each step explicitly before acting. It was more verbose than HandlebarsPlanner but had the same fundamental limitation: it required the model to explicitly articulate a reasoning chain at each iteration rather than simply calling the right function.
```csharp
// ⚠️ DEPRECATED — do not use this pattern
#pragma warning disable SKEXP0061
using Microsoft.SemanticKernel.Planning;

var plannerConfig = new FunctionCallingStepwisePlannerOptions
{
    MaxIterations = 15,
    MaxTokens = 4096
};

var planner = new FunctionCallingStepwisePlanner(plannerConfig);
var result = await planner.ExecuteAsync(kernel, "Book a meeting room and send invites.");
Console.WriteLine(result.FinalAnswer);
#pragma warning restore SKEXP0061
```
Beyond the deprecation issue, this planner pattern spent tokens on explicit reasoning text at every iteration. Modern GPT-4o and GPT-4 Turbo models reason internally when deciding which function to call — that explicit token-heavy reasoning loop is unnecessary overhead. Auto Function Calling eliminates it.
After: Agent Framework (For True Agent Loops)
When migrating from FunctionCallingStepwisePlanner, the natural question is whether to use the Agent Framework instead of plain Auto Function Calling. The answer depends on your specific requirements.
Use Auto Function Calling first. For the vast majority of planner use cases — sequential task automation, chatbots with tools, RAG with tool augmentation — FunctionChoiceBehavior.Auto() is sufficient. It has less overhead, simpler code, and no additional dependencies.
Use Agent Framework only when you need capabilities that go beyond a single call. The ChatCompletionAgent adds value when you need persistent thread state across multiple user turns, named agent personas with distinct instructions, or multi-agent coordination via AgentGroupChat.
```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.ChatCompletion;

var agent = new ChatCompletionAgent
{
    Kernel = kernel,
    Name = "WorkflowAgent",
    Instructions = "You are an assistant that books meeting rooms and sends calendar invites. " +
                   "Use the available tools to complete tasks autonomously."
};

var thread = new ChatHistoryAgentThread();
var message = new ChatMessageContent(AuthorRole.User,
    "Book a meeting room for tomorrow at 2pm and send invites to the team.");

await foreach (var response in agent.InvokeAsync(message, thread))
{
    Console.WriteLine(response.Content);
}
```
The ChatHistoryAgentThread maintains conversation state across invocations. If you call agent.InvokeAsync again with the same thread, it carries forward the history of what was booked, what invites were sent, and what the user confirmed. That persistent state is what FunctionCallingStepwisePlanner could not provide.
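As an illustration, a follow-up turn on the same thread (the message text here is hypothetical) lets the agent resolve a reference like "that meeting" from the earlier history:

```csharp
// Hypothetical second turn on the same thread. The agent can resolve
// "that meeting" because ChatHistoryAgentThread carries the prior turns.
var followUp = new ChatMessageContent(AuthorRole.User,
    "Move that meeting to 3pm and update the invites.");

await foreach (var response in agent.InvokeAsync(followUp, thread))
{
    Console.WriteLine(response.Content);
}
```

Pass a fresh thread instead, and the agent starts with no memory of the booking at all.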
For a comparison of when to choose Agent Framework versus plain SK versus Microsoft.Extensions.AI, see Microsoft.Extensions.AI vs Semantic Kernel vs Agent Framework.
Agent Framework use cases:
- Persistent thread state across multiple turns — The agent remembers what happened in previous calls. ChatHistoryAgentThread maintains full conversation context.
- Agent-to-agent communication — AgentGroupChat coordinates multiple specialized agents (a researcher, a writer, a reviewer) on a shared task.
- Named agent personas with distinct instructions — Each ChatCompletionAgent has its own Instructions field, allowing specialized behavior per agent role.
For most single-workflow planner replacements, Auto Function Calling is simpler and sufficient.
Observing Function Calls with IAutoFunctionInvocationFilter
One concern with removing planners is losing visibility into what the model is doing. Planners made function selection explicit — you could inspect the generated plan. With Auto Function Calling, function selection happens inside the LLM call loop.
IAutoFunctionInvocationFilter restores that visibility. Implement it to intercept every auto-invoked function call before and after execution.
```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;

public class FunctionCallLoggingFilter : IAutoFunctionInvocationFilter
{
    private readonly ILogger<FunctionCallLoggingFilter> _logger;

    public FunctionCallLoggingFilter(ILogger<FunctionCallLoggingFilter> logger)
    {
        _logger = logger;
    }

    public async Task OnAutoFunctionInvocationAsync(
        AutoFunctionInvocationContext context,
        Func<AutoFunctionInvocationContext, Task> next)
    {
        _logger.LogInformation(
            "Auto-invoking {Plugin}.{Function} with args: {Args}",
            context.Function.PluginName,
            context.Function.Name,
            string.Join(", ", context.Arguments.Select(a => $"{a.Key}={a.Value}")));

        await next(context);

        // Truncate long results so the logs stay readable
        var resultText = context.Result?.GetValue<string>();
        _logger.LogInformation(
            "{Function} returned: {Result}",
            context.Function.Name,
            resultText?.Substring(0, Math.Min(100, resultText.Length)));
    }
}
```
Register the filter via dependency injection or directly on the kernel:
```csharp
// Via DI
builder.Services.AddSingleton<IAutoFunctionInvocationFilter, FunctionCallLoggingFilter>();

// OR via kernel directly
kernel.AutoFunctionInvocationFilters.Add(new FunctionCallLoggingFilter(logger));
```
The filter gives you exactly what inspecting a planner’s generated plan gave you — the sequence of function calls and their arguments — but in real time as execution happens rather than as a static preview. You can also use filters to implement circuit breaking: if a chain has run more than N invocations, set context.Terminate = true and return without calling next(context) to stop further calls.
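A sketch of that circuit-breaker idea follows. The budget of five invocations is an arbitrary assumption here; tune it to your workload.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

// Sketch of a circuit-breaker filter. The budget of 5 is an arbitrary
// assumption, not a recommended value.
public class CircuitBreakerFilter : IAutoFunctionInvocationFilter
{
    private const int MaxInvocationsPerChain = 5;

    public async Task OnAutoFunctionInvocationAsync(
        AutoFunctionInvocationContext context,
        Func<AutoFunctionInvocationContext, Task> next)
    {
        // RequestSequenceIndex tracks the LLM round-trip within the current chain.
        if (context.RequestSequenceIndex >= MaxInvocationsPerChain)
        {
            context.Terminate = true; // end the auto-invocation loop
            return;                   // skipping next(context): the function never runs
        }

        await next(context);
    }
}
```

Terminating returns whatever answer the model has produced so far, rather than letting a misbehaving chain loop indefinitely.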
Migration Decision Table
| Scenario | Recommended Approach |
|---|---|
| Sequential task automation | Auto Function Calling |
| Simple chatbot with tools | Auto Function Calling |
| RAG with tool augmentation | Auto Function Calling |
| Multi-agent coordination | Agent Framework |
| Long-running stateful workflows | Agent Framework |
| Strict execution order required | Manual C# orchestration |
The “Strict execution order required” row is important. If your business logic demands that validation always runs before processing, and that notification always runs after processing, do not rely on the model’s discretion to enforce that order. Write explicit C# code that calls each [KernelFunction] in sequence. Auto Function Calling is powerful, but it is still a model making decisions. Critical business rules belong in code, not in model behavior.
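A sketch of that manual orchestration looks like the following. The ValidationPlugin, ProcessingPlugin, and NotificationPlugin names, their functions, and their argument names are all hypothetical; the point is that the ordering lives in C#, not in the model's judgment.

```csharp
using Microsoft.SemanticKernel;

// Hypothetical plugins and arguments — strict ordering enforced in code.
var validated = await kernel.InvokeAsync("ValidationPlugin", "Validate",
    new KernelArguments { ["input"] = "raw order payload" });

var processed = await kernel.InvokeAsync("ProcessingPlugin", "Process",
    new KernelArguments { ["data"] = validated.GetValue<string>() });

await kernel.InvokeAsync("NotificationPlugin", "Notify",
    new KernelArguments { ["result"] = processed.GetValue<string>() });
```

You can still use Auto Function Calling inside any one of these steps; the guarantee you are writing by hand is only the order between them.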
Complete Migration Checklist
Work through these steps for each planner usage in your codebase:
- Find all usages of HandlebarsPlanner and FunctionCallingStepwisePlanner. Search for both the class names and the #pragma warning disable SKEXP0060/SKEXP0061 suppressions — those suppressions are reliable markers of planner code.
- Remove the #pragma warning disable suppressions. You will not need them once the planner code is gone. Leaving them in creates false safety — they suppress warnings without indicating what was suppressed or why.
- Replace each planner invocation with kernel.InvokePromptAsync(goal, new KernelArguments(settings)) where settings has FunctionChoiceBehavior.Auto(). The goal string is the same task description you were passing to CreatePlanAsync or ExecuteAsync.
- Verify your plugins still have [KernelFunction] attributes. They should — these attributes work identically with Auto Function Calling. No plugin code changes are needed.
- Add IAutoFunctionInvocationFilter for observability. This is the equivalent of inspecting a planner’s generated plan. Without it, you have no visibility into which functions the model is choosing.
- Test with representative goals from your production workload. Auto Function Calling may select functions in a different order than your planner templates specified. Verify the outputs match expectations for your real use cases, not just synthetic test cases.
Further Reading
- Semantic Kernel function calling documentation
- SK Planners deprecation GitHub issue
- University: Function Calling and Tool Use with Semantic Kernel