The Azure.AI.OpenAI v2 SDK is not a minor update — it is a complete redesign. If you wrote Azure OpenAI code against v1, every type name related to chat is different. The compiler will tell you about all of them at once. This guide walks through every change systematically, shows you what the v1 code looked like, and gives you the exact v2 replacement for each pattern.
1. What Changed and Why
In v1, Azure.AI.OpenAI was a standalone SDK that duplicated OpenAI’s types. In v2, it became a thin Azure-specific wrapper over the official OpenAI .NET SDK maintained by OpenAI directly. The result is two packages that work together:
| Package | What it provides |
|---|---|
| Azure.AI.OpenAI 2.x | AzureOpenAIClient, AzureOpenAIClientOptions (plus AzureKeyCredential via its Azure.Core dependency) |
| OpenAI 2.x | ChatClient, ChatMessage, ChatCompletion, ChatTool, and all core types |
Your v1 code imported Azure.AI.OpenAI for everything. Your v2 code imports from both namespaces — Azure.AI.OpenAI for the client, OpenAI.Chat for the types.
This architectural shift also means Azure OpenAI and non-Azure OpenAI .NET code now share the same types. If you have an abstraction layer (an IChatService that could run against either endpoint), the payload types no longer need to differ.
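To make that concrete, here is a minimal sketch of such an abstraction. The IChatService and ChatClientFactory names are hypothetical, and the endpoint strings are placeholders; the point is that the implementation depends only on OpenAI.Chat types, so either backend can supply the ChatClient:

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.AI.OpenAI;
using OpenAI;
using OpenAI.Chat;

// Hypothetical abstraction: nothing in here is Azure-specific
public interface IChatService
{
    Task<string> AskAsync(string question);
}

public sealed class ChatService : IChatService
{
    private readonly ChatClient _chatClient;

    public ChatService(ChatClient chatClient) => _chatClient = chatClient;

    public async Task<string> AskAsync(string question)
    {
        // ChatCompletion and UserChatMessage come from OpenAI.Chat either way
        ChatCompletion completion =
            await _chatClient.CompleteChatAsync(new UserChatMessage(question));
        return completion.Content[0].Text;
    }
}

public static class ChatClientFactory
{
    // Azure-backed: the deployment name selects the model
    public static ChatClient ForAzure(string endpoint, string apiKey, string deployment) =>
        new AzureOpenAIClient(new Uri(endpoint), new AzureKeyCredential(apiKey))
            .GetChatClient(deployment);

    // Non-Azure: the base OpenAI package returns the very same ChatClient type
    public static ChatClient ForOpenAI(string apiKey, string model) =>
        new OpenAIClient(apiKey).GetChatClient(model);
}
```

Under v1 this split was impossible, because the Azure package defined its own message and completion types.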
2. Package Changes
v1 csproj
<PackageReference Include="Azure.AI.OpenAI" Version="1.0.0-beta.17" />
v2 csproj
<PackageReference Include="Azure.AI.OpenAI" Version="2.1.0" />
The OpenAI package (version 2.1.0) is pulled in automatically as a transitive dependency. You do not need to add it explicitly, but you can pin the version if you need a specific build:
<!-- Optional: pin OpenAI base package version -->
<PackageReference Include="OpenAI" Version="2.1.0" />
To upgrade via the CLI:
dotnet remove package Azure.AI.OpenAI
dotnet add package Azure.AI.OpenAI --version 2.1.0
3. Client Initialization
This is the first change that breaks compilation.
v1 — OpenAIClient
using Azure;
using Azure.AI.OpenAI;
// v1: OpenAIClient took endpoint + credential directly
var client = new OpenAIClient(
new Uri("https://<resource>.openai.azure.com/"),
new AzureKeyCredential("<api-key>"));
// Chat was called on the client with the deployment name as a parameter
ChatCompletionsOptions options = new()
{
DeploymentName = "my-gpt4o-deployment",
Messages =
{
new ChatRequestSystemMessage("You are a helpful assistant."),
new ChatRequestUserMessage("What is dependency injection?")
}
};
Response<ChatCompletions> response = await client.GetChatCompletionsAsync(options);
string reply = response.Value.Choices[0].Message.Content;
v2 — AzureOpenAIClient + ChatClient
using Azure;
using Azure.AI.OpenAI;
using OpenAI.Chat;
using System.ClientModel;
// v2: AzureOpenAIClient is the Azure entry point
var azureClient = new AzureOpenAIClient(
new Uri("https://<resource>.openai.azure.com/"),
new AzureKeyCredential("<api-key>"));
// Get a ChatClient scoped to a specific deployment
ChatClient chatClient = azureClient.GetChatClient("my-gpt4o-deployment");
// Messages are now typed from OpenAI.Chat
List<ChatMessage> messages =
[
new SystemChatMessage("You are a helpful assistant."),
new UserChatMessage("What is dependency injection?")
];
// CompleteChatAsync returns ClientResult<ChatCompletion>
ClientResult<ChatCompletion> result = await chatClient.CompleteChatAsync(messages);
ChatCompletion completion = result.Value;
string reply = completion.Content[0].Text;
Two structural changes to note:
- GetChatClient(deploymentName) replaces embedding the deployment name inside ChatCompletionsOptions. You create one ChatClient per deployment and reuse it.
- ClientResult<ChatCompletion> wraps the response. Always access .Value to get the actual ChatCompletion object.
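Because each ChatClient is scoped to one deployment, applications that talk to several deployments often keep a small cache rather than calling GetChatClient on every request. A sketch, with a made-up DeploymentClientCache name:

```csharp
using System.Collections.Concurrent;
using Azure.AI.OpenAI;
using OpenAI.Chat;

// Hypothetical helper: one cached ChatClient per deployment name.
// GetChatClient is the only place v2 takes the deployment, so caching here
// mirrors the v1 habit of reusing a single OpenAIClient for everything.
public sealed class DeploymentClientCache
{
    private readonly AzureOpenAIClient _azureClient;
    private readonly ConcurrentDictionary<string, ChatClient> _clients = new();

    public DeploymentClientCache(AzureOpenAIClient azureClient) =>
        _azureClient = azureClient;

    public ChatClient Get(string deployment) =>
        _clients.GetOrAdd(deployment, _azureClient.GetChatClient);
}
```

The clients are designed to be long-lived and reused, so creating them once per deployment and handing them out is the intended usage pattern.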
4. Chat Message Types
Every message type was renamed. The v1 ChatRequest* prefix is gone.
| v1 Type | v2 Type | Namespace |
|---|---|---|
| ChatRequestSystemMessage | SystemChatMessage | OpenAI.Chat |
| ChatRequestUserMessage | UserChatMessage | OpenAI.Chat |
| ChatRequestAssistantMessage | AssistantChatMessage | OpenAI.Chat |
| ChatRequestToolMessage | ToolChatMessage | OpenAI.Chat |
There are also factory methods on the ChatMessage base class if you prefer:
using OpenAI.Chat;
// Constructor style (preferred for clarity)
var system = new SystemChatMessage("You are a helpful assistant.");
var user = new UserChatMessage("Explain LINQ.");
var assistant = new AssistantChatMessage("LINQ stands for Language Integrated Query...");
// Factory method style (equivalent)
ChatMessage systemAlt = ChatMessage.CreateSystemMessage("You are a helpful assistant.");
ChatMessage userAlt = ChatMessage.CreateUserMessage("Explain LINQ.");
Both styles compile and behave identically. The constructor style is more idiomatic in v2 code.
Before and after a multi-turn conversation:
// v1
var history = new List<ChatRequestMessage>
{
new ChatRequestSystemMessage("You help with C# questions."),
new ChatRequestUserMessage("What is a record type?"),
new ChatRequestAssistantMessage("A record is an immutable reference type..."),
new ChatRequestUserMessage("Can records be mutable?")
};
// v1: deployment name and messages went through ChatCompletionsOptions
// (Messages was a get-only collection, so the constructor overload was used)
ChatCompletionsOptions options = new("gpt4o", history);
Response<ChatCompletions> response = await client.GetChatCompletionsAsync(options);
// v2
var history = new List<ChatMessage>
{
new SystemChatMessage("You help with C# questions."),
new UserChatMessage("What is a record type?"),
new AssistantChatMessage("A record is an immutable reference type..."),
new UserChatMessage("Can records be mutable?")
};
// No DeploymentName on options — it moved to GetChatClient()
ClientResult<ChatCompletion> result = await chatClient.CompleteChatAsync(history);
string reply = result.Value.Content[0].Text;
5. Streaming Changes
Streaming is where the API surface changed the most.
v1 — GetChatCompletionsStreamingAsync
using Azure.AI.OpenAI;
// v1 streaming was called on OpenAIClient
StreamingResponse<StreamingChatCompletionsUpdate> streamingResponse =
await client.GetChatCompletionsStreamingAsync(options);
await foreach (StreamingChatCompletionsUpdate update in streamingResponse)
{
if (update.ContentUpdate != null)
{
Console.Write(update.ContentUpdate);
}
}
v2 — CompleteChatStreamingAsync
using Azure.AI.OpenAI;
using OpenAI.Chat;
using System.ClientModel;
ChatClient chatClient = azureClient.GetChatClient("my-gpt4o-deployment");
List<ChatMessage> messages =
[
new SystemChatMessage("Be concise."),
new UserChatMessage("Explain async/await in C#.")
];
// v2: called on ChatClient, not on AzureOpenAIClient
AsyncCollectionResult<StreamingChatCompletionUpdate> streamingResult =
chatClient.CompleteChatStreamingAsync(messages);
await foreach (StreamingChatCompletionUpdate update in streamingResult)
{
foreach (ChatMessageContentPart part in update.ContentUpdate)
{
Console.Write(part.Text);
}
}
Key differences:
- Called on ChatClient, not on AzureOpenAIClient
- Returns AsyncCollectionResult<StreamingChatCompletionUpdate> (singular Update, not Updates)
- Content tokens are ChatMessageContentPart objects accessed via .Text — not a raw string on the update
- No await needed on CompleteChatStreamingAsync() itself — the enumeration is deferred
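If you also need the complete reply after streaming (for example, to append it to the conversation history), accumulate the parts while forwarding them. A sketch assuming a ChatClient and message list like the ones above; StreamAndRecordAsync is a made-up helper name:

```csharp
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using OpenAI.Chat;

public static class StreamingHelpers
{
    // Stream tokens to the console while capturing the full assistant reply,
    // then append that reply to the history so the next turn can see it.
    public static async Task<string> StreamAndRecordAsync(
        ChatClient chatClient, List<ChatMessage> messages)
    {
        var buffer = new StringBuilder();
        await foreach (StreamingChatCompletionUpdate update in
            chatClient.CompleteChatStreamingAsync(messages))
        {
            foreach (ChatMessageContentPart part in update.ContentUpdate)
            {
                Console.Write(part.Text);   // forward the token as it arrives
                buffer.Append(part.Text);   // keep the complete text
            }
        }
        string fullReply = buffer.ToString();
        messages.Add(new AssistantChatMessage(fullReply));
        return fullReply;
    }
}
```

In a web endpoint the Console.Write line would be replaced by writing to the response stream.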
For a complete streaming endpoint implementation with SSE and conversation history, see the Azure OpenAI Streaming Chat API workshop.
6. Function Calling
Function calling was substantially simplified in v2. FunctionDefinition is replaced by ChatTool.
v1 — FunctionDefinition
using Azure.AI.OpenAI;
var getWeatherFunction = new FunctionDefinition("get_current_weather")
{
Description = "Get the current weather for a location",
Parameters = BinaryData.FromObjectAsJson(new
{
type = "object",
properties = new
{
location = new { type = "string", description = "City and state" },
unit = new { type = "string", @enum = new[] { "celsius", "fahrenheit" } }
},
required = new[] { "location" }
})
};
ChatCompletionsOptions options = new()
{
DeploymentName = "gpt4o",
Messages = { new ChatRequestUserMessage("What is the weather in Seattle?") },
Tools = { new ChatCompletionsFunctionToolDefinition(getWeatherFunction) },
ToolChoice = ChatCompletionsToolChoice.Auto
};
v2 — ChatTool.CreateFunctionTool
using OpenAI.Chat;
// v2: Create a ChatTool directly — no FunctionDefinition wrapper needed
ChatTool getWeatherTool = ChatTool.CreateFunctionTool(
functionName: "get_current_weather",
functionDescription: "Get the current weather for a location",
functionParameters: BinaryData.FromObjectAsJson(new
{
type = "object",
properties = new
{
location = new { type = "string", description = "City and state" },
unit = new { type = "string", @enum = new[] { "celsius", "fahrenheit" } }
},
required = new[] { "location" }
}));
ChatCompletionOptions options = new()
{
    Tools = { getWeatherTool },
    // Optional: auto is the default when tools are present.
    // The GA SDK uses factory methods instead of static properties.
    ToolChoice = ChatToolChoice.CreateAutoChoice()
};
List<ChatMessage> messages = [new UserChatMessage("What is the weather in Seattle?")];
ClientResult<ChatCompletion> result = await chatClient.CompleteChatAsync(messages, options);
To handle the tool call response and return the result:
ChatCompletion completion = result.Value;
if (completion.FinishReason == ChatFinishReason.ToolCalls)
{
    // Add the assistant's tool-call message to the conversation once,
    // then one tool result message per tool call
    messages.Add(new AssistantChatMessage(completion));
    foreach (ChatToolCall toolCall in completion.ToolCalls)
    {
        string toolName = toolCall.FunctionName;
        string toolArgs = toolCall.FunctionArguments.ToString();
        // Call your actual function, get the result
        string toolResult = CallWeatherApi(toolArgs);
        messages.Add(new ToolChatMessage(toolCall.Id, toolResult));
    }
    // Continue the conversation with the tool results included
    result = await chatClient.CompleteChatAsync(messages, options);
}
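The FunctionArguments payload is raw JSON produced by the model, so a step like CallWeatherApi typically begins by parsing it. Here is a small System.Text.Json helper, with a hypothetical WeatherArgs record matching the schema from this section:

```csharp
using System.Text.Json;

// Hypothetical argument record for the get_current_weather tool
public sealed record WeatherArgs(string Location, string? Unit);

public static class ToolArguments
{
    // Parse the raw JSON arguments the model produced into a typed object.
    // "unit" is optional in the schema, so it may be absent.
    public static WeatherArgs ParseWeatherArgs(string json)
    {
        using JsonDocument doc = JsonDocument.Parse(json);
        JsonElement root = doc.RootElement;
        string location = root.GetProperty("location").GetString()!;
        string? unit = root.TryGetProperty("unit", out JsonElement u)
            ? u.GetString()
            : null;
        return new WeatherArgs(location, unit);
    }
}
```

In the tool-call loop this would be invoked as ToolArguments.ParseWeatherArgs(toolCall.FunctionArguments.ToString()). Validating the model's arguments before acting on them is good practice, since the model can emit unexpected or missing fields.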
7. Authentication Patterns
Authentication uses the same concepts but with the updated client type.
API Key (AzureKeyCredential)
using Azure;
using Azure.AI.OpenAI;
// v1
var v1Client = new OpenAIClient(
new Uri(endpoint),
new AzureKeyCredential(apiKey));
// v2 — same credential, different client class
var v2Client = new AzureOpenAIClient(
new Uri(endpoint),
new AzureKeyCredential(apiKey));
Managed Identity / DefaultAzureCredential
using Azure.AI.OpenAI;
using Azure.Identity;
// v1
var v1Client = new OpenAIClient(
new Uri(endpoint),
new DefaultAzureCredential());
// v2 — same pattern
var v2Client = new AzureOpenAIClient(
new Uri(endpoint),
new DefaultAzureCredential());
Configuring client options (retry, API version)
using Azure.AI.OpenAI;
using System.ClientModel.Primitives;
// v2 client options
var options = new AzureOpenAIClientOptions
{
RetryPolicy = new ClientRetryPolicy(maxRetries: 5)
};
var client = new AzureOpenAIClient(
new Uri(endpoint),
new AzureKeyCredential(apiKey),
options);
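The heading above also mentions the API version: if you need to pin one, AzureOpenAIClientOptions accepts a ServiceVersion through its constructor. The exact enum values available depend on the SDK build you install, so treat V2024_10_21 below as illustrative:

```csharp
using System;
using Azure;
using Azure.AI.OpenAI;

// Pin an explicit Azure OpenAI API version and a client-wide network timeout.
// V2024_10_21 is illustrative; check the ServiceVersion values your SDK exposes.
var options = new AzureOpenAIClientOptions(
    AzureOpenAIClientOptions.ServiceVersion.V2024_10_21)
{
    NetworkTimeout = TimeSpan.FromSeconds(120)
};

var client = new AzureOpenAIClient(
    new Uri("https://<resource>.openai.azure.com/"),
    new AzureKeyCredential("<api-key>"),
    options);
```

NetworkTimeout comes from the shared System.ClientModel pipeline options that AzureOpenAIClientOptions inherits.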
For production workloads with sustained traffic, the built-in ClientRetryPolicy handles 429 responses with exponential backoff — but for circuit breaking and advanced resilience, see the guide on fixing Azure OpenAI 429 Too Many Requests in .NET.
For Polly-based resilience pipelines layered on top of the SDK, see Add Resilience to AI Calls in .NET with Polly.
8. Compile-Error-to-Fix Reference Table
When you upgrade the package and rebuild, the compiler will report errors on every v1 type. Use this table to resolve them systematically:
| v1 Compile Error | v2 Fix |
|---|---|
| ChatRequestUserMessage not found | Use new UserChatMessage(text) from OpenAI.Chat |
| ChatRequestAssistantMessage not found | Use new AssistantChatMessage(text) |
| ChatRequestSystemMessage not found | Use new SystemChatMessage(text) |
| ChatCompletionsOptions not found | Use ChatCompletionOptions (no ‘s’) |
| ChatCompletions not found | Use ChatCompletion (singular) |
| Response<ChatCompletions> not found | Use ClientResult<ChatCompletion> from System.ClientModel |
| OpenAIClient not found | Use AzureOpenAIClient for Azure endpoints |
| FunctionDefinition not found | Use ChatTool.CreateFunctionTool(name, description, parameters) |
| ChatChoice not found | v2 has no Choices collection; read completion.Content and completion.FinishReason directly |
| GetChatCompletionsStreamingAsync not found | Use chatClient.CompleteChatStreamingAsync() on ChatClient |
| StreamingChatCompletionsUpdate not found | Use StreamingChatCompletionUpdate (singular, no ‘s’) |
| update.ContentUpdate no longer a string | It is now IReadOnlyList<ChatMessageContentPart> — use part.Text |
| ChatCompletionsFunctionToolDefinition not found | Use ChatTool.CreateFunctionTool(...) directly |
| ChatCompletionsToolChoice not found | Use ChatToolChoice.CreateAutoChoice() and related factory methods |
| response.Value.Choices[0].Message.Content not found | Use completion.Content[0].Text — content is now IReadOnlyList<ChatMessageContentPart> |
| .Usage.PromptTokens not found | Use .Usage.InputTokenCount |
| .Usage.CompletionTokens not found | Use .Usage.OutputTokenCount |
After resolving all compile errors, verify token usage logging if your application tracks costs:
// v1 token properties
int prompt = response.Value.Usage.PromptTokens;
int completion = response.Value.Usage.CompletionTokens;
// v2 token properties — renamed
ChatCompletion result = clientResult.Value;
int inputTokens = result.Usage.InputTokenCount;
int outputTokens = result.Usage.OutputTokenCount;
int totalTokens = result.Usage.TotalTokenCount;
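If the application converts those counts into cost figures, the arithmetic is straightforward. The rates below are placeholders, not real Azure pricing; substitute your contract's per-1K-token rates:

```csharp
// Hypothetical cost estimator; rates are per 1,000 tokens in your billing currency.
public static class TokenCost
{
    public static decimal Estimate(
        int inputTokens, int outputTokens,
        decimal inputRatePer1K, decimal outputRatePer1K) =>
        inputTokens / 1000m * inputRatePer1K +
        outputTokens / 1000m * outputRatePer1K;
}
```

With the renamed v2 properties, a call would look like TokenCost.Estimate(result.Usage.InputTokenCount, result.Usage.OutputTokenCount, 0.005m, 0.015m), where the two rates are the placeholder values above.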
Summary
The v1-to-v2 migration in Azure.AI.OpenAI is extensive but mechanical. Every change follows a consistent pattern:
- OpenAIClient → AzureOpenAIClient, then call .GetChatClient(deployment) to get a ChatClient
- All ChatRequest* message types → *ChatMessage types from OpenAI.Chat
- ChatCompletions (plural) → ChatCompletion (singular), wrapped in ClientResult<T>
- Streaming: GetChatCompletionsStreamingAsync on the main client → CompleteChatStreamingAsync on ChatClient
- Function calling: FunctionDefinition → ChatTool.CreateFunctionTool
- Options class: ChatCompletionsOptions → ChatCompletionOptions (drop the extra ‘s’)
The biggest benefit of the upgrade is access to the shared OpenAI .NET SDK type surface — future features from OpenAI’s official SDK land in your Azure workloads faster, without waiting for Azure-specific wrappers.
Start with dotnet build after updating the package. Treat each compiler error as a checklist item using the table in section 8. Most migrations complete within one build cycle.