“Rajesh, I’m a .NET developer. I want to learn AI. Where do I start?”
I get a version of this message every week. The details vary — sometimes it’s a junior dev three years into their first job, sometimes it’s a senior engineer who has been quietly watching the AI wave and finally decided to jump in. The anxiety is always the same: there is so much to learn, and I don’t know where to begin.
My answer has never changed: one API call. Not a course. Not a roadmap. Not certification prep. One call to Azure AI Foundry, one response back, 15 lines of C#. That’s the beginning.
Everything else — agents, RAG pipelines, semantic search, multi-modal, fine-tuning — is built on top of that one call. You need to feel the thing work before you learn how it works. This guide does that in under 15 minutes.
Platform & Model Reference (2026)
This article uses Azure AI Foundry (formerly Azure OpenAI Service) with the GPT-5.4 model family — the current default for new .NET AI applications.
| Goal | Recommended model |
|---|---|
| Learning, APIs, most production use | gpt-5.4-mini (best cost/performance) |
| Reasoning, agents, orchestration | gpt-5.4 |
| High-throughput, low-cost pipelines | gpt-5.4-nano |
Legacy models (GPT-4o, GPT-4.1) are deprecated for new applications. All code in this article uses gpt-5.4-mini.
Runtime: .NET 10 (LTS, supported until Nov 2028) is recommended for all new applications. .NET 9 (STS) remains supported until Nov 2026 but .NET 10 is the correct starting point in 2026.
What You Will Build
A .NET 10 console app that sends a question to Azure AI Foundry (GPT-5.4-mini) and prints the answer. Fifteen lines of C#. No database, no frontend, no framework ceremony. When it runs, you will see the model respond in your terminal and something in the way you think about your software will shift.
Prerequisites:
- .NET 10 SDK installed — LTS, supported until Nov 2028
- An Azure subscription (free tier works — see the FAQ below)
- 15 minutes
You do not need to understand tokens, embeddings, or temperature. You do not need to have read anything else on this site. You need to be able to run dotnet new console and have an Azure account.
Step 1 — Create Your Azure AI Foundry Resource
Open the Azure portal and create an Azure OpenAI resource inside Azure AI Foundry. If you have done this before, skip ahead to the deployment step.
- Search for Azure AI Foundry (or Azure OpenAI) in the portal search bar
- Click Create
- Select your subscription and resource group (create a new one named ai-learning-rg if you do not have one)
- Choose a region — East US or Sweden Central have the broadest GPT-5.4 model availability
- Give the resource a name — something like my-first-openai
- Select the Standard S0 pricing tier
- Click through the remaining tabs and hit Create
Wait two to three minutes for the deployment to complete. Then open the resource.
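If you prefer scripting over portal clicks, the same resource can be created with the Azure CLI. This is a sketch under the same assumptions used throughout this guide (resource group ai-learning-rg, resource name my-first-openai, East US region):

```shell
# Create the resource group (skip if ai-learning-rg already exists)
az group create --name ai-learning-rg --location eastus

# Create the Azure OpenAI resource inside it
az cognitiveservices account create \
  --name my-first-openai \
  --resource-group ai-learning-rg \
  --kind OpenAI \
  --sku S0 \
  --location eastus
```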
Deploy a Model
This is the step that confuses every beginner, so read this carefully.
Your Azure AI Foundry resource is not a model. It is a container. You deploy models into it — and each deployment gets a name that you control. That name is what your C# code will reference.
- Navigate to your resource in the portal
- Click Go to Azure AI Foundry (or navigate to ai.azure.com)
- Click Deployments in the left menu
- Click Deploy model → Deploy base model
- Select gpt-5.4-mini — the recommended default: best cost/performance balance, low cost per run
- Set the deployment name to chat — simple, memorable, and you will use it in code shortly
- Leave the tokens-per-minute limit at the default and click Deploy
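The deployment step can also be scripted. A hedged CLI sketch — the exact model version string is not something this guide specifies, so it is shown as a placeholder you would copy from the model catalog:

```shell
# Deploy the gpt-5.4-mini base model under the deployment name "chat"
az cognitiveservices account deployment create \
  --name my-first-openai \
  --resource-group ai-learning-rg \
  --deployment-name chat \
  --model-name gpt-5.4-mini \
  --model-version "<version-from-the-model-catalog>" \
  --model-format OpenAI \
  --sku-name Standard \
  --sku-capacity 1
```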
Once deployed, go back to your Azure AI Foundry resource page in the Azure portal (the main resource page, not the Foundry portal at ai.azure.com) and:
- Click Keys and Endpoint in the left menu
- Copy Endpoint (it looks like https://my-first-openai.openai.azure.com/)
- Copy Key 1
Keep these two values somewhere temporary — you will paste them into dotnet user-secrets in a moment.
Step 2 — Create the Console App and Store Your Credentials
Create a new .NET 10 console project:
dotnet new console -n MyFirstAiApp
cd MyFirstAiApp
Install the two packages you need:
dotnet add package Azure.AI.OpenAI --version 2.1.0
dotnet add package Microsoft.Extensions.AI.OpenAI --version 10.3.0
Now store your endpoint and key using dotnet user-secrets. This keeps them out of your source code entirely:
dotnet user-secrets init
dotnet user-secrets set "AZURE_OPENAI_ENDPOINT" "https://my-first-openai.openai.azure.com/"
dotnet user-secrets set "AZURE_OPENAI_KEY" "your-key-1-value-here"
Why user-secrets and not just hardcoding the values? Two reasons. First, dotnet user-secrets stores credentials in your user profile folder, not in the project directory — they cannot be accidentally committed to git. Second, it is the habit you want from day one. Every production app you build will read secrets from environment variables or a vault — user-secrets is the local development equivalent.
If you prefer environment variables instead, set AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_KEY in your terminal session. One caveat: the code below calls Environment.GetEnvironmentVariable, and user-secrets are not environment variables; they are surfaced through the .NET configuration system (a ConfigurationBuilder with AddUserSecrets). So either set the environment variables to run the sample exactly as written, or load the values through configuration if you stay with user-secrets.
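If you do go the user-secrets route, a minimal sketch of reading them looks like this. It assumes you also add the Microsoft.Extensions.Configuration.UserSecrets package; the key names match the ones set above:

```csharp
using Microsoft.Extensions.Configuration;

// Requires: dotnet add package Microsoft.Extensions.Configuration.UserSecrets
// Build a configuration root that layers the user-secrets store for this
// project (linked by the UserSecretsId that `dotnet user-secrets init`
// wrote into the .csproj) under environment variables.
var config = new ConfigurationBuilder()
    .AddUserSecrets<Program>()   // reads the secrets set earlier
    .AddEnvironmentVariables()   // later sources win, so env vars override
    .Build();

var endpoint = config["AZURE_OPENAI_ENDPOINT"];
var key = config["AZURE_OPENAI_KEY"];
```

Pass endpoint and key into the client construction in Step 3 instead of the Environment.GetEnvironmentVariable calls.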
Step 3 — Write Your First 15 Lines
Open Program.cs and replace everything with this:
using Microsoft.Extensions.AI;
using Azure.AI.OpenAI;
using Azure;
var client = new AzureOpenAIClient(
new Uri(Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!),
new AzureKeyCredential(Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!))
.GetChatClient("chat")
.AsIChatClient();
var response = await client.CompleteAsync("Explain what Azure OpenAI is in one sentence.");
Console.WriteLine(response.Message.Text);
Let’s walk through every line, because each one is doing something worth understanding.
Lines 1–3 — the using statements. Microsoft.Extensions.AI brings in the IChatClient interface and CompleteAsync. Azure.AI.OpenAI brings in AzureOpenAIClient. Azure brings in AzureKeyCredential.
Lines 5–9 — building the client. AzureOpenAIClient is Microsoft’s Azure-specific SDK client. You construct it with your endpoint URI and your API key wrapped in AzureKeyCredential. .GetChatClient("chat") tells it which deployment to send requests to — the "chat" string matches the deployment name you chose in the Azure AI Foundry portal.
.AsIChatClient() — the important line. This is where things get interesting. AsIChatClient() is an extension method from Microsoft.Extensions.AI that wraps the Azure-specific client in the provider-agnostic IChatClient interface. From this point forward, the variable client is typed as IChatClient — it does not know or care that the underlying implementation is Azure OpenAI. You could swap the entire construction block above it for an Ollama client and nothing below would change.
Line 11 — the call. CompleteAsync takes a string (or a list of messages) and sends it to the model. It is async, so you await it. The string you pass is your first prompt.
Line 13 — the response. response.Message.Text is the model’s reply as a plain string.
That is the entire model. Request, response, text. The rest of AI engineering is variations on this core loop.
Step 4 — Run It and See What Happens
dotnet run
If your credentials are correct and the deployment is live, you will see something like:
Azure AI Foundry is Microsoft's cloud platform that provides access to OpenAI's
powerful language models — such as GPT-5.4-mini — through Azure infrastructure,
enabling .NET developers to integrate advanced AI capabilities into their apps.
The exact wording changes every run — that is the nature of language model generation. But the fact that something came back, something that answered exactly what you asked, from code you wrote — that is the moment that changes things.
I remember running my first equivalent of this program. I had been reading about transformers and attention mechanisms for weeks. Then I ran six lines of code and the machine answered a question in fluent English and I sat back and had to think for a minute. Not because it was magic — it is not magic, it is mathematics — but because the gap between “concept I read about” and “thing I built and ran” collapsed entirely. Everything that comes after builds on that first collapse.
Step 5 — Now Add One More Thing
Every tutorial ends at hello world. This one does not. You have a working AI call — now extend it in one of two directions. Both take under five minutes.
Option A — Make It Interactive
Replace the last two lines of Program.cs with this loop:
Console.WriteLine("Chat with AI (type 'quit' to exit)\n");
while (true)
{
Console.Write("You: ");
var input = Console.ReadLine();
if (string.IsNullOrWhiteSpace(input) || input.Equals("quit", StringComparison.OrdinalIgnoreCase))
break;
var reply = await client.CompleteAsync(input);
Console.WriteLine($"AI: {reply.Message.Text}\n");
}
Run it again. You now have an interactive chat session in your terminal. Type questions, get answers. Notice something: the model does not remember what you asked two turns ago. Ask it “What did I just ask you?” and it will say it has no memory of the previous exchange. That is statelessness — and understanding it is fundamental to building AI features that actually work. We will come back to this in the next section.
Option B — Give It a System Prompt
Replace Program.cs with this:
using Microsoft.Extensions.AI;
using Azure.AI.OpenAI;
using Azure;
var client = new AzureOpenAIClient(
new Uri(Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!),
new AzureKeyCredential(Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!))
.GetChatClient("chat")
.AsIChatClient();
var messages = new List<ChatMessage>
{
new(ChatRole.System, "You are a concise .NET mentor. Answer every question in under three sentences. Always include a code snippet."),
new(ChatRole.User, "How do I read a file in C#?"),
};
var response = await client.CompleteAsync(messages);
Console.WriteLine(response.Message.Text);
Run it. The model’s personality and response style have shifted entirely — shorter, more direct, with code. The system prompt is the instruction layer that shapes every response. Change it to “You are a Shakespearean AI that answers all questions in iambic pentameter” and the same question will be answered in verse.
This is how every AI product you use started. A developer wrote a system prompt and sent one request. Then they sent one more. Then they added a loop. Then they wired it into a web API. Then they added retrieval. The complexity grows in layers — but each layer is the same fundamental call.
Step 6 — What Just Happened
You made an HTTP POST to a model endpoint. The request contained your message serialised as JSON. The model processed it — transforming your text through billions of learned parameters — and generated tokens one at a time until it produced a complete response. The SDK deserialised the JSON response and gave you the text.
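You can see that HTTP exchange directly. Here is a sketch of the raw REST call the SDK makes on your behalf — the URL follows the standard Azure OpenAI pattern (resource endpoint, deployment name, api-version query string), though the exact api-version value current for your region may differ from the one shown:

```shell
# "chat" appears twice: once as the deployment name, once as the
# chat/completions route. AZURE_OPENAI_KEY is the env var set in Step 2.
curl "https://my-first-openai.openai.azure.com/openai/deployments/chat/chat/completions?api-version=2024-10-21" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_KEY" \
  -d '{
        "messages": [
          { "role": "user", "content": "Explain what Azure OpenAI is in one sentence." }
        ]
      }'
```

Everything the SDK adds on top of this — typed messages, deserialisation, the IChatClient abstraction — is convenience, not capability.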
A few concepts worth anchoring now that you have seen it work:
Tokens, not characters. Azure AI Foundry pricing and limits are measured in tokens, not characters. A token is roughly four characters of English text — “Hello” is one token, “Hello, how are you today?” is about six. Your 15-line test prompt used fewer than 30 tokens total. At GPT-5.4-mini pricing, that run cost approximately $0.00001. Cost becomes meaningful at scale, not at the learning stage.
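The four-characters-per-token figure is only a heuristic, but it is good enough for back-of-envelope cost estimates. A sketch — the ApproxTokens helper is invented for illustration, and a real tokenizer is authoritative for billing:

```csharp
// Rough heuristic: ~4 characters of English per token.
// Illustrative only; real tokenizers count differently.
static int ApproxTokens(string text) => Math.Max(1, text.Length / 4);

Console.WriteLine(ApproxTokens("Hello"));                     // prints 1
Console.WriteLine(ApproxTokens("Hello, how are you today?")); // prints 6
```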
Stateless by design. Each CompleteAsync call knows nothing about previous calls. The model does not have memory between requests. This is not a limitation — it is a design choice that makes the API massively scalable. When you want multi-turn conversation, you pass the entire conversation history explicitly on every call as a List<ChatMessage>. You are responsible for managing that history.
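Multi-turn conversation is therefore something you build on top: keep a List&lt;ChatMessage&gt;, append each turn, and send the whole list every time. A sketch that upgrades the Option A loop to remember context, assuming the same client variable built in Step 3:

```csharp
using Microsoft.Extensions.AI;

// `client` is the IChatClient constructed in Step 3.
var history = new List<ChatMessage>
{
    new(ChatRole.System, "You are a concise .NET mentor."),
};

while (true)
{
    Console.Write("You: ");
    var input = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(input)) break;

    // Append the user turn, send the ENTIRE history, append the reply.
    history.Add(new ChatMessage(ChatRole.User, input));
    var reply = await client.CompleteAsync(history);
    history.Add(new ChatMessage(ChatRole.Assistant, reply.Message.Text));

    Console.WriteLine($"AI: {reply.Message.Text}\n");
}
```

Now ask "What did I just ask you?" and the model can answer, because the previous turns travel with every request.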
Determinism and temperature. By default, the model generates probabilistic output — the same prompt produces slightly different responses each run. That is why your exact output will not match the example above. You can control this with the Temperature setting in ChatOptions. Temperature 0 makes output nearly deterministic; higher values increase creativity and variance.
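Passing that setting can be sketched like this, assuming the ChatOptions overload of CompleteAsync and the same client from Step 3:

```csharp
using Microsoft.Extensions.AI;

// Temperature 0: the model almost always picks the most likely next
// token, so repeated runs of the same prompt converge on the same text.
var options = new ChatOptions { Temperature = 0 };

var response = await client.CompleteAsync(
    "Explain what Azure OpenAI is in one sentence.",
    options);

Console.WriteLine(response.Message.Text);
```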
Step 7 — Where to Go Next
You have a working AI app. Here are three natural next steps, each building directly on what you just built:
Understand what you just used fully. The Microsoft.Extensions.AI deep-dive covers everything about IChatClient, IEmbeddingGenerator, middleware pipelines, and how MEAI fits into the broader .NET AI ecosystem. If you want to understand the abstraction you just used — read this next.
Move it into a real ASP.NET Core app. A console app is a proof of concept. The Minimal API + Azure OpenAI workshop takes the same IChatClient pattern and wires it into a production-ready HTTP API with streaming, structured JSON output, and resilience middleware. This is the workshop that follows this one.
Handle your first production error. When your API gets traffic, you will hit 429 (rate limit exceeded) errors. They are the most common Azure AI Foundry incident and they are entirely manageable with Polly. The Fix 429 Rate Limit Exceeded guide shows you the Polly circuit breaker and retry patterns that prevent these from cascading into outages.
Model Selection Guide (2026)
Now that your first call works, here is how to choose the right GPT-5.4 tier as your applications grow:
| Model | Ideal for |
|---|---|
| gpt-5.4-mini | Default for all APIs, chat interfaces, code assist, most production use |
| gpt-5.4 | Agents, reasoning chains, complex multi-step orchestration |
| gpt-5.4-nano | High-throughput classification, extraction, bulk operations |
| o4-mini | Deep planning and decision-making in backend workflows |
For beginners: start with gpt-5.4-mini on every project. It will handle everything you need at the lowest cost. Move to gpt-5.4 only when you genuinely need deeper reasoning — you will know when that is.
Why GPT-5.4 Replaces GPT-4o
If you have followed older tutorials written in 2024 or early 2025, they will reference GPT-4o and GPT-4o-mini. GPT-5.4 is the successor for new applications:
- Better reasoning per token — GPT-5.4-mini outperforms GPT-4o on most benchmark tasks
- Larger context window — supports significantly longer documents and conversation histories
- Lower cost per useful output — the GPT-5.4-nano/mini/full pricing model gives you three tiers to route against
- Native agent support — designed for tool use and multi-step orchestration workflows
- Same SDK, same IChatClient pattern — you update the deployment model name, nothing else changes in your code
Frequently Asked Questions
Do I need Python to use Azure AI Foundry with .NET?
No. Azure AI Foundry exposes a REST API. The Azure.AI.OpenAI package is a first-class .NET SDK maintained by Microsoft. Every capability available in the Python SDK is available to you. No Python installation, no virtual environments, no Jupyter notebooks — just dotnet add package and you are ready.
Can I use the Azure free tier to try this?
Yes. Azure’s free account includes $200 in credits for the first 30 days. Azure AI Foundry does not have a perpetual free tier, but $200 covers hundreds of thousands of tokens at GPT-5.4-mini pricing — far more than you will use while learning. Once credits expire, the pay-as-you-go cost of learning-level usage is measured in fractions of a cent per session.
What is a deployment in Azure AI Foundry?
A deployment is your named instance of a model inside your Azure AI Foundry resource. You choose a base model (such as gpt-5.4-mini), assign it a name you control (in this guide we used chat), and that name is what your C# code calls. The underlying model lives in Microsoft’s infrastructure — your deployment is the named endpoint your application sends requests to. You can create multiple deployments, even from the same base model, and route different parts of your application to each one.
What is IChatClient and why use it instead of AzureOpenAIClient directly?
IChatClient is the provider-agnostic chat interface from Microsoft.Extensions.AI. AzureOpenAIClient is Azure’s specific SDK client. IChatClient sits one layer above — it is implemented by Azure AI Foundry, Ollama, OpenAI.com, and other providers. When your code targets IChatClient, you can switch from Azure AI Foundry in production to Ollama locally (zero API cost, zero latency) by changing one line of DI registration. Nothing in your business logic changes. Starting with AzureOpenAIClient directly locks every piece of code to Azure from the start.
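The swap is concrete. In an ASP.NET Core app the DI registration is the one Azure-specific spot — a sketch, assuming the same construction from Step 3 and an /ask endpoint name invented for illustration:

```csharp
using Microsoft.Extensions.AI;
using Azure;
using Azure.AI.OpenAI;

var builder = WebApplication.CreateBuilder(args);

// Register the provider-specific client behind the provider-agnostic
// IChatClient interface. This is the ONLY Azure-specific code.
builder.Services.AddSingleton<IChatClient>(
    new AzureOpenAIClient(
        new Uri(builder.Configuration["AZURE_OPENAI_ENDPOINT"]!),
        new AzureKeyCredential(builder.Configuration["AZURE_OPENAI_KEY"]!))
    .GetChatClient("chat")
    .AsIChatClient());

// Local-dev swap: replace only the registration above, e.g. with an
// OllamaChatClient (from Microsoft.Extensions.AI.Ollama) pointing at
// http://localhost:11434. Everything below stays identical.

var app = builder.Build();

// Business logic depends on IChatClient, never on AzureOpenAIClient.
app.MapGet("/ask", async (IChatClient client, string q) =>
    (await client.CompleteAsync(q)).Message.Text);

app.Run();
```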
What should I build after my first API call?
Three steps in order. First, read the Microsoft.Extensions.AI deep-dive to understand the abstraction you just used. Second, take the Minimal API workshop to move your console code into a real HTTP API with streaming. Third, when production traffic arrives and you see 429 errors, the Fix 429 guide has the Polly patterns that handle it. Each step follows naturally from the last.