The Error
Your .NET application calls Azure OpenAI and receives this response:
```
Azure.RequestFailedException: 400 (Bad Request)
Content:
{
  "error": {
    "code": "content_filter",
    "message": "The response was filtered due to the prompt triggering Azure OpenAI's content management policy.",
    "status": 400,
    "innererror": {
      "code": "ResponsibleAIPolicyViolation",
      "content_filter_result": {
        "hate": { "filtered": true, "severity": "medium" },
        "self_harm": { "filtered": false, "severity": "safe" },
        "sexual": { "filtered": false, "severity": "safe" },
        "violence": { "filtered": false, "severity": "safe" }
      }
    }
  }
}
```
This is an HTTP 400 Bad Request — not a server error. The model did not fail; it refused. Azure OpenAI’s content management system evaluated your prompt or the model’s response against its configured policies and decided to block it. One category with "filtered": true is all it takes for the entire request to be rejected.
The inner error code ResponsibleAIPolicyViolation is the service-side identifier for this class of error. The content_filter_result block tells you exactly which content category triggered the block and at what severity level.
Fixes at a Glance
- Read the raw response body — catch `ClientResultException`, parse the JSON to identify which category was triggered, and log it server-side before returning a clean error to the user
- Configure your content filter policy — in Azure AI Foundry, adjust per-category thresholds for your deployment if legitimate prompts are being blocked
- Wrap in a graceful service layer — translate content filter errors into user-friendly responses without leaking policy configuration details
- Handle SK streaming silent failures — check `FinishReason` on every streaming chunk, because Semantic Kernel stops the stream without throwing when content is filtered mid-response
Root Cause: Content Management Categories
Azure OpenAI evaluates all prompts and completions against four content categories:
| Category | What It Covers | Severity Levels |
|---|---|---|
| Hate | Hate speech, discrimination, prejudice | safe / low / medium / high |
| Self-harm | Self-harm instructions or glorification | safe / low / medium / high |
| Sexual | Sexually explicit content | safe / low / medium / high |
| Violence | Violent instructions or glorification | safe / low / medium / high |
A "filtered": true on any single category blocks the entire request. The default policy blocks content at medium and high severity levels. Low severity content passes through by default, though this is configurable per deployment.
Both the prompt (what you send) and the completion (what the model returns) are independently filtered. The JSON error body indicates which direction triggered the block — prompt filtering typically includes a prompt_filter_results field alongside the content_filter_result.
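For orientation, here is a hedged sketch of how an unblocked chat completion reports both directions in the REST response body (the exact shape can vary by api-version; field names follow the error payload shown above):

```json
{
  "prompt_filter_results": [
    {
      "prompt_index": 0,
      "content_filter_results": {
        "hate": { "filtered": false, "severity": "safe" },
        "self_harm": { "filtered": false, "severity": "safe" },
        "sexual": { "filtered": false, "severity": "safe" },
        "violence": { "filtered": false, "severity": "safe" }
      }
    }
  ],
  "choices": [
    {
      "message": { "role": "assistant", "content": "..." },
      "content_filter_results": {
        "hate": { "filtered": false, "severity": "safe" },
        "self_harm": { "filtered": false, "severity": "safe" },
        "sexual": { "filtered": false, "severity": "safe" },
        "violence": { "filtered": false, "severity": "safe" }
      }
    }
  ]
}
```

The top-level results describe prompt filtering; the per-choice results describe completion filtering.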
Fix 1: Reading Content Filter Details from the Exception
The correct exception type to catch is ClientResultException from System.ClientModel (the base library of the current Azure.AI.OpenAI 2.x client), not RequestFailedException. Parse the raw response body with System.Text.Json to extract per-category details.
For structured observability on what gets logged here, see the OpenTelemetry for AI Applications in .NET guide — token-level tracing integrates well with content filter audit logging.
```csharp
using System.ClientModel;
using System.Text.Json;
using Azure.AI.OpenAI;
using OpenAI.Chat;

ChatClient chatClient = azureOpenAIClient.GetChatClient("my-gpt4o-deployment");

try
{
    ChatCompletion completion = await chatClient.CompleteChatAsync(
        new UserChatMessage(userPrompt));
    return completion.Content[0].Text;
}
catch (ClientResultException ex) when (ex.Status == 400)
{
    var responseBody = ex.GetRawResponse()?.Content.ToString();
    if (responseBody?.Contains("content_filter") == true)
    {
        // Parse to find which category was triggered
        using var doc = JsonDocument.Parse(responseBody);
        var filterResult = doc.RootElement
            .GetProperty("error")
            .GetProperty("innererror")
            .GetProperty("content_filter_result");

        // Log which category triggered the block
        foreach (var category in filterResult.EnumerateObject())
        {
            if (category.Value.GetProperty("filtered").GetBoolean())
            {
                _logger.LogWarning(
                    "Content filter blocked request. Category: {Category}, Severity: {Severity}",
                    category.Name,
                    category.Value.GetProperty("severity").GetString());
            }
        }

        throw new ContentPolicyException("Your message was blocked by content policy.", ex);
    }
    throw;
}
```
ContentPolicyException is your own custom exception — there is no built-in one in the Azure SDK. Define it as:
```csharp
public class ContentPolicyException(string message, Exception inner)
    : Exception(message, inner) { }
```
Never propagate ContentFilterException or similar names that you might see in older documentation. The SDK does not expose a typed content filter exception. Always parse the raw response body.
Fix 2: Configuring Content Filter Policies in Azure AI Foundry
If your content filter policy is blocking legitimate prompts, adjust the thresholds per deployment:
- Go to Azure AI Foundry (formerly Azure OpenAI Studio)
- Select your Azure OpenAI resource
- Navigate to Safety + Security > Content Filters
- Click Create content filter
- Adjust category thresholds separately for the Prompt and Completion directions
- Assign the new policy to your specific deployment under Deployments
Each category can be configured independently. For example, a medical education platform might need higher thresholds for self-harm content that is clinically relevant, while keeping hate content filtering at the default.
One hard constraint: on a standard Azure subscription, the self-harm category cannot be fully disabled, regardless of policy configuration. This is intentional — Microsoft’s responsible AI policies mandate minimum protections across all Azure OpenAI deployments.
Fix 3: Graceful C# Handling Pattern
A service wrapper that catches, logs, and translates content filter errors keeps the pattern consistent across your codebase and prevents accidental leakage of policy details to clients:
```csharp
using System.ClientModel;
using FluentResults; // Result<T> from FluentResults, or substitute your own result type
using Microsoft.Extensions.Logging;
using OpenAI.Chat;

public class SafeChatService
{
    private readonly ChatClient _chatClient;
    private readonly ILogger<SafeChatService> _logger;

    public SafeChatService(ChatClient chatClient, ILogger<SafeChatService> logger)
    {
        _chatClient = chatClient;
        _logger = logger;
    }

    public async Task<Result<string>> CompleteSafelyAsync(
        string userMessage,
        CancellationToken ct = default)
    {
        try
        {
            ChatCompletion completion = await _chatClient.CompleteChatAsync(
                [new UserChatMessage(userMessage)], cancellationToken: ct);
            return Result.Ok(completion.Content[0].Text);
        }
        catch (ClientResultException ex) when (ex.Status == 400 &&
            ex.GetRawResponse()?.Content.ToString()?.Contains("content_filter") == true)
        {
            _logger.LogWarning("Content filter blocked request for session");
            return Result.Fail<string>(
                "Your message was flagged by our content safety system. Please rephrase your request.");
        }
    }
}
```
Result<T> here represents a simple discriminated union — use FluentResults or your own. The pattern is what matters: the caller receives a typed failure result rather than an unhandled exception, and the raw Azure error stays server-side.
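If you prefer not to take a dependency, a minimal hand-rolled Result<T> is enough for this pattern. This is a hedged sketch, not FluentResults' actual API — the library offers a much richer version:

```csharp
// Minimal result type: holds either a value or an error message, never both.
public readonly record struct Result<T>(T? Value, string? Error)
{
    public bool IsSuccess => Error is null;
}

public static class Result
{
    public static Result<T> Ok<T>(T value) => new(value, null);
    public static Result<T> Fail<T>(string error) => new(default, error);
}
```

Callers branch on IsSuccess and render Error to the user; the raw Azure exception never leaves the service layer.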
Fix 4: Handling Streaming Silent Failures in Semantic Kernel
This is the trickiest scenario. When Azure OpenAI’s content filter blocks a response during streaming, Semantic Kernel (as of 1.x) stops the async stream without throwing an exception. Your await foreach loop simply ends early, leaving you with a truncated or empty response and no error signal.
This behaviour is tracked in SK Issue #5289. Until a fix lands in the SDK, guard every streaming loop with a FinishReason check:
```csharp
using System.Text;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var chatHistory = new ChatHistory("You are a helpful assistant.");
chatHistory.AddUserMessage(userMessage);

var chatService = kernel.GetRequiredService<IChatCompletionService>();
var fullResponse = new StringBuilder();
bool contentFiltered = false;

await foreach (var chunk in chatService.GetStreamingChatMessageContentsAsync(
    chatHistory, kernel: kernel))
{
    // Check the finish reason before trusting the chunk's content
    if (chunk.Metadata?.TryGetValue("FinishReason", out var finishReason) == true &&
        finishReason?.ToString() == "ContentFilter")
    {
        contentFiltered = true;
        break;
    }

    fullResponse.Append(chunk.Content);
    Console.Write(chunk.Content);
}

if (contentFiltered)
{
    Console.WriteLine("\n[Response blocked by content safety policy]");
}
```
The key is checking chunk.Metadata["FinishReason"] on every iteration before appending content. Do not assume that a completed await foreach means the response was clean. Check explicitly on every chunk.
For non-streaming Semantic Kernel invocations, wrap kernel.InvokePromptAsync in a try/catch for HttpOperationException — SK wraps the underlying ClientResultException in its own exception type for non-streaming calls.
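A hedged sketch of that non-streaming case (assuming a configured `Kernel` named `kernel`; `StatusCode` and `ResponseContent` are the properties SK's HttpOperationException exposes for the underlying HTTP failure):

```csharp
using System.Net;
using Microsoft.SemanticKernel;

try
{
    var result = await kernel.InvokePromptAsync(userMessage);
    Console.WriteLine(result);
}
catch (HttpOperationException ex) when (ex.StatusCode == HttpStatusCode.BadRequest &&
    ex.ResponseContent?.Contains("content_filter") == true)
{
    // The raw Azure error body is available on ResponseContent for logging;
    // return only a generic message to the user.
    Console.WriteLine("[Request blocked by content safety policy]");
}
```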
Prevention Patterns
The best content filter handling is avoiding unnecessary filter triggers in the first place. These patterns complement the general error handling approach covered in Fix Azure OpenAI 429 Too Many Requests in .NET.
- Pre-screen user input with Azure AI Content Safety. The `Azure.AI.ContentSafety` package lets you analyze text before forwarding it to Azure OpenAI. Call `contentSafetyClient.AnalyzeTextAsync()` on the user’s prompt first. This surfaces a clear, controlled error to the user rather than a confusing 400 from the AI endpoint.
- Keep system prompts minimal. Everything in the system prompt is evaluated by the prompt filter. A system prompt that includes detailed descriptions of harmful scenarios — even with the intent to instruct the model to avoid them — can itself trigger content filtering. Describe desired behaviour, not forbidden content.
- Test your content filter policy with controlled prompts. Before deploying a policy change, test it with prompts that span the severity spectrum for each category. Azure AI Foundry has a built-in playground for this. Log the results so you have a baseline to compare against after policy changes.
- Log blocked categories, not blocked content. When a filter triggers, log the category name and severity, not the user’s original message. The original message may itself contain the harmful content — you do not want it in your application logs.
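The pre-screening pattern above can be sketched with the Azure.AI.ContentSafety client. This is a hedged illustration: the endpoint, key variable, and severity threshold of 2 are assumptions for the example, not fixed rules.

```csharp
using Azure;
using Azure.AI.ContentSafety;

var contentSafetyClient = new ContentSafetyClient(
    new Uri("https://my-contentsafety.cognitiveservices.azure.com/"), // hypothetical endpoint
    new AzureKeyCredential(contentSafetyKey));

// Analyze the user's prompt before it ever reaches Azure OpenAI
AnalyzeTextResult analysis = await contentSafetyClient.AnalyzeTextAsync(
    new AnalyzeTextOptions(userPrompt));

// Collect categories at or above the application's chosen severity threshold
var flagged = analysis.CategoriesAnalysis
    .Where(c => (c.Severity ?? 0) >= 2)
    .Select(c => c.Category.ToString())
    .ToList();

if (flagged.Count > 0)
{
    // Controlled, user-friendly rejection instead of a downstream 400
    Console.WriteLine($"Input rejected by pre-screening: {string.Join(", ", flagged)}");
    return;
}
```

The severity threshold should mirror the filter policy on your Azure OpenAI deployment, so pre-screening rejects roughly the same inputs the endpoint would.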
Further Reading
- Azure OpenAI content filtering documentation
- Azure AI Content Safety SDK on NuGet
- Azure.AI.OpenAI on NuGet