
Fix Azure OpenAI Content Filter Error in C#

From GitHub Issue · .NET 9 · Azure.AI.OpenAI 2.1.0 · Microsoft.SemanticKernel 1.54.0
By Rajesh Mishra · Mar 21, 2026 · 9 min read
Verified Mar 2026 · .NET 9 · Azure.AI.OpenAI 2.1.0
In 30 Seconds

Azure OpenAI returns a 400 content_filter error when input or output violates configured content management policies. In C#, catch ClientResultException from System.ClientModel and parse ex.GetRawResponse()?.Content.ToString() to read category-level filter details. Configure policies in Azure AI Foundry. For SK streaming, the stream stops silently on content filter — check FinishReason on each chunk. Pre-screen prompts with Azure AI Content Safety to provide better UX.

⚠️
Error Fix Guide

Root cause analysis and verified fix. Code examples use Azure.AI.OpenAI 2.1.0.

✓ SOLVED

The Error

Your .NET application calls Azure OpenAI and receives this response:

System.ClientModel.ClientResultException: Service request failed.
Status: 400 (Bad Request)

Content:
{
  "error": {
    "code": "content_filter",
    "message": "The response was filtered due to the prompt triggering Azure OpenAI's content management policy.",
    "status": 400,
    "innererror": {
      "code": "ResponsibleAIPolicyViolation",
      "content_filter_result": {
        "hate": { "filtered": true, "severity": "medium" },
        "self_harm": { "filtered": false, "severity": "safe" },
        "sexual": { "filtered": false, "severity": "safe" },
        "violence": { "filtered": false, "severity": "safe" }
      }
    }
  }
}

This is an HTTP 400 Bad Request — not a server error. The model did not fail; it refused. Azure OpenAI’s content management system evaluated your prompt or the model’s response against its configured policies and decided to block it. One category with "filtered": true is all it takes for the entire request to be rejected.

The inner error code ResponsibleAIPolicyViolation is the service-level identifier for this class of error. The content_filter_result block tells you exactly which content category triggered the block and at what severity level.

Fixes at a Glance

  1. Read the raw response body — catch ClientResultException, parse the JSON to identify which category was triggered, and log it server-side before returning a clean error to the user
  2. Configure your content filter policy — in Azure AI Foundry, adjust per-category thresholds for your deployment if legitimate prompts are being blocked
  3. Wrap in a graceful service layer — translate content filter errors into user-friendly responses without leaking policy configuration details
  4. Handle SK streaming silent failures — check FinishReason on every streaming chunk, because Semantic Kernel stops the stream without throwing when content is filtered mid-response

Root Cause: Content Management Categories

Azure OpenAI evaluates all prompts and completions against four content categories:

Category     What It Covers                               Severity Levels
Hate         Hate speech, discrimination, prejudice       safe / low / medium / high
Self-harm    Self-harm instructions or glorification      safe / low / medium / high
Sexual       Sexually explicit content                    safe / low / medium / high
Violence     Violent instructions or glorification        safe / low / medium / high

A "filtered": true on any single category blocks the entire request. The default policy blocks content at medium and high severity levels. Low severity content passes through by default, though this is configurable per deployment.
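The default decision can be mirrored in a one-liner. This is only a sketch of the documented default for reasoning about logged severities; the real decision is made server-side by your deployment's policy:

```csharp
// Mirrors the documented default policy: medium/high blocked, safe/low allowed.
// The actual enforcement happens server-side in your deployment's policy.
static bool BlockedByDefaultPolicy(string severity) =>
    severity is "medium" or "high";

// BlockedByDefaultPolicy("medium") → true
// BlockedByDefaultPolicy("low")    → false
```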

Both the prompt (what you send) and the completion (what the model returns) are independently filtered. A filtered prompt is rejected with the 400 error shown above. A filtered completion usually surfaces differently: the call succeeds, but the response's finish reason is content_filter and the flagged output is withheld — which is why checking FinishReason matters, especially for streaming.

Fix 1: Reading Content Filter Details from the Exception

The correct exception type to catch is ClientResultException from System.ClientModel (the base client library used by Azure.AI.OpenAI 2.x), not the older RequestFailedException. Parse the raw response body with System.Text.Json to extract per-category details.

For structured observability on what gets logged here, see the OpenTelemetry for AI Applications in .NET guide — token-level tracing integrates well with content filter audit logging.

using System.ClientModel;
using Azure.AI.OpenAI;
using OpenAI.Chat;
using System.Text.Json;

ChatClient chatClient = azureOpenAIClient.GetChatClient("my-gpt4o-deployment");

try
{
    ChatCompletion completion = await chatClient.CompleteChatAsync(
        new UserChatMessage(userPrompt));

    return completion.Content[0].Text;
}
catch (ClientResultException ex) when (ex.Status == 400)
{
    var responseBody = ex.GetRawResponse()?.Content.ToString();

    if (responseBody?.Contains("content_filter") == true)
    {
        // Parse to find which category was triggered
        using var doc = JsonDocument.Parse(responseBody);
        var filterResult = doc.RootElement
            .GetProperty("error")
            .GetProperty("innererror")
            .GetProperty("content_filter_result");

        // Log which category triggered the block
        foreach (var category in filterResult.EnumerateObject())
        {
            if (category.Value.GetProperty("filtered").GetBoolean())
            {
                _logger.LogWarning(
                    "Content filter blocked request. Category: {Category}, Severity: {Severity}",
                    category.Name,
                    category.Value.GetProperty("severity").GetString());
            }
        }

        throw new ContentPolicyException("Your message was blocked by content policy.", ex);
    }

    throw;
}

ContentPolicyException is your own custom exception — there is no built-in one in the Azure SDK. Define it as:

public class ContentPolicyException(string message, Exception inner) : Exception(message, inner) { }

Do not catch ContentFilterException or similar names that you might see in older documentation. The SDK does not expose a typed content filter exception. Always parse the raw response body.
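At the web boundary, you can translate that exception into a clean client response. A sketch, assuming ASP.NET Core and the ContentPolicyException defined above:

```csharp
// Sketch: ASP.NET Core middleware that converts ContentPolicyException
// into a generic 400, keeping the raw Azure error server-side.
app.Use(async (context, next) =>
{
    try
    {
        await next(context);
    }
    catch (ContentPolicyException)
    {
        context.Response.StatusCode = StatusCodes.Status400BadRequest;
        await context.Response.WriteAsJsonAsync(
            new { error = "Your message was blocked by content policy." });
    }
});
```

Any exception-handling middleware or IExceptionHandler works equally well; the point is that the translation happens once, at the edge.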

Fix 2: Configuring Content Filter Policies in Azure AI Foundry

If your content filter policy is blocking legitimate prompts, adjust the thresholds per deployment:

  1. Go to Azure AI Foundry (formerly Azure OpenAI Studio)
  2. Select your Azure OpenAI resource
  3. Navigate to Safety + Security > Content Filters
  4. Click Create content filter
  5. Adjust category thresholds separately for the Prompt and Completion directions
  6. Assign the new policy to your specific deployment under Deployments

Each category can be configured independently. For example, a medical education platform might need higher thresholds for self-harm content that is clinically relevant, while keeping hate content filtering at the default.

One hard constraint: the self-harm category cannot be fully disabled in standard Azure subscriptions regardless of policy configuration. This is intentional and applies to all tiers. Microsoft’s responsible AI policies mandate minimum protections across all Azure OpenAI deployments.

Fix 3: Graceful C# Handling Pattern

A service wrapper that catches, logs, and translates content filter errors keeps the pattern consistent across your codebase and prevents accidental leakage of policy details to clients:

public class SafeChatService
{
    private readonly ChatClient _chatClient;
    private readonly ILogger<SafeChatService> _logger;

    public SafeChatService(ChatClient chatClient, ILogger<SafeChatService> logger)
    {
        _chatClient = chatClient;
        _logger = logger;
    }

    public async Task<Result<string>> CompleteSafelyAsync(
        string userMessage,
        CancellationToken ct = default)
    {
        try
        {
            ChatCompletion completion = await _chatClient.CompleteChatAsync(
                [new UserChatMessage(userMessage)], cancellationToken: ct);
            return Result.Ok(completion.Content[0].Text);
        }
        catch (ClientResultException ex) when (ex.Status == 400 &&
            ex.GetRawResponse()?.Content.ToString()?.Contains("content_filter") == true)
        {
            _logger.LogWarning("Content filter blocked request for session");
            return Result.Fail("Your message was flagged by our content safety system. Please rephrase your request.");
        }
    }
}

Result<T> here represents a simple discriminated union — use FluentResults or your own. The pattern is what matters: the caller receives a typed failure result rather than an unhandled exception, and the raw Azure error stays server-side.
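If you do not want a dependency, a minimal Result&lt;T&gt; sufficient for this pattern might look like the following. This is a sketch, not FluentResults' actual API:

```csharp
// Minimal Result<T> sketch: success-with-value or failure-with-message.
// FluentResults offers a richer version with non-generic helpers and
// implicit conversions; this is just the shape the caller needs.
public readonly struct Result<T>
{
    public bool IsSuccess { get; }
    public T? Value { get; }
    public string? Error { get; }

    private Result(bool ok, T? value, string? error)
    {
        IsSuccess = ok;
        Value = value;
        Error = error;
    }

    public static Result<T> Ok(T value) => new(true, value, null);
    public static Result<T> Fail(string error) => new(false, default, error);
}
```

With this shape the service above would call Result&lt;string&gt;.Ok(...) and Result&lt;string&gt;.Fail(...); FluentResults adds the non-generic Result.Ok/Result.Fail entry points on top.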

Fix 4: Handling Streaming Silent Failures in Semantic Kernel

This is the trickiest scenario. When Azure OpenAI’s content filter blocks a response during streaming, Semantic Kernel (as of 1.x) stops the async stream without throwing an exception. Your await foreach loop simply ends early, leaving you with a truncated or empty response and no error signal.

This behaviour is tracked in SK Issue #5289. Until a fix lands in the SDK, guard every streaming loop with a FinishReason check:

using System.Text;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var chatHistory = new ChatHistory("You are a helpful assistant.");
chatHistory.AddUserMessage(userMessage);

var chatService = kernel.GetRequiredService<IChatCompletionService>();
var fullResponse = new StringBuilder();
bool contentFiltered = false;

await foreach (var chunk in chatService.GetStreamingChatMessageContentsAsync(
    chatHistory, kernel: kernel))
{
    if (chunk.Metadata?.TryGetValue("FinishReason", out var finishReason) == true &&
        finishReason?.ToString() == "ContentFilter")
    {
        contentFiltered = true;
        break;
    }

    fullResponse.Append(chunk.Content);
    Console.Write(chunk.Content);
}

if (contentFiltered)
{
    Console.WriteLine("\n[Response blocked by content safety policy]");
}

The key is checking chunk.Metadata["FinishReason"] on every iteration before appending content. Do not assume that a completed await foreach means the response was clean. Check explicitly on every chunk.

For non-streaming Semantic Kernel invocations, wrap kernel.InvokePromptAsync in a try/catch for HttpOperationException — SK wraps the underlying ClientResultException in its own exception type for non-streaming calls.
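A sketch of that non-streaming guard, assuming kernel and userMessage are in scope and that HttpOperationException exposes StatusCode and ResponseContent (true for SK 1.x HTTP failures — verify against your SK version):

```csharp
using System.Net;
using Microsoft.SemanticKernel;

// Sketch: non-streaming Semantic Kernel call with content filter handling.
// ResponseContent carries the raw Azure error body on HTTP failures.
try
{
    var result = await kernel.InvokePromptAsync(userMessage);
    Console.WriteLine(result.ToString());
}
catch (HttpOperationException ex) when (
    ex.StatusCode == HttpStatusCode.BadRequest &&
    ex.ResponseContent?.Contains("content_filter") == true)
{
    Console.WriteLine("[Request blocked by content safety policy]");
}
```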

Prevention Patterns

The best content filter handling is avoiding unnecessary filter triggers in the first place. These patterns complement the general error handling approach covered in Fix Azure OpenAI 429 Too Many Requests in .NET.

  1. Pre-screen user input with Azure AI Content Safety. The Azure.AI.ContentSafety package lets you analyze text before forwarding it to Azure OpenAI. Call contentSafetyClient.AnalyzeTextAsync() on the user’s prompt first. This surfaces a clear, controlled error to the user rather than a confusing 400 from the AI endpoint.

  2. Keep system prompts minimal. A system prompt that includes detailed descriptions of harmful scenarios — even with the intent to instruct the model to avoid them — is itself subject to prompt filtering and can trigger a block. Describe desired behaviour, not forbidden content.

  3. Test your content filter policy with controlled prompts. Before deploying a policy change, test it with prompts that span the severity spectrum for each category. Azure AI Foundry has a built-in playground for this. Log the results so you have a baseline to compare against after policy changes.

  4. Log blocked categories, not blocked content. When a filter triggers, log the category name and severity, not the user’s original message. The original message may itself contain the harmful content — you do not want it in your application logs.
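The pre-screening in point 1 might look like this. A sketch against Azure.AI.ContentSafety 1.0; endpoint and key are placeholders, and the severity cutoff of 2 mirrors the service's default four-level output (0/2/4/6) — an assumption to tune against your own policy:

```csharp
using System.Linq;
using Azure;
using Azure.AI.ContentSafety;

// Sketch: pre-screen user input before it reaches Azure OpenAI.
// endpoint/key are placeholders; severity >= 2 is an assumed cutoff.
var client = new ContentSafetyClient(
    new Uri(endpoint), new AzureKeyCredential(key));

AnalyzeTextResult analysis =
    await client.AnalyzeTextAsync(new AnalyzeTextOptions(userPrompt));

var flagged = analysis.CategoriesAnalysis
    .Where(c => (c.Severity ?? 0) >= 2)
    .Select(c => c.Category.ToString())
    .ToList();

if (flagged.Count > 0)
{
    // Surface a controlled message instead of letting Azure OpenAI 400.
    Console.WriteLine($"Input flagged for: {string.Join(", ", flagged)}");
    return;
}
```

Per point 4, log the flagged category names, not the user's original message.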

⚠ Production Considerations

  • Do not expose the raw content filter JSON response to end users. It contains category severity levels that reveal information about your prompt policy configuration. Log it server-side and return a generic 'content policy violation' message to the client.
  • Relaxing content filter thresholds or disabling categories in Azure AI Foundry is gated: Microsoft requires an approved modified-content-filters application before a policy can go below the defaults. Without that approval, category thresholds remain at Microsoft defaults and cannot be changed from the API side.

🧠 Architect’s Note

Treat content filter violations as a UX signal, not just an error. Log the blocked category and user session context. If a legitimate use case repeatedly triggers filters, that is data for a content policy review — not something to silently retry around.

AI-Friendly Summary

Azure OpenAI returns a 400 content_filter error when input or output violates configured content management policies. In C#, catch ClientResultException from System.ClientModel and parse ex.GetRawResponse()?.Content.ToString() to read category-level filter details. Configure policies in Azure AI Foundry. For SK streaming, the stream stops silently on content filter — check FinishReason on each chunk. Pre-screen prompts with Azure AI Content Safety to provide better UX.

Key Takeaways

  • Content filter errors return HTTP 400 with error code 'content_filter' — not a 5xx error
  • Parse ex.GetRawResponse()?.Content.ToString() to read per-category severity details
  • Configure content filter policies per-deployment in Azure AI Foundry Safety + Security
  • SK streaming stops silently on content filter — always check FinishReason on each chunk
  • Pre-screen with Azure AI Content Safety SDK for better user-facing error messages

Implementation Checklist

  • Catch ClientResultException with Status == 400 and check for 'content_filter' error code in JSON
  • Log the full raw response body to understand which category triggered the block
  • Review your Azure AI Foundry content filter policy and adjust thresholds if legitimate prompts are blocked
  • Add Azure AI Content Safety pre-screening for user-generated input
  • Add FinishReason.ContentFilter detection to all SK streaming loops
  • Test with controlled prompts that trigger each filter category in your development environment

Frequently Asked Questions

What HTTP status code does Azure OpenAI return for content filter violations?

Azure OpenAI returns HTTP 400 Bad Request with an error code of 'content_filter' when the content management policy blocks a prompt or completion. The inner error includes category-level details (hate, sexual, violence, self-harm) and severity levels (safe, low, medium, high).

How do I read content filter details from a ClientResultException in C#?

Catch ClientResultException (from System.ClientModel), check Status == 400, then call ex.GetRawResponse()?.Content.ToString() to get the JSON body. Parse it with System.Text.Json to extract the innererror.content_filter_result field with per-category severity scores.

Can I configure Azure OpenAI content filter policies to be less restrictive?

Yes. In Azure AI Foundry (formerly Azure OpenAI Studio), navigate to Safety + Security > Content Filters and create or modify a content filter policy. You can adjust thresholds per category. Some categories (self-harm) cannot be disabled regardless of tier.

Why does Semantic Kernel streaming sometimes stop without throwing an exception?

This is a known issue (SK Issue #5289). When Azure OpenAI's content filter blocks a streaming response mid-stream, the stream ends silently without throwing. You must check the FinishReason of each StreamingChatMessageContent chunk — if it is ContentFilter, the content was blocked.

How do I pre-screen prompts before sending to Azure OpenAI in C#?

Use Azure AI Content Safety SDK (Azure.AI.ContentSafety package). Call contentSafetyClient.AnalyzeTextAsync() on the user's prompt before forwarding to Azure OpenAI. This lets you surface a clear error to the user instead of receiving a confusing 400 from the AI endpoint.

What is the difference between prompt content filtering and completion content filtering?

Prompt filtering analyzes the input message you send to Azure OpenAI and, when triggered, returns the 400 content_filter error with details in the 'innererror.content_filter_result' field. Completion filtering analyzes the model's output; when it triggers, the call typically succeeds but the finish reason is content_filter and the flagged content is withheld.

Does content filter error handling differ between Azure.AI.OpenAI SDK and Semantic Kernel?

For the Azure.AI.OpenAI SDK, catch ClientResultException and parse the raw response. In Semantic Kernel, wrap kernel.InvokePromptAsync or InvokeStreamingAsync in try/catch for HttpOperationException (SK wraps the underlying Azure exception). For streaming in SK, also check chunk.FinishReason after each iteration.


#Azure OpenAI #Content Filter #Error Fix #.NET AI #Responsible AI