Azure AI Content Safety Container Plugin is now live on Dify Plugin Marketplace! This content moderation tool automatically detects and filters harmful content in user inputs and AI outputs, keeping your apps safe and compliant.
With this plugin, Dify users get:
Real-time moderation: Auto-detect inappropriate content in text and images within workflows
Multi-layered protection: Covers hate speech, violence, sexual content, self-harm, and more
Flexible deployment: Run locally in containers to meet privacy and compliance needs
Custom filtering: Use BlockList to precisely control what gets filtered
Whether you're building chatbots, content generators, or social platforms, this plugin adds reliable safety protection to your AI apps.
1. Azure AI Content Safety
1.1. Overview
Azure AI Content Safety is an AI service that detects harmful user-generated and AI-generated content in applications and services. It includes text and image APIs that let you detect harmful material.

1.2. Harm categories
Content Safety recognizes four distinct categories of objectionable content: Hate, Sexual, Violence, and Self-harm. Classification can be multi-labeled. For example, when a text sample goes through the text moderation model, it could be classified as both Sexual content and Violence.

1.3. Severity levels
Every harm category the service applies also comes with a severity level rating. The severity level is meant to indicate the severity of the consequences of showing the flagged content.
Text: The current version of the text model supports the full 0-7 severity scale. The classifier detects among all severities along this scale. If the user specifies, it can return severities in the trimmed scale of 0, 2, 4, and 6; each two adjacent levels are mapped to a single level.
Image: The current version of the image model supports the trimmed version of the full 0-7 severity scale. The classifier only returns severities 0, 2, 4, and 6.
Image with text: The current version of the multimodal model supports the full 0-7 severity scale. The classifier detects among all severities along this scale. If the user specifies, it can return severities in the trimmed scale of 0, 2, 4, and 6; each two adjacent levels are mapped to a single level.
1.4. Azure AI Content Safety Containers
Containers let you use a subset of the Azure AI Content Safety features in your own environment. With content safety containers, you can build a content safety application architecture optimized for both robust cloud capabilities and edge locality. Containers help you meet specific security and data governance requirements.
The following table lists the content safety containers available in the Microsoft Container Registry (MCR). The table also lists the features supported by each container and the latest version of the container.

1.5. Pricing
Azure AI Content Safety offers two types of SKUs. The first is the Standard SKU, which charges based on the actual number of text and image requests. The second is the Commitment SKU, which is deployed fully on-premises via containers, does not interact with the Internet, and is charged on a commitment basis.
Additionally, the Standard SKU also supports containerized on-premises deployment; however, it requires interaction with the cloud Billing Endpoint to periodically synchronize billing data.

1.6. Azure AI Content Safety Containers for Dify Plugin
The Azure AI Content Safety Container has now been packaged as a Dify plugin and is officially listed on the Dify Marketplace. This plugin offers content moderation capabilities for both TEXT and IMAGE types. The latest version, v0.0.2, includes the following features:
Unified Moderation: Analyze both text and images in a single tool.
Custom Configuration: Support for custom API endpoints and optional authentication headers.
Text Blocklists: Utilize blocklists for more precise text content filtering.
Combined Results: Get a single, structured result summarizing findings from both text and image analysis.
Clear Decisions: Outputs a clear ALLOW or DENY check result.
Detailed & Formatted Output: Provides formatted violation details.
Raw Data Access: Includes a RawResults output with the original JSON for advanced use cases.

2. Install and Run Containers
2.1. Prerequisites
You must meet the following prerequisites before you use content safety containers.
An Azure subscription.
A content safety service resource with the standard (S) pricing tier.
2.2. Billing Arguments
Content safety containers aren't licensed to run without being connected to Azure for metering. You must configure your container to always communicate billing information with the metering service.
Three primary parameters for all Azure AI containers are required. The Microsoft Software License Terms must be present with a value of accept. An endpoint URL and API key are also needed.
Queries to the container are billed at the pricing tier of the Azure resource that's used for the ApiKey parameter. The docker run command starts the container when all three of the following options are provided with valid values:
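A minimal sketch of such a docker run invocation, assuming the standard Azure AI container billing options (Eula, Billing, ApiKey); {ENDPOINT_URI} and {API_KEY} are placeholders for your resource's endpoint and key, and you can swap text-analyze for image-analyze:

```bash
# Placeholders: {ENDPOINT_URI} and {API_KEY} come from your Content Safety resource.
# --gpus all assumes the NVIDIA Container Toolkit is installed (see section 2.5).
docker run --rm -it -p 5000:5000 --gpus all \
  mcr.microsoft.com/azure-cognitive-services/contentsafety/text-analyze:latest \
  Eula=accept \
  Billing={ENDPOINT_URI} \
  ApiKey={API_KEY}
```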

NOTE
The container reports usage about every 10 to 15 minutes. If the container doesn't connect to Azure within the allowed time window, it continues to run but doesn't serve queries until the billing endpoint is restored. The connection is retried 10 times at the same 10-to-15-minute interval; if the container can't reach the billing endpoint within those 10 tries, it stops serving requests.
2.3. Host requirements and recommendations
The host is an x64-based computer that runs the Docker container. The following table describes the minimum and recommended specifications for the content safety containers. It applies to both the text and image containers.

Content safety containers require NVIDIA CUDA for optimal performance. The container is tested on CUDA 11.8 and CUDA 12.6. The minimum GPU requirement for these containers is an NVIDIA T4; however, we recommend an A100 for optimal performance.
2.4. Performance
Even with identical GPUs, performance can fluctuate based on the GPU load and the specific configuration of the environment. The benchmark data we provide should be used as a reference point when considering the deployment of content safety containers in your environment. For the most accurate assessment, we recommend conducting tests within your specific environment.
Analyze Text

Analyze Image

2.5. Steps
NOTE
All of the following steps are performed on Ubuntu 24.04 with the security type set to Trusted Launch virtual machines.
Install Docker
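A minimal sketch using Docker's official convenience script, one of several supported install methods for Ubuntu:

```bash
# Install Docker Engine via Docker's convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Optional: run docker without sudo (takes effect after re-login)
sudo usermod -aG docker $USER
```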
Install NVIDIA Driver
If you select Trusted launch virtual machines as the Security Type when creating an Azure VM, you must use the official ubuntu-drivers tool provided by Ubuntu to install the NVIDIA driver. Otherwise, the driver will not be loaded into the kernel, and the system will fail to detect the GPU.
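A sketch of the ubuntu-drivers route (the tool selects the recommended driver for the detected GPU):

```bash
# Install Ubuntu's driver utility and let it pick the recommended NVIDIA driver
sudo apt update
sudo apt install -y ubuntu-drivers-common
sudo ubuntu-drivers install
sudo reboot
# After reboot, confirm the kernel loaded the driver and sees the GPU
nvidia-smi
```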
Install CUDA Toolkit
As mentioned above, the Content Safety Container has been tested on both CUDA 11.8 and 12.6. It is recommended to install CUDA 12.6. Please follow the steps below:
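One way to do this is via NVIDIA's network repository for Ubuntu 24.04; verify the keyring package version against NVIDIA's current instructions before running:

```bash
# Add NVIDIA's CUDA repository for Ubuntu 24.04, then install CUDA Toolkit 12.6
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
sudo apt-get install -y cuda-toolkit-12-6
```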
Install NVIDIA Container Toolkit
The NVIDIA Container Toolkit is a collection of libraries and utilities enabling users to build and run GPU-accelerated containers.
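A sketch following NVIDIA's documented apt installation (see the install guide linked in the references):

```bash
# Add NVIDIA's repository key and package list
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
# Configure Docker to use the NVIDIA runtime, then restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```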
Get Azure Content Safety ApiKey and Endpoint
Azure Portal: Home / Content Safety / <your_content_safety_resource> / Resource Management / Keys and Endpoint

Install content safety container
The content safety container images for all supported versions can be found in the Microsoft Container Registry (MCR) syndicate. They reside within the azure-cognitive-services/contentsafety repository and are named text-analyze and image-analyze.
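To fetch them, a minimal sketch; the latest tag is an assumption, and for production you would pin a specific version tag listed on MCR:

```bash
# Pull the text and image analysis container images from MCR
docker pull mcr.microsoft.com/azure-cognitive-services/contentsafety/text-analyze:latest
docker pull mcr.microsoft.com/azure-cognitive-services/contentsafety/image-analyze:latest
```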
Verify
There are several ways to validate that the container is running. Locate the External IP address and exposed port of the container in question, and open your preferred web browser. Use the various request URLs that follow to validate the container is running.
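For example, assuming the container listens on localhost:5000, the standard probe endpoints exposed by Azure AI containers look like this (adjust host and port to your deployment):

```bash
# Container home page
curl http://localhost:5000/
# Readiness probe: confirms the container can accept queries
curl http://localhost:5000/ready
# Validates the API key without running a query
curl http://localhost:5000/status
# A Swagger UI is also served at http://localhost:5000/swagger
```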

You can also view logs related to usage reporting and billing data in the container logs, as shown below:
Text Analyze
Image Analyze
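To inspect these entries from the host, a sketch; the container name and the log keywords to filter on are assumptions:

```bash
# Tail the container logs and filter for billing/usage entries
docker logs -f <container_name> 2>&1 | grep -i -E "billing|meter"
```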
Using Block List
The analyze text container supports a blocklist feature, which allows you to block custom terms. You manage these blocklists with CSV files, and you can use multiple CSV files for multiple blocklists. A sketch of the run command with the blocklist mount follows.
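This sketch reuses the billing options from section 2.2; {/path/on/host}, {ENDPOINT_URI}, and {API_KEY} are placeholders:

```bash
# Mount the host blocklist folder and point BLOCKLIST_DIR at the mount path
docker run --rm -it -p 5000:5000 --gpus all \
  -v {/path/on/host}:/tmp/blocklist \
  -e BLOCKLIST_DIR=/tmp/blocklist \
  mcr.microsoft.com/azure-cognitive-services/contentsafety/text-analyze:latest \
  Eula=accept Billing={ENDPOINT_URI} ApiKey={API_KEY}
```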
In the command above, replace {/path/on/host} with the path to the blocklist folder on your host machine. This mounts the blocklist directory from your host machine into the container at the location specified by the BLOCKLIST_DIR=/tmp/blocklist environment variable.
NOTE
The analyze text container uses an exact match method for the blocklist. All items in the blocklist will be converted to lowercase before the matching process. This means, for instance, that if you have Contoso in your blocklist, both "Contoso" and "contoso" from your input are considered a match.
3. Quickstart with Analyze API
3.1. API Call
Sample Code - REST API
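Below is a hedged sketch of calling the container endpoints with curl, modeled on the public Analyze API shape and the api-version used elsewhere in this post (2024-05-01); the host, port, and exact api-version supported by your container are assumptions to verify against your deployment:

```bash
# Text analysis against a local container
curl -s -X POST "http://localhost:5000/contentsafety/text:analyze?api-version=2024-05-01" \
  -H "Content-Type: application/json" \
  -d '{
        "text": "Sample text to check",
        "blocklistNames": [],
        "haltOnBlocklistHit": false,
        "outputType": "FourSeverityLevels"
      }'

# Image analysis: the image is submitted as base64 content
curl -s -X POST "http://localhost:5000/contentsafety/image:analyze?api-version=2024-05-01" \
  -H "Content-Type: application/json" \
  -d "{\"image\": {\"content\": \"$(base64 -w0 ./sample.jpg)\"}}"
```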
Parameters
Text Analyze
Image Analyze
3.2. Output
Samples
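For reference, a response from the text endpoint generally has the following shape (per the public Analyze API; the values below are illustrative, not real output):

```json
{
  "blocklistsMatch": [],
  "categoriesAnalysis": [
    { "category": "Hate", "severity": 2 },
    { "category": "SelfHarm", "severity": 0 },
    { "category": "Sexual", "severity": 0 },
    { "category": "Violence", "severity": 0 }
  ]
}
```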
Parameters

4. Using Azure AI Content Safety in Dify
4.1. Prerequisites
Deploy Azure AI Content Safety Container
Before using this plugin, make sure you have an Azure AI Content Safety Container properly set up and running. See Install and run content safety containers with Docker for setup instructions. Please verify that your container is accessible and responding to API requests before configuring this plugin.
Update Dify ENV
When users send images to the chatbox, a url that can be used to access each image is generated in sys.files (each image corresponds to one url). The image moderation tool fetches the image via these urls, converts it to base64, and then sends it to the Image Analyze API for review. Therefore, the correct FILES_URL or CONSOLE_API_URL must be set so that an accessible url is generated. Generally, this should match the main domain used to access the Dify Portal.
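For example, in a Docker Compose deployment of Dify, a minimal sketch (the domain is a placeholder):

```bash
# In Dify's .env: point FILES_URL at the same domain used to reach the Dify Portal
FILES_URL=https://dify.example.com
```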
The structure of sys.files is as follows:
4.2. Steps
Get Azure AI Content Safety Container Tools
Azure AI Content Safety Container can be installed via Plugin Marketplace, GitHub, or Local Package File. Please choose the installation method that best suits your needs. If you are installing via Local Package File, please set FORCE_VERIFYING_SIGNATURE=false for the plugin-daemon component, as in the sketch below.
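A minimal sketch for a Docker Compose deployment; the plugin_daemon service name is an assumption, so check your compose file:

```bash
# Append to the .env used by Dify's docker-compose deployment, then recreate the service
echo "FORCE_VERIFYING_SIGNATURE=false" >> .env
docker compose up -d plugin_daemon
```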
Authentication
On the Dify navigation page, go to [Tools] > [Azure AI Content Safety Container] > [To Authorize] to fill in the API Endpoint, API Version and optional headers.


For example:
API Endpoint: https://xxx.azure-api.net
API Version: 2024-05-01
Custom Header Key: Ocp-Apim-Subscription-Key
Custom Header Value: *******************************
Using the tool
You can use this tool in Chatflow or Workflow. The tool accepts both text and image inputs.
Parameters:
Text to Analyze: The text content to analyze.
Images to Analyze: The image files to analyze.
Text Blocklist Names: Comma-separated list of blocklist names for text analysis.
Halt on Blocklist Hit: Whether to stop text analysis if a blocklist item is matched.
Image Requirements:
Maximum size: 7,200 x 7,200 pixels
Maximum file size: 4 MB
Minimum size: 50 x 50 pixels
Text Requirements:
Default maximum length: 10K characters (split longer texts as needed)
All parameters are optional. The tool automatically detects when Text or Image inputs are provided (non-empty) and calls the corresponding APIs for content moderation accordingly.


4.3. Output Variables
The tool provides several output variables for use in your workflow:
CheckResult: The final decision (ALLOW, DENY, or ERROR).
Details: A user-friendly, formatted string explaining the violations (only present if CheckResult is DENY).
RawResults: A JSON object containing the raw, unmodified responses from the Azure APIs. This is useful for custom parsing or logging.
Example RawResults structure:
4.4. Examples
Example 1: Text Moderation – Harmful Category

Example 2: Text Moderation – Using Block List

Example 3: Image Moderation – Harmful Category (Single Image, Multiple Images)


Example 4: Text and Image Moderation with Block List

5. Reference
https://learn.microsoft.com/en-us/azure/ai-services/content-safety/overview
https://learn.microsoft.com/en-us/azure/ai-services/content-safety/concepts/harm-categories
https://learn.microsoft.com/en-us/azure/ai-services/content-safety/how-to/containers/text-container
https://docs.azure.cn/en-us/virtual-machines/linux/n-series-driver-setup#ubuntu
https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#id7
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
https://github.com/HeyJiqingCode/AzureAIContentSafetyContainer-DifyPlugin
https://github.com/prompt-security/ps-fuzz/tree/main/ps_fuzz/attack_data
https://github.com/alex000kim/nsfw_data_scraper/tree/main/raw_data