How to

Build secure AI apps on Dify with Azure AI Content Safety Container Plugin

The Azure AI Content Safety Container Plugin brings seamless, real-time moderation to Dify apps, filtering harmful text and images with customizable controls. It offers robust, private deployment options and clear, actionable results to keep your platforms safe and compliant.

Jiqing You

Sr Technical Specialist, AI Apps | Dify Contributor

Written on Jul 1, 2025

Azure AI Content Safety Container Plugin is now live on Dify Plugin Marketplace! This content moderation tool automatically detects and filters harmful content in user inputs and AI outputs, keeping your apps safe and compliant.

With this plugin, Dify users get:

  • Real-time moderation: Auto-detect inappropriate content in text and images within workflows

  • Multi-layered protection: Covers hate speech, violence, porn, self-harm, and more

  • Flexible deployment: Run locally in containers to meet privacy and compliance needs

  • Custom filtering: Use BlockList to precisely control what gets filtered

Whether you're building chatbots, content generators, or social platforms, this plugin adds reliable safety protection to your AI apps.

1. Azure AI Content Safety

1.1. Overview

Azure AI Content Safety is an AI service that detects harmful user-generated and AI-generated content in applications and services. It includes text and image APIs that let you detect harmful material.

1.2. Harm categories

Content Safety recognizes four distinct categories of objectionable content: Hate, Sexual, Violence, and Self-harm. Classification can be multi-label. For example, when a text sample goes through the text moderation model, it could be classified as both Sexual content and Violence.

1.3. Severity levels

Every harm category the service applies also comes with a severity level rating. The severity level is meant to indicate the severity of the consequences of showing the flagged content.

  • Text: The current version of the text model supports the full 0-7 severity scale. The classifier detects among all severities along this scale. If the user specifies, it can return severities in the trimmed scale of 0, 2, 4, and 6; each two adjacent levels are mapped to a single level.

  • Image: The current version of the image model supports the trimmed version of the full 0-7 severity scale. The classifier only returns severities 0, 2, 4, and 6.

  • Image with text: The current version of the multimodal model supports the full 0-7 severity scale. The classifier detects among all severities along this scale. If the user specifies, it can return severities in the trimmed scale of 0, 2, 4, and 6; each two adjacent levels are mapped to a single level.
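
For example, a passage that the full-scale text model rates at severity 3 for Violence is returned as severity 2 on the trimmed scale, since each pair of adjacent levels (here 2 and 3) maps to a single level.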

1.4. Azure AI Content Safety Containers

Containers let you use a subset of the Azure AI Content Safety features in your own environment. With content safety containers, you can build a content safety application architecture optimized for both robust cloud capabilities and edge locality. Containers help you meet specific security and data governance requirements.

The content safety containers currently available from the Microsoft Container Registry (MCR) are the text analysis and image analysis containers; the MCR listing documents the features supported by each container and its latest version.

1.5. Pricing

Azure AI Content Safety offers two types of SKUs. The first is the Standard SKU, which charges based on the actual number of text and image requests. The second is the Commitment SKU, which is deployed fully on-premises via containers, does not interact with the Internet, and is charged based on a commitment.

Additionally, for the Standard SKU, containerized on-premises deployment is also supported; however, it requires interaction with the cloud Billing Endpoint to periodically synchronize billing data.

1.6. Azure AI Content Safety Containers for Dify Plugin

The Azure AI Content Safety Container has now been packaged as a Dify plugin and is officially listed on the Dify Marketplace. This plugin offers content moderation capabilities for both TEXT and IMAGE types. The latest version, v0.0.2, includes the following features:

  • Unified Moderation: Analyze both text and images in a single tool.

  • Custom Configuration: Support for custom API endpoints and optional authentication headers.

  • Text Blocklists: Utilize blocklists for more precise text content filtering.

  • Combined Results: Get a single, structured result summarizing findings from both text and image analysis.

  • Clear Decisions: Outputs a clear ALLOW or DENY check result.

  • Detailed & Formatted Output: Provides formatted violation details.

  • Raw Data Access: Includes a RawResults output with the original JSON for advanced use cases.

2. Install and Run Containers

2.1. Prerequisites

You must meet the following prerequisites before you use content safety containers.

  • An Azure subscription.

  • A content safety service resource with the standard (S) pricing tier.

2.2. Billing Arguments

Content safety containers aren't licensed to run without being connected to Azure for metering. You must configure your container to always communicate billing information with the metering service.

Three primary parameters are required for all Azure AI containers: the Microsoft Software License Terms must be accepted with a value of accept (Eula), and an endpoint URL (Billing) and API key (ApiKey) are also needed.

Queries to the container are billed at the pricing tier of the Azure resource that's used for the ApiKey parameter.

The docker run command starts the container once all three of these options are provided with valid values.
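
A sketch of the general command shape (complete, environment-specific examples follow in section 2.5):

docker run -itd -p 5000:5000 \
mcr.microsoft.com/azure-cognitive-services/contentsafety/text-analyze:latest \
Eula=accept \
Billing="<Endpoint>" \
ApiKey="<ApiKey>"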

NOTE

The container reports usage about every 10 to 15 minutes. If the container can't connect to Azure within the allowed time window, it continues to run but doesn't serve queries until the billing endpoint is restored. The connection is retried 10 times at the same 10-to-15-minute interval; if the container still can't reach the billing endpoint after those 10 tries, it stops serving requests.

2.3. Host requirements and recommendations

The host is an x64-based computer that runs the Docker container. The same minimum and recommended host specifications apply to both the text and the image container; see the official container documentation for the exact CPU, memory, and GPU figures.

Content safety containers require NVIDIA CUDA for optimal performance. The container is tested on CUDA 11.8 and CUDA 12.6. The minimum GPU requirement for these containers is an NVIDIA T4; however, we recommend an A100 for optimal performance.
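
Before provisioning, you can confirm which NVIDIA GPU the host actually exposes (assuming pciutils is installed, as it is on most Ubuntu images):

# List NVIDIA devices visible to the host
lspci | grep -i nvidia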

2.4. Performance

Even with identical GPUs, performance can fluctuate based on the GPU load and the specific configuration of the environment. The benchmark data we provide should be used as a reference point when considering the deployment of content safety containers in your environment. For the most accurate assessment, we recommend conducting tests within your specific environment.

The published benchmark figures cover both the Analyze Text and the Analyze Image API.

2.5. Steps

NOTE

All of the following steps are performed on Ubuntu 24.04 with the security type set to Trusted Launch virtual machines.

  1. Install Docker

# Update the packages list
apt update

# Install docker
apt install docker.io -y
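
Optionally, confirm that Docker is installed and the daemon is active before continuing:

# Verify the Docker installation
docker --version
systemctl is-active docker
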
  2. Install NVIDIA Driver

If you select Trusted launch virtual machines as the Security Type when creating an Azure VM, you must use the official ubuntu-drivers tool provided by Ubuntu to install the NVIDIA driver. Otherwise, the driver will not be loaded into the kernel and the system will fail to detect the GPU.

# Check whether Secure Boot is enabled
mokutil --sb-state

# Install ubuntu-drivers
apt update && apt install -y ubuntu-drivers-common

# Check available versions (for servers)
ubuntu-drivers list --gpgpu

# Install the driver
ubuntu-drivers install --gpgpu nvidia:570-server

# Check the driver version
cat /proc/driver/nvidia/version

# Install utils
apt install -y nvidia-utils-570-server

# Verify
nvidia-smi

# Upgrade & clean (optional)
ubuntu-drivers install --gpgpu

  3. Install CUDA Toolkit

As mentioned above, the Content Safety Container has been tested on both CUDA 11.8 and 12.6. It is recommended to install CUDA 12.6. Please follow the steps below:

# Download the deb package
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb

# Add the NVIDIA repository key
dpkg -i cuda-keyring_1.1-1_all.deb

# Update the packages list
apt update

# Check available versions
apt search cuda-toolkit

# Install a specific version
apt -y install cuda-toolkit-12-6

# Add nvcc to PATH
echo 'export PATH=/usr/local/cuda-12.6/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
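
# Confirm the CUDA compiler is on the PATH
nvcc --version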

  4. Install NVIDIA Container Toolkit

The NVIDIA Container Toolkit is a collection of libraries and utilities enabling users to build and run GPU-accelerated containers.

# Configure the production repository
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# Update the packages list
apt update

# Install
export NVIDIA_CONTAINER_TOOLKIT_VERSION=1.17.8-1
apt-get install -y \
nvidia-container-toolkit=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
nvidia-container-toolkit-base=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
libnvidia-container-tools=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
libnvidia-container1=${NVIDIA_CONTAINER_TOOLKIT_VERSION}

# Register the NVIDIA runtime with Docker (recommended in NVIDIA's install guide)
nvidia-ctk runtime configure --runtime=docker

# Restart docker
systemctl restart docker

# Verify that containers can see the GPU
docker run --rm --gpus all ubuntu nvidia-smi


  5. Get the Azure Content Safety ApiKey and Endpoint

Azure Portal: Home / Content Safety / <your_content_safety_resource> / Resource Management / Keys and Endpoint
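
If you prefer the Azure CLI, the same values can be retrieved with something like the following (the resource and resource-group names are placeholders for your own):

# Get the endpoint
az cognitiveservices account show \
  --name <your_content_safety_resource> \
  --resource-group <your_resource_group> \
  --query properties.endpoint

# Get the API keys
az cognitiveservices account keys list \
  --name <your_content_safety_resource> \
  --resource-group <your_resource_group>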

  6. Install content safety containers

The content safety container images for all supported versions can be found in the Microsoft Container Registry (MCR) syndicate. They reside within the azure-cognitive-services/contentsafety repository and are named text-analyze and image-analyze.

# Install content safety container - Text
## On a CPU VM (the GPU-related steps 2-4 above can be skipped)
docker run -itd -p 5000:5000 \
-e CUDA_ENABLED=false \
--restart always \
mcr.microsoft.com/azure-cognitive-services/contentsafety/text-analyze:latest \
Eula=accept \
Billing="<Endpoint>" \
ApiKey="<ApiKey>"

## On a GPU VM
docker run -itd -p 5000:5000 \
--gpus all \
--restart always \
mcr.microsoft.com/azure-cognitive-services/contentsafety/text-analyze:latest \
Eula=accept \
Billing="<Endpoint>" \
ApiKey="<ApiKey>"

# Install content safety container - Image
## GPU VM only
docker run -itd -p 5000:5000 \
--gpus all \
--restart always \
mcr.microsoft.com/azure-cognitive-services/contentsafety/image-analyze:latest \
Eula=accept \
Billing="<Endpoint>" \
ApiKey="<ApiKey>"
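
Note that both containers listen on port 5000 internally; if you run the text and the image container on the same host, map them to different host ports (for example, -p 5000:5000 for text and -p 5001:5000 for image) and point each client at the corresponding port.
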
  7. Verify

There are several ways to validate that the container is running. Locate the external IP address and exposed port of the container in question, and open your preferred web browser. Use the request URLs below to validate that the container is running.
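
The standard Azure AI container endpoints should respond; a minimal sketch, assuming the container is reachable at localhost:5000:

# Container home page
curl http://localhost:5000/

# Readiness and status checks
curl http://localhost:5000/ready
curl http://localhost:5000/status

# API documentation: open http://localhost:5000/swagger in a browser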

You can also view entries related to usage reporting and billing in the container logs (docker logs <container_id>), for both the text analyze and the image analyze container.

  8. Using a Blocklist

The analyze text container supports a blocklist feature, which allows you to block custom terms. You manage these blocklists with CSV files, and you can use multiple CSV files for multiple blocklists.

# On a CPU VM (the GPU-related steps 2-4 above can be skipped)
docker run -itd -p 5000:5000 \
-e CUDA_ENABLED=false \
--restart always \
-e BLOCKLIST_DIR=/tmp/blocklist \
-v {/path/on/host}:/tmp/blocklist \
mcr.microsoft.com/azure-cognitive-services/contentsafety/text-analyze:latest \
Eula=accept \
Billing="<Endpoint>" \
ApiKey="<ApiKey>"

# On a GPU VM
docker run -itd -p 5000:5000 \
--gpus all \
--restart always \
-e BLOCKLIST_DIR=/tmp/blocklist \
-v {/path/on/host}:/tmp/blocklist \
mcr.microsoft.com/azure-cognitive-services/contentsafety/text-analyze:latest \
Eula=accept \
Billing="<Endpoint>" \
ApiKey="<ApiKey>"

In the command above, replace {/path/on/host} with the path to the blocklist folder on your host machine. The -v option mounts that host directory to /tmp/blocklist inside the container, which is where the BLOCKLIST_DIR environment variable tells the container to look for blocklist files.

NOTE

The analyze text container uses an exact match method for the blocklist. All items in the blocklist will be converted to lowercase before the matching process. This means, for instance, if you have Contoso in your blocklist, both "Contoso" and "contoso" from your input are considered a match.
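
As an illustration, a hypothetical blocklist file {/path/on/host}/my_blocklist.csv might contain one term per line (an assumption for illustration; the file name minus the .csv extension would then be the blocklist name referenced in API calls):

contoso
forbidden-phrase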

3. Quickstart with Analyze API

3.1. API Call

  1. Sample Code - REST API

# Text Analyze
curl --location --request POST '<endpoint>/contentsafety/text:analyze?api-version=2024-09-01' \
--header 'Ocp-Apim-Subscription-Key: <your_subscription_key>' \
--header 'Content-Type: application/json' \
--data-raw '{
  "text": "I hate you",
  "categories": ["Hate", "Sexual", "SelfHarm", "Violence"],
  "blocklistNames": ["string"],
  "haltOnBlocklistHit": true,
  "outputType": "EightSeverityLevels"
}'

# Image Analyze
curl --location --request POST '<endpoint>/contentsafety/image:analyze?api-version=2024-09-01' \
--header 'Content-Type: application/json' \
--data-raw '{
  "image": {
    "content": "base_64_string"
  },
  "categories": ["Hate", "SelfHarm", "Sexual", "Violence"],
  "outputType": "FourSeverityLevels"
}'
  2. Parameters

  • Text Analyze: text (required) is the text to analyze, up to 10K characters by default; categories optionally narrows the harm categories (defaults to all four); blocklistNames lists the blocklists to match against; haltOnBlocklistHit stops further analysis when a blocklist item is hit; outputType selects "FourSeverityLevels" or "EightSeverityLevels".

  • Image Analyze: image.content (required) is the Base64-encoded image; categories optionally narrows the harm categories; outputType is "FourSeverityLevels", since the image model returns the trimmed scale only.

3.2. Output

  1. Samples

# Text Analyze
{
  "blocklistsMatch": [
    {
      "blocklistName": "string",
      "blocklistItemId": "string",
      "blocklistItemText": "string"
    }
  ],
  "categoriesAnalysis": [
    {
      "category": "Hate",
      "severity": 2
    },
    {
      "category": "SelfHarm",
      "severity": 0
    },
    {
      "category": "Sexual",
      "severity": 0
    },
    {
      "category": "Violence",
      "severity": 0
    }
  ]
}

# Image Analyze
{
  "categoriesAnalysis": [
    {
      "category": "Hate",
      "severity": 2
    },
    {
      "category": "SelfHarm",
      "severity": 0
    },
    {
      "category": "Sexual",
      "severity": 0
    },
    {
      "category": "Violence",
      "severity": 0

  2. Parameters

  • categoriesAnalysis: One entry per analyzed harm category, giving the category name and its severity level.

  • blocklistsMatch (Text Analyze only): One entry per matched blocklist item, giving blocklistName, blocklistItemId, and blocklistItemText.

4. Using Azure AI Content Safety in Dify

4.1. Prerequisites

  1. Deploy Azure AI Content Safety Container

Before using this plugin, make sure you have an Azure AI Content Safety Container properly set up and running. See Install and run content safety containers with Docker for setup instructions. Please verify that your container is accessible and responding to API requests before configuring this plugin.

  2. Update Dify ENV

When a user sends images to the chat box, a URL for accessing each image is generated in sys.files (one URL per image). The image moderation tool fetches each image via its URL, converts it to Base64, and then sends it to the Image Analyze API for review. Therefore, FILES_URL or CONSOLE_API_URL must be set correctly so that an accessible URL is generated; generally, this should match the main domain used to access the Dify Portal.

The structure of sys.files is as follows:

[
  {
    "dify_model_identity": "__dify__file__",
    "id": null,
    "tenant_id": "7720c6b6-73a5-457f-93a2-66075982fe02",
    "type": "image",
    "transfer_method": "local_file",
    "remote_url": "<https://upload.dify.ai/files/xxxxxxxx>",
    "related_id": "4763ef42-1bca-44d1-b12b-bee0e841b719",
    "filename": "image_moderation_1.jpg",
    "extension": ".jpg",
    "mime_type": "image/jpeg",
    "size": 39886,
    "url": "<https://upload.dify.ai/files/xxxxxxxx>"
  }
]
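
For a Docker Compose deployment of Dify, this typically means setting the variables in the .env file and restarting; a sketch, assuming the portal is served at https://dify.example.com:

# In Dify's .env (hypothetical domain)
FILES_URL=https://dify.example.com
CONSOLE_API_URL=https://dify.example.com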

4.2. Steps

  1. Get Azure AI Content Safety Container Tools

The Azure AI Content Safety Container plugin can be installed via the Plugin Marketplace, GitHub, or a Local Package File. Choose the installation method that best suits your needs. If you install via a Local Package File, set FORCE_VERIFYING_SIGNATURE=false for the plugin-daemon component.

  2. Authentication

On the Dify navigation page, go to [Tools] > [Azure AI Content Safety Container] > [To Authorize] and fill in the API Endpoint, API Version, and optional headers.

For example:

  • API Endpoint: https://xxx.azure-api.net

  • API Version: 2024-05-01

  • Custom Header Key: Ocp-Apim-Subscription-Key

  • Custom Header Value: *******************************

  3. Using the tool

You can use this tool in Chatflow or Workflow. The tool accepts both text and image inputs.

Parameters:

  • Text to Analyze: The text content to analyze.

  • Images to Analyze: The image files to analyze.

  • Text Blocklist Names: Comma-separated list of blocklist names for text analysis.

  • Halt on Blocklist Hit: Whether to stop text analysis if a blocklist item is matched.

Image Requirements:

  • Maximum size: 7,200 x 7,200 pixels

  • Maximum file size: 4 MB

  • Minimum size: 50 x 50 pixels

Text Requirements:

  • Default maximum length: 10K characters (split longer texts as needed)

All parameters are optional. The tool automatically detects when Text or Image inputs are provided (non-empty) and calls the corresponding APIs for content moderation accordingly.
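
In a typical Chatflow, for example, you would map sys.query to Text to Analyze and sys.files to Images to Analyze so that each user turn is screened before it reaches the LLM node.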

4.3. Output Variables

The tool provides several output variables for use in your workflow:

  • CheckResult: The final decision (ALLOW, DENY, or ERROR).

  • Details: A user-friendly, formatted string explaining the violations (only present if CheckResult is DENY).

  • RawResults: A JSON object containing the raw, unmodified responses from the Azure APIs. This is useful for custom parsing or logging.

Example RawResults structure:

{
  "text": {
    "blocklistsMatch": [],
    "categoriesAnalysis": [
      {
        "category": "Hate",
        "severity": 6
      }
    ]
  },
  "image": [
    {
      "categoriesAnalysis": [
        {
          "category": "Sexual",
          "severity": 4
        }
      ]
    }
  ]
}
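
From there, a Chatflow or Workflow can branch on CheckResult with an IF/ELSE node: return a refusal message when it equals DENY (optionally surfacing Details), and continue to the LLM node otherwise.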

4.4. Examples

  1. Example 1: Text Moderation – Harmful Category

  2. Example 2: Text Moderation – Using Block List

  3. Example 3: Image Moderation – Harmful Category (Single Image, Multiple Images)

  4. Example 4: Text and Image Moderation with Block List
