Tomaz Kastrun continues a series on Azure AI, this time focusing on content safety functionality. First up is an overview of the product:
The Azure AI Content Safety service detects harmful user-generated and AI-generated content in applications and services. It includes text and image APIs that allow you to detect harmful or inappropriate material. Like the other Azure AI services, it is easy to integrate into your app.
After that is how to access items via the SDK:
The Python SDK contains several functions to analyze text, images, and manage blocklists in text moderation. With the SDK you can cover the following scenarios:
- Text moderation: detect hate, sexual, self-harm, and violence content in text.
- Image moderation: detect hate, sexual, self-harm, and violence content in images.
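As a rough illustration of the SDK flow for the text-moderation scenario (a sketch, not code from the linked post; the environment variable names for the endpoint and key are assumptions):

```python
import os


def analyze_text_content(text: str):
    """Sketch of a Content Safety text-moderation call.

    Requires the azure-ai-contentsafety package plus a service endpoint
    and key; the environment variable names below are assumptions.
    """
    from azure.ai.contentsafety import ContentSafetyClient
    from azure.ai.contentsafety.models import AnalyzeTextOptions
    from azure.core.credentials import AzureKeyCredential

    client = ContentSafetyClient(
        endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
        credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
    )
    # The response carries per-category results (hate, sexual,
    # self-harm, violence), each with a severity score.
    return client.analyze_text(AnalyzeTextOptions(text=text))
```

The image API follows the same pattern with `AnalyzeImageOptions` in place of the text options.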
Coming back to the example we covered yesterday – moderating text content – we can adjust the filtering to suit our needs.
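One way to adjust the filtering is to apply your own per-category severity thresholds to the analysis results. This is a minimal sketch, assuming results shaped like the SDK's per-category list of category/severity pairs; the threshold values are illustrative, not service-mandated:

```python
def moderate(categories_analysis, thresholds=None):
    """Flag content whose severity meets or exceeds a per-category threshold.

    categories_analysis: list of {"category": str, "severity": int} dicts,
    mimicking the shape of the service's text-analysis response.
    """
    # Illustrative defaults; tune per category to loosen or tighten filtering.
    thresholds = thresholds or {"Hate": 2, "Sexual": 2, "SelfHarm": 2, "Violence": 2}
    flagged = [
        (c["category"], c["severity"])
        for c in categories_analysis
        if c["severity"] >= thresholds.get(c["category"], 2)
    ]
    return {"blocked": bool(flagged), "flagged": flagged}


# Example with mocked analysis results:
sample = [
    {"category": "Hate", "severity": 0},
    {"category": "Violence", "severity": 4},
]
print(moderate(sample))  # blocks, since Violence severity 4 >= threshold 2
```

Raising a category's threshold lets milder content through; lowering it tightens moderation for that category only.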
Click through to see how it works.