Microsoft has launched a new service called Azure AI Content Safety, which aims to help businesses and organizations monitor and moderate text and image content for safety.
The service uses advanced AI models to analyze multilingual text and images and detect offensive or inappropriate content such as hate speech, violence, sexual content, and self-harm.
The service also assigns severity scores to the detected content, allowing users to prioritize and take action based on their own policies and preferences.
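The basic call-and-triage flow can be sketched with the Azure Content Safety Python SDK (the `azure-ai-contentsafety` package); the endpoint and key below are placeholders, and the exact response field names may vary between SDK versions:

```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Endpoint and key come from your own Azure Content Safety resource.
client = ContentSafetyClient(
    "https://<your-resource>.cognitiveservices.azure.com",
    AzureKeyCredential("<your-key>"),
)

# Ask the service to analyze a piece of user-generated text.
response = client.analyze_text(AnalyzeTextOptions(text="Some user comment"))

# Each harm category comes back with a severity score that
# downstream policy logic can prioritize and act on.
for result in response.categories_analysis:
    print(result.category, result.severity)
```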
Azure AI Content Safety is designed to handle both user-generated and AI-generated content, including output from open-source models and models built by other companies.
The service can be used in various scenarios, such as online marketplaces, gaming platforms, social messaging apps, media companies, and education solutions.
For example, South Australia's Department for Education used the service to build a chatbot called EdChat, which filters out harmful requests and responses and gives students and teachers a safe, engaging learning experience.
The service is part of Microsoft’s commitment to responsible AI practices, which include ensuring that AI models are used for their intended purposes and do not cause harm or bias.
Microsoft also provides guidance and resources to help users implement their own responsible AI principles and practices. The service is priced on a flexible, pay-as-you-go consumption model.
Azure AI Content Safety is now generally available. Users can try it out for free with an Azure account, or explore and test its features and capabilities in the interactive Content Safety Studio.
The launch of Azure AI Content Safety comes at a time when online platforms are facing increasing challenges and scrutiny over the safety and quality of their content. According to an article by Microsoft Research, 64% of online adults have experienced some form of online abuse or harassment.
Moreover, the proliferation of AI-generated content poses new risks of misinformation, manipulation, and deception. Therefore, having effective tools to detect and address harmful content is crucial for building trust and confidence among users and stakeholders.
Azure AI Content Safety leverages state-of-the-art AI models that are trained on large-scale datasets of text and images from various sources and domains.
The models can handle complex and nuanced forms of harmful content, such as sarcasm, euphemism, slang, idioms, metaphors, and cultural references, and they adapt to different languages, dialects, and contexts. Text analysis supports more than 100 languages, and the service is offered in more than 200 countries and regions.
The service also empowers users to customize their own content moderation policies and workflows according to their specific needs and preferences.
Users can choose which categories of harmful content they want to detect (hate, sexual, self-harm, violence), as well as the threshold of severity scores they want to apply (low, medium, high). Users can also create their own blocklists of words or phrases that they want to flag or filter out.
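A minimal sketch of such a policy layer might look like the following; the category names match the service's four harm categories, but the severity cutoffs and blocklist terms are illustrative assumptions, not defaults of the service:

```python
# Hypothetical per-category severity limits (0 = safe; higher = more severe).
# Mapping "low/medium/high" thresholds to numeric cutoffs is this sketch's
# own choice, not something the service mandates.
POLICY = {"Hate": 0, "Violence": 2, "Sexual": 0, "SelfHarm": 0}

# User-maintained blocklist of terms to always flag (placeholder entries).
BLOCKLIST = {"example-banned-phrase", "another-banned-term"}

def violates_policy(categories_analysis) -> bool:
    """True if any detected category exceeds its configured severity limit."""
    return any(
        item.severity is not None and item.severity > POLICY.get(item.category, 0)
        for item in categories_analysis
    )

def hits_blocklist(text: str) -> bool:
    """True if the raw text contains any blocklisted term."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)
```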
Additionally, users can integrate the service with other Azure AI services, such as Text Analytics, Computer Vision, Translator, Speech Services, etc., to enhance their content moderation capabilities.
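As one illustration of that kind of integration, text could be routed through the Translator service before moderation. This is a sketch assuming the public Translator v3 REST API, with placeholder keys and resource names:

```python
import requests
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholder credentials for both services.
TRANSLATOR_KEY = "<translator-key>"
TRANSLATOR_REGION = "<translator-region>"
SAFETY_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
SAFETY_KEY = "<content-safety-key>"

def translate_to_english(text: str) -> str:
    """Translate input text to English via the Azure Translator v3 REST API."""
    resp = requests.post(
        "https://api.cognitive.microsofttranslator.com/translate",
        params={"api-version": "3.0", "to": "en"},
        headers={
            "Ocp-Apim-Subscription-Key": TRANSLATOR_KEY,
            "Ocp-Apim-Subscription-Region": TRANSLATOR_REGION,
            "Content-Type": "application/json",
        },
        json=[{"text": text}],
    )
    resp.raise_for_status()
    return resp.json()[0]["translations"][0]["text"]

safety_client = ContentSafetyClient(SAFETY_ENDPOINT, AzureKeyCredential(SAFETY_KEY))

# Translate first, then moderate the English text.
english = translate_to_english("Algún comentario de usuario")
analysis = safety_client.analyze_text(AnalyzeTextOptions(text=english))
for item in analysis.categories_analysis:
    print(item.category, item.severity)
```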
Credits: With information from Microsoft