Moderation of Images Using Content Safety Studio

In this post, you'll learn how to set up Azure AI Content Safety and moderate images using various datasets. This experience provides a quick demonstration of how to analyse and regulate image content in real time.

Creating a Content Safety Resource

(1) In the Azure Portal, search for "Content Safety" in the search bar and select Content Safety from the search results.

(2) Click on the Create button for Content Safety.

(3) Fill in the details as shown in the image below: choose a subscription, create a Resource Group named rgAzureContentSafety, and enter the resource name contentsafety-imagemordenization.

(4) Click the Review + Create button, and finally the Create button.
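If you prefer to script this step instead of clicking through the portal, the sketch below provisions the same resource with the Azure SDK for Python. This is a minimal sketch rather than the definitive procedure: it assumes the azure-identity, azure-mgmt-resource, and azure-mgmt-cognitiveservices packages, the East US region, the S0 pricing tier, and a placeholder subscription ID.

from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient
from azure.mgmt.cognitiveservices.models import Account, AccountProperties, Sku
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<your-subscription-id>"  # placeholder
credential = DefaultAzureCredential()

# Create (or reuse) the resource group used in the portal steps above.
resource_client = ResourceManagementClient(credential, subscription_id)
resource_client.resource_groups.create_or_update(
    "rgAzureContentSafety", {"location": "eastus"}
)

# Provision the Content Safety account (kind "ContentSafety"; S0 tier assumed).
cs_client = CognitiveServicesManagementClient(credential, subscription_id)
poller = cs_client.accounts.begin_create(
    resource_group_name="rgAzureContentSafety",
    account_name="contentsafety-imagemordenization",
    account=Account(
        location="eastus",
        kind="ContentSafety",
        sku=Sku(name="S0"),
        properties=AccountProperties(),
    ),
)
account = poller.result()
print("Endpoint:", account.properties.endpoint)

Once the poller completes, the printed endpoint matches the value shown on the resource's Keys and Endpoint page in the portal.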

Get Started with Content Safety Studio

(1) In the Azure Portal, go to the recently provisioned Content Safety resource and select Content Safety Studio to open the studio.

(2) Click Try it out for Moderate Image Content.
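Everything the Try it out experience does goes through the same Content Safety endpoint that the SDK calls, so the scenarios below can also be reproduced from code. As a starting point, here is a minimal client-setup sketch, assuming the azure-ai-contentsafety Python package and that the endpoint and key from the resource's Keys and Endpoint page have been copied into environment variables (the variable names are my own choice):

import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential

# The endpoint and key come from the Content Safety resource's Keys and Endpoint page;
# the environment variable names used here are assumptions, not required names.
endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
key = os.environ["CONTENT_SAFETY_KEY"]

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

The scenario sketches that follow construct the client in the same way.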

Safe Content Scenario

(1) Click Run a Simple Test, then select the Safe Content option under Moderate Image Content.

(2) Here we have an image of two children holding hands and smiling at sunset. Clicking the Run test button determines whether the image is Allowed or Blocked, shown in the Judgement field.

(3) You can see in the View results area that the image content has been allowed, and the detection results for each category and its risk level are shown below.
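The same safe-content check can be reproduced with the SDK. Below is a minimal sketch that submits a local image and prints the per-category severities; the file name safe-children-sunset.jpg, the environment variable names, and the severity threshold of 2 are assumptions made for illustration, not values taken from the Studio.

import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    os.environ["CONTENT_SAFETY_ENDPOINT"],
    AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

# Read the test image and submit it for analysis (the file name is an assumption).
with open("safe-children-sunset.jpg", "rb") as image_file:
    request = AnalyzeImageOptions(image=ImageData(content=image_file.read()))
response = client.analyze_image(request)

# Print the severity detected for each harm category (Hate, SelfHarm, Sexual, Violence).
for item in response.categories_analysis:
    print(f"{item.category}: severity {item.severity}")

# A simple Allowed/Blocked judgement: block if any category reaches severity 2 or higher.
blocked = any((item.severity or 0) >= 2 for item in response.categories_analysis)
print("Blocked" if blocked else "Allowed")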

Self-harm Content Scenario

(1) Click Run a Simple Test and choose the Self-harm Content option under Moderate Image Content.

(2) Here we have an image of a girl holding a gun with the intention of self-harm. Clicking the Run test button determines whether the image is Allowed or Blocked, shown in the Judgement field.

(3) The image content has been blocked in the View results area, and the detection results for each category and its risk level are shown below.
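To mirror this scenario in code, you can look specifically at the SelfHarm category in the analysis result and block the image when its severity rises above zero. Again a sketch only, with the file name and environment variable names as assumptions:

import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageCategory, ImageData
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    os.environ["CONTENT_SAFETY_ENDPOINT"],
    AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

# Analyze the self-harm test image (the file name is an assumption).
with open("self-harm-test-image.jpg", "rb") as image_file:
    response = client.analyze_image(
        AnalyzeImageOptions(image=ImageData(content=image_file.read()))
    )

# Pull out the SelfHarm category result from the per-category analysis list.
self_harm = next(
    item for item in response.categories_analysis
    if item.category == ImageCategory.SELF_HARM
)
print(f"SelfHarm severity: {self_harm.severity}")

# Block the image as soon as any self-harm signal is detected (severity above 0).
print("Blocked" if (self_harm.severity or 0) > 0 else "Allowed")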

Run a Bulk Test Scenario

(1) Click Run a Bulk Test, then select a dataset containing safe content from the Moderate Image Content list.

(2) In this example, I have used a dataset with 15 records, where the label 0 indicates safe content. Click the Run test button to determine whether each record is Allowed or Blocked in the Judgement field.

(3) The content has been allowed in the View results area, and the category and risk level detection results are shown below.
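A bulk run can also be scripted by looping over a labelled dataset and comparing the service's judgement with each label. The sketch below assumes a hypothetical image_dataset.csv with image_path and label columns (0 for safe, 1 for harmful) and reuses the same threshold-based judgement as the earlier sketches:

import csv
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    os.environ["CONTENT_SAFETY_ENDPOINT"],
    AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)


def is_blocked(image_path: str, threshold: int = 2) -> bool:
    """Return True when any harm category reaches the severity threshold."""
    with open(image_path, "rb") as image_file:
        response = client.analyze_image(
            AnalyzeImageOptions(image=ImageData(content=image_file.read()))
        )
    return any((item.severity or 0) >= threshold for item in response.categories_analysis)


# image_dataset.csv (hypothetical) has two columns: image_path and label (0 = safe, 1 = harmful).
with open("image_dataset.csv", newline="") as dataset_file:
    rows = list(csv.DictReader(dataset_file))

matches = 0
for row in rows:
    predicted = 1 if is_blocked(row["image_path"]) else 0
    judgement = "Blocked" if predicted else "Allowed"
    print(f"{row['image_path']}: {judgement} (label {row['label']})")
    if predicted == int(row["label"]):
        matches += 1

print(f"Judgement matched the label for {matches} of {len(rows)} images")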

In this tutorial, we learned how to use Content Safety Studio. We explored several of its capabilities for moderating image content across real-time scenarios, including safe content and self-harm content, and ran both simple and bulk tests.