Azure Content Safety provides the /contentsafety/image:analyze API for image analysis and moderation purposes. It’s similar to Azure’s text moderation API in a number of ways.
It takes three input parameters in the request body:
image (required): This is the main parameter of the API. You provide the image data that you want to analyze, as either a Base64-encoded image or the blobUrl of the image.
categories (optional): Similar to the text analysis API, you can use this parameter to pass the list of harm categories for which you want your image to be analyzed. By default, the API will test the image against all the default categories provided by the Azure Content Safety team.
outputType (optional): This refers to the number of severity levels the categories will have in the analysis results. This API only supports FourSeverityLevels; that is, severity values for any category will be 0, 2, 4, or 6.
A sample request body for image analysis can look something like this (the Base64 content is truncated for brevity):
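{
  "image": {
    "content": "iVBORw0KGgoAAAANSUhEUg..."
  },
  "categories": ["Hate", "Sexual", "Violence", "SelfHarm"],
  "outputType": "FourSeverityLevels"
}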
The returned response will contain categoriesAnalysis, which is a list of ImageCategoriesAnalysis JSON objects that include the category and its severity level, as determined by the moderation API.
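For instance, an image flagged only for violence might produce a response like this (the severity values here are illustrative):

{
  "categoriesAnalysis": [
    { "category": "Hate", "severity": 0 },
    { "category": "SelfHarm", "severity": 0 },
    { "category": "Sexual", "severity": 0 },
    { "category": "Violence", "severity": 2 }
  ]
}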
Since this module will use the Python SDK provided by the Azure team instead of making raw API calls, let’s quickly cover everything you need to know about the SDK for image moderation.
Understanding Azure AI Content Safety Python Library for Image Moderation
The first step for creating an image moderation system using Azure's Python SDK is to create an instance of ContentSafetyClient, similar to what you did for text moderation.
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
# Create an Azure AI Content Safety client
endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<api_key>")
client = ContentSafetyClient(endpoint, credential)
The above code is the same as it was in the last module. If you want to understand it in detail, you can revisit the Understanding Text Moderation API section.
Going ahead, you can create the request to analyze the image using the following code:
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData

# Build request
with open(image_path, "rb") as file:
    request = AnalyzeImageOptions(image=ImageData(content=file.read()))

# Analyze image
response = client.analyze_image(request)
In the code above, you're passing your request to the client using an AnalyzeImageOptions object.
Understanding AnalyzeImageOptions
Similar to AnalyzeTextOptions, the AnalyzeImageOptions object is used to construct the request for image analysis. It has the following properties:
image (required): This will contain the information about the image that needs to be analyzed. It expects ImageData as the data type. The ImageData object accepts two types of values: content and blob_url. You're allowed to provide only one of these. When providing image data as content, the image should be in Base64-encoded format, the image size should be between 50 x 50 pixels and 2048 x 2048 pixels, and it shouldn't exceed 4MB.
categories (optional): You can use this property to specify particular categories for which you want to analyze your image. If not specified, the moderation API will analyze content for all categories. It expects a list of ImageCategory. While writing this module, the possible values include ImageCategory.HATE, ImageCategory.SEXUAL, ImageCategory.VIOLENCE, and ImageCategory.SELF_HARM.
output_type (optional): This refers to the number of severity levels the categories will have in the analysis results. At the time of writing this module, it only accepts the FourSeverityLevels value, which is also its default value if not specified.
A sample AnalyzeImageOptions definition can look like this:
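Here's a minimal sketch, assuming image_path points to a local image file; the categories list and output_type values below are optional illustrations rather than required settings:

from azure.ai.contentsafety.models import (
    AnalyzeImageOptions,
    ImageCategory,
    ImageData,
)

# image_path is a placeholder for the image you want to analyze
with open(image_path, "rb") as file:
    request = AnalyzeImageOptions(
        # Raw image bytes; alternatively, set blob_url instead of content
        image=ImageData(content=file.read()),
        # Optional: restrict the analysis to specific harm categories
        categories=[ImageCategory.HATE, ImageCategory.VIOLENCE],
        # Optional: the only supported output type at the time of writing
        output_type="FourSeverityLevels",
    )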
Once the image analysis is finished, you can use the response received from the client.analyze_image method to decide whether to approve the image or block it.
The analyze_image method returns an AnalyzeImageResult. AnalyzeImageResult only contains one property, categories_analysis, which is a list of ImageCategoriesAnalysis. ImageCategoriesAnalysis contains the category analysis response determined by the analyze image API.
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import ImageCategory

# 1. Analyze image
try:
    response = client.analyze_image(request)
except HttpResponseError as e:
    print("Analyze image failed.")
    if e.error:
        print(f"Error code: {e.error.code}")
        print(f"Error message: {e.error.message}")
        raise
    print(e)
    raise

# 2. Extract the result for each category
hate_result = next(item for item in response.categories_analysis
                   if item.category == ImageCategory.HATE)
self_harm_result = next(item for item in response.categories_analysis
                        if item.category == ImageCategory.SELF_HARM)
sexual_result = next(item for item in response.categories_analysis
                     if item.category == ImageCategory.SEXUAL)
violence_result = next(item for item in response.categories_analysis
                       if item.category == ImageCategory.VIOLENCE)

# 3. Print the harmful categories found in the image content
if hate_result:
    print(f"Hate severity: {hate_result.severity}")
if self_harm_result:
    print(f"SelfHarm severity: {self_harm_result.severity}")
if sexual_result:
    print(f"Sexual severity: {sexual_result.severity}")
if violence_result:
    print(f"Violence severity: {violence_result.severity}")
Here's the breakdown of the previous code:
You send the analyze request and store the result in the response variable. If any error happens while performing the analysis, you use a try-except block to handle it.
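To act on these severities, you can compare them against a policy threshold of your own. Here's a minimal sketch, assuming a hypothetical policy that blocks an image when any category reaches severity 2 or higher; the threshold is a product decision, not something the API prescribes:

# Hypothetical policy: block at severity 2 or above in any category
BLOCK_THRESHOLD = 2

is_blocked = any(
    (item.severity or 0) >= BLOCK_THRESHOLD  # severity can be None if absent
    for item in response.categories_analysis
)
print("Blocked" if is_blocked else "Approved")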