In this segment, you will explore the Azure AI Content Safety text moderation API in detail and learn how to use it in Python with the client SDK. You'll also learn more about severity levels in moderation and how to add custom blocklist phrases to your moderation resource so they can also be considered while moderating text.
Understanding the Azure AI Content Safety Text Moderation API
For text moderation, Azure AI Content Safety provides two types of API:
contentsafety/text:analyze: This is a synchronous API for analyzing potentially harmful text content. At the moment of writing this module, it supports four categories: Hate, SelfHarm, Sexual, and Violence.
contentsafety/text/blocklists/*: These are also a set of synchronous APIs that allow you to create, update, and delete blocklist terms that can then be used with the text API. Usually, the default AI classifiers are sufficient for most content safety needs, but if you need to screen more terms specific to your use case, you can make use of them as well.
Moderating the given text by calling the text analysis REST API directly takes the following parameters in the request body:
text: This is the primary parameter and contains the text you want to analyze. A single request can process up to 10k characters. Longer text needs to be split into multiple requests.
blocklistNames: This is an optional parameter. Using this parameter, you can also supply a list of blocklist names that you created.
categories: If you want to analyze your text on specific categories only, you can provide those categories as a list here. This is also optional, and if it is not included, analysis results for the default set of categories will be returned.
haltOnBlocklistHit: This is an optional parameter. When set to true, further analyses of harmful content will be skipped in cases where blocklist phrases are matched; otherwise, the analyses will complete even if the blocklists are hit.
outputType: This is an optional parameter. It allows you to define the granularity of the severity scale. By default, its value will be FourSeverityLevels, that is, output analyses will contain severity of 4 levels - 0, 2, 4, 6. If instead, the EightSeverityLevels value is provided, the output analyses will contain severity of 8 levels - 0, 1, 2, 3, 4, 5, 6, 7.
A sample request body for text analysis for your reference will look like this:
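For instance, a request screening a short string against all four categories might look like the following (the blocklist name here is illustrative):

```json
{
  "text": "This is the text to analyze.",
  "categories": ["Hate", "SelfHarm", "Sexual", "Violence"],
  "blocklistNames": ["block_list"],
  "haltOnBlocklistHit": false,
  "outputType": "FourSeverityLevels"
}
```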
While the API can be called directly, thankfully, Azure also provides SDKs for several languages (Python, JavaScript, Java, .NET) to simplify integration for you.
Instead of making raw HTTP calls, you'll use the Azure AI Content Safety client library for Python in this module. You can learn more about the API in Text Operations - Analyze Text and Text Blocklists.
Understanding the Azure AI Content Safety Client Python Library
The first step to using a content safety client is to create an instance of it. You can create requests to analyze both text and images using this client.
A sample code to create the safety client will look like this:
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
# Create an Azure AI Content Safety client
endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<api_key>")
content_safety_client = ContentSafetyClient(endpoint, credential)
To create a ContentSafetyClient, you need two objects:
endpoint: This is the endpoint where the analysis requests will be made.
credential: You provide the API key used for authenticating your request. It is of type AzureKeyCredential.
Once you've created the client, you can then use it to create requests to analyze text content.
You pass your analysis request to the client as an AnalyzeTextOptions object.
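For example, a minimal analysis request using the client created above might look like this (the sample text is illustrative; running it requires a provisioned Content Safety resource):

```python
from azure.ai.contentsafety.models import AnalyzeTextOptions

# Build the request and send it to the service for analysis
request = AnalyzeTextOptions(text="This is the text to analyze.")
response = content_safety_client.analyze_text(request)
```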
Understanding AnalyzeTextOptions
The AnalyzeTextOptions object is used to construct the request for text analysis. It also allows you to customize text analysis requests to suit your specific needs. It has the following properties:
text (required): This holds the text that needs to be analyzed. The text size should not exceed 10k characters. In case you have longer text, you must split the text and make separate calls for each chunk.
categories (optional): You can use this property to specify particular categories for which you want to analyze your textual content. If not specified, the API analyzes content for all categories by default. It accepts a list of TextCategory. At the moment of writing this module, the possible values include - TextCategory.HATE, TextCategory.SEXUAL, TextCategory.VIOLENCE, and TextCategory.SELF_HARM.
blocklist_names (optional): You can provide the names of blocklists you created to screen specific terms and phrases for your use case. It accepts the blocklist names as a list of strings.
halt_on_blocklist_hit (optional): Similar to the REST API's haltOnBlocklistHit. When set to true, it halts the further analyses of harmful content in cases where blocklist phrases are hit.
output_type (optional): This allows you to define the granularity of the severity scale. If no value is provided, the default value will be "FourSeverityLevels". It can take either a string value or an object of type AnalyzeTextOutputType. At the moment of writing this module, the possible values of AnalyzeTextOutputType include AnalyzeTextOutputType.FOUR_SEVERITY_LEVELS and AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS.
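Because of the 10k-character limit on text, longer content has to be chunked before analysis. A minimal sketch of such a helper (chunk_text is a hypothetical name, not part of the SDK) could be:

```python
def chunk_text(text: str, limit: int = 10_000) -> list[str]:
    """Split text into pieces of at most `limit` characters so each
    piece fits into a single analyze-text request."""
    return [text[i:i + limit] for i in range(0, len(text), limit)]

# Each chunk would then be sent in its own AnalyzeTextOptions request
print([len(chunk) for chunk in chunk_text("a" * 25_000)])  # [10000, 10000, 5000]
```

Note that a naive character split can cut a sentence in half; in practice you might prefer to split on sentence or paragraph boundaries before falling back to a hard character limit.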
A sample AnalyzeTextOptions definition will look like this:
from azure.ai.contentsafety.models import (AnalyzeTextOptions, TextCategory,
                                           AnalyzeTextOutputType)

# Create AnalyzeTextOptions
analyze_text_request = AnalyzeTextOptions(
    text="This is the text to analyze.",
    categories=[TextCategory.HATE, TextCategory.VIOLENCE],
    blocklist_names=["block_list"],
    halt_on_blocklist_hit=True,
    output_type=AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS
)
Processing the Analysis Response
Once the analysis of the text content is finished, you can use the response received from the method client.analyze_text to decide whether to approve the content or block it.
analyze_text has a return type of AnalyzeTextResult. Since it's a JSON response converted into an object, the class has the following properties:
blocklists_match: It holds a value of type list[TextBlocklistMatch], where a TextBlocklistMatch object can consist of the following values:
blocklist_name: Name of the blocklist that was matched.
blocklist_item_id: ID of the matched item within the blocklist.
blocklist_item_text: Text that was matched in the analyzed text content.
categories_analysis: It holds a value of type list[TextCategoriesAnalysis], where a TextCategoriesAnalysis object can consist of the following values:
category: The harm category that was analyzed, such as TextCategory.HATE.
severity: The severity level detected for that category.
# 1. Analyze text
try:
    response = client.analyze_text(request)
except HttpResponseError as e:
    print("Analyze text failed.")
    if e.error:
        print(f"Error code: {e.error.code}")
        print(f"Error message: {e.error.message}")
        raise
    print(e)
    raise

# 2. extract result for each category
hate_result = next(
    (item for item in response.categories_analysis
     if item.category == TextCategory.HATE), None)
self_harm_result = next(
    (item for item in response.categories_analysis
     if item.category == TextCategory.SELF_HARM), None)
sexual_result = next(
    (item for item in response.categories_analysis
     if item.category == TextCategory.SEXUAL), None)
violence_result = next(
    (item for item in response.categories_analysis
     if item.category == TextCategory.VIOLENCE), None)

# 3. print the found harmful category in the text content
if hate_result:
    print(f"Hate severity: {hate_result.severity}")
if self_harm_result:
    print(f"SelfHarm severity: {self_harm_result.severity}")
if sexual_result:
    print(f"Sexual severity: {sexual_result.severity}")
if violence_result:
    print(f"Violence severity: {violence_result.severity}")
Using the next function, the code iterates through response.categories_analysis to extract the TextCategoriesAnalysis object for each individual harmful category.
Then, if any of the category results is non-empty, it prints the corresponding category name along with its severity level.
Add custom blocklist phrases
If required, you can further customize the text moderation API results to detect blocklist terms that meet your platform needs. You'll first need to add the blocklist terms to your moderation resource. Once they are added, you can use the blocklist during moderation simply by providing the blocklist names in the blocklist_names argument of AnalyzeTextOptions.
To add a blocklist, you'll first have to create a blocklist client, similar to a content safety client:
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import BlocklistClient
# Create an Azure AI blocklist client
endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<api_key>")
client = BlocklistClient(endpoint, credential)
Next, to create the blocklist, you can use the following code:
from azure.ai.contentsafety.models import TextBlocklist

# 1. define blocklist name and description
blocklist_name = "TestBlocklist"
blocklist_description = "Test blocklist management."

# 2. call create_or_update_text_blocklist to create the block list
blocklist = client.create_or_update_text_blocklist(
    blocklist_name=blocklist_name,
    options=TextBlocklist(blocklist_name=blocklist_name,
                          description=blocklist_description),
)

# 3. if block list created successfully notify the user using print function
if blocklist:
    print("\nBlocklist created or updated: ")
    print(f"Name: {blocklist.blocklist_name}, Description: {blocklist.description}")
Then, you'll also have to add the terms and phrases that need to be screened to your blocklist, so they can be used to flag text content during moderation. You can add the blocklist terms and phrases using the following code:
from azure.ai.contentsafety.models import (TextBlocklistItem,
                                           AddOrUpdateTextBlocklistItemsOptions)
from azure.core.exceptions import HttpResponseError

# 1. define the variables containing blocklist_name and block items
# (terms that need to be screened in text)
blocklist_name = "TestBlocklist"
block_item_text_1 = "k*ll"
block_item_text_2 = "h*te"

# 2. create the block item list that can be passed to the client
block_items = [TextBlocklistItem(text=block_item_text_1),
               TextBlocklistItem(text=block_item_text_2)]

try:
    # 3. add the block items to the blocklist using
    # add_or_update_blocklist_items
    result = client.add_or_update_blocklist_items(
        blocklist_name=blocklist_name,
        options=AddOrUpdateTextBlocklistItemsOptions(
            blocklist_items=block_items)
    )
    # 4. print the response received from the server on successful addition
    for block_item in result.blocklist_items:
        print(
            f"BlockItemId: {block_item.blocklist_item_id}, "
            f"Text: {block_item.text}, "
            f"Description: {block_item.description}"
        )
# 5. catch the exception and notify the user if any error happened while
# adding the block terms
except HttpResponseError as e:
    print("\nAdd block items failed: ")
    if e.error:
        print(f"Error code: {e.error.code}")
        print(f"Error message: {e.error.message}")
        raise
    print(e)
    raise
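With the blocklist populated, the moderation request from earlier can reference it, and the blocklists_match property of the result reports any hits. A sketch, assuming the content_safety_client created earlier in this segment (the sample text is illustrative):

```python
from azure.ai.contentsafety.models import AnalyzeTextOptions

# Analyze text against the blocklist created above
request = AnalyzeTextOptions(text="I h*te you and I want to k*ll you.",
                             blocklist_names=["TestBlocklist"],
                             halt_on_blocklist_hit=False)
response = content_safety_client.analyze_text(request)

# Report every blocklist term that matched
for match in response.blocklists_match or []:
    print(f"Blocklist: {match.blocklist_name}, "
          f"Matched text: {match.blocklist_item_text}")
```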