In this segment, you’ll implement multi-modal content moderation for your Fooder app and see it in action.
Start by opening your editor for this demo. Then open the Starter directory for this lesson.
Explore the Starter Project
The starter project contains two files:
starter/app.py: This file contains the UI code for the web app, created using the Streamlit framework. The UI generated by this code allows you to select images from your system and share text content. Once the required info is provided as input, the user can click the Submit button to publish the content on the app.
starter/business_logic.py: This is the file you’ll work on throughout this demo. It’ll contain the multi-modal moderation logic. Currently, it includes a check_content_safety function that’s referenced in the UI code from the previous file. A rough sketch of how these two files fit together appears below.
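To make that wiring concrete, here’s a minimal, hypothetical sketch of how a Streamlit front end like starter/app.py might pass the user’s text and image to check_content_safety. The widget labels, the keyword arguments, and the success/error handling are assumptions for illustration; the actual starter code will differ.
# Hypothetical sketch only -- the real starter/app.py is more involved.
import streamlit as st
from business_logic import check_content_safety

post_text = st.text_area("Describe your meal")  # assumed label
photo = st.file_uploader("Add a photo", type=["jpg", "jpeg", "png"])  # assumed label

if st.button("Submit") and photo is not None:
    # Assumes check_content_safety accepts the post text and raw image bytes
    result = check_content_safety(text=post_text, image_data=photo.read())
    if result is None:
        st.success("Your post was published!")
    else:
        st.error(result["details"])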
Create Azure AI Content Safety Client
It’s time to build the app. Above the check_content_safety function, write the following code:
# 1
import os
from dotenv import load_dotenv
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
# 2. Load your Azure Safety API key and endpoint
load_dotenv()
key = os.environ["CONTENT_SAFETY_KEY"]
endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
# 3. Create a Content Safety client
moderator_client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
Here’s what you’ve done:
Vee enxpufim kpa yakorut irs yuwpiceeq doz vaeyuxj izd ipyarrush acbedibguhjus keneufsun ye ranvh Okali’z ekqzoecn azm huc oxnubcoseuh. Sua izpe otpitref a bzalz ttef yla Uhepe tucqojq, jxulf betg uwtuf cee gi wneeda rjo newripp fojebn lyaepg.
Make sure to replace <your-endpoint> and <your-content-safety-key> with the endpoint and secret key that Azure assigned to you when you created the resource. You added these values to your .env file a while ago. These values route your requests to your Azure Content Safety resource. A sample .env layout follows below.
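For reference, the .env file that load_dotenv() reads contains the two variable names used in the code above; the values shown here are placeholders you replace with your own:
CONTENT_SAFETY_KEY=<your-content-safety-key>
CONTENT_SAFETY_ENDPOINT=<your-endpoint>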
Add Text and Image Analysis Code
Once you’re done creating the moderation client, the next step is to write the code that analyzes text and image content. Open the starter/business_logic.py file again and replace # TODO: Check for the content safety with the following code:
# 1. Check for the content safety
text_analysis_result = analyze_text(client=moderator_client, text=text)
image_analysis_result = analyze_image(client=moderator_client, image_data=image_data)
# 2
## TODO: Logic to evaluate the content
Beu’pu xomjazs bwa zisvbietm ofuqlsi_qepd ijy exudmsa_ahido, ga ecesysu jicw exw igewo kejwacqukalx. Jlize fne fezjduehx immuxm mwe ragrohusr ayyesensz: a) fzoajw - yefq ko awoc yu nxeeno sgu tedeiwd, g) wurf ex ajile_jilo - cjef oj vxe duhi lnad duemk li ha enuvpqir.
Now, add code to create the analyze_text and analyze_image functions.
Add analyze_text Function
To keep the code clean and easy to understand, you’ll place the text and image analysis functions in their own files. Create a text_analysis.py file inside the root folder and add the following code:
# 1. Import packages
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory, AnalyzeTextOutputType
# 2. Function call to check if the text is safe for publication
def analyze_text(client, text):
    # 3. Construct a request
    request = AnalyzeTextOptions(text=text, output_type=AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS)
    # 4. Analyze text
    try:
        response = client.analyze_text(request)
    except HttpResponseError as e:
        print("Analyze text failed.")
        if e.error:
            print(f"Error code: {e.error.code}")
            print(f"Error message: {e.error.message}")
            raise
        print(e)
        raise
    # 5. Extract results
    categories = {
        TextCategory.HATE: None,
        TextCategory.SELF_HARM: None,
        TextCategory.SEXUAL: None,
        TextCategory.VIOLENCE: None
    }
    for item in response.categories_analysis:
        if item.category in categories:
            categories[item.category] = item
    hate_result = categories[TextCategory.HATE]
    self_harm_result = categories[TextCategory.SELF_HARM]
    sexual_result = categories[TextCategory.SEXUAL]
    violence_result = categories[TextCategory.VIOLENCE]
    # 6. Check for inappropriate content
    violations = {}
    if hate_result and hate_result.severity > 2:
        violations["hate speech"] = "yes"
    if self_harm_result and self_harm_result.severity > 4:
        violations["self-harm"] = "yes"
    if sexual_result and sexual_result.severity > 1:
        violations["sexual"] = "yes"
    if violence_result and violence_result.severity > 2:
        violations["violent references"] = "yes"
    return violations
This code might take a while to fully understand in a single go, but you’ve already worked with almost the same code in Lesson 5!
First, you import the required packages and modules that you’ll need to perform the text analysis.
Next, you define the function that will be used to analyze the text and check whether it’s safe for publication, as shown in the sketch below.
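The remaining numbered steps construct the request, call the service, collect the per-category results, and record a violation whenever a category’s severity crosses the chosen threshold. If you’d like to try the function on its own, a quick, hypothetical check could look like this (the sample text is a stand-in, and moderator_client is the client you created earlier):
# Hypothetical standalone check; sample text is a placeholder
violations = analyze_text(client=moderator_client, text="Loved the spicy ramen here!")
if violations:
    print("Blocked categories:", ", ".join(violations))
else:
    print("Text looks safe to publish.")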
Now, it’s time to move ahead and create the analyze_image function. Create an image_analysis.py file inside the root folder of the project and add the following code:
# 1. Import the packages
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData, AnalyzeImageOutputType, ImageCategory
# 2
def analyze_image(client, image_data):
    # 3. Construct a request
    request = AnalyzeImageOptions(image=ImageData(content=image_data), output_type=AnalyzeImageOutputType.FOUR_SEVERITY_LEVELS)
    # 4. Analyze image
    try:
        response = client.analyze_image(request)
    except HttpResponseError as e:
        print("Analyze image failed.")
        if e.error:
            print(f"Error code: {e.error.code}")
            print(f"Error message: {e.error.message}")
            raise
        print(e)
        raise
    # 5. Extract results
    categories = {
        ImageCategory.HATE: None,
        ImageCategory.SELF_HARM: None,
        ImageCategory.SEXUAL: None,
        ImageCategory.VIOLENCE: None
    }
    for item in response.categories_analysis:
        if item.category in categories:
            categories[item.category] = item
    hate_result = categories[ImageCategory.HATE]
    self_harm_result = categories[ImageCategory.SELF_HARM]
    sexual_result = categories[ImageCategory.SEXUAL]
    violence_result = categories[ImageCategory.VIOLENCE]
    # 6. Check for inappropriate content
    violations = {}
    if hate_result and hate_result.severity > 2:
        violations["hate speech"] = "yes"
    if self_harm_result and self_harm_result.severity > 4:
        violations["self-harm references"] = "yes"
    if sexual_result and sexual_result.severity > 0:
        violations["sexual references"] = "yes"
    if violence_result and violence_result.severity > 2:
        violations["violent references"] = "yes"
    return violations
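As a quick, hypothetical sanity check, you could feed analyze_image the raw bytes of a local file; the file name here is only a placeholder:
# Hypothetical standalone check, reusing moderator_client from earlier
with open("sample_post.jpg", "rb") as f:  # placeholder path
    image_bytes = f.read()
print(analyze_image(client=moderator_client, image_data=image_bytes))  # e.g. {} for a clean image
One more detail: check_content_safety in starter/business_logic.py can only call these helpers if they’re imported there. If your starter file doesn’t already include them, add imports such as:
from text_analysis import analyze_text
from image_analysis import analyze_image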
Now, you’re ready to integrate everything and finalize your moderation function for the app. Head back to the starter/business_logic.py file and replace ## TODO: Logic to evaluate the content with the following:
# 1
if len(text_analysis_result) == 0 and len(image_analysis_result) == 0:
    return None
# 2
status_detail = 'Your post contains references that violate our community guidelines.'
if text_analysis_result:
    status_detail = status_detail + '\n' + f'Violation found in text: {",".join(text_analysis_result)}'
if image_analysis_result:
    status_detail = status_detail + '\n' + f'Violation found in image: {",".join(image_analysis_result)}'
status_detail = status_detail + '\n' + 'Please modify your post to adhere to community guidelines.'
# 3
return {'status': "violations found", 'details': status_detail}
Here’s an explanation of the code:
Bou’pe xyetbahp bqarcir stu qetxiegidf toci busuozej kjob cunm axl urowo olozzfoz ak unrrw. Aq hebm obe neosj antjy, pe vurcxac worineyq ib gojastup bmik xeufb lotilfaifxw youfipu phi qapcavejh waibifewug — xiacogh bmi yobluwx ig wore.
If either of the analysis results reports harmful content, the rest of the code is executed. You’ve defined a new variable, status_detail, and appended each harmful category that was detected to the string in a human-readable format, so that the user can be informed about it. You also request that the post be edited to adhere to community guidelines.
Finally, you return the result of the status check, so that the user can be informed about the violations found in the content and asked to update the post to address the stated concerns. An example of the returned value follows below.
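To make the result’s shape concrete, here’s roughly what check_content_safety would return for a post whose text tripped the sexual-content check while the image came back clean. The wording follows the strings built above; the flagged categories depend on what the service reports:
# Illustrative return value only
{
    'status': 'violations found',
    'details': 'Your post contains references that violate our community guidelines.\n'
               'Violation found in text: sexual\n'
               'Please modify your post to adhere to community guidelines.'
}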