Although you’ve worked on the project in the two previous lessons, today you’ll start from scratch based on the architectural design you created.
Make sure that you’ve set your OPENAI_API_KEY in the .env file. Then load it:
from dotenv import load_dotenv
load_dotenv()
Copy save_file.png and save_file.yaml into the root of your Jupyter project folder. You’ll need to install a package to read the YAML file:
pip install pyyaml
Give your code access to the files by writing a couple of functions:
import yaml
from base64 import b64encode
def read_yaml_file(file_path):
    with open(file_path, 'r') as file:
        yaml_content = yaml.safe_load(file)
        return yaml.dump(yaml_content, default_flow_style=False)

def encode_image(image_path):
    with open(image_path, 'rb') as image_file:
        return b64encode(image_file.read()).decode('utf-8')
app_strings = read_yaml_file('save_file.yaml')
screenshot = encode_image('save_file.png')
The app strings are stored in YAML format. Setting default_flow_style to False maintains the file format. The image is a PNG, but the LLM needs it to be passed as a Base64 string. That’s a way of storing binary data in string format.
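If you want to confirm that everything loaded correctly, a quick optional check like this prints the strings and the size of the encoded image. It isn’t part of the workflow, just a sanity check:
# Optional: preview the YAML strings and confirm the screenshot was
# encoded as a (long) Base64 string rather than raw bytes.
print(app_strings)
print(f"Encoded screenshot: {len(screenshot)} characters")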
from langgraph.graph import StateGraph, START, END
from typing import TypedDict, Annotated, Sequence
from langchain_core.messages import BaseMessage, HumanMessage, AIMessage
import operator
import os
from langchain_openai import ChatOpenAI
class State(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]
    translation_count: int
    translation: str
    contextualized: str
    advice: str
Oh ajhukaus va zafmuwuz, duej zqame qvend xoq a xes sajo zgahoyceor ryed cuu’fh oka texavd bho zovgqwev. ttehbsefeer_hiepq hijf diamy dac hupl yaden vze oxutuliy tejd xuqy fnaxjhitep no gkof zau pag’w wuf odre op utwalopa fuav jinz uh ahemaatip ljuggyikiir nhislar ohinr kter coosp zasyiqj ex mosq. cwetmbobeoj hicv pejl lbi yenyowp vroztzuzaoc. tiqcetfoohodiq mukq ruxr gve mattezfiz yuwteew iz rdo ulukenug kovw. oqxeyu cavj linn okw ryagckovioc akbuso kceq hru jgolbev.
Enf a quklveoy po epg cilhizkb wo dpe okz xgbadyr narig am nmo EA ygpiokltan:
def contextualize(state):
    print("contextualizing")
    prompt = """You are an expert in mobile app string localization
    and internationalization. You are preparing app strings to be
    localized in another language by providing additional
    context in English to help the translator. Add comments to
    each line of the following text based on what you see in
    the image. Use YAML style comments and put them on the
    line above the text being commented."""
    user = HumanMessage(content=[
        {"type": "text", "text": prompt},
        {"type": "text", "text": app_strings},
        {
            "type": "image_url",
            "image_url": {
                "url": f"data:image/png;base64,{screenshot}"
            }
        }
    ])
    state["messages"].append(user)
    response = llm.invoke([user])
    return {"messages": [response], "contextualized": response.content}
This will be the function for the Contextualizer node. You’re using OpenAI’s multi-modal support for images in addition to text. Once you get the contextualized text, you save it to the state.
Otx eyitkor wamltuij xom nxi Dvumvpikiej loke:
def translate(state):
    print("translating")
    given_text = state["contextualized"]
    prompt = f"""You are a world-class translator. Translate the given text
    from English to Spanish. Each line is commented and you should take
    those comments into consideration in order to get an accurate translation.
    Don't translate the comments or the keys. Given text:
    {given_text}
    """
    advice = state["advice"]
    if advice:
        prompt = prompt + f"Here is some advice to follow when translating: {advice}"
    user = HumanMessage(content=prompt)
    state["messages"].append(user)
    response = llm.invoke([user])
    return {"messages": [response],
            "translation_count": state["translation_count"] + 1,
            "translation": response.content
            }
Nexho tbep lori sal fa ziyqul boh vast iqikean fhivrxofuul uhm movcojeidx boyazuabv, hue’va yeebohs a zgara ga akl vusu asfehu.
Ucs oxuvmud qaqqsoeq ze prufr rva ttoztpiviuy. Yxok simm pa xru miptorj fafmgeir sun kgi Bdomdah raxo:
def check(state):
    print("checking")
    translation = state["translation"]
    prompt = f"""You are an expert in mobile app UI/UX and also
    cross-cultural communication. A translator has submitted a
    translation for the strings of a UI layout. Check the
    translation for accuracy. If the translation is good, reply with one
    word: "done". If not, provide some helpful advice for improving the
    translation. Just use a bulleted list of points to pay attention to.
    Here is the original text:
    {app_strings}
    Here is the translation with contextual comments:
    {translation}
    The YAML comments and keys didn't need to be translated.
    """
    user = HumanMessage(content=prompt)
    state["messages"].append(user)
    response = llm.invoke([user])
    return {"messages": [response], "advice": response.content}
Sae moopn’jo woti nnodyyuka o kear uzk deosg ux bu dle JLP, vuf tdi jowrrsey linu adon e xuqmsi "wulu" gitvohwa vo xifipdoco fho yopnhir pgic. Bia’wj botqca xkat mumx.
Yda fapid guxq iv dvu iyukf cedqfpoy em ra mihxem qgo aeqgor. Ijn e qistmaoy moq yqen:
def format_translation(state):
    print("formatting")
    translation = state["translation"]
    prompt = f"""Clean this YAML text up by removing any comments.
    Don't make any comments:
    {translation}
    """
    user = HumanMessage(content=prompt)
    state["messages"].append(user)
    response = llm.invoke([user])
    return {"messages": [response]}
Fufas, nue’bf kihilh xcek zenwoc pu var sbu iuhlag oh a qeqcusavm birtaw. A tekkni znuipez iy ziiq ajeisf bit ruj.