Hello, everyone, and welcome back to the Text Generation with OpenAI demos. This demo follows the lesson, “Building a Non-Streaming Chat App”. In this video, you’ll add a chat-like interface to bulk-generate JSON data.
To make a chat-like interface for bulk-generating JSON data, you’ll use chunks of code from the previous demo and a loop like the one from the instruction segment.
Start with a blank, empty file.
Then, in JupyterLab, again make sure that you’ve exported the API key in your environment.
Like in the previous demo, add the following code in the first cell of your notebook file:
import os
import openai
from openai import OpenAI

openai.api_key = os.environ["OPENAI_API_KEY"]
model = "gpt-4o-mini"
client = OpenAI()
You should know this code already because you’ve been using it for a couple of lessons. :]
Also in the previous demo, you used this code to generate data for unit tests:
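Before running the cell, it helps to confirm the key really is exported; otherwise the client call fails later with a less obvious error. Here’s a minimal sketch, assuming a helper name `has_api_key` of my own (it isn’t part of the lesson):

```python
import os

def has_api_key(env_var: str = "OPENAI_API_KEY") -> bool:
    """Return True if the named environment variable is set and non-empty."""
    return bool(os.environ.get(env_var))

# Warn early instead of failing later inside the API call.
if not has_api_key():
    print("OPENAI_API_KEY is not set; export it before running this notebook.")
```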
SYSTEM_PROMPT = (
    "You generate sample JSON data for unit tests. "
    "Generate variants that are as diverse as possible. "
    # You insert from here
    "If the expected type is a number, generate negative numbers, zero, "
    "extremely large numbers, or other unexpected inputs like a string. "
    "If the expected type is an enum, generate non-enum values. "
    "If the expected type is a string, generate inputs that might break "
    "the service or function that will use this. "
    # You end the insert here
    "You must return a response in JSON format: "
    "{"
    " fullName: <name of the person who ordered>,"
    " itemName: <name of the item ordered>,"
    " quantity: <number of items ordered>,"
    " type: <pickup or delivery>"
    "}"
)
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
]

response = client.chat.completions.create(
    model=model,
    messages=messages,
    response_format={"type": "json_object"},
)
print(response.choices[0].message.content)
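Because `response_format` is `json_object`, the content you print is a JSON string that you can load into a Python dict and validate. Here’s a hedged sketch using a hard-coded sample string in place of a live `response.choices[0].message.content` (the sample values are invented for illustration):

```python
import json

# Stand-in for response.choices[0].message.content -- a live call returns
# a JSON string shaped by the system prompt above.
sample_content = (
    '{"fullName": "Ada Lovelace", "itemName": "Espresso", '
    '"quantity": -3, "type": "pickup"}'
)

order = json.loads(sample_content)

# The prompt asks for these four keys, so check they all came back.
expected_keys = {"fullName", "itemName", "quantity", "type"}
assert expected_keys <= order.keys()

print(order["quantity"])  # a negative quantity: exactly the edge case the prompt requests
```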
If you need more details on what these lines do, please refer to the previous demo.
Use an infinite loop like in the instruction section to make a chat-like interface. Combine the chat-completion call with the loop code:
# 1
while True:
    # 2
    user_input = input("Please enter your input: ")
    # 3
    if user_input.lower() == 'exit':
        break
    # 4
    messages.append({"role": "user", "content": user_input})
    # 5
    try:
        response = client.chat.completions.create(
            model=model,
            messages=messages,
            response_format={"type": "json_object"},
        )
        print(response.choices[0].message.content)
    except openai.RateLimitError as e:
        print(f"Rate limit exceeded: {e}")
This code snippet implements a simple chat-like interface using an infinite loop. In it:
1. The loop to prompt for input until the user is done.
2. Capture the user input.
3. If the user is exiting, break out of the loop.
4. The input is added to messages as a user message.
5. The try block wraps the chat completion.
On success, it prints the assistant’s response.
If a RateLimitError occurs, the except block displays a rate-limit message.
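The except branch above only reports the rate-limit error. One common extension is to retry with a short backoff. The sketch below uses an invented `RateLimited` exception and `flaky_call` stand-in so it runs offline; in the real loop you would catch `openai.RateLimitError` and retry the `client.chat.completions.create` call instead:

```python
import time

class RateLimited(Exception):
    """Stand-in for openai.RateLimitError so the sketch runs offline."""

def with_retry(fn, attempts=3, delay=0.01):
    """Call fn, retrying on RateLimited up to `attempts` times."""
    for attempt in range(attempts):
        try:
            return fn()
        except RateLimited:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(delay * (2 ** attempt))  # exponential backoff

# Invented flaky function: fails twice, then succeeds.
calls = {"n": 0}
def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimited()
    return "ok"

print(with_retry(flaky_call))  # prints "ok" after two retried failures
```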