In this demo, you'll create functions to generate situational prompts and corresponding scenery images, and you'll implement speech recognition and synthesis.
Start by defining a function to generate situational prompts. This function will create an initial situational setting and a response.
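Note that every snippet in this demo assumes the OpenAI client object, client, that you set up earlier in the course. If you're starting from a fresh notebook, a minimal sketch of that setup might look like this (reading the API key from the OPENAI_API_KEY environment variable is an assumption here, not necessarily how the earlier lesson did it):
# Minimal setup sketch (assumption): create the OpenAI client used below
import os
from openai import OpenAI
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])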
Start by writing the skeleton of your function.
# Function to generate a situational prompt for practicing English
def generate_situational_prompt(seed_prompt=""):
    # Define additional prompt instructions
    additional_prompt = """
    Then create an initial response to the person. If the situation
    is "ordering coffee in a cafe.", then the initial response will
    be, "Hello, what would you like to order?". Separate the initial
    situation and the initial response with a line containing "====".
    Something like:
    "You're ordering coffee in a cafe.
    ====
    'Hello, there. What would you like to order?'"
    Limit the output to 1 sentence.
    """
In this initial part of the function, you set up the additional instructions that will guide the generation of situational prompts. The additional_prompt variable provides a template for the type of response you expect.
    # Check if a seed prompt is provided and create the seed
    # phrase accordingly
    if seed_prompt:
        seed_phrase = f"""Generate a second-person POV situation
        for practicing English with this seed prompt: {seed_prompt}.
        {additional_prompt}"""
    else:
        seed_phrase = f"""Generate a second-person POV situation
        for practicing English, like meeting your parents-in-law,
        etc.
        {additional_prompt}"""
Here, you check if you have a specific seed_prompt. If so, you incorporate it into your seed_phrase. Otherwise, use a general prompt for generating a situation.
Now, use GPT to generate your situational prompt.
    # Use GPT to generate a situation for practicing English
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a creative writer. Very very creative."},
            {"role": "user", "content": seed_phrase}
        ]
    )
In this section, you tell the GPT model to generate the situational prompt. You pass in the seed_phrase along with a role specification for the system and user.
    # Extract and return the situation and the initial response
    # from the response
    message = response.choices[0].message.content
    # Return the generated message
    return message
Now, the function is complete. Test the function to ensure it's working correctly.
# Test the function to generate a situational prompt
generate_situational_prompt()
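The output changes on every run, but given the format the prompt enforces, it should look roughly like this (an illustrative example, not actual model output):
You're checking in for a flight at a busy airport.
====
'Good morning! May I see your passport, please?'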
Then, test it again with a seed prompt.
# Test the function to generate a situational prompt with a seed prompt
generate_situational_prompt("comics exhibition")
Next, create a function to generate a scenery image that matches the situational prompt. This function uses the DALL-E model.
# Generate an image based on the situational prompt
# Import necessary libraries for image processing and display
import requests
from PIL import Image
from io import BytesIO

def generate_situation_image(dalle_prompt):
    # Generate an image using the DALL-E 3 model with the provided prompt
    response = client.images.generate(
        model="dall-e-3",  # Specify the model to use
        prompt=dalle_prompt,  # The prompt describing the image to generate
        size="1024x1024",  # Specify the size of the generated image
        n=1,  # Number of images to generate
    )
    # Retrieve the URL of the generated image
    image_url = response.data[0].url
    # Download the image from the URL
    response = requests.get(image_url)
    # Open the image using PIL
    img = Image.open(BytesIO(response.content))
    # Return the image object
    return img
Then, create a function to display the image.
# Display the image in the cell
import matplotlib.pyplot as plt

def display_image(img):
    plt.imshow(img)
    plt.axis('off')
    plt.show()
# Combine the functions to generate a situational prompt and
# its matching image
full_response = generate_situational_prompt("cafe")
initial_situation_prompt = full_response.split('====')[0].strip()
print(initial_situation_prompt)
img = generate_situation_image(initial_situation_prompt)
display_image(img)
At first, you got the situational prompt with the seed prompt "cafe". But the situational prompt includes more than just a situation. It has the initial response from a person in that situation. In this example, the initial response would be a greeting from an employee in the cafe. But for generating an image representing the situation, you don't need that initial response. So, you have to take it out first.
The code, full_response.split('====')[0].strip(), splits the full response at the delimiter ==== and takes the first part (which is the initial situation prompt). The strip() method is used to remove any leading or trailing whitespace from the string.
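If you want to see the splitting in isolation, here's a tiny standalone sketch using a made-up response in the same format (not real model output):
# A made-up response in the "situation ==== reply" format
sample = """You're ordering coffee in a cafe.
====
'Hello, there. What would you like to order?'"""
parts = sample.split('====')
print(parts[0].strip())  # the situation, whitespace removed
print(parts[1].strip())  # the initial response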
Now, create a function to play the audio file. Add the following code in Jupyter Lab:
# Play the audio file
# Import necessary libraries for audio processing and display
import librosa
from IPython.display import Audio, display

# Function to play a speech file
def play_speech(file_path):
    # Load the audio file using librosa
    y, sr = librosa.load(file_path)
    # Create an Audio object for playback
    audio = Audio(data=y, rate=sr, autoplay=True)
    # Display the audio player
    display(audio)
This function, play_speech, uses the librosa library to load an audio file from the provided file_path. It then creates an Audio object with the loaded data and sample rate, enabling autoplay. Finally, it uses the display function from IPython to show an audio player in the Jupyter Lab, allowing users to listen to the audio.
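You can test the function on its own with any audio file you have on disk; the path below is just a placeholder:
# Try the player with an existing audio file (placeholder path)
play_speech("audio/sample.mp3")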
Next, create a function to generate speech from a text prompt using a text-to-speech (TTS) model.
# Function to generate speech from a text prompt
def speak_prompt(speech_prompt, autoplay=True,
                 speech_file_path="speech.mp3"):
    # Generate speech from the text prompt using TTS
    response = client.audio.speech.create(
        model="tts-1",
        voice="alloy",
        input=speech_prompt
    )
    # Save the synthesized speech to the specified path
    response.stream_to_file(speech_file_path)
    # Sometimes you want to play the speech automatically,
    # sometimes you do not
    if autoplay:
        # Play the synthesized speech
        play_speech(speech_file_path)
This function, speak_prompt, uses a text-to-speech (TTS) model to generate speech from the provided speech_prompt. The generated speech is saved to a specified file path. If autoplay is set to True, the function will automatically play the synthesized speech using the play_speech function.
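For instance, you can synthesize a sentence without playing it right away, then play the saved file later; speech.mp3 is the function's default output path:
# Synthesize speech but skip autoplay; the audio is saved to speech.mp3
speak_prompt("Hello, there. What would you like to order?", autoplay=False)
# Play the saved file whenever you're ready
play_speech("speech.mp3")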
Play the initial response based on the situational prompt.
# Play the initial response based on the situational prompt
initial_response = full_response.split('====')[1].strip()
speak_prompt(initial_response)
Create a function to transcribe speech into text.
# Function to transcribe speech from an audio file
def transcript_speech(speech_filename="my_speech.wav"):
    with open(speech_filename, "rb") as audio_file:
        # Transcribe the audio file using the Whisper model
        transcription = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
            response_format="json",
            language="en"
        )
    # Return the transcribed text
    return transcription.text
Transcribe the speech. Then, print the transcribed text.
# Transcribe the audio
transcripted_text = transcript_speech("audio/cappuccino.m4a")
# Print the transcribed text
print(transcripted_text)
Combine the initial response and the transcribed text to create a conversation history.
# Function to create a conversation history
def creating_conversation_history(history, added_response):
    history = f"""{history}
====
'{added_response}'
"""
    return history
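Given how the f-string is assembled, one round of history has this shape (illustrative, with made-up text):
# creating_conversation_history("You're ordering coffee in a cafe.",
#                                "I'd like a cappuccino.")
# returns:
# You're ordering coffee in a cafe.
# ====
# 'I'd like a cappuccino.'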
Now, use the function to create and print the conversation history.
# Create and print the conversation history
history = creating_conversation_history(full_response, transcripted_text)
print(history)
Generate a continuation of the conversation based on the history.
# Function to generate a conversation based on the conversation history
def generate_conversation_from_history(history):
    prompt = """Continue conversation from a person based on this
    conversation history and end it with '\n====\n'.
    Limit it to max 3 sentences.
    This is the history:"""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a creative writer. Very very creative."},
            {"role": "user", "content": f"{prompt}\n{history}"}
        ]
    )
    # Extract and return the generated conversation
    message = response.choices[0].message.content
    return message
This function, generate_conversation_from_history, continues a conversation based on a given history. It constructs a prompt that instructs GPT to continue the conversation and limit the response to a maximum of three sentences. The generated response is then extracted and returned.
Generate and print the conversation based on the history.
# Generate and print the conversation based on the history
conversation = generate_conversation_from_history(history)
print(conversation)
# Combine the conversation history with the new conversation
combined_history = history + "\n====\n" + conversation
# Print the combined history
print(combined_history)
# Generate a scenery image based on the combined history
dalle_prompt = ("Generate a scenery based on this conversation: "
                + combined_history)
img = generate_situation_image(dalle_prompt)
# Display the generated image
display_image(img)