AI, LLM & Prompt Engineering
Artificial Intelligence (AI)
Artificial Intelligence (AI) is a set of technologies that enable computers to perform tasks typically associated with human intelligence. These include, but are not limited to, learning, reasoning, problem solving, communicating in natural language, analyzing data, and much more. AI applications are usually designed to assist people in accomplishing tasks, some of which may be mundane, repetitive, expensive, or so large and complex that they would require multiple human experts and a significant amount of time to complete.
One of AI's most widely used attributes is its ability to communicate in natural language. That's the real deal for most people. This attribute, coupled with others like context retention, is the reason you're able to have conversations with ChatGPT. The underlying technology here is the Large Language Model (LLM).
Large Language Model (LLM)
A model is a computer program built on a set of data, designed to recognize certain patterns in order to produce relevant results given an input. The process of teaching a model from a particular set of data is called training. An LLM is a kind of model designed to understand and generate responses in human-readable language. Some examples of LLMs are Mistral, Qwen, Llama, Gemini, DeepSeek, GPT (the backbone of ChatGPT), Claude Sonnet, and others.
Other kinds of models include Vision-Language Models (VLMs), which are designed to understand media data, like pictures, audio, videos and graphs, and Large Action Models (LAMs), which are designed to execute actions based on given inputs, among others.
One of the biggest issues with LLMs is something called hallucination. Hallucination is a phenomenon where the LLM responds with inaccuracies. These inaccuracies can range from mild errors to outright false information, yet the LLM presents them confidently. This is one primary reason many people avoid AI—it can be misleading. It is reason enough to be careful when using AI, since you can't always guarantee its correctness. Better still is to understand how it works and learn how to make the most of it.
That said, a number of techniques and frameworks have been developed to address such issues, one of which is prompt engineering.
Prompt Engineering
Prompt engineering is the art and science of designing and refining prompts to get the best possible output from generative models. A prompt is an input or instruction you give to an AI model. This may be in the form of text or any other media format the model accepts. You may have realized this if you’ve ever used any AI chat tool—that when you’re not specific with your prompts, you could get general and sometimes irrelevant responses.
Generating responses is an expensive process. It takes significant computing resources to generate even the simplest response. That is why most of these applications have limits on their usage. These message exchanges between you and the AI application are measured in tokens. A token is the basic unit of data processed by a model. The more tokens you send, and the more tokens it responds with, the more costly it is for the AI application.
With prompt engineering, you make every token count: you send fewer prompts and get highly relevant outputs with fewer tokens used. This isn't just great in terms of usage; it also helps prevent models from producing irrelevant or inaccurate data and carrying it forward in memory for subsequent interactions. It helps reduce hallucinations too. You become more productive and effective because you can harness much of the AI's power to achieve your goals in a short time.
A simple prompt for a math word problem for a class 6 student could be:
"Create a math word problem for a class 6 student."
This will most likely result in a generic word problem, so you’d follow it up with other details like the topic, question format, and more. With prompt engineering, you could end up with something like:
"Create a simple, short story-based word problem about fractions for a class 6 student. Use names they can relate to, include one picture idea, and provide the answer at the end."
Here, you give it more information upfront in two simple sentences. This is just one example, though. You can use prompt engineering to perform text summarization, question answering, conversation, reasoning, and a whole lot more. Prompt engineering is also an art, because you rely on the way models work, their outputs, and multiple rounds of testing to find that sweet spot that fits your scenario. There’s no single best prompt for every scenario.
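The refinement above can also be thought of as assembling a prompt from structured parts: a task, an audience, and a list of constraints. The sketch below shows this idea; the function and field names are my own invention for illustration, not part of any prompt-engineering framework.

```python
def build_prompt(task: str, audience: str, constraints: list[str]) -> str:
    """Assemble a specific prompt from a task, an audience, and constraints."""
    lines = [f"{task} for {audience}."]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    task="Create a simple, short story-based word problem about fractions",
    audience="a class 6 student",
    constraints=[
        "Use names they can relate to",
        "Include one picture idea",
        "Provide the answer at the end",
    ],
)
print(prompt)
```

Structuring prompts this way makes each piece easy to tweak during the rounds of testing mentioned above: you can swap the topic, audience, or constraints independently while keeping the rest of the prompt stable.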
Before delving into more AI processes, continue to the next segment to see a demo of the ones discussed so far.