How can we get the most benefit out of AI?
Imagine that you're solving a critical problem and need an extra hand, so you turn to ChatGPT. You spend some time thinking and writing a prompt. But if you give ChatGPT a basic, random prompt, it isn't able to help you properly, and in return it gives you very generic, surface-level information.
I know many of you don't write prompts like this. You tweak your prompts based on experience, and through these tweaks, magic often happens. By writing a prompt tailored to a specific need, you get very good results.
Prompt Engineering
Prompt engineering is the process of crafting precise instructions, called prompts, to guide generative AI models, such as large language models (LLMs), towards generating desired outputs.
Many people do this, but most of them are shooting an arrow in the dark: sometimes you get very good results, sometimes very bad ones. You may write a longer prompt based on your experience, add more details, and land a good result, but you don't know why it works. That is the major difference between you and a professional prompt engineer.
Using AI well means giving it better prompts, so that your work becomes more efficient and of higher quality. That's why prompt engineering is a subject everyone should learn, whether you're in first year, second year, or 12th class, or already have 10 years of experience; whatever you're doing, it's going to help you for sure. You already know how important communication is, and communicating with AI is becoming equally important. That is why prompt engineering really matters for you.
Now a question might arise: where should I start? This post is for you. Here we'll learn basic prompt engineering. Before we get into more detail, you first need to understand the basics of generative AI and its capabilities.
Let me give you an example. A bad prompt is vague and lacks specific details, like "write a blog post." A good prompt provides context, instructions, and the desired output, such as "As a marketing specialist, write a blog post about the benefits of social media for small businesses, aiming for a concise and engaging read for a general audience." This is prompt engineering. With any LLM (Large Language Model), the more specific and step-by-step your prompt, the better the result. We're going to learn how to form our prompts so that you get good results. Remember, generative AI models take things literally: they produce exactly what you ask for and don't guess anything on their own. The vaguer your input, the higher the chance of an equally vague output. Give them detailed instructions, and that's when you become a prompt engineer.
Instead of giving them vague inputs, we need to focus on three things: persona, task, and context. Whenever you talk to any LLM, be it Claude, ChatGPT, or DeepSeek, keep these three in mind. First, give it a persona (we'll discuss what that persona should be further on). Second, the task: what work needs to be done. And finally, the context: the background it needs to understand what's going on.
You can compare generative AI to a mimicry artist: the more you teach it and the better context you give it, the better the output you'll get. Treat these three categories as building blocks and start being creative. Try different prompts to learn how the results change when you actually give it persona, task, and context; you'll learn by practicing these things repeatedly.
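The three building blocks can be sketched as a tiny helper that assembles a prompt string. This is my own illustrative sketch (the function name and template are assumptions, not any official API):

```python
def build_prompt(persona: str, task: str, context: str) -> str:
    """Assemble a prompt from the three pillars: persona, task, context."""
    return (
        f"You are {persona}.\n"
        f"Task: {task}\n"
        f"Context: {context}"
    )

# Example: the "marketing specialist" prompt from earlier, split into pillars.
prompt = build_prompt(
    persona="a marketing specialist with 10 years of experience",
    task="write a blog post about the benefits of social media for small businesses",
    context="the audience is general readers; keep it concise and engaging",
)
print(prompt)
```

Splitting the pillars into separate arguments makes it easy to swap one out (say, a different persona) while keeping the rest of the prompt stable.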
According to a recent study from a Japanese university, if you add one more line, "Let's think step by step," you'll get an even better result. For example, ask "What is the meaning of life?" and you'll get a decent answer with several points. Now give ChatGPT the same prompt but add that one extra line, and the depth of thinking in the output visibly increases: the step-by-step approach shows that rather than there being one universal answer, the meaning of life is something you construct through introspection and your interaction with the world. You can see how deeply ChatGPT has thought about it!
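That one-line trick can be applied mechanically. Here is a minimal sketch (the helper name is my own) that appends the phrase to any prompt:

```python
STEP_BY_STEP = "Let's think step by step."

def with_reasoning(prompt: str) -> str:
    """Append the zero-shot step-by-step trigger line to a prompt."""
    return f"{prompt.rstrip()}\n\n{STEP_BY_STEP}"

print(with_reasoning("What is the meaning of life?"))
```

The same suffix works on almost any question, which is why it is such a popular first trick to learn.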
Now you might be thinking that prompt engineering is very complex, but in most situations simplicity works better than complexity. If you can break your problem into simple steps, explain it clearly, and describe the kind of result you want, you'll see better final results. For starters, just focus on two things: multi-step prompting and adaptive prompting. In multi-step prompting, you give the three things I've mentioned several times already: first persona, then task, then context.
In adaptive prompting, you add a little more context to the existing information, which helps ChatGPT or any other LLM analyze the information better, fix problems, and give you good output. These basic categories are more than enough for anyone starting from scratch or handling day-to-day tasks. But if you're an aspiring prompt engineer, you need to study things in a bit more depth. In advanced prompting, there are three core components.
You need to spell each of these out in detail, and I'll walk through them one by one. The first is persona, the second is task, and the third is context. I'm repeating this so often so that it sticks in your mind: whenever you write your next prompt, build it on these three pillars.
Persona
Persona is divided into four subcategories.
- Expertise Level
- Role Based Persona
- Personality & Tone
- Biases & Ethical Constraints
Let's take them one at a time. First, expertise level: you tell the model whether to respond as a beginner, an intermediate, or an expert. For example, if you're solving a software engineering problem, you can say it has 10 years of experience, and it will reason like a 10-year veteran. Or, if you're learning, you can ask ChatGPT to act as a beginner and answer your questions at that level. That's the first subcategory of persona: expertise level.
Second is the role-based persona. For example, after telling it how much experience it has, you tell it what it is: a software engineer, a singer, or even a prompt engineer itself. In this way, you can give it different personas depending on the role.
Third is personality and tone. If you don't specify a personality, for example when you're having it write content, you'll get very robotic answers. That is why personality and tone are really important.
Last but not least, and used less often: biases and ethical constraints. If you're writing about a particular topic and don't want biases in it, or there are ethical constraints you can't cross, give that context to ChatGPT or whichever LLM you're using. For example, if you're discussing a social issue, tell it which angles to avoid and you'll get a more neutral answer.
Task
Just as persona has subcategories, task has its own subcategories too, and so does context.
- Response Type
- Output Formatting
- Depth & Length Control
- Creativity vs Precision
Here are the subcategories of task. Response type covers summarization, explanation, creative writing, and comparison: "Summarize this article into 3 bullet points," "Explain this in simple terms, like I'm a 5-year-old," "Write a sci-fi short story in the style of Isaac Asimov" (you can name other authors too, and ChatGPT will have an idea of how to write like them), or "Compare the iPhone 15 and Samsung S24 based on features."
Then come output formatting, depth and length control, and creativity vs precision. In output formatting, you tell it the format you want the output in: "Give me JSON," "Write it as a bulleted list," or "Generate a table with the differences between Python and Java." In depth and length control, you say whether you want the answer in one sentence, in detail, or tweet-sized. And finally, creativity vs precision: you can adjust how rigid or creative you want it to be, e.g., "Be highly factual," or "Be creative, add some metaphors and your own interpretation."
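When you ask for JSON output, it pays to validate whatever comes back, because a model can ignore the format or wrap the JSON in extra prose. A minimal sketch (the reply string here is made up for illustration, not a real model response):

```python
import json

def parse_json_reply(reply: str):
    """Try to parse a model reply as JSON; return None if it isn't valid JSON."""
    try:
        return json.loads(reply)
    except json.JSONDecodeError:
        return None

# A made-up reply, as if the model honored "Give me JSON":
reply = '{"language": "Python", "typing": "dynamic"}'
data = parse_json_reply(reply)
print(data)                      # a dict when the reply was valid JSON
print(parse_json_reply("oops"))  # None: the model ignored the format
```

In a real workflow, a `None` result is your cue to re-prompt the model, perhaps with a stricter reminder about the required format.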
Context
Context has the following subcategories.
- Domain Context
- Prior Information
- Target Audience
- Cultural & Regional Adaptation
- Style & Mood Context
Domain context defines the industry or field the AI should focus on, while prior information carries over previous instructions or context.
In domain context, you can say: "Explain AI in the context of healthcare" (if you're a healthcare professional, ask for it in those terms), "Generate a marketing strategy for an e-commerce startup," or "Talk about cybersecurity risks in cloud computing." Each of these narrows the scope: cybersecurity only as it relates to cloud computing, marketing only for an e-commerce startup.
Similarly, prior information resurfaces what was said before: "Using the last example you gave me, now apply it in finance," or "Remember the company I mentioned earlier." Whatever context you gave earlier, you bring it back into play.
Next is target audience: "Explain the topic for 8th-grade students." Your output changes a lot depending on the audience. A lot of people say "Explain this to me like I'm 10 years old" so that simpler words and simpler examples are used; you can do exactly that with AI.
Then comes cultural and regional adaptation, which ensures the AI localizes responses for different audiences: "Explain in Hinglish with Indian cultural references" will get you Hinglish output, or "Use examples relevant to European startups," and so on.
Finally, style and mood context fine-tunes the writing style or emotional tone: "Make it sound like a TEDx talk" if you want an article or answer in that style, "Write in an emotional and inspiring way," or "Keep it neutral and purely analytical." These are some examples of context.
In the end, our prompt is still formed from the same three things: first persona, then task, and finally context. Context can take different forms, and so can task and persona. You won't learn all of this in one day; it takes time to implement and adapt these things, to figure out which subcategory of context or task fits where. But the more you use them, the more you prompt, the better you get.
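Putting the pillars and a few of their subcategories together, a full prompt can be rendered from a simple spec. Everything below (the field names and template) is my own illustrative convention, not a standard:

```python
def render_prompt(spec: dict) -> str:
    """Render a persona/task/context spec into one prompt string."""
    lines = [
        f"You are {spec['persona']}.",
        f"Task: {spec['task']}",
        f"Output format: {spec['format']}",
        f"Audience: {spec['audience']}",
        f"Domain: {spec['domain']}",
        f"Tone: {spec['tone']}",
    ]
    return "\n".join(lines)

# A spec built from examples used throughout this post.
spec = {
    "persona": "a cybersecurity expert with 10 years of experience",
    "task": "explain the main cybersecurity risks in cloud computing",
    "format": "a bulleted list with 5 points",
    "audience": "8th-grade students",
    "domain": "cloud computing",
    "tone": "neutral and purely analytical",
}
print(render_prompt(spec))
```

Keeping the spec as data makes iteration easy: change one field, re-render, and compare how the model's output shifts.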
Highlights
💡 Prompt Engineering Essentials: Understanding how to write effective prompts is key to maximizing the utility of AI tools.
🚀 Generative AI Opportunities: Learning prompt engineering opens doors for improved efficiency, potentially leading to higher earnings in tech and creative industries.
🧱 Three Pillars of Prompting: Effective prompts consist of persona, task, and context, influencing the quality of AI-generated results.
🔍 Multi-Step Prompting: Breaking down complex prompts into smaller, manageable parts is the foundation for getting precise outputs from AI.
🎨 Advanced Techniques: Concepts like chain of thought prompting and few-shot prompting can enhance creative outputs and problem-solving.
🎯 Iterative Learning: Continuous practice and refinement of prompts lead to progressively better results.
💻 AI Integration in Workflows: Mastering prompt engineering is essential for anyone keen on leveraging AI in daily tasks to remain competitive in a fast-paced world.