Be a Prompt Engineer!
Learn to maximize the results from ChatGPT
Summary
Interested in getting involved with the latest AI craze but don’t want to write code? Consider becoming a prompt engineer! This shiny new profession plays a crucial role in developing and optimizing AI language models to generate meaningful results for stakeholders. Most, if not all, of you have spent time using OpenAI’s ChatGPT model by now. But do you know how to get the best results out of it? That’s what a prompt engineer does: leverages their knowledge of how large language models (LLMs) work to squeeze the most value out of the model for the consumers of the results.
Anyone — and soon everyone — will have access to these models and will be using them in their day-to-day activities. Few will know how to get the most out of them.
Ask good questions
When it comes to LLMs there actually is such a thing as a stupid question. The models have been trained on unfathomable amounts of data from the internet, and that’s incredibly exciting and useful in most circumstances. However, it’s important to keep in mind a couple of limitations:
- The training data is stale (usually over a year old)
- Personal questions can lead to ridiculous answers
The popular LLMs are so large and take so long to train that it’s impractical to expect they will give savvy answers to time-sensitive questions. So the question:
Why did the stock market go down yesterday?
doesn’t make any sense to an LLM that wasn’t trained on data from yesterday. You can ask less time-sensitive questions about what macro-economic conditions might lead to a drop in the stock market, but trying to pin an LLM down on questions dependent on real-time data is a waste of time.
Why did she leave me?
Unless you’re famous and provide a little more context for the above question, you’ll be sorely disappointed in the generic responses to questions like this. LLM technology is incredibly good at sifting through reams of data and distilling it down to something digestible — but it has to have been trained on data that is relevant to your query.
Get good answers
- Develop a strategy
- Create good prompts
- Evaluate responses and refine prompts
- Review with stakeholders
- Stay up to date
Develop a strategy
A prompt strategy relies on developing a deep knowledge of what the stakeholders are looking for. Many times your clients won’t be exactly sure what that is, and the process of revealing it may require a lot of patience and time on your part. I find it’s best to give this stage plenty of calendar time: many brief meetings and emails spread over a month, giving the participants time to think, tend to produce much more clarity about the goals than trying to rush it.
Create good prompts
I’ll illustrate a working example in a later section, but know that this stage is critical to justifying your value (and position) as a prompt engineer. You need to develop a working knowledge of how to maximize the relevancy and value of the answers based upon your queries. Sessions with an LLM maintain context, so I often find the best approach is cutting & pasting sections of different answers to arrive at the final, comprehensive answer I was looking for.
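To make the idea of session context concrete, here’s a minimal sketch of how a chat session’s history accumulates. The role/content message shape mirrors the one used by chat-style LLM APIs; the question and answer strings are placeholders, not real model output.

```python
# Minimal sketch of how a chat session's context accumulates.
# The role/content message shape mirrors chat-style LLM APIs;
# the answer strings here are placeholders, not real model output.

def add_turn(history, question, answer):
    """Record one question/answer exchange in the session history."""
    history.append({"role": "user", "content": question})
    history.append({"role": "assistant", "content": answer})
    return history

history = [{"role": "system", "content": "You are a helpful travel assistant."}]
add_turn(history, "Why should I go to Disneyland?", "<first answer>")
add_turn(history, "Why take my 8 year old son?", "<more specific answer>")

# Each new question is sent along with everything that came before,
# which is why follow-ups can build on (and quote from) earlier answers.
assert len(history) == 5  # system message plus two question/answer pairs
```

Because the whole history rides along with every new question, you can quote a phrase from an earlier answer in a later prompt and the model will recognize it.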
Evaluate responses and refine prompts
It’s your job to evaluate the answers you’re receiving and screen them for any biases they may contain. Remember that the text the LLMs were trained on contains biases; you need to learn to recognize them and filter them out of your final presentation.
Review with stakeholders
Reviewing the answers you’re getting from an LLM with interested parties should be approached as a collaborative and iterative process. Sometimes good answers lead to more questions, so don’t be surprised to see a specific project go on much longer than originally planned.
Stay up to date
Assume that everyone you’re working with has at least played with ChatGPT and other LLMs, and some have even begun to incorporate them into their workflow. They’ll be looking to you to stay abreast of the latest developments and versions as they roll out, so set aside some time each day to sift through the volumes of noise looking for nuggets to share with your coworkers.
Example
Let’s work through an example together that illustrates the importance of context in sequential questions/answers (meaning that the “whole” output of your session taken together and refined is greater than its “parts”) and specificity (being precise in what you’re asking of an LLM).
I’ll omit the answers I received in this session since yours will differ, but you’ll still get the point by following along. So please open a ChatGPT session now and copy & paste the questions below so you can see how LLMs respond to this line of inquiry.
Why should I go to Disneyland?
A vague but good starting point for a session.
Why should I take my 8 year old son to Disneyland?
Now we’re getting more specific about our motivations.
Why should I take my 8 year old son who is 48 inches tall to Disneyland?
Now we’re including domain-specific constraints that would only be specified by someone (a domain expert) who knows that height can be used to limit which amusement rides a child is allowed to ride.
What is the best time of year to take my 8 year old son who is 48 inches tall to Disneyland?
We’re drilling down now beyond the initial questions to begin planning a specific trip.
What would a 3 day itinerary look like if I wanted to take my 8 year old son who is 48 inches tall to Disneyland during the time of year when the park is less crowded?
Notice we’ve included part of a previous answer (about the park being less crowded) in the query above.
It’s amazing how helpful the itinerary I received from the above question was. It’s a great illustration of why so many people are hyped about this latest incarnation of AI.
What would a 3 day itinerary look like if I wanted to take my 8 year old son who is 48 inches tall to Disneyland during the time of year when the park is less crowded, and how much would it cost me?
Cost is always a concern, and it’s interesting to see how helpfully ChatGPT responds to such queries (albeit with slightly out-of-date prices).
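If you’d rather script a session like this than copy & paste, the same sequence of questions can be replayed programmatically. The sketch below is a hypothetical helper, not an official API: `run_session` accepts any `ask` callable, which you could point at a real chat endpoint (for example, via the `openai` package) or at a stub, as shown here.

```python
# Hypothetical helper for replaying a refinement sequence like the
# Disneyland session above. Each question is sent with the accumulated
# history so later prompts can build on earlier answers.

def run_session(questions, ask):
    """Run the questions in order, threading the full history through.

    `ask` is any callable that takes the message history (a list of
    role/content dicts) and returns the model's reply as a string.
    """
    history = []
    for question in questions:
        history.append({"role": "user", "content": question})
        answer = ask(history)
        history.append({"role": "assistant", "content": answer})
    return history

questions = [
    "Why should I go to Disneyland?",
    "Why should I take my 8 year old son to Disneyland?",
    "Why should I take my 8 year old son who is 48 inches tall to Disneyland?",
]

# A stub stands in for a real model here; with a real client you would
# call its chat endpoint inside `ask` instead.
history = run_session(questions, lambda msgs: f"(answer to turn {len(msgs)})")
```

Because `ask` always receives the whole history, the model sees the full line of inquiry at every step, just as it does in the ChatGPT web session above.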
Take a course
One of the world’s leaders in AI, Andrew Ng, has recently announced the availability of a course specifically on this subject! Why not check it out? I spend a lot of my learning time surfing the internet, reading articles, and watching YouTube videos. But when I want to dive deeper, I don’t hesitate to sign up for a class (or buy a book). Two of my favorite learning sites are Udemy and Coursera. I haven’t taken this specific class (yet), but I bet it’s a treasure trove of good information.
Conclusion
Generative AI is currently in the “emerging” state but won’t stay there for long. Soon it will be embedded in any workflow that can be enhanced by it. Throw your hat in the ring and help make this one of the most impactful technologies in decades!
- Ask good questions
- Get good answers
- Develop a strategy
- Create good prompts
- Evaluate responses and refine prompts
- Review with stakeholders
- Stay up to date