


Here's the importance of knowing how to use the target-your-response prompt engineering technique.
There is a famous expression that gracefully says this: “Onward still he goes, Yet ne’er looks forward further than his nose” (per legendary English poet Alexander Pope, 1734, Essay on Man).
We know of this today as the generally expressed notion that sometimes you get stuck and cannot seem to look any further than your own nose.
This is easy to do. You are at times involved deeply in something and are focused on the here and now. Immersed in those deep thoughts, you might be preoccupied mentally and unable to look a step ahead. It happens to all of us.
When using generative AI, you can fall readily into the same mental trap. Here’s what I mean. You are often so focused on composing your prompt that you fail to anticipate what will happen next. The output generated by generative AI is given little advance thought and we all tend to react to whatever output we see. Upon observing the generated output, and only at that juncture, might you be stirred into thinking that perhaps the output should be given some other spin or angle.
A one-step mindset for prompt engineering entails merely and almost solely ascertaining what your most immediate prompt is going to stipulate. You omit considerations about what the output or response is likely to look like. In contrast, a two-step think-ahead perspective is more informed and anticipates what you want as the output from the generative AI. In some cases, you might even do a further multi-step think-ahead contemplation about what the generated content consists of.
I liken this to a game of chess. A newbie chess player is preoccupied with their most immediate chess move. More experienced chess players think ahead. They anticipate what response they will get. At times, a quite savvy chess player can in their mind look ahead and envision a long series of what might happen.
Let’s cast this into the sphere of generative AI and prompt engineering.
I might on a one-step mindset basis ask generative AI to tell me which states in the United States have a population count of less than 2 million people. That’s what my prompt asks for. The response might be a narrative or essay that describes the states that meet that criterion. But suppose that I really wanted a listing of the states that were in the stipulated bracket. Upon seeing the population-explaining essay, I would then have to enter an additional prompt and tell the AI app to turn the essay into a listing. The listing might be shown in order of population count. Meanwhile, imagine that I had actually wanted the list to be shown in alphabetical order by state name. Ergo, I enter yet another prompt and tell the AI to reorder the generated list into alphabetical order by state name.
Do you see how this was a rather awkward stutter-step process?
If I had thought ahead and realized that I wanted an alphabetical listing of the states that met the criterion of having a population of less than 2 million people, I could have saved myself the somewhat exasperating effort of repeatedly telling the AI what I wanted. In one fell swoop, I could have simply stated my request and simultaneously indicated how I wanted the generated output or response to appear.
Not only would this save me the angst of the stutter step, but the odds are that it would also be cheaper if I am paying to use the generative AI. A single prompt and its single response will generally cost less, whether you pay per transaction or per unit of computer processing time, than accomplishing the same action across a series of prompts and responses (your mileage may vary).
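To make the difference tangible, here is a minimal sketch in Python, assuming the OpenAI Python client and a chat-style model (the model name and the exact prompt wording are merely illustrative; any comparable generative AI service would work similarly). The stutter-step path requires three round trips, while the target-your-response path needs only one.

```python
# A minimal sketch, assuming the "openai" Python package and an API key in the
# environment; the model name "gpt-4" is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(messages):
    """Send the running conversation and return the assistant's reply text."""
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    return reply.choices[0].message.content

# Stutter-step approach: three round trips, carrying the conversation along.
history = [{"role": "user", "content":
            "Which states in the United States have a population of less than 2 million people?"}]
history.append({"role": "assistant", "content": ask(history)})
history.append({"role": "user", "content": "Turn that answer into a list."})
history.append({"role": "assistant", "content": ask(history)})
history.append({"role": "user", "content": "Reorder the list alphabetically by state name."})
print(ask(history))

# Target-your-response approach: one combined prompt, one round trip.
print(ask([{"role": "user", "content":
            "Which states in the United States have a population of less than 2 million people? "
            "Present the answer as a list, in alphabetical order by state name."}]))
```

Three billable calls, each re-sending an ever-growing conversation, versus one call for the same end result.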
Welcome to the realm of target-your-response (TAYOR), a prompt engineering technique that gets you to stay on your toes and think ahead about what the generated AI response is going to look like.
If you are cognizant about anticipating the nature of your desired output, you can say upfront what you want when you enter your prompt. All you need to do is put a bit of mental effort into thinking ahead and then specify your desired output accordingly in a single prompt. This is not just about formatting. A plethora of facets comes into play. We will be examining those possibilities in depth, momentarily.
A quick clarification.
I want to make clear that you aren’t necessarily saying in your prompt the anticipated answer per se that you hope to get. In my example about the population of the states, let’s assume that I didn’t know which states would meet the criteria. It could be just one state or maybe it would be a dozen or more. Thus, I was genuinely in the dark about which states would be identified by the AI. The answer itself was generally unknown to me.
At the same time, I knew in my mind that a list of the states is what I wanted to get. I didn’t say this in my prompt. Maybe I didn’t say that I wanted a list because it seemed obvious to me that a list was the most sensible way to present the information. I just assumed that the AI app would respond with a list. Furthermore, I knew in my mind that the states ought to be listed in alphabetical order because this would make it easier for me to inspect the list and see in an ordered fashion which states met the criteria.
The gist is that though I didn’t know precisely what answer the AI might produce, I did know in my mind the nature of how I wanted the output to be presented. If you don’t explicitly tell the AI app what you want in terms of the appearance of the generated answer, you never know what you might get. As I’ve repeatedly said in my columns and my workshops on prompt engineering, generative AI is like a box of chocolates, namely that you never know what you might get. You need to be as clear as you can about what you want the AI to accomplish.
Target-your-response is a means of doing so. You think about what the output or generated response ought to look like. You then mention this in your prompt. Your prompt then contains two elements. One element is the question or problem that you want the AI to solve. The other element that is blended into your prompt consists of explaining what you want the response to be like.
Here’s your guiding precept about target-your-response: blend into a single prompt both the question or problem that you want the AI to solve and a stipulation of how you want the response to look.
I dare say that most rookies using generative AI seem to entirely neglect the second part of that prompting strategy. They wait until they see the response from the AI to think about what they want the response to look like. A more seasoned prompting approach consists of anticipating what the response might be and then in the initial prompt stipulating how you want the response to look.
A useful rule of thumb consists of blending together into your prompts an indication of the problem or query along with your desired look or targeting of the response. You can do both of those in a single prompt. No need to wait until the AI has responded to the query or problem. This is a situation whereby you can have your cake and eat it too.
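If a small bit of scaffolding helps you build that habit, here is a hypothetical helper that blends the two elements into one prompt. The function name and phrasing are my own illustration of the precept, not any standard library or API.

```python
# A hypothetical two-element prompt builder; purely illustrative.
def blend_prompt(question: str, response_spec: str) -> str:
    """Blend the query or problem with the desired look of the response."""
    return f"{question}\n\nWhen you respond: {response_spec}"

prompt = blend_prompt(
    "Which states in the United States have a population of less than 2 million people?",
    "show the states as a bulleted list, in alphabetical order by state name, "
    "with the approximate population next to each state.",
)
print(prompt)
```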
I’m sure that some cynics or skeptics will right away carp and say that you don’t always know exactly how you want the output or response to look.
Well, yes, that is indeed true.
But that doesn’t give you the escape clause of always avoiding stating the targeted response that you want. I am suggesting that this is a handy tool that can be used a lot of the time, though not necessarily all of the time. Sometimes you do indeed need to first see what the AI generates. After inspecting the result, you can then inform the AI about how you want to format the response or otherwise tune it up.
In my experience, much of the time you can readily anticipate what you want the response to look like. All it takes is a reflective moment. You need to stop that impulse to enter a prompt right away. Instead, take a deep breath and imagine what the response might be. Consider what your preferences for the response might be. Then, include those preferences as blended into your prompt.
The quiet and determined Zen of targeting your response.
In today’s column, I am continuing my ongoing and widely popular series on the latest advances in prompt engineering for generative AI. Sometimes I have been covering complex issues that are based on deep insights into how generative AI is mathematically and computationally devised. For this exploration of the target-your-response or TAYOR approach, to some degree, the matter doesn’t require a rocket scientist's mindset. This is more easily undertaken and easier to grasp.
That is a good thing. Sometimes the easy fixes are the best fixes. I see people all the time using generative AI who are seemingly stuck in a rut. They ask a question and then, after seeing the answer, are sparked or spurred into telling the AI to redo the response in some other preferred manner.
Rinse and repeat.
The sad thing is that they could expend less effort, and potentially incur less cost, by merely blending together their question and the desired look and feel of the reply.
Why don’t people tend toward doing both?
Because nobody pointed out this possibility to them.
They have formed a habit of asking a question and getting a reply. Over and over again this is repeated. Some might assume that only once the AI has generated something, such as an essay about the states that have a population of less than 2 million, can they then ask to turn it into a list. The viewpoint of the user is that maybe the AI is not devised to handle more than one aspect at a time. It might confound the AI.
The reality is that most generative AI is well-equipped to deal with a prompt that contains a slew of instructions. A compound prompt is fine. The AI will parse each of the elements in the prompt and almost always comply. I say this with some reservation due to the possibility that your multitude of elements might be mushy or poorly stipulated. In that case, the odds are that the AI will either not produce what you anticipated or might first balk at your request and ask you to clarify what you intend.
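As an illustration, consider this sketch of a compound prompt that packs several instructions into one request. The wording and the numbered elements are merely an example of my own devising; most modern generative AI apps will parse and act on each element.

```python
# A sketch of a compound prompt carrying several instructions at once;
# the content is illustrative, not a prescribed template.
compound_prompt = (
    "Which states in the United States have a population of less than 2 million people?\n"
    "In your response, please do all of the following:\n"
    "1. Present the states as a numbered list, in alphabetical order by state name.\n"
    "2. Show the approximate population next to each state.\n"
    "3. Keep the entire answer under 200 words.\n"
    "4. After the list, add one sentence noting any caveats about the population figures."
)
```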
I am loath to liken this to interacting with a human since I don’t want to anthropomorphize today’s AI, but if you spoke with a human you would often provide a multitude of aspects in a single “prompt” (I mean to say that whatever you tell the person is essentially a prompt, as it were). You would assume that the person could handle the multiheaded elements. Of course, you can only carry this so far. If you hit someone with a byzantine array of instructions in one fell swoop, they might find it overwhelming.
Anyway, many users of generative AI seem to swing to the opposite extreme, in that there is a hidden belief among novices that using a single aspect or element per prompt is the way to go. Make things as simple as possible. Do not confuse the AI. Dribble out what you want. Keep things clean and simple.
We might be used to this due to prior Natural Language Processing (NLP) systems. The early days of Siri and Alexa taught most people that if you didn’t say baby-like words slowly and clearly, the NLP would not figure out what you wanted. You ultimately had to resort to baby talk. This was the most sensible angst-reducing strategy that you could employ.
Thankfully, most generative AI of today is far beyond that level of computational natural language fluency. You can do all manner of sloppy stipulations and the AI app will glean what you want to have happen. That’s the good news. Maybe it is great news. Perhaps it is surprising news to some.
My saying this does not mean that you can go crazy with your prompts. A prompt that is overloaded and filled with all kinds of junky wording is going to be either misinterpreted by the AI or rejected as impossible to straighten out. I assure you that prompts of a nutty nature will get you nutty results or force you into a dead-end conversation with the AI as the AI app tries to ferret out with you what you are saying. Please don’t go there intentionally.
By understanding and using target-your-response techniques, your use of generative AI will be more productive, possibly less costly, and almost assuredly less frustrating. Before I dive into my in-depth exploration of this vital topic, let’s make sure we are all on the same page when it comes to the keystones of prompt engineering and generative AI. Doing so will put us all on an even keel.
Prompt Engineering Is A Cornerstone For Generative AI
As a quick backgrounder, prompt engineering, also referred to as prompt design, is a rapidly evolving realm and is vital to effectively and efficiently using generative AI, or the use of large language models (LLMs). Anyone using generative AI such as the widely and wildly popular ChatGPT by AI maker OpenAI, or akin AI such as GPT-4 (OpenAI), Bard (Google), Claude 2 (Anthropic), etc., ought to be paying close attention to the latest innovations for crafting viable and pragmatic prompts.
For those of you interested in prompt engineering or prompt design, I’ve been doing an ongoing series of insightful looks at the latest in this expanding and evolving realm, including this coverage:
Anyone stridently interested in prompt engineering and improving their results when using generative AI ought to be familiar with those notable techniques.
Moving on, here’s a bold statement that pretty much has become a veritable golden rule these days: the quality of the prompts you compose largely determines the quality of the results that the generative AI produces.
If you provide a prompt that is poorly composed, the odds are that the generative AI will wander all over the map and you won’t get anything demonstrably related to your inquiry. Being suitably specific can be advantageous, but even that can confound or otherwise fail to get you the results you are seeking. A wide variety of cheat sheets and training courses on suitable ways to compose and utilize prompts have been rapidly entering the marketplace to try to help people leverage generative AI soundly. In addition, add-ons to generative AI have been devised to aid you when trying to come up with prudent prompts, see my coverage at the link here.
AI Ethics and AI Law also stridently enter into the prompt engineering domain. For example, whatever prompt you opt to compose can directly or inadvertently elicit or foster the potential of generative AI to produce essays and interactions that imbue untoward biases, errors, falsehoods, glitches, and even so-called AI hallucinations (I do not favor the catchphrase of AI hallucinations, though it has admittedly tremendous stickiness in the media; here’s my take on AI hallucinations at the link here).
There is also a marked chance that we will ultimately see lawmakers come to the fore on these matters, possibly devising and putting in place new laws or regulations to try and scope and curtail misuses of generative AI. Regarding prompt engineering, there are likely going to be heated debates over putting boundaries around the kinds of prompts you can use. This might include requiring AI makers to filter and prevent certain presumed inappropriate or unsuitable prompts, a cringe-worthy issue for some that borders on free speech considerations. For my ongoing coverage of these types of AI Ethics and AI Law issues, see the link here and the link here, just to name a few.
With the above as an overarching perspective, we are ready to jump into today’s discussion.
Digging Into The Target-Your-Response Prompt Engineering Technique
The best way to show you how to use the target-your-response technique is by walking you through the myriad of ways that such efforts are performed.
I like to keep a checklist of the ploys at the ready, often taped to my screen or available as a quick pop-up. By regularly opting to use these tips and tricks the odds are that you’ll eventually know them by heart. They will be a natural part of your prompt engineering acumen. We will be exploring them in a moment.
One vital key is to resist the compelling urge to enter a prompt impulsively. Think carefully about the prompt that you want to enter. Most of the time, make sure that a question or problem is included. And, most of the time, make sure that you specify or indicate how you want the output to appear or be composed.
I will show the target-your-response points by using a keyword to denote each one. I have placed them into alphabetical order for ease of rapidly finding a technique on the list. I have also numbered them for ease of reference. Be aware that they are not being shown in priority order. I would assert that they are all generally of roughly equal merit. We can debate that contention and seek to provide pros and cons for each one, but I would reserve such a dispute for a future discussion rather than muddying the waters here.
The last one on the list is a special technique that I refer to as mix-and-match. I didn’t put it into alphabetized order because I find that people tend to overlook it there. By having it as the last item, albeit out of alphabetical order, people seem more apt to give it proper attention. This is especially the case because mix-and-match is a reminder that you can combine the rest of the approaches into a single prompt (a brief sketch of doing so appears just after the list). You do not need to treat each point as something mutually exclusive of all the others.
I would also emphasize that you should get comfortable using each of the identified points. Take some spare moments (do we have any these days?) and try out each one. Determine what phrasing fits best with your prompt engineering style. If you want to get into the Carnegie Hall of Prompt Engineering, you’ll need three things, namely practice, practice, practice.
This is a lengthy list and might seem daunting. Don’t worry. Many of the points overlap. All in all, the greatest value of the points is that they get you thinking ahead of time about what kind of output or generated result you want. I’m sure that you’ll come up with other possibilities beyond those that I have listed. There are plenty more (I’ll be happy to cover more if there is expressed reader interest, thanks).
The sky is the limit, as they say.
Let’s jump in.
(1) Accurate
(2) Arguments
(3) Beautification
(4) Biases
(5) Citations
(6) Cleanliness
(7) Code Generation
(8) Coherency
(9) Complexity
(10) Context
(11) Dialoguing
(12) Elaboration
(13) Emotion
(14) Examples
(15) Explanation
(16) Formality/Informality
(17) Formatted
(18) Grammar/Dialect
(19) Guidance
(20) Headings/Subheadings
(21) Length
(22) Level
(23) Lists
(24) Logical
(25) Lyrics
(26) Multimodal
(27) Parse
(28) Persona
(29) Perspective
(30) Poetic
(31) Purpose
(32) Rephrasing
(33) Resumes/Cover Letters
(34) Spelling
(35) Stories/Tales
(36) Style
(37) Summarization
(38) Tables
(39) Translation
(40) (ZZZ) Mix-and-match
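To show what the mix-and-match item looks like in practice, here is a sketch of a single prompt that blends several of the techniques above, namely persona, level, length, lists, and citations. The wording is purely illustrative and not a prescribed template.

```python
# A sketch of a mix-and-match prompt blending several techniques in one go;
# the phrasing is my own illustration only.
mix_and_match_prompt = (
    "Act as a high-school geography teacher. "                       # persona
    "Which states in the United States have a population of less than 2 million people? "
    "Explain at a level suitable for a ninth-grader, "                # level
    "keep the whole answer under 150 words, "                         # length
    "present the states as a bulleted list in alphabetical order, "   # lists
    "and note the source and year of the population figures you rely on."  # citations
)
```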
Conclusion
I trust that you can discern from these numerous prompt engineering target-your-response or TAYOR indicators that you have plenty of options and flexibility regarding generative AI-generated results. It is sad and quite dismaying that a lot of people using generative AI are seemingly unaware of these amazing and useful possibilities.
Those not in the know are essentially undervaluing and underutilizing the built-in capacities of generative AI. Imagine that you have available a tool that is like a Swiss Army knife, and you only use it in the narrowest of ways. All of those additional features and functionality could do you a great deal of good, but you only know one particular aspect. I’m not suggesting that your life depends on it. The crux is that you might as well be all that you can be, and use all that you can use.
I fervently proclaim that it is time to open everyone’s eyes wide toward getting more bang for their buck out of generative AI. As the hamburger commercials used to say, you can have it your way, cooked and adorned per your preferences.
Those, then, are the forty target-your-response techniques identified above, each of which you can proudly pat yourself on the back for now knowing.
A few final remarks on these weighty matters shall conclude this discussion.
I stated earlier that those target-your-response techniques are just the tip of the iceberg. My aim was not to showcase every possibility, which would be impossible in the limited space that I have here. The true goal was to get you thinking about how you can leverage generative AI to best utilize the outputs or results that the AI app might produce. My sincerest hope is that your creative juices are now flowing in that regard.
Recall the famous quote about not being able to see beyond the confines of one’s nose. Lift up the gaze and look further ahead. Whenever you compose a prompt, go ahead and include an indication of what you want the generated result to consist of.
You can use these prompt engineering target-your-response techniques with the realization that these are already considered known possibilities. You are using something that is tried and true. Whatever you do, don’t get caught with the shameful remorse of being the last to cast aside a one-step, non-look-ahead mindset when composing prompts. Per the wise words of the acclaimed poet Alexander Pope: “Be not the first by whom the new are tried, Nor yet the last to lay the old aside.”
Assuming that you do try to be with it and employ a target-your-response mantra, I will tell you right now that some of your prompts won’t cut the mustard and might not get you what you want. That’s perfectly okay. Keep trying.
Alexander Pope already has that covered for us via his endearing and most enduring famous line: “To err is human; to forgive, divine.”