


Using hints in your prompts as part of your prompt engineering repertoire is a clever and fruitful technique.
A hint is all you need.
Well, sometimes.
We all make use of hints quite frequently in our daily lives. Perhaps you drop a subtle hint to your boss that it is about time for you to get a raise. That’s a hint you avidly hope will bear fruit. Let’s try another hint-laden situation. While driving your car, a family member in the passenger seat stridently drops a heavy-handed hint that you seem to be going a bit fast. You are annoyed by the hint and decide to pay it no mind.
And so on.
Robert Frost, the famous American poet, said this about hints (particularly when used in a family context): “The greatest thing in family life is to take a hint when a hint is intended, and not to take a hint when a hint isn't intended.” It would seem that this sage advice applies to all manner of hints, going far beyond those of a familial nature.
Consider where and when we generally make use of hints. For example, per the above poetic adage, we shouldn’t find hints where none exist. Another rule is that we should at least entertain a hint when it is purposely proffered to us. A slew of rules of thumb and ad hoc policies exist about the do’s and don’ts associated with hints.
Let me drop a perhaps surprising hint on you, namely that hints are significant for AI too.
Allow me to elaborate.
In today’s column, I am continuing my popular series on the latest advances in prompt engineering, doing so in this discussion by focusing on the pragmatic role of including hints as an integral element of your use of generative AI. Infusing hints into prompts can be highly advantageous. I’ll explain why and how this is best undertaken. A formal catchphrase used for this is a technique known as Directional Stimulus Prompting (DSP).
A keystone of generative AI is that composing prompts is both science and art. If you do a lousy job at coming up with prompts, the odds are that generative AI is going to generate answers for you that are underwhelming and potentially off-target. The happy face of this is that if you can compose good prompts, there is a genuine possibility of getting generative AI to produce extremely useful results for you.
Hints demonstrably come into this milieu.
Hints can play a substantial role when you are entering prompts into any generative AI app, including ChatGPT, GPT-4, Bard, and the like. A little-known and yet superb technique for those who avidly practice prompt engineering best practices is to leverage hints as part of your prompting strategy. A hint can go a long way toward getting generative AI to provide you with stellar results.
I dare say that a lot of generative AI users do not realize that hints are vital and ought to be strategically leveraged in their prompts. That’s a shame. A darned shame. The use of hints when well-placed and well-timed can spur generative AI to emit better answers and attain heightened levels of problem-solving.
Yes, there is gold in those AI hills that can be found at the feet of proper prompting hints.
Hints ordinarily fall into the category of good prompting, though only if you know how to devise hints and make use of them properly. I say this because you can easily mess up when trying to give hints to generative AI. I don’t want to imply any anthropomorphizing of AI, but in a sense there is a heady possibility of generative AI misconstruing your hints, ignoring your hints, or otherwise acting in ways that we think of as reserved for humans reacting to hints. Keep clear in mind that the AI is working computationally and not in a sentient capacity.
A brief side tangent is worthy here.
Please know that the AI of today is not sentient. Period, end of story. Don’t fall for those banner headlines that seem to suggest that AI is sentient or on the verge of sentience. That’s just not so. The reason that generative AI might respond to hints in ways that seem human-like is entirely due to computational pattern-matching that has examined and attempted to mimic human writings. By having scanned tons and tons of human written materials on the Internet, a computational pattern-match of how we use words can come across as though the AI is acting in human-like ways.
Give serious and mindful consideration to the voluminous written material that has been computationally scanned and patterned to data-train the generative AI. Amidst that sea of text were undoubtedly vast quantities of hints and the use of hints. That content became fodder for computational pattern-making.
My point is that generative AI can likely detect and respond to hints due to the many patterns analyzed and mimicked based on humans employing hints. Ergo, do not conflate this with sentience on the part of the AI. It is just an impressive and computationally immense capability shaped around human writing.
Back to our compelling interest in hints.
Let’s first cover some overarching essentials about hints and then smoothly shift into exploring ways to use hints when you devise your prompts. I will showcase numerous examples to get you into the mindset of regularly using hints as part of your prompt engineering prowess and personal prompting toolkit. In addition, we will examine some of the latest research on prompt engineering and especially advances associated with the use of hints or DSP when prompting.
I would like to hint to you that you should fasten your seatbelt for this heady discussion, but perhaps instead I will just come right out and tell you that this is going to be a wild ride and therefore alert you that it would be best to buckle up. Sometimes the direct path to communication is indeed best. Other times, the indirect or hinting path is best. The situation and circumstance at hand determine which direction is the best way to go.
Before I dive into my in-depth exploration of this vital topic, let’s make sure we are all on the same page when it comes to the foundations of prompt engineering and generative AI. Doing so will put us all on an even keel.
Prompt Engineering Is A Cornerstone For Generative AI
As a quick backgrounder, prompt engineering, also referred to as prompt design, is a rapidly evolving realm and is vital to effectively and efficiently using generative AI and large language models (LLMs). Anyone using generative AI such as the widely and wildly popular ChatGPT by AI maker OpenAI, or akin AI such as GPT-4 (OpenAI), Bard (Google), Claude 2 (Anthropic), etc., ought to be paying close attention to the latest innovations for crafting viable and pragmatic prompts.
For those of you interested in prompt engineering or prompt design, I’ve been doing an ongoing series of insightful explorations on the latest in this expanding and evolving realm.
Anyone stridently interested in prompt engineering and improving their results when using generative AI ought to be familiar with those notable techniques.
Moving on, here’s a bold statement that pretty much has become a veritable golden rule these days:
If you provide a prompt that is poorly composed, the odds are that the generative AI will wander all over the map and you won’t get anything of substance related to your inquiry. Being demonstrably specific can be advantageous, but even that can confound or otherwise fail to get you the results you are seeking. A wide variety of cheat sheets and training courses on suitable ways to compose and utilize prompts have been rapidly entering the marketplace to try and help people leverage generative AI soundly. In addition, add-ons to generative AI have been devised to aid you when trying to come up with prudent prompts, see my coverage at the link here.
AI Ethics and AI Law also stridently enter into the prompt engineering domain. For example, whatever prompt you opt to compose can directly or inadvertently elicit or foster the potential of generative AI to produce essays and interactions that imbue untoward biases, errors, falsehoods, glitches, and even so-called AI hallucinations (I do not favor the catchphrase of AI hallucinations, though it has admittedly tremendous stickiness in the media; here’s my take on AI hallucinations at the link here).
There is also a marked chance that we will ultimately see lawmakers come to the fore on these matters, possibly devising and putting in place new laws or regulations to try and scope and curtail misuses of generative AI. Regarding prompt engineering, there are likely going to be heated debates over putting boundaries around the kinds of prompts you can use. This might include requiring AI makers to filter and prevent certain presumed inappropriate or unsuitable prompts, a cringe-worthy issue for some that borders on free speech considerations. For my ongoing coverage of these types of AI Ethics and AI Law issues, see the link here and the link here, just to name a few.
With the above as an overarching perspective, we are ready to jump into today’s discussion.
How Hinting Is Vital As A Best Practice For Prompt Engineering
Let’s first explore the nature of hints in general. Doing so will be handy and provide a suitable foundation for using hints as a prompting strategy for attaining top-tier results with generative AI.
A hint is typically an indirect way to communicate something that you want to say but that you are hesitant to outrightly say. The hesitancy might be due to potential discomfort. Perhaps gently dropping a hint will save face for you and likewise save face for the other party receiving the hint. A hint might be of greater aplomb than launching into some lengthy diatribe. We’ve all used hints in this manner, many times, usually in the customary course of our hectic lives.
A hint can also simply be a shortcut means of communicating. In that sense, hints can also possibly be a time saver if used suitably. Suppose you and a friend are playing tennis as a twosome team. The easiest and fastest way to let your partner know that they should be watching for the tennis ball might consist of a quickly shouted hint. You don’t have the time and available breath to yell a full stream of exhortations and instructions.
There are additional reasons for hints to be used. A tutor providing a hint to a pupil who is supposed to be doing their homework might reveal a subtle hint for a decidedly beneficial purpose. The use of a hint in such an instance is intended to aid the student and serve as a spark toward them being able to solve a knotty problem largely on their own. It is the proverbial adage of nudging someone to learn how to fish rather than doing the fishing for them.
If you were to give a contemplative mindful moment to the inherent value of hints, I’m sure you would ascertain that they are at times the essential communicative grease that keeps the world smoothly running (when used proficiently).
Let’s agree then that generally a hint can do two major things: point the recipient in a direction, and convey useful information.
The usual approach is a twofer, namely that a provided hint is meant to be a directional indication and simultaneously an informational indication. That being said, it is possible for a hint to be only one of the two. The context will be a factor, and so will the targeted recipient.
We have to acknowledge that a targeted recipient might completely miss a given hint. The hint can go over someone’s head. They don’t realize that a hint was provided to them. That can be exasperating to the person handing out the hint. The odds are that if an initial hint doesn’t do the trick, additional hints will need to be conveyed. Sometimes the hints do not score a touchdown and the last resort consists of outright telling the recipient the whole story and giving up on the hinting gambit.
Russell Page, the legendary British landscape architect, said this about hints: “A discerning eye needs only a hint, and understatement leaves the imagination free to build its own elaborations.”
Keep that in mind when someone gives you a hint or when you share a hint with others.
Shifting gears, let’s delve into using hints with generative AI.
You can readily and advantageously use hints in your prompts as part of your prompt engineering toolkit and all-told acumen. The idea of using hints in a prompt is that you might want to clue in the generative AI about the answer you hope to get. Thus, a hint included in your prompt can once again do two things: be directional to the generative AI and also possibly be informational to it.
Here’s a recommended best practice. I prefer to make it abundantly clear that I am giving a hint within my prompt. This consists of my always openly telling the AI app that I am providing a hint.
Contrast this practice to hints and people. If you are giving a hint to a human, we all know that a human might get quite upset at being given a hint. The person might feel belittled. They might go into a rage. All kinds of unfortunate reactions can occur.
Generally, when you explicitly tell generative AI that you are providing a hint, the AI app will readily embrace the hint. No outbursts. No protestations. As an aside, it is not by happenstance that you won’t get a negative reaction. Since the generative AI was data-trained on vast volumes of human writing from the Internet, there would naturally be a strong statistical possibility of the AI responding negatively, mimicking the written efforts of humans. But AI makers typically apply a pre-release refinement process to data-train the AI app to avoid being irksome in that way (a process often using RLHF, reinforcement learning from human feedback, see my discussion at the link here).
I’m not saying that you won’t ever get any pushback from generative AI when you provide a hint. It can happen. The chances are that any such negative feedback will be of a mild caliber. Perhaps the AI might indicate that the hint wasn’t stark enough to be useful. Or the hint was calculated as irrelevant. You seldom will get a response that says the hint was outrageously vacuous or otherwise stupid. Only a somewhat unfiltered generative AI is bound to emit that kind of a response, see my coverage about such AI at the link here.
Examples Of Hint-Oriented Best Practices When Using Generative AI
We will next explore a series of examples involving the use of useful hints in prompts.
First, let’s look at using a single hint and see how doing so can make a sizable difference in the answer that you might get from generative AI. I am going to ask generative AI about a potential foot race between a tortoise and a hare. Initially, I won’t include any hints at all. This is just a straight-ahead question for the AI app.
No hint:
Notice that the AI answered by telling me about the famous fable involving a tortoise and a hare.
There wasn’t anything in my prompt that referred to the fable. I didn’t do anything that steered the AI in that direction per se. Of course, I would nearly bet that any person who was given the same question would almost surely assume that I was invoking the famous tale. The AI app did the same, though please realize this was not due to sentience but simply due to pattern-matching based on the human writing that the AI was data-trained on.
Suppose that I had anticipated that the AI app might inadvertently assume my question had to do with the fable. I could have written the question to carefully explain that I am not referring to the fable. A possible downside here is that if I use the generic word “fable” doing so could spur the AI app into going into a fable-focused context, despite my also insisting that my question has nothing to do with the fable (yes, you can cause a context to be invoked, merely by saying you don’t want that particular context!).
Rather than having to hassle with composing a lengthy indication, I will just use a quick hint. Furthermore, I will make sure to label the hint as a hint.
Here is a single hint at the end of my posed question:
All that I did was provide a hint indicating the “real world” and you can plainly see that the AI app figured out what I wanted. The generated answer refers to the real-world aspects of a tortoise racing against a hare.
My hint was short and sweet.
Some might argue that I didn’t necessarily have to label the hint by stating explicitly that it was a hint. Indeed, admittedly, much of the time a hint can be more informally tossed into your prompt. For me, I prefer to mention that a hint is being given. I find this easy to do and more likely to prod the AI app into accepting my hint as a hint.
Another best practice when using prompting hints consists of listing together several hints at once. If feasible, you can use a series of keywords. Make sure that you choose your keywords carefully. A keyword that is overly ambiguous or that might have multiple interpretations can lead the AI astray from where you want to go.
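A quick Python sketch shows how multiple keyword hints might be bundled into a single labeled list; the helper name and the semicolon-separated "(Hints: ...)" format are illustrative assumptions on my part, not a prescribed syntax:

```python
def build_prompt_with_hints(instruction: str, hints: list[str]) -> str:
    """Append several keyword hints at once, clearly labeled."""
    if not hints:
        return instruction
    # Semicolons keep multi-word keywords from blurring together.
    return instruction + " (Hints: " + "; ".join(hints) + ")"

prompt = build_prompt_with_hints(
    "In one paragraph, tell me about the life of Abraham Lincoln.",
    ["log cabin", "lawyer", "Illinois"],
)
```

Choosing a delimiter that preserves multi-word keywords matters because an ambiguous run of words can lead the AI astray, per the caution above.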
Let’s see how multiple hints can be used, and do so to the benefit of the answer that I want to get from generative AI.
I am going to start by asking generative AI about Abraham Lincoln and provide no hints at all:
After taking a look at the generated response, suppose I discover that some important facts about Lincoln were not included in the short essay. I will therefore ask the question again and this time include some hints of subtopics I wish to have included.
Multiple hints are given at the end of my instruction:
Sure enough, you can see that the resulting essay has mentioned facts regarding my three hints, namely mentioning that Lincoln was born in a log cabin, plus his having been a lawyer and working in Illinois. I trust that you can see how seamlessly the hints worked. The AI app didn’t somehow make a big deal out of my hints in the sense of dramatically declaring that the hints were received and utilized. Instead, the essay reflects an interpretation of my efficiently provided hints.
Easy-peasy, so far.
Turns out that hints are a double-edged sword when it comes to prompting.
Here’s what I mean.
In the example of the essay on Lincoln, you might have noticed that the first essay mentioned his assassination, a prominent topic when discussing the life and times of Lincoln. The second essay did not do so. I am guessing that since I had restricted the answer to one paragraph in size, and since I had provided hints on other subtopics to be included, something had to give way in the response. You could argue that my hints caused an inadvertently adverse outcome if indeed I was hoping or assuming that the assassination subtopic would be included in the second essay.
We can further pursue the downside of hints to serve as a heads-up of what to watch out for.
I next opt to ask a question that has no hints and relates to the famous riddle of the Sphinx:
The answer generated is a classic. The answer is right. But imagine that I want to test out hints and see what happens if I give a hint that perhaps goes a bit overboard.
Here I try the same question and include a hint that says “not a human”:
My hint was rather strong in the sense that I came out and implied that the answer could not consist of being a human. If you gave such a hint to a person, I am betting they would be stumped at what kind of answer you wanted. We all know and accept that the answer to the riddle is in fact a human.
Look at what generative AI emitted.
The AI app said that the answer to the riddle is that the riddle itself fits the stated criteria. Do you consider this to be a creative response or is it a nonsensical response? You might assert that the answer is creative and at least a seemingly valiant attempt to comply with the hint. On the other hand, you might argue that the answer is zany and out of line.
An overall difficulty with most generative AI is that the AI app will often not push back when perhaps it should. For example, the AI could have responded by saying that there is no valid answer other than a human. You will rarely get that kind of reply, as I’ve mentioned earlier herein. The tuning and data-training of generative AI by the AI makers are often done to induce the AI to generate an answer even if that answer would seem marginal or highly questionable.
I repeatedly forewarn in my workshops on prompt engineering that generative AI is like a box of chocolates, notably that you never know exactly what you will get. The under-the-hood algorithms used for generative AI usually invoke a probabilistic and statistical undertone to the pattern-matching. The good news is that this means that the response that generative AI gives to you is going to be somewhat unique and unlike all other responses that it has previously given. The bad news is that you cannot predict precisely what the AI will emit.
The roll of the dice comes into play when using generative AI.
You might have noticed that I have been mentioning that hints can vary in terms of their degree such as being weak or strong hints. As a best practice, go ahead and employ strong hints if you are sure of what you want and you also are aiming to spur the AI to a particular desired target. Use weak hints if you are willing to allow the AI some latitude and possibly have the AI come up with something that you hadn’t totally anticipated.
We can playfully use the legendary sorites paradox as an example illustrating the use of strong versus weak hints. You might recall that the sorites paradox has to do with a heap of sand and gives rise to interesting considerations about the vagueness of everyday language.
Here is my question with no hint included:
The generative AI response has landed squarely on the sorites paradox answer, suitably so.
A big issue with the paradox is whether we can solve it by establishing a threshold for what defines a heap. For example, I might decide that a pile of sand no longer counts as a heap once it contains fewer than 10,000 grains.
I will give a weak hint about this turn of events:
The weak hint wasn’t enough to clearly get the AI in the direction of realizing that I am trying to establish a new definition for a heap, or at least provide a clarification for the definition.
Let’s try again but this time with a stronger hint:
Voila, my stronger hint seems to have done the trick.
Research On Leveraging Hints For Guiding AI
The notion of using hints to steer or directionally guide AI has been around for a long time. There was a period of time in the 1990s when hints were on the hot list of things to consider when devising artificial neural networks (the same technology that underlies today’s generative AI). One of the classics from that time period was a research paper entitled "A Method For Learning From Hints" by Yaser Abu-Mostafa, appearing in Advances in Neural Information Processing Systems, published in 1992.
Here are some salient excerpts:
Zoom forward to current times and the notion of using hints has further expanded to include the use of hints as a prompting strategy for generative AI. A formalized way of expressing a hinting approach to prompting is the weighty moniker of Directional Stimulus Prompting (DSP). The naming does make sense. You are using prompts to essentially act as a stimulus to the AI app. The stimulus serves as a directional guide. I would also add that the stimulus or hint can be informational too, as mentioned earlier herein.
Let’s take a quick look at a recent research paper entitled “Guiding Large Language Models Via Directional Stimulus Prompting” by Zekun Li, Baolin Peng, Pengcheng He, Michel Galley, Jianfeng Gao, and Xifeng Yan, posted online on July 7, 2023.
They say this about the use of hints or directional stimuli:
According to their experiments, they found that the prudent use of hints or DSPs was instrumental in improving generative AI performance. They gave various examples. A specific focus covered an in-depth exploration involving summarizing articles via the use of hints.
Let’s see how this works. For the sake of space, I am not going to show the full article in question; I refer you to the research paper to read the referenced article, along with perusing the many additional intriguing facets of their study.
They asked the generative AI to summarize a given article and do so in a few sentences (no hints provided to the AI):
The full article contains a lot of additional information. The generative AI app opted to select some aspects and not include other aspects. This presumably was done on a semi-random basis by the AI and also was undertaken because the prompt requires that the summary be squeezed into just a few sentences.
A new prompt was devised that contained several hints, such as a hint that TV should be mentioned, the date of April 1 should be mentioned, the year 2007 should be mentioned, and the number 91 should be mentioned. Here is the same prompt as above but with their hints included:
By and large, the new version of the produced essay contains content encompassing the stated hints.
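A DSP-style summarization prompt of this kind can be sketched in Python. Note that the helper name and the exact template wording below are my own illustration of the approach, not the paper’s verbatim template:

```python
def dsp_summarization_prompt(article: str, keywords: list[str]) -> str:
    """Build a DSP-style summarization prompt in which the keyword
    list serves as the directional stimulus (i.e., the hint)."""
    return (
        f"Article: {article}\n"
        f"Hint: {'; '.join(keywords)}\n"
        "Q: Summarize the above article briefly in a few sentences "
        "based on the hint.\nA:"
    )

p = dsp_summarization_prompt(
    "(full article text would go here)",
    ["TV", "April 1", "2007", "91"],
)
```

The keywords act as the stimulus that directs the summary toward the desired content while leaving the sentence-level wording up to the AI.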
The researchers characterized the devised approach in this manner:
Where Do These Various Hints Come From
One means of coming up with hints for inclusion in your prompts is by using your own noggin. You ought to think ahead before asking generative AI a question. Anticipate what the AI might need to suitably address your question.
Ask yourself this: “What do I need to indicate to the AI to sufficiently clue in the AI about what I want?”
Some portion of your prompt can be the question that you want to pose. You can also then include additional detailed information that might be fruitful for the AI. Or you can just try using hints. It is all up to you.
A hint might be easier and faster to come up with and might be easier and faster to enter into your prompt. Also, you might be desirous of seeing what else the AI derives, being given greater latitude as a result of a hint versus the more delineated and restrictive detailed info you might have provided.
Will the hint always do the trick? No. In some circumstances, a hint won’t be enough. You might need to spell out in greater detail what you really want. Hints need to be used judiciously.
Another means of gleaning hints is by having AI or some automated tool produce hints for you. The research study noted above made use of a customized, specialized tool (a small, separately trained policy model) to generate hints. You can expect that generative AI will gradually be augmented with added capabilities, including being able to generate on-the-fly hints for you to use with generative AI. I realize that seems circular, but it does make abundant sense, see my discussion at the link here on advances in using AI to improve the use of AI.
Hints can arise via either or both of these means: (1) devised by you, the human user, or (2) generated for you by AI or some automated tool.
In the second instance of using AI or a tool to derive hints, you can further subdivide that category into two major classifications. One is that the AI or tool presents a suggested hint or set of proposed hints to a human, and the human then decides whether to use those hints in their prompt. The second case is when you automatically have the AI or tool create prompts containing hints and do so without a human directly in the loop.
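As a toy illustration of the tool-generated route, here is a crude Python heuristic that proposes keyword hints from a source text. To be clear, this frequency-counting stand-in is my own simplification for illustration; the research study used a trained model, not anything this naive:

```python
from collections import Counter
import re

def propose_hints(text: str, k: int = 4) -> list[str]:
    """Crude, illustrative stand-in for a learned hint generator:
    surface the most frequent capitalized terms and short numbers
    in the source text as candidate keyword hints."""
    candidates = re.findall(r"\b(?:[A-Z][a-z]+|\d{1,4})\b", text)
    return [word for word, _ in Counter(candidates).most_common(k)]

sample = ("Lincoln was born in a log cabin in Kentucky in 1809. "
          "Lincoln later worked as a lawyer in Illinois.")
hints = propose_hints(sample)
```

In the human-in-the-loop variant, you would review the proposed hints before adding them to your prompt; in the fully automated variant, they would be inserted directly.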
Having AI Give You Hints Instructively For Your Benefit
In my discussion up until this juncture, the mainstay of hinting-related prompting has gone in a one-way path of a hint being given to the generative AI. You enter a hint. The AI then hopefully processes the hint suitably and accordingly alters or produces a response based on having examined the hint. My usual caveats apply, namely that you always have to be on your toes since the AI might miss the hint or otherwise misinterpret it.
Time for a bit of a twist to the aforementioned one-way path.
Are you ready?
We might at times want a two-way street.
Here’s the deal. I’ve previously covered in my columns the so-called flipped interaction that you can have with generative AI, see the link here. A flipped interaction involves having the AI ask you questions. This might seem like an odd thing to do. The basis for having the AI ask you questions can include a number of useful purposes, such as doing data training of the AI on a particular topic or possibly having the AI test you on a topic of interest if that’s what you want to do.
The bottom line is that you can at your discretion opt to explicitly ask the AI to give you hints on topics.
An example will illustrate this.
Envision that someone is desirous of changing the oil in their car. They have done it many times before. They know generally what to do. But, they are rusty at doing so. A bit of a quick refresher might be handy.
Here’s what they might ask generative AI:
The answer is obviously lacking in detail and pretty much contains hints (which is what the question asked the AI to do).
Here’s what a more detailed answer might have been (averting asking for just hints):
Compare the first answer with this second answer. The first answer, which was based on the request to provide hints, is much more succinct in comparison.
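A hints-only request of this flipped variety can be templated in Python; the function name and phrasing below are merely one illustrative way to frame such a request:

```python
def hints_only_request(task: str) -> str:
    """Frame a request so the AI replies with brief hints rather
    than full step-by-step instructions."""
    return (
        f"I want to {task}. Do not give me detailed step-by-step "
        "instructions. Instead, provide only brief hints so that I "
        "can recall the steps on my own."
    )

request = hints_only_request("change the oil in my car")
```

The explicit "do not" plus "only brief hints" framing is what steers the AI away from its usual tendency toward exhaustive detail.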
In summary, we have these two circumstances: you can provide hints to the generative AI, or you can ask the generative AI to provide hints to you.
Make sure to consider the option of getting hints from generative AI.
Doing so can be beneficial for you. The odds are that the AI will otherwise fully explain things in great and at times excruciating detail. If you are preparing for a test or exam, maybe it would be wiser to try and solve any questions or problems by having the AI give you mere hints. One might assert that using AI in such a hinting manner is helping you to learn to fish, in lieu of doing the fishing for you.
Conclusion
Can you take a hint?
I’m sure there are times that you’ve gotten riled up when someone gave you a hint. We often consider a hint to be an insult. It is as though the other person thinks we are so dense that we cannot figure out something on our own. There are times when a hint can be used in a derogatory or denigrating manner. No doubt about that.
I would dare say though that much of the time a hint is given in the sincerest of intentions. The person did not aim the hint as a particularly offensive remark. They were just trying to be helpful. Nonetheless, the recipient might believe that their personal honor has been besmirched and denounced.
All of those complications do not particularly arise when you give hints to generative AI. The generative AI is a relatively free-hint zone. You can freely give hints to your heart’s content. The AI won’t go berserk or scream obscenities at you. This is a delightful environment in which hints can fly like an eagle. No boundaries arise.
I urge you to include hints or directional stimulus prompts in your prompt engineering capabilities. You’ll be better off for having done so.
That’s a surefire hint.