Forbes
16 Oct 2023


Maximize the use of generative AI by not dumbing down your prompts. (Image: Getty)

I’m sure you’ve seen or heard people making gibberish childlike sounds when speaking to babies.

We tend to do so since we believe that the infant cannot otherwise comprehend our intelligible words. Various studies suggest that you ought to consider speaking normally to such youngsters as doing so can aid them in becoming tuned to natural language. They can potentially pick up on the cadence of regular English and begin to identify patterns such as where words start and end, where sentences start and end, and the like. You don’t have to always use everyday wording, but it seems prudent to intersperse those fun-to-make baby sounds with bona fide speech. Plus, maybe doing so will keep you sane and not cause you to lose touch with collegial adult lingo.

The gist of the situation is that at times we opt to dumb down our utterances.

One place where dumbing down is definitely a likely pitfall is interacting with contemporary generative AI. Yes, it turns out that a lot of people using generative AI such as ChatGPT, GPT-4, Bard, Claude 2, and other akin AI apps fall into the trap of conversing with the AI in a dumbed-down mode.

A person using generative AI often tends to restrict their wording to the simplest possible words. They enter as prompts a curt statement or overly brief question that may consist of a handful of words at most. I wouldn’t say that this counts as gibberish, though it is so short on substance that the phrasing appears as if you are stuck in a desperately low-bandwidth communication mode or perhaps paying mightily for each character painstakingly entered into the AI.

In a sense, you are not to be blamed for your habit of conveying your messaging in the shortest of phrasings. Blame the prior generation of AI and the less fluent natural language processing (NLP) capabilities that we have all endured for many years. Anyone who at first got excited about using Alexa or Siri was bound to pretty quickly become frustrated and altogether anguished. Whereas you might have been led to believe you could interact fluently, the reality was that you had to learn to constrain your commands and utterances.

It was and continues to be a nearly unbearable chore. You might want to say that the AI should go ahead and raise the temperature to seventy-five degrees via your in-house temperature control device, and meanwhile the interpretation is that you said to turn on the outdoor porch lights. Frustrating and bewildering. Your only recourse was to dumb down what you utter. Speak slowly, one word at a time, and use the fewest words possible. The words chosen have to be simplistic, else the AI will get the whole string of words turned upside down and garbled.

Okay, so we all opted as sentient beings to dumb down our discourse with AI.

Along comes generative AI. This type of NLP is head and shoulders above the prior versions. You can convey your thoughts in full sentences. Furthermore, the sentences can be rambling or otherwise filled with all manner of fluff. Generative AI can usually ferret out what you are saying or trying to say. No longer do you particularly have to speak the lingo of the machine. The machine generally speaks your lingo (well, within various boundaries).

In today’s column, I want to concentrate on the problem that people have been trained or self-trained to dumb down their interaction with AI, which is no longer needed per se when using modern generative AI. I realize you might be tempted to say that this is a no-harm, no-foul kind of condition. If you want to communicate in choppy short words, you can certainly do so. The big downside is that you are undercutting the true value of using generative AI. You are inadvertently shooting yourself in the foot.

The bottom line is that if you interact with generative AI more fluidly, the odds are immensely heightened that you will get much better results. The essays you get generated are almost for sure going to be of a higher quality and closer to whatever you had in mind to obtain. The problem-solving by the AI is likely to be more surefire. Etc.

I would also add that your sense of well-being is decidedly going to rise. Here’s why. If you spend any substantial amount of time using generative AI and you always have to be tricky and enter the shortest possible prompts, the chore is going to wear on you. A session with generative AI will seem endless and greatly tiring. The chances are that you will quietly decide in your mind to only use generative AI as a last resort.

On the other hand, if you use generative AI by entering prompts in an everyday natural style, then the odds are that you will feel comfortable using the AI app. The effort will feel essentially effortless. Converse to your heart's content. No need to bite your tongue or otherwise hold back as you write your requests. Just let it flow. The responses are going to be better and you will not expend undue energy using generative AI.

Seasoned users of generative AI have typically figured out that they can be expressive and that they do not need to hold themselves back in fluency. In fact, they often watch in rapt fascination when a newbie or someone who only occasionally uses generative AI opts to write in three-word or four-word sentences. It can be laughable. I would hope that any such seasoned user might extend a hand of helpfulness and explain to the unaware that they can type as they might normally speak.

Please adopt a pay-it-forward mantra in life, including aiding others who want to make use of generative AI.

During my workshops on prompt engineering, I often start by having attendees showcase how they have used generative AI or attempt to use the AI for the first time. Right away the shortness of prompts becomes apparent. The aim is to get everyone on board with using fluent prompts. Once we’ve broken the old habit of terse prompting, we can then move into learning the numerous techniques of prompting that can really make generative AI results shine.

I’d like to walk you through my nine steps to overcome old habits of stilted prompting. A newbie can find this quite instructive. They are being permitted to toss away the shackles of the constrained commanding structure of the likes of Alexa and Siri. A new sense of freedom is discovered. Seasoned users who have already gone the route of becoming fluent will potentially also benefit from considering the nine steps. We all tend to fall back into ruts, and the nine steps can help keep you out of those weeds.

Being on top of your game when it comes to prompting and the use of generative AI is a prudent and significant endeavor, one that consists of overcoming old habits and forming suitable and useful new ones.

Before I dive into my in-depth exploration of this vital topic, let’s make sure we are all on the same page when it comes to the foundations of prompt engineering and generative AI. Doing so will put us all on an even keel.

Prompt Engineering Is A Cornerstone For Generative AI

As a quick backgrounder, prompt engineering, also referred to as prompt design, is a rapidly evolving realm and is vital to effectively and efficiently using generative AI or large language models (LLMs). Anyone using generative AI such as the widely and wildly popular ChatGPT by AI maker OpenAI, or akin AI such as GPT-4 (OpenAI), Bard (Google), Claude 2 (Anthropic), etc., ought to be paying close attention to the latest innovations for crafting viable and pragmatic prompts.

For those of you interested in prompt engineering or prompt design, I’ve been doing an ongoing series of insightful explorations on the latest in this expanding and evolving realm, including coverage of numerous notable prompting techniques.

Anyone stridently interested in prompt engineering and improving their results when using generative AI ought to be familiar with those notable techniques.

Moving on, here’s a bold statement that pretty much has become a veritable golden rule these days:

If you provide a prompt that is poorly composed, the odds are that the generative AI will wander all over the map and you won’t get anything demonstrative related to your inquiry. Being demonstrably specific can be advantageous, but even that can confound or otherwise fail to get you the results you are seeking. A wide variety of cheat sheets and training courses for suitable ways to compose and utilize prompts has been rapidly entering the marketplace to try and help people leverage generative AI soundly. In addition, add-ons to generative AI have been devised to aid you when trying to come up with prudent prompts, see my coverage at the link here.

AI Ethics and AI Law also stridently enter into the prompt engineering domain. For example, whatever prompt you opt to compose can directly or inadvertently elicit or foster the potential of generative AI to produce essays and interactions that imbue untoward biases, errors, falsehoods, glitches, and even so-called AI hallucinations (I do not favor the catchphrase of AI hallucinations, though it has admittedly tremendous stickiness in the media; here’s my take on AI hallucinations at the link here).

There is also a marked chance that we will ultimately see lawmakers come to the fore on these matters, possibly devising and putting in place new laws or regulations to try and scope and curtail misuses of generative AI. Regarding prompt engineering, there are likely going to be heated debates over putting boundaries around the kinds of prompts you can use. This might include requiring AI makers to filter and prevent certain presumed inappropriate or unsuitable prompts, a cringe-worthy issue for some that borders on free speech considerations. For my ongoing coverage of these types of AI Ethics and AI Law issues, see the link here and the link here, just to name a few.

With the above as an overarching perspective, we are ready to jump into today’s discussion.

Overcoming Old Habits Entailing AI Interactions

Let’s take a look at a quick example to illustrate the issues associated with underutilizing generative AI by being stuck in the old ways of choppy commands and stilted interactions.

Suppose that someone named Michael wanted to go hiking in the Grand Canyon. It turns out that Michael will be accompanied by his father. The father is a bit older and has had some knee issues. The hike therefore should be on a trail that would be less taxing and accommodate a safe journey. At the same time, they both want to enjoy the breathtaking scenery and not simply take a slow-paced turtle walk along the level rim. The aim is to find a trail that would be simultaneously suitable for the two of them, yet neither dangerous nor excessively tame.

Michael logged into ChatGPT and opted to use a short request to find out about hiking in the Grand Canyon. The assumption for this request is that the generative AI is akin to something like Alexa or Siri whereby your best bet is to be short and sweet on your queries or commands.

You can readily discern from such a response that ChatGPT is extremely generic about the Grand Canyon and how to take hikes there. The generated content is probably on par with doing an Internet search and landing on a breezy website that touts the amazing vistas and scenery of the Grand Canyon.

Some people using generative AI would at this point opt to discontinue using the AI in this context due to assuming that the extent of available responsiveness about the Grand Canyon has been reached. In their mind, the effort to use the AI app would seem of little added value. All they are presumably going to get is vanilla-flavored responses.

You cannot especially blame the AI app for having provided a bland answer. The prompt was bland and thus the response was bland. If you want to get generative AI to be more expressive, you have to provide grist for the mill, as it were. The point is that you can open the dialogue by sharing sufficient details to lean the GenAI toward what you are trying to figure out or get produced.

Being an experienced user of generative AI, Michael realized that a more detailed and personalized prompt would be needed. There is a solid chance that ChatGPT will be able to produce a much more on-target indication about hiking at the Grand Canyon by being given clues or indications of what the backstory is and what is being considered.
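To make the contrast concrete, here is a minimal sketch of the two prompting styles. The wording of both prompts is my own hypothetical paraphrase of Michael’s scenario (his actual prompts are not reproduced here), and the use of OpenAI’s Python library and the model name are merely assumed for illustration; in ChatGPT itself you would simply type the prompt into the chat box.

```python
# Minimal illustrative sketch (hypothetical prompts; assumed library and model name).
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

terse_prompt = "Hiking Grand Canyon?"

detailed_prompt = (
    "My father and I want to hike in the Grand Canyon. He is older and has "
    "knee trouble, so the trail should not be overly taxing, yet we still "
    "want genuine canyon scenery rather than a flat stroll along the rim. "
    "Which trail would you recommend for us, and what precautions should we take?"
)

# Send each prompt and compare how specific the generated responses are.
for label, prompt in [("Terse", terse_prompt), ("Detailed", detailed_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name for illustration
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content)
```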

The generated indication about hiking the Bright Angel Trail makes a lot of sense (for those of you who have hiked the Grand Canyon, you know there are numerous trails and that the Bright Angel Trail would be a good choice for the particular circumstance of Michael and his father).

A few subtle tailored recommendations also arise in the ChatGPT response. One is that the response suggests using hiking poles, noting that they can reduce the strain on knees. The odds are that this is not simply a generic indication (which, admittedly, it could be), but more likely was mentioned as a result of the prompt that brought up the situation of the father’s knees.

The response overall by ChatGPT is pretty good and provides specific handy insights. At this juncture, someone stuck in old habits might quit the conversation because they assume that everything that could be said on the topic has now been said by the AI.

Keep ever-present in mind that generative AI is all about interaction. The best way to garner full value from using generative AI is to customarily carry on a conversation.

That is what Michael opted to do.

The conversation could have kept going.

A seasoned user of generative AI will keep the dialogue running until they believe that they have uncovered or discovered whatever remaining morsels might be of use. You don’t always have to keep probing and should be selective as to when it makes sense to get engaged in a conversation and when not to do so.
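For those curious about the mechanics, here is a minimal sketch of how a multi-turn conversation is typically carried on when driving a generative AI programmatically: each new turn is sent along with the accumulated history, so the AI retains the earlier context. The follow-up questions are hypothetical, and the library and model name are assumptions for illustration.

```python
# Minimal conversational-loop sketch (hypothetical prompts; assumed library and model).
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# The running history; each turn is appended so context carries forward.
messages = [
    {"role": "user", "content": "Which Grand Canyon trail suits an older hiker with knee issues?"}
]

follow_ups = [
    "How far down that trail is reasonable for a half-day outing?",
    "What should we carry, and how early should we start to avoid the heat?",
]

for turn in range(len(follow_ups) + 1):
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name for illustration
        messages=messages,
    )
    answer = response.choices[0].message.content
    print(f"AI (turn {turn + 1}): {answer}\n")
    messages.append({"role": "assistant", "content": answer})
    if turn < len(follow_ups):
        messages.append({"role": "user", "content": follow_ups[turn]})
```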

The emphasis is that you should keep at the top of your mind the option of engaging in a conversation, rather than neglecting to consider the possibility. There is no longer any need to do merely a one-and-done query. Those one-and-done habits were ingrained because we rightfully curtailed the agonizing and exasperating attempts at spurring stilted old-time NLP AI to rise to the task at hand.

Breaking The Old Habits Via These Nine Steps

If you are just starting out with using generative AI, I’ve got nine easy-peasy steps that can help you overcome any prior bad habits when it comes to using AI. For those of you who are seasoned users of generative AI, take a look at the nine steps and they might helpfully remind you of how to avoid falling back into an old rut. I will showcase the nine steps and then provide a brief overall explanation of them.

The nine steps to overcome old bad habits and end up using GenAI soundly and smartly are walked through below.

I walk people through those nine steps during my workshops on prompt engineering for generative AI. The steps plainly spell out the notion that you are probably mired in old bad habits of interacting with NLP AI. By taking the steps, you can breathe new life into how you are using or going to use generative AI.

As stated in the steps, first you need to realize that you might be stuck in the old ways. Next, you should allow yourself to mentally think in full-bodied interactions and proceed to converse with GenAI in that mental mode. It is fine to challenge GenAI and provoke a conversation. It is fine to do a flipped interaction involving having GenAI ask you questions. Lots of techniques can prod generative AI into being more engaged and ostensibly revealing during conversations.
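As a hedged illustration of the flipped-interaction idea, the prompt below simply instructs the AI to do the asking before it answers. The wording is my own hypothetical example, and the library and model name are again assumptions; you could just as easily paste the prompt text directly into ChatGPT.

```python
# Flipped-interaction sketch (hypothetical prompt; assumed library and model name).
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

flipped_prompt = (
    "I want to plan a Grand Canyon hike for myself and my father, who has "
    "knee trouble. Before recommending anything, ask me one question at a "
    "time about our fitness, schedule, and preferences. Once you have enough "
    "detail, suggest a suitable trail and explain why it fits."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name for illustration
    messages=[{"role": "user", "content": flipped_prompt}],
)
print(response.choices[0].message.content)  # the AI's first question back to you
```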

The big picture hope is that you will become used to carrying on full conversations with GenAI. No extra mental effort will be required to overcome those older choppy sentences. They will be gone from your normal repertoire. That being the case, this does not mean that you cannot ever use choppy sentences.

Be judicious and use sentences and wordings that vary and benefit the discussion underway.

Conclusion

I would like to clarify that I am not suggesting that generative AI can converse on par with human conversations. You will always still need to keep your guard up. GenAI can get lost during a conversation and go off on tangents that you didn’t intend to invoke. The possibility of generative AI emitting falsehoods, biases, and so-called AI hallucinations is something you need to be wary of.

There is though a sense of relief that you can avoid baby talk and almost carry on everyday conversational ins and outs with generative AI. Advances in GenAI will continue to improve this capability. The amount of fluency will increasingly be amazing and startling. Whatever you do, please don’t interpret or misinterpret the GenAI fluency to imply sentience. It is all too easy to do so. Remember at all times that you are conversing with a machine and not a human.

Another quick heads-up relates to what you enter as prompts into GenAI. Be cautious. I say this because most people don’t seem to realize that the usual licensing agreements for most generative AI apps allow the AI maker to see the entered prompts, including making use of the entered content for further data training of the GenAI (see my coverage at the link here). Do not enter material that you might consider confidential or private. Assume that whatever prompts you enter could someday be banner headlines on the front page news, which I hope doesn’t happen to you and that the odds are fortunately slim. I trust that you get my strident warning and will be guarded accordingly.

A final thought for now on this weighty matter.

William Shakespeare said this about having conversations: “Conversation should be pleasant without scurrility, witty without affectation, free without indecency, learned without conceitedness, novel without falsehood.”

Try to uphold that piece of sage advice, and maybe generative AI will do likewise.