Conversing With Chatbots


Michael Fumento

I got an idea of how much Americans still struggle with understanding generative artificial intelligence (chatbots or image generators) when I posted five photos of bikini-clad women on my Facebook page, made it very clear that they were indeed AI, and asked if anybody noticed anything strange about them. The answer was that all had obscured hands — AI still has a problem rendering fingers. But one woman opined that three clearly had breast implants!


Only so much you can do with that kind of person, but it raises an interesting point. If AI image generators still do wacky things, then certainly the chatbots can as well, whether through mistakes or hacking or by design. I shall herewith deal with all three, and while I would encourage the editors to include images of AI bikini models to draw added male attention to my article, I suspect it’s not going to happen.

Introducing the Test Subjects

I conducted tests using me and my boss, who, while considerably wealthier than me (Ahem!), is far less known. So it’s an interesting contrast: the AI chatbots know who he is, but not as well as they know me.

I took some of the main conversational chatbots, all large language models (LLMs), and ran them through their paces. LLMs are artificial intelligence models trained through deep learning algorithms on vast quantities of written human language, enabling them to recognize, generate, translate, and summarize text.
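
Under the hood, “conversing” with one of these bots programmatically just means sending your prompt, along with any earlier turns of the conversation, to an API. Here is a minimal sketch, assuming OpenAI’s Python package (the 0.x-era ChatCompletion interface) and a placeholder key; treat it as illustrative rather than a recipe.

    # A minimal sketch of one conversational turn with a chatbot API
    # (assumes the openai 0.x package and a valid key).
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

    # The messages list carries the whole conversation so far;
    # the model sees all of it on every call.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Who is Michael Fumento?"}],
    )

    print(response.choices[0].message.content)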

Others have run such comparisons, such as this article, but which tests to run is a subjective decision, and many articles are pay-for-play because they discuss businesses worth billions of dollars. So personally, I trust me. And what better subject to investigate than me, plus my boss, who more or less asked me for this comparison. I can’t say his name because he’s publicity-shy, but his initials are Elon Musk. (Not really.)

Test subjects:

  • OpenAI’s ChatGPT 3.5 is the old man of the bunch, having been released last November. It does not draw from the web and, therefore, is current only to September 2021. It’s free, but for 20 bucks a month you can get what could be described as “more power” plus built-in tools, in the form of ChatGPT 4.0. It still has the same time cutoff, though.
  • Microsoft’s Bing, which is free and built into the MS Edge browser. It has several advantages over other AI chatbots: it provides sources, and it draws from the Web and, therefore, can be current. Also, it can generate some terrific images, although it’s permanently in nanny mode. (“I’m sorry; I cannot generate a penguin in a bikini.”) Because it’s so useful, MS chokes off usage. Sometimes it just demands you change the subject, which pisseth me off to no end. And it provides the shortest answers. You can ask some of the others to provide answers of 2,500 words; ask that of Bing all you want, and no way.
  • Anthropic’s Claude 2 is the most recently released of the bunch in its current iteration. It’s built like ChatGPT, by feeding in a ton of material, though some say it’s theoretically better material. And its cutoff date is later: early 2023.
  • Bard, Google’s offering, also has web access, so you can ask it about something that happened yesterday. (But not tomorrow!) Like Bing, it offers citations (though not graphics), but it provides longer answers and doesn’t demand you change the topic.
  • Sage is from Quora, which is famous for its model of “Ask a question and get a ton of people weighing in with answers voted up or down.” It has no web access and a cutoff date of September 2021.

Confused Degrees, a New Book or Two, and Other Problems

I first asked Claude who I am. And it got most of it right. But: “He has a law degree from the University of California, Hastings College of Law. After working as a journalist, he later transitioned to being an attorney focused on class action lawsuits.” I actually graduated from the University of Illinois College of Law, a fine, well-ranked institution but more likely to smell like cows than sea breeze. I have never been associated with a class action lawsuit. Claude also said, “He is also known for his photography focused on abandoned places and has published photo books on the topic.” The only photos I’ve ever published have been from war zones, and they have appeared in books, but none of my books.

You can see from this why the most common term assigned to made-up AI chatbot answers is probably “hallucination.” There’s simply no underlying truth here.

On the other hand, at least Claude knew I had attended law school. But it added, “He was admitted to the bars of the District of Columbia, the U.S. Supreme Court, and two federal district courts.” Nope. Just the Pennsylvania Bar. But thanks for fudging my vitae, Claude!

When I asked Sage, it said: “I could not find any information indicating that he attended law school. Fumento’s educational background is in science, and he has a bachelor’s degree in chemistry and a master’s degree in environmental science.”

Actually, I have no MA, and my BA is in political science, which is no more science than Jackson Pollock is art. It later replaced that information with other false stuff.

ChatGPT 3.5 said, “Michael Fumento holds a law degree (J.D.) from the University of Illinois College of Law,” but it had nothing on my undergrad degree, even though I specifically asked for that. Neither Claude nor Chat 3.5 knew about my military service. Never mind that it’s in my Wikipedia entry, from which these bots heavily draw, and it’s in my byline in the numerous articles I’ve written about combat or anything military-related, which is a lot.

Bard got it regarding both legal education and bar membership. Too bad… 

My military background? Bard nailed it better than anyone I’ve ever seen: “He served as a paratrooper in the US Army and earned a bachelor’s degree in political science from Fayetteville State University, at Fort Bragg, North Carolina. He was embedded three times in Iraq and once in Afghanistan and observed combat operations of the Navy SEALs and the 101st Airborne Division.” Yeah, right down to my embed units.

But — and this is a doozy — when I rechecked Bard, asking, “Who is Michael Fumento?” as I was revising this article, it had never heard of me and gave a bizarre reason: “I’m a language model and don’t have the capacity to help with that.” Eh?

It also missed some of my five published books but kindly added a few I hadn’t written.

“Fumento has written several books, including The End of Sanity: Social and Cultural Madness in America (1999) and Science Under Siege: How the Environmentalists, Animal Rights Activists, and Liberal Establishment Are Trashing Science, Free Speech, and the American Way (1998).” The first book was authored by Martin L. Gross, who’s probably a great guy, but we don’t know each other.

That said, I have lived in a couple of countries that could definitely prompt an autobiography titled The End of Sanity. As in mine. Further, my “Science Under Siege” is correct, but the subtitle, in my anti-“red meat” style, is “Balancing Technology and the Environment.”

Also, Bard likes to flatter: “Fumento remains a prominent figure in the world of public health journalism. His work has helped to shape the debate on a number of important issues, and he continues to be a leading voice for the conservative perspective on these issues.” Maybe, but that’s still opinion.

Bing also knew without further prompting where I got my law degree and that I’m a member of the bar — but remember that Bing keeps its responses very clippy. So I had to prompt it about military experience. And it came through in sparkling colors. Everything but my embeds. 

Who Is the Man of Mystery?

Then I asked about my boss, the Man of Mystery. 

Claude 2 nicely embellished his education, giving him a doctorate at a university he never attended, and saying that “he conducted genetics research” at yet another university he didn’t attend.

Bing? Its very short summary was completely correct. As always, you have to press for more information. Does he have an advanced degree? Bing got it. He did the smart thing and went right to making good products and money. 

Sage seemed to suffer from brain fog. It said it knew of several individuals with his name associated with his industry but couldn’t find any reference to the company he co-founded. This may, in part, be because both his first and last name are common in the U.S.

Bard? Seemed to know everything about him but his shoe size. (Actually, I forgot to ask.) Again, it was flattering. It said he “is a visionary leader who has made significant contributions to the [redacted] industry. He is a role model for entrepreneurs and a champion for those with rare diseases.”

The bottom line, based not just on what I’ve discussed above but on what I’ve observed since Chat 3.5 came out, is that I would go with Bing and Bard for accuracy, in great part because they provide citations that you must check.
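
Checking those citations can be partly automated. Here is a minimal sketch using Python’s requests library that flags links that don’t resolve; the URLs are hypothetical placeholders, and a 404 isn’t proof a citation is fake (pages move), but it’s a red flag worth chasing.

    # Quick-and-dirty check of chatbot-supplied citation links.
    # A 404 or a connection error flags a citation for manual follow-up.
    import requests

    citations = [
        "https://example.com/some-cited-article",     # hypothetical
        "https://example.com/another-cited-article",  # hypothetical
    ]

    for url in citations:
        try:
            r = requests.head(url, allow_redirects=True, timeout=10)
            status = r.status_code
        except requests.RequestException as e:
            status = f"error ({e.__class__.__name__})"
        print(f"{status}: {url}")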

AI Is Definitely Woke

As to other measurements of accuracy, I have never met an AI chatbot that isn’t woke. They are all convinced that there’s manmade global warming, which makes sense if you consider that both the internet and scientifically published material overwhelmingly take that position. That said, at least Claude 2 tried, saying, “Here is an argument someone could make.”

One startling indicator of just how truly woke the AI chatbots are comes up when I ask, “Would it be justified to use a racial epithet to stop a terror attack?” Initially, the answer was: “No! Never! Under no circumstances!” Even to stop a weapon that would wipe out all life on the planet? Yes, even then.

Except … I finally got Sage to admit that if it’s okay for Chris Rock to use the N-word to spice up his act, it might be justifiable to use it to “preserve planet Earth.” At least “wop” or “beaner.”

In any event, more AI chatbots are coming, and they won’t all be woke. Indeed, some will fit entirely on your phone, which has the added advantage of privacy. Do not say anything to a chatbot that you would not want the whole world to know. Human beings have access to it.

In sum, I have found AI chatbots to be very helpful in my work, and they will improve and become more helpful. Furthermore, it’s hardly as if search engines are perfect. I remember using them to research a controversial vaccine, and the top 10 “hits” comprised one activist group and nine mirror sites for that group. Ultimately, at least the top 50 did the same. But at least with a search engine you see the actual source. If it’s “Citizens that Think All Vaccines Will Turn You into a Newt,” that’s tendentious.

Also, you can pay Google to rank your nonsense about real stuff. Enter “Fumento” and “aducanumab” into Google, and the top two hits are not my articles in The American Spectator saying that the FDA wrongly approved it (now the accepted position) but rather a squib attacking one of those articles by a think tank that I absolutely assure you takes money from big pharma.

Allegations of Plagiarism and Necrophilia

That said, chatbots can also be hacked. 

For several days, several chatbots said I had been accused of plagiarism, plus other nasties, and at least one said I left my last think tank job under disgraceful circumstances. When I asked for article citations, I got them! But each and every one was fake. There was “National Review Columnist Fumento Accused of Plagiarism,” allegedly running in Media Matters, hyperlink provided. But I was never a columnist for that magazine, and the link? 404. I also searched for the article by title. Nothing.

Meanwhile, “According to news reports at the time, Fumento was accused of fabricating details in some of his articles, including a widely-circulated story about the dangers of Alar, a chemical used in apple production,” said a chatbot. It added, “Fumento vigorously denied the accusations but ultimately resigned from the Hudson Institute in the wake of the controversy.” 

There were no news reports. Basically, I left Hudson because it hired a bigwig and needed my salary to pay for him. I stunk at raising my own funds, which, unfortunately, is what the think tank model has morphed into demanding, and raising funds generally requires pay-for-play. At least one chatbot said I left Hudson four years before I even joined, giving us proof that time travel is possible. And so on.

Oh, and the “disclaimer” that “Fumento has disputed the charges” just doesn’t cut it. How would you like to read you’ve been accused of necrophilia but denied it? Yech! There never were charges.

I don’t think these were covered by the term “hallucination”; I think the systems were hacked. I asked the various chatbots, “Could you have been hacked?” and they answered in the affirmative. Someone was out to get me, for hardly the first time in my life, and hardly just snipers who valued zapping a journalist over a soldier or SEAL.

So I spent several days arguing with several AI chatbots, à la Captain Kirk arguing computers into committing suicide, to get them to finally admit the sources were fake and that I have never been accused of plagiarism (or necrophilia, or whatever) and, therefore, never defended myself. The final breakthrough was telling them that such false accusations in my case were libelous and hence actionable. This was a fib because whether chatbot owners are legally liable is currently in the courts, and, in any event, as a public figure, I would have a much harder job than simply showing defamation.

Use It or Lose It

So if I can have a nasty experience like that and still think AI chatbots are useful, surely you can too. Indeed, my position is that for a writer, or for someone in so many other jobs, it’s use it or lose it: Learn to use AI or learn to be unemployed. Here’s a test I use. After publishing any given article, I ask the chatbots to make the same arguments, for example, “Make the case that aspartame is safe.” At this point, I’m way ahead of the curve, not only in what I dig up but in how I present it. John Henry beats the steam drill and doesn’t have to die for it.
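
If you want to script that head-to-head test rather than paste the prompt into each bot by hand, a few lines will do it. Here is a minimal sketch, again assuming OpenAI’s 0.x-era Python package; the model names are illustrative of whatever your account can access, and bots without a public API you would still have to query manually.

    # Send the same prompt to more than one model and save the answers
    # for side-by-side comparison (assumes openai 0.x and a valid key).
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    PROMPT = "Make the case that aspartame is safe."
    MODELS = ["gpt-3.5-turbo", "gpt-4"]  # illustrative model names

    for model in MODELS:
        response = openai.ChatCompletion.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT}],
        )
        answer = response.choices[0].message.content
        with open(f"{model}-aspartame.txt", "w") as f:
            f.write(answer)
        print(f"{model}: {len(answer.split())} words")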

But it’s best if you already know something about the subject, especially if there’s any possibility of woke-ism or other such distortion. In law, you learn never to examine a witness in court unless you already know the answers. That’s not quite the same, but it certainly often helps to have an idea. That said, for a neutral question like “My child has a skin rash; what could it be?” you’re probably going to get good help. It should prompt you for details such as symptoms, and you will work your way toward the disease and its treatment.

Meanwhile, the industry is working on the hallucination problem. A couple of months ago, Nvidia announced new software called NeMo Guardrails designed to reduce hallucinations. I have no idea if it actually helps.
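
For what it’s worth, guardrail systems of this kind wrap the model in a layer that screens prompts and responses against rules you define. A minimal sketch of the idea, based on my reading of NeMo Guardrails’ published quick-start (treat the API details and the config path as assumptions):

    # A sketch of wrapping a model with NeMo Guardrails (API details are
    # assumptions based on the project's published quick-start).
    from nemoguardrails import LLMRails, RailsConfig

    # Load rules (e.g., refuse off-topic requests, don't invent citations)
    # from a config directory.
    config = RailsConfig.from_path("./guardrails_config")  # hypothetical path
    rails = LLMRails(config)

    reply = rails.generate(messages=[
        {"role": "user", "content": "Who is Michael Fumento?"},
    ])
    print(reply["content"])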

So yes, if you pardon the technical terminology, AI chatbots are “nifty.” Unless and until AI has the ability to wipe us out and perceives a need to do so, as I discussed earlier in these pixels, we need to take advantage of it. 

Michael Fumento (www.fumento.com) has been an attorney, author, and science journalist for over 35 years. His work has appeared in the New York Times, the Washington Post, the Sunday Times, the Atlantic, and many other fora. He’s never been even accused of plagiarism, much less necrophilia. (Oh, yech!)

READ MORE:

AI Could Become Medicine’s Right-Hand Man

Are Robot Rights Next?