Could We Endow AI With Reason? Meta and OpenAI Think So.

By Aubrey Gulick

It seems that most people have started to tune out the latest developments in artificial intelligence. That’s probably because, while many of those developments feel as though they belong in the prologue of an epic sci-fi novel, computer scientists and tech writers are rarely as good as H.G. Wells and Ray Bradbury at telling a story.

So it’s no surprise that announcements this week by Meta and OpenAI have been eclipsed (pun certainly intended) by headlines about abortion, Donald Trump, and O.J. Simpson. (READ MORE: On Abortion, Trump Has Gravely Erred)

Meta announced that, in the coming weeks, it will be rolling out an AI model called Llama 3. The goal isn’t just to create a machine that can read millions of transcripts and New York Times articles. “We are hard at work in figuring out how to get these models not just to talk, but actually to reason, to plan … to have memory,” Joelle Pineau, Meta’s vice-president of AI research, said.

Microsoft-backed OpenAI isn’t far behind. Executives there have indicated that GPT-5 is just around the corner. The goal — even if the next iteration of ChatGPT doesn’t accomplish it — is to create a model that “reasons.”

AI Models That Induce, Deduce, and Decide

Of course, computer scientists don’t mean quite the same thing as Plato or Thomas Aquinas did when they identified reason as the distinguishing factor between man and beast. But their definition of reason isn’t all that far off.

When researchers at OpenAI and Meta talk about computer programs that “reason,” they’re referring to the program’s ability to “think” ahead: to predict possible futures and to model the consequences of actions before taking them. (READ MORE: Fact-Checking AI: Are Republicans Racist?)
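To make that concrete, planning of this kind can be sketched as a simple lookahead loop: simulate each candidate action with a model of the world, score the predicted outcome, and keep the best. The sketch below is purely illustrative; predict_next_state and score are hypothetical stand-ins for a learned world model and a value estimate, not anything Meta or OpenAI has published.

```python
# A minimal one-step lookahead planner (illustrative only).
# predict_next_state and score are hypothetical stand-ins for a
# learned world model and a value estimate; they are not real APIs.

def predict_next_state(state: dict, action: str) -> dict:
    """Toy world model: predict the consequence of taking an action."""
    return {"history": state.get("history", []) + [action]}

def score(state: dict, goal_steps: set[str]) -> float:
    """Toy value estimate: how many goal-relevant steps has the plan hit?"""
    return sum(1.0 for step in state["history"] if step in goal_steps)

def plan(state: dict, actions: list[str], goal_steps: set[str]) -> str:
    """'Thinking ahead': model each action's consequence, pick the best."""
    def predicted_value(action: str) -> float:
        return score(predict_next_state(state, action), goal_steps)
    return max(actions, key=predicted_value)

# e.g., plan({"history": []}, ["watch TV", "boil water"], {"boil water"})
# returns "boil water": the action whose modeled outcome scores highest.
```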

In simpler terms, they want to create an AI model that can plan your next trip to Paris or present the president of the United States with a series of possible outcomes if he chooses to launch a nuclear weapon at Russia. Meta wants to be able to hand you a pair of Ray-Ban glasses powered by Llama 3 that could diagnose a broken coffee machine and explain how to fix it (even handymen are now at risk of being replaced by AI).

The ultimate goal is to create artificial general intelligence (AGI) — an AI model capable of human cognition or beyond (think C-3PO from Star Wars but without the lovable, congenial personality). To create that kind of model, AI has to be capable of deductive reasoning (going from general principles to specific applications), inductive reasoning (going from specific applications to general principles), and decision-making based on the data available to it.
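Those three capabilities are easier to see in toy form. The functions below are hypothetical, drastically simplified illustrations of deduction, induction, and decision-making; real models implement nothing this crude.

```python
# Toy versions of the three capabilities named above (illustrative only).

def deduce(general_rule, specific_case):
    """Deduction: apply a general principle to a specific case."""
    return general_rule(specific_case)

def induce(examples: list[set]) -> set:
    """Induction (naive): whatever held for every observed case
    is assumed to hold in general."""
    general = set(examples[0])
    for example in examples[1:]:
        general &= example
    return general

def decide(options: list, data, utility):
    """Decision-making: pick the option the available data scores highest."""
    return max(options, key=lambda option: utility(option, data))

# e.g., deduce(lambda n: n % 2 == 0, 4)                  -> True
#       induce([{"has wings", "sings"}, {"has wings"}])  -> {"has wings"}
```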

That should sound a lot like what humans do.

Right now, artificial intelligence is incapable of coming up with an idea. While it can generate text, images, and videos, it’s never truly creative. The creator is the person with his or her fingers on the keyboard coming up with prompts and accepting or rejecting the results being spewed out by the machine. By building an AI model that can induce and deduce just as well as we can, we’re essentially creating one that can decide to create a prompt and generate a response — in other words, an AI model capable of creation.
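In code, that shift from tool to creator is just a change in who supplies the prompt. The loop below is a hedged sketch under that assumption; generate is a placeholder for a call to any text model, not a real API.

```python
# A sketch of the "self-prompting" loop described above (illustrative only).
# generate() is a hypothetical placeholder, not a real model API.

import random

def generate(prompt: str) -> str:
    """Placeholder for a language-model call."""
    return f"a response to: {prompt}"

def self_directed_session(seed_topics: list[str], steps: int = 3) -> list[str]:
    """No human at the keyboard: the model picks its own prompts."""
    topics = list(seed_topics)
    transcript = []
    for _ in range(steps):
        prompt = random.choice(topics)   # the model "decides" what to ask
        response = generate(prompt)      # ...and answers its own question
        transcript.append(response)
        topics.append(response)          # its output seeds future prompts
    return transcript
```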

Computer Scientists Are Sub-Creators Too

At this point, you’re probably wondering what vendetta computer scientists have against the world. We’ve all seen The Matrix. We don’t need to recreate it in real life.

But perhaps it’s not a vendetta. Maybe this is just human nature.

In March of 1939, J.R.R. Tolkien gave the Andrew Lang Lecture at the University of St. Andrews. The resulting essay, “On Fairy-Stories,” is one of the best explanations we have of the mentality and methods of one of the greatest fantasy writers of our time. In that essay, Tolkien answers the question: Why do men tell stories?

His answer is that man is a “sub-creator.” Because man was created in the image and likeness of God, man shares with God a desire for creation. Mythology, fairy tales, novels, and ghost stories told by the campfire on late summer nights are a fulfillment of man’s desire to be a sub-creator in some capacity — to build a world.

Computer scientists and executives at OpenAI and Meta don’t necessarily have a way with words; instead, they have a way with algorithms and code. That proclivity doesn’t exempt them from the very human desire to be a sub-creator. Whether or not they realize it (and some of them may), creating artificial intelligence is simply an attempt to scratch the creative itch with which each of us is endowed.

The attempt to create an artificial intelligence that rivals human intelligence may be an outpouring of our identity as sub-creators, but that doesn’t mean it is always a healthy thing. Tolkien writes, “Mythology is not a disease at all, though it may like all human things become diseased.” The same is true of artificial intelligence. If we can write bad stories (morally speaking), we can create bad AI models.

As a society, we’re going to need to identify what “diseased” AI looks like (which is not to say that AI capable of induction and deduction is diseased). Developments like these should remind us that we’re fast approaching a rather scary deadline: At some point we will no longer be able to identify diseased AI proactively; we’ll have to do it retroactively.