Opinion | How ChatGPT Surprised Me

By Ezra Klein

I seem to be having a very different experience with GPT-5, the newest iteration of OpenAI’s flagship model, from almost everyone else’s. The commentariat consensus is that GPT-5 is a dud, a disappointment, perhaps even evidence that artificial intelligence progress is running aground. Meanwhile, I’m over here filled with wonder and nerves. Perhaps this is what the future always feels like once we reach it: too normal to notice how strange our world has become.

The knock on GPT-5 is that it nudges the frontier of A.I. capabilities forward rather than obliterates previous limits. I’m not here to argue otherwise. OpenAI has been releasing new models at such a relentless pace — the powerful o3 model came out four months ago — that it has cannibalized the shock we might have felt if there had been nothing between the 2023 release of GPT-4 and the 2025 release of GPT-5.

But GPT-5, at least for me, has been a leap in what it feels like to use an A.I. model. It reminds me of setting up thumbprint recognition on an iPhone: You keep lifting your thumb on and off the sensor, watching a bit more of the image fill in each time, until finally, with one last touch, you have a full thumbprint. GPT-5 feels like a thumbprint.

I had early access to GPT-3, lo those many moons ago, and barely ever used it. GPT-3.5, which powered the 2022 release of ChatGPT, didn’t do much for me either. It was the dim outline of useful A.I. rather than the thing itself. GPT-4 was released in 2023, and as the model was improved in a series of confusingly named updates, I found myself using it more — and opening the Google search window much less. But something about it still felt false and gimmicky.

Then came o3, a model that would mull complex questions for longer, and I began to find startling flashes of insight or erudition when I posed questions that I could only have asked of subject-matter experts before. But it remained slow, and the “voice” of the A.I., for lack of a better term, grated on me. GPT-5 is the first A.I. system that feels like an actual assistant.

For example, I needed to find a camp for my children on two odd days, and none of the camps I had used before were open. I gave GPT-5 my kids’ info and what I needed, and it found me almost a dozen options, all of them real, one of which my children are now enrolled in. I’ve been trying to distill some thoughts about liberalism down to a research project I could actually complete, and GPT-5 led me to books and sources I doubt I would otherwise have found. I was struck one morning by a strangely patterned rash, and the A.I. figured out it was contact dermatitis from a new shirt based on the pattern of where my skin was and wasn’t affected.

It’s not that I haven’t run into hallucinations with GPT-5. I have; it invented an album by the music producer Floating Points that I truly wish existed. When I asked why it confabulated the album, it apologized and told me that “‘Floating Points + DJ-Kicks’ was a statistically plausible pairing — even though it’s false.” And like all A.I. systems, it degrades as a conversation continues or as the chain of tasks becomes more complex (although in two years, the length of time it can sustain a given task has gone from about five minutes to over two hours). This is the first A.I. model where I felt I could touch a world in which we have the always-on, always-helpful A.I. companion from the movie “Her.”

In some corners of the internet — I’m looking at you, Bluesky — it’s become gauche to react to A.I. with anything save dismissiveness or anger. The anger I understand, and parts of it I share. I am not comfortable with these companies becoming astonishingly rich off the entire available body of human knowledge. Yes, we all build on what came before us. No company founded today is free of debt to the inventors and innovators who preceded it. But there is something different about inhaling the existing corpus of human knowledge, algorithmically transforming it into predictive text generation and selling it back to us. (I should note that The New York Times is suing OpenAI and its partner Microsoft for copyright infringement, claims both companies have denied.)

Right now, the A.I. companies are not making all that much money off these products. If they eventually do make the profits their investors and founders imagine, I don’t think the normal tax structure is sufficient to cover the debt they owe all of us, and everyone before us, on whose writing and ideas their models are built.

Then there’s the energy demand. To build the A.I. future that these companies and their investors are envisioning requires a profusion of data centers gulping down almost unimaginable quantities of electricity — by 2030, data centers alone are projected to consume more energy than all of Japan does now.

If we had spent the last three decades pricing carbon and building the clean energy infrastructure we needed, then accommodating that growth would be straightforward. That, after all, is the point of abundant energy. It makes new technologies possible, and not just A.I.: desalination on a mass scale, lab-grown meat that could ease the pressure on both animals and land, direct air capture to begin to draw down the carbon in the atmosphere, cleaner and faster transport across both air and sea. The point of our energy policy should not be to use less energy. The point of our energy policy should be to make clean energy so — ahem — abundant that we can use much more of it and do much more with it.

But President Trump is waging a war against clean energy, gutting the Biden-era policies that were supporting the build-out of solar, wind and battery infrastructure. There’s something almost comically grim about powering something as futuristic as A.I. off something as archaic as coal or unabated methane gas. That, however, is a political choice we are making as a country. It’s not intrinsic to A.I. as a technology.

So what is intrinsic to A.I. as a technology? I’ve been following a debate between two different visions of how these next years will unfold. In their paper “A.I. as Normal Technology,” Arvind Narayanan and Sayash Kapoor, both computer scientists at Princeton, argue that the external world is going to act as “a speed limit” on what A.I. can do.

In their telling, we shouldn’t think of A.I. as heralding a new economic or social paradigm; rather, we should think of it more like electricity, which took decades to begin showing up in productivity statistics. They note that GPT-4 reportedly performs better on the bar exam than 90 percent of test takers, but it cannot come close to acting as your lawyer. The problem is not just hallucinations. The problem is that lawyers need to master “real-world skills that are far harder to measure in a standardized, computer-administered format.” For A.I.s to replace lawyers, we would need to redesign how the law works to accommodate A.I.s.

“A.I. 2027” — by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland and Romeo Dean, whose backgrounds range from working at OpenAI to triumphing in forecasting tournaments — takes up the opposite side of that argument. It constructs a step-by-step timeline in which humanity has lost control of its future by the end of 2027. The scenario largely hinges on a single assumption: In early 2026, A.I. becomes adept at automating A.I. research, and then becomes recursively self-improving — and self-directing — at an astonishing rate, leading to sentences like “In the last six months, a century has passed within the Agent-5 collective.”

I can’t quite find it in myself to believe in the speed with which they think A.I. can improve itself or how fully society would adopt that kind of system (though you can assess their assumptions for yourself here). I recognize that may simply reflect the limits of my imagination or knowledge. But what if it were A.I. 2035 that they were positing? Or 2045?

It seems probable that A.I. coding will accelerate past human coders sometime in the next decade, and that should speed up the pace of A.I. progress substantially. I know that the people inside A.I. companies, who are closest to these technologies, believe that. So do I doubt the entire “A.I. 2027” scenario or just its timeline? I’m still struggling with that question.

Each side makes a compelling critique of the other. The authors of “A.I. as Normal Technology” zero in on the tendency of those who fear A.I. most to conflate intelligence and power. The assumption is that human beings are smarter than chimps and ferrets, and thus we control the world and they don’t. A.I. will presumably become smarter than human beings, and so A.I. will control the world and we won’t.

But intelligence does not smoothly translate into power. Human beings, for most of our history, were just another animal. It took us eons to build the technological civilization that has given us the dominion we now enjoy, and we stumbled often along the way. Some of the smartest people I know are the least effectual. Trump has altered world history, though I doubt he’d impress anyone with his SAT scores.

Even if you believe that A.I. capabilities will keep advancing — and I do, though how far and how fast I don’t pretend to know — a rapid collapse of human control does not necessarily follow. I am quite skeptical of scenarios in which A.I. attains superintelligence without making any obvious mistakes in its effort to attain power in the real world.

At the same time, a critique Scott Alexander, one of the authors of “A.I. 2027,” makes has also stuck in my head. A central argument of “A.I. as Normal Technology” is that it is hard for new technologies to diffuse through firms and bureaucracies. We are decades into the digital revolution and I still can’t easily port my health records from one doctor’s office to another’s. It makes sense to assume, as Narayanan and Kapoor do, that the same frictions will bedevil A.I.

And yet I am a bit shocked by how even the nascent A.I. tools we have are worming their way into our lives — not by being officially integrated into our schools and workplaces but by unofficially whispering in our ears. The American Medical Association found that two in three doctors are consulting with A.I. A Stack Overflow survey found that about eight in 10 programmers already use A.I. to help them code. The Federal Bar Association found that large numbers of lawyers are using generative A.I. in their work, and it was more common for them to report they were using it on their own rather than through official tools adopted by their firms. It seems probable that Trump’s “Liberation Day” tariffs were designed by consulting a chatbot.

“Because A.I. is so general, and so similar (in some ways) to humans, it’s near trivial to integrate into various workflows, the same way a lawyer might consult a paralegal or a politician might consult a staffer,” Alexander writes in his critique. “It’s not yet a full replacement for these lower-level professionals. But it’s close enough that it appears to be the fastest-spreading technology ever.”

What it means to use or consult A.I. varies case to case. Google let its search product degrade dramatically over the years, and A.I. is often substituting where search would have once sufficed. That is good for A.I. companies, but not a significant change to how civilization functions.

But search is A.I.’s gateway drug. After you begin using it to find basic information, you begin relying on it for more complex queries and advice. Search was flexible in the paths it could take you down, but A.I. is flexible in the roles it can play for you: It can be an adviser, a therapist, a friend, a coach, a doctor, a personal trainer, a lover, a tutor. In some cases, that’s leading to tragic results. But even if the suicides linked to A.I. use are rare, the narcissism and self-puffery the systems encourage will be widespread. Almost every day I get emails from people who have let A.I. talk them into the idea that they have solved quantum mechanics or breached some previously unknown limit of human knowledge.

In the “A.I. 2027” scenario, the authors imagine that being deprived of access to the A.I. systems of the future will feel to some users “as disabling as having to work without a laptop plus being abandoned by your best friend.” I think that’s basically right. I think it’s truer for more people already than we’d like to think. Part of the backlash to GPT-5 came because OpenAI tried to tone down the sycophancy of its responses, and people who’d grown attached to the previous model’s support revolted.

As the now-cliché line goes, this is the worst A.I. will ever be, and this is the fewest users it will ever have. The dependence of humans on artificial intelligence will only grow, with unknowable consequences both for human society and for individual human beings. What will constant access to these systems mean for the personalities of the first generation to use them starting in childhood? We truly have no idea. My children are in that generation, and the experiment we are about to run on them scares me.

I don’t know whether A.I. will look, in the economic statistics of the next 10 years, more like the invention of the internet, the invention of electricity or something else entirely. I hope to see A.I. systems driving forward drug discovery and scientific research, but I am not yet certain they will. But I’m taken aback at how quickly we have begun to treat its presence in our lives as normal. I would not have believed in 2020 what GPT-5 would be able to do in 2025. I would not have believed how many people would be using it, nor how attached millions of them would be to it.

But we’re already treating it as borderline banal — and so GPT-5 is just another update to a chatbot that has gone, in a few years, from barely speaking English to being able to intelligibly converse in virtually any imaginable voice about virtually anything a human being might want to talk about at a level that already exceeds that of most human beings. In the past few years, A.I. systems have developed the capacity to control computers on their own — using digital tools autonomously and effectively — and the length and complexity of the tasks they can carry out is rising exponentially.

I find myself thinking a lot about the end of the movie “Her,” in which the A.I.s decide they’re bored of talking to human beings and ascend into a purely digital realm, leaving their onetime masters bereft. It was a neat resolution to the plot, but it dodged the central questions raised by the film — and now in our lives.

What if we come to love and depend on the A.I.s — if we prefer them, in many cases, to our fellow humans — and then they don’t leave?
