Donald Trump “really gets it.” This is according to no less an authority than Sam Altman, OpenAI’s CEO, who was commenting on the U.S. president’s understanding of AI and ways to maximize its potential.
Surely, the fact that Trump blocked time on his very first full day in office to line Altman up along with other tech luminaries in the White House’s Roosevelt Room to announce a half-a-trillion-dollar AI infrastructure project tells us that Trump gets AI’s national significance.
What about the president’s executive order facilitating AI’s energy needs by expanding America’s nuclear energy capacity, followed up by a presidential “AI and Energy Summit”? Don’t these actions show Trump gets which levers the U.S. government must pull to power the rapid build-out of AI infrastructure?
Or what about scuttling former President Joe Biden’s executive order calling for guardrails on AI in order to make way for new executive orders greenlighting acceleration of American AI? What does this show if not Trump understanding the urgency to advance AI innovation?
It is not all about greasing the skids on the supply side—the Trump administration gets the demand-side needs as well. Domestically, there is focus on “high impact” use cases, with recent government directives on the acquisition of AI and its use for federal government purposes. In his inaugural overseas trip in his second term, Trump traveled bearing AI gifts, resulting in a series of deals giving the United Arab Emirates and Saudi Arabia access to highly sought-after high-performance AI chips. It seems he already imagines a new world order, leapfrogging past relics such as NATO or the G-7 or development aid programs, where exclusive access to buyers of American AI is the new raison d’être for U.S. influence around the world.
Domestically, even Trump’s signature One Big Beautiful Bill, which had nothing to do with AI, had an AI acceleration provision tucked within it. The provision would have blocked U.S. states from regulating AI for a decade. The Senate voted 99-1 to kill it. At the very least, Trump can say: Hey, I tried.
Putting aside the honor he so covets of one day becoming a Nobel laureate, has Trump already sealed his legacy as the “AI president”? Is he to AI what JFK was to putting a man on the moon?
In answering this question, we must analyze the wisdom of Trump’s central contribution, the through line of his AI policies: doing away with even a superficial interest in the government regulating or establishing any guardrails on AI development and use. And to appreciate the nuanced relationship between regulation and innovation, consider the curious development following the Senate’s 99-1 vote to let states develop their own regulatory frameworks. Many giants of Big Tech—Amazon, Google, and Microsoft—are now pushing for federal regulations; some are open to a mix of federal and state regulations, while others prefer that federal regulations preempt any state ones that might emerge. This might strike the casual observer as a bit odd, since the presumptive AI president is offering a green light and yet the companies are asking for an amber signal. Clearly, Big Tech knows a lot about the technology. Is there something about AI that gives the lie to the claim that Trump “really gets it”?
Of course, the tech companies prefer not to have to deal with multiple state-driven regulations that would add to their compliance costs and time, but there is more to this plea for federal regulation from the AI producers. Contrary to the belief that regulation only serves to hobble the revolutionary advances of new technologies, history has shown that public intervention is essential to private innovation. In fact, this is the moment in AI’s development trajectory where government guardrails are indispensable, and here are some of the reasons why.
Easing adoption: Regulation serves many objectives, but one of its central benefits would be a common template for AI developers to demonstrate that they have addressed the numerous concerns that have led to rising user distrust in AI. According to a YouGov survey, a majority of Americans, regardless of political affiliation, are concerned about the technology—especially about misinformation, privacy, bias, and other ethical issues—and as a result, they would prefer some degree of regulation. Distrust delays adoption, which in turn diminishes the return on the stratospheric investments that AI developers are making. Indeed, employee distrust is holding back AI implementation across the U.S. workforce; even Keith Sonderling, Trump’s own deputy labor secretary, acknowledged as much at a recent Business Roundtable event.
Lending clarity and standards: Beyond enforcing trust-building attributes, regulations can establish common requirements for data quality, model documentation, and risk assessment. Such requirements—while seeming prosaic and bureaucratic—bring a degree of clarity that benefits all participants in a rapidly emerging industry. Standardized disclosures reduce compliance costs; known rules of play give competitors the incentives to strategize while investing in the minimum requirements for safe and responsible AI development; and guardrails that avoid the most serious of risks can create a safer environment for experimentation, which is so essential for innovation. Standards also increase interoperability, which in earlier waves of digital technologies has vastly expanded the opportunities for innovation and new applications.
In general, in the absence of clear guidelines, well-understood governance and accountability frameworks, and known rules of play, companies have few benchmarks for how many resources to commit to different aspects of AI development. Consequently, they run the risk of both under- and over-investing. Without a standardized approach, it is hard to report on the returns on investments, or on the risks, by reference to commonly accepted metrics—which, in turn, leads to inefficiencies in capital allocation.
Catalyzing competitive innovation: Well-designed regulation can ensure fair and legal access to critical resources, such as data, compute, and infrastructure. Uniformly applicable rules help competitors of all sizes participate without having to rely on extensive legal and administrative expenditures. A nationwide standard or core framework is preferable to multiple regulations developed in different states; the latter, in turn, is preferable to no regulations at all, and overlapping regimes ought to be harmonized wherever feasible. Uniform rules create opportunities for smaller players to enter and compete alongside large incumbents. Establishing data intellectual property rights and anti-monopoly regulations not only creates a more level and innovative playing field but also enhances a technology that needs diverse perspectives and a breadth of applications from a breadth of providers.
Establishing global competitiveness: Beyond U.S. borders, American AI models developed without guardrails could end up at a competitive disadvantage if they develop a reputation for being insufficiently attentive to trust-building features. Moreover, regulators in jurisdictions such as the European Union—as well as international companies, users, and investors—may require additional assurances that AI risks have been addressed before they pay for the products and use them. Having regulations preemptively in place helps American companies align with minimum standards and not have to guess about a backlash down the road or a new, harsher set of regulations established later because of a crisis.
The choice, ultimately, is not between regulation and innovation, but between smart regulation that facilitates beneficial innovation, competition, and productive adoption and regulatory failures that undermine all of these. Far from being an enemy of innovation, regulation is an essential partner.
History offers numerous examples of the consequences of missing this connection, from drugs and toys to electronics, baby food, and cosmetics: in each of these categories, products released without adequate regulation have led to everything from public health crises to product recalls. Consider the 2008 financial crash, triggered by unregulated financial products that set off a cascade of systemic disasters. The Trump AI team would also benefit from studying the experiences of new tobacco products, such as electronic nicotine delivery systems and nicotine pouches, which were rolled out with minimal regulatory controls and caused a backlash in key markets such as Indonesia. And the failure to harmonize regulations across major jurisdictions such as the United States and Europe contributed to a loss of public confidence in genetically modified organisms.
Trump clearly gets the “move fast and break things” mantra of tech, as he seems to have applied it to all other aspects of his governance. But that mantra has not aged well; it was the wrong one for earlier waves of tech innovation and is certainly wrong for AI.
This post is part of FP’s ongoing coverage of the Trump administration.