THE AMERICA ONE NEWS
Aug 12, 2025


Trump’s AI Action Plan Is at War With Itself


U.S. President Donald Trump’s “America’s AI Action Plan” breaks from AI policies of the past. The goal is AI acceleration by any means possible to beat America’s nemesis: China. The White House claims that the plan was immediately hailed across the technology industry. If so, the industry needs to take a closer look. Considered in the context of the Trump administration’s broader policy mix, the AI Action Plan represents a strategy at war with itself. The plan’s ends are unclear. With deregulation, tariffs, and immigration restrictions further undermining the means to those unclear ends, the plan promises to force on the world a bundled American AI stack—a bundle developed guardrails-free and politicized in a way that, oddly, mimics China’s state-driven controls.

To add to the noise, Trump last week announced a staggering 100 percent tariff on a critical component of AI—semiconductors—unless the companies producing them are “building in the U.S.,” but he hasn’t clarified the extent of “building” required to escape the levy, or whether it is the semiconductor or the end product containing it that will be taxed. Even the plan’s laudable features, like accelerating AI innovation and infrastructure and promoting open models, are systematically undermined by other features.

Consider, first, the plan’s stated intention. Trump wants to ensure that children aren’t condemned to a planet controlled by “algorithms of the adversaries.” As our experience with social media has demonstrated, American children need not fear the algorithms of adversaries; there is no dearth of homegrown algorithm-makers in the United States capable of contributing to body image crises, cyberbullying, online predation, misinformation, and an epidemic of teenage depression and anxiety. Unlike a race to complete the world’s tallest building, or more complex rationales such as mutually assured destruction in a nuclear standoff, the goals of an “AI race” are unclear. AI has applications in national security, no doubt, which provides a solid rationale for national competitiveness, but its widest and most profitable uses will be in numerous other areas that benefit from varied innovations, talent, and datasets crossing borders.

In order to “lead the world” in AI, the plan assures freedom from federal regulations. To ensure that American AI developers are also freed from pesky state-level restraints, AI-related federal funding would be steered away from states whose AI regulations are deemed burdensome.

Setting up regulation and innovation as antithetical presents a false choice. Regulations steer AI developers to act on the concerns—misinformation, privacy, ethics, and bias, among others—that breed distrust and delay the adoption of AI. Common standards lower compliance costs and create incentives for developers to invest in minimum safeguards. Standards also expand the opportunities for multiple actors to innovate in parallel and reach new markets. Regulations can likewise hold industry players to common requirements for data quality, model documentation, risk assessment, and disclosure. The real innovation killers, however, lurk within Trump’s policies beyond his AI Action Plan.


Trump’s chaotic tariff regime hits AI start-ups disproportionately. Start-ups rely on specialized hardware imports and lack the cash buffers of larger companies to absorb higher costs. Rising costs and uncertainty make venture capital more cautious in its investments, while tightening restrictions on foreign students and attacks on immigration choke the AI talent pipeline—77 percent of the top U.S. AI companies have been founded or co-founded by first- or second-generation immigrants.

A key enabler of AI advancement is expanding AI infrastructure in the United States, a critical pillar of the plan. But with new tariffs that might be levied in parallel on Mexican, Canadian, and Chinese goods once the current reprieves expire, construction costs are all but certain to rise. The materials needed to build data centers, including cooling systems, transformers, backup generators, steel, aluminum, and fiber-optic cables, would all be hit, potentially driving U.S. data center construction costs up by 15 to 20 percent and pushing companies to seek infrastructure in cheaper locations abroad.

As for critical AI components, consider Nvidia’s graphics processing units (GPUs). Fabricated in Taiwan with components from South Korea, and with raw materials and packaging materials from Taiwan and China—all likely to be hit by at least 20 percent tariffs—the GPUs will now cost more.

On the plus side, the plan does promise to elevate data centers to a status reserved for infrastructure with national security significance. This qualifies them for expedited permitting and streamlined environmental reviews. Federal lands could be opened up for their construction. And the national security classification also gives the Department of Defense priority access to computing resources during national emergencies, allowing it to deploy cutting-edge AI in critical times. Still, while well intended, these measures carry many risks. For example, the use of federal lands for oil and natural gas exploration and the declarations of emergencies on the U.S.-Mexico border have been controversial. The United States already struggles to balance the environmental costs of its expanding digital economy.

As for the AI product itself, the plan offers a novel proposal—to secure American AI as the global standard by exporting complete U.S.-made AI technology stacks. The government will invite company consortia (e.g., Nvidia, AMD, Oracle, OpenAI) to assemble a package bundling hardware, data systems, AI models, cybersecurity measures, and sector-specific applications, and support it with federal financing tools.

Bundling carries many disadvantages. Buyers might prefer customizing different elements of the stack, possibly also incorporating local data that U.S. providers may lack. In general, buyers resist being locked in—unless the pricing is sufficiently attractive and they trust the bundler.

This trust is, however, at risk. Some buyers would be concerned about AI developed without guardrails and controlled by the largest U.S. tech companies operating without oversight and controls. And there will be worries about the AI containing biases. Federal agencies intend to only procure frontier AI models that are certified to be “truth-seeking” and “ideologically neutral,” specifically steering away from “ideological dogmas such as DEI.” In promoting ideologically unbiased AI, the administration is encouraging bias against AI or data it deems too “woke.”

Meanwhile, the relaxation of environmental restrictions could alienate the U.S. in international fora; hinder U.S. companies’ participation in green-energy alliances; and put AI suppliers at a disadvantage in markets where environmental responsibility is considered critical for procurement.

While these moves in a global marketplace are meant to “win” a race against China, the self-contradictory mix of U.S. policies may already be stimulating Chinese AI. For one, AI pursuits in China are less about abstract goals, such as artificial general intelligence, and more practically oriented toward economic and industrial applications. This helps make Chinese products appealing to users looking for technology that solves immediate problems. The launch of DeepSeek-R1 earlier this year has already shown that the Chinese model of doing more with less can work; Chinese companies have been adept at circumventing U.S. export controls through software and modeling innovations when cutting-edge components are unavailable. For these reasons, China has been able to market its AI tools to countries around the world as more affordable and accessible alternatives to the American product.

This is not to suggest that U.S. technology isn’t valuable. In fact, even though Chinese firms may prefer American components such as Nvidia chips to their homegrown ones, China has leverage to create carve-outs in U.S. export controls. China dominates rare-earth elements, including gallium and germanium, essential for semiconductor production; it has already used this leverage to get Nvidia’s CEO, Jensen Huang, to convince the Trump administration to ease export restrictions to China for Nvidia’s H20 AI chips.

The self-contradictory nature of the AI Action Plan is so pervasive that it undermines even its most positive feature: the drive to promote adoption of open models. This is intended to help smaller AI developers, users, and researchers by democratizing access to cutting-edge AI tools. Open models generally come with risks, from security issues to a potential lack of controls and governance systems. In a guardrails-free environment, these risks would be magnified. Moreover, an open-model approach to AI seems antithetical to the intent to foist a bundled American AI stack on the world.


The plan’s contradictions should prompt organizations to consider their own action plans to safeguard their interests.

For the American AI stack to be successful in a global marketplace, it must outdo the alternatives not only in terms of performance but also in offering trust-building attributes and competitive pricing. Given the trade uncertainties, AI developers should manage their cost structure aggressively by evaluating the returns on data center investments under alternative tariff regimes and deferring non-critical commitments in the highest-risk scenarios.

AI developers and adopters can experiment with model compression and sparsity techniques to lower compute needs and reduce reliance on premium GPUs, bringing costs down overall. But there is much more they must do to fill the policy vacuum. AI developers should consider voluntarily adopting trust-building frameworks, such as the Business Roundtable’s Responsible AI Roadmap or the NIST AI Risk Management Framework, or collaboratively committing to safer AI by commissioning third-party red-team tests. AI adopters should insist on such measures to collectively set industry standards.
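To see why sparsity cuts costs, consider the simplest form of the idea: magnitude pruning, which zeroes out the smallest weights in a model so inference can skip them. The sketch below is illustrative plain Python under that assumption, not any particular framework’s API.

```python
def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude entries of a flat weight list.

    `sparsity` is the fraction of weights to drop. Small weights
    contribute little to a model's output, so removing them reduces
    the compute and memory needed at inference time.
    """
    k = int(len(weights) * sparsity)  # how many weights to remove
    if k == 0:
        return list(weights)
    # Magnitude cutoff: the k-th smallest absolute value.
    cutoff = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= cutoff else w for w in weights]

# Prune half the weights of a toy layer.
pruned = magnitude_prune([0.9, -0.05, 0.4, 0.01, -0.7, 0.02], sparsity=0.5)
nonzero = sum(1 for w in pruned if w != 0.0)  # only 3 weights survive
```

Real systems apply the same principle at scale (often alongside quantization), storing and multiplying only the surviving weights so that cheaper hardware can serve the model.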

Parties across the AI ecosystem should continue to engage with U.S. policymakers to ensure they understand that AI advancement depends on the broader policy mix, ranging from trade to immigration, and not on the AI Action Plan alone. Organizations should develop strategies to keep the global talent pipeline flowing, even considering international locations that are talent hotspots. AI developers, adopters, and policymakers should engage with stakeholders in key non-U.S. regions and countries to set convergent international AI norms. In the absence of convergence, they should prepare to comply with multiple regulations, from the U.S. “bias-free” specifications to the EU-mandated risk tiers.

Finally, all organizations should embrace the opportunity of the open models promoted by the plan to realize cost savings, innovation, and transparency. Transparency helps with auditing AI outputs and supports compliance, especially in heavily regulated industries. However, organizations must educate themselves about the accompanying security risks and take steps to manage them.

AI developers and adopters need their own action plans to make good use of the opportunities in America’s AI Action Plan while hedging against the wider risks. Adaptive AI systems, resilient supply chains, and self-imposed guardrails provide the best way to capitalize on the plan’s upside without being blindsided by its many contradictions. The AI Action Plan is a policy unicorn in that it is not just a break from the past: with Trump talking big on American AI while torpedoing it, the plan even manages to break with itself.

This post is part of FP’s ongoing coverage of the Trump administration. Follow along here.