


The week of September 23, 2024: Tariffs, the debt, federal mandates, and much, much more.
The first sentence of an X post by GZero Media was a quote from Brad Smith, Microsoft’s vice chairman and president:
“Good AI guardrails will ensure the tech is used safely, securely, and in a way that reflects the values we want society to embody.”
Who are “we”? Who defines “society”? What are the “values” that the mysterious “we” want to see embodied?
None of these questions were answered (or asked) in the minute or so of discussion shown (which was a little off topic anyway).
Scroll a little further down the GZero feed (it’s worth taking a look, not least as a window into the evolution of, loosely speaking, Davos progressivism) to find another post in which it is noted that “the UN just launched the first truly global approach to AI governance,” words that should chill anyone who knows anything about innovation or, for that matter, the U.N.
The author of the post continues:
At the #UNGA79 Summit of the Future, [Ian Bremmer] asks [Carme Artigas], the co-chair of the UN’s high-level advisory panel on AI, why is the UN uniquely positioned to lead on this?
Ian Bremmer is the president and founder of Eurasia Group, a leading political risk consultancy (Eurasia owns GZero), and one of the members of this “high-level advisory panel.”
And Artigas? Turn to Wikipedia to discover that, among other activities, she:
[S]erved as Spain’s Secretary of State for Digitalization and Artificial Intelligence, as well as president of the National Cybersecurity Institute and the Spanish Agency for the Supervision of Artificial Intelligence. Artigas led and coordinated the negotiations for the approval of an Artificial Intelligence Act to regulate AI at European level.
It wasn’t hard to guess where this was going. Sure enough, Artigas talked about the U.N.’s legitimacy (no comment) and its record in such areas as (checks notes) climate policy, which apparently makes it well-placed to consider potential harms from AI such as a lack of “inclusiveness.”
An article written by economist John Cochrane is a good antidote to any gloom generated by reading about the U.N.’s efforts in this area. Cochrane’s paper, entitled “AI, Society, and Democracy: Just Relax,” is a calm counterpoint to the regulators, wannabe regulators, “stakeholders,” rent-seekers, and hysterics drawn to AI, to borrow Orwell’s memorable phrase, like “bluebottles to a dead cat.” Their motives vary but include greed (consultants, lawyers, and the like), quasi-superstitious dread (Frankenstein and all that), and the pursuit of power, with the latter, as so often, dressed up in collectivist clothing.
Cochrane’s article is too long for me to do it justice, even in the Capital Letter, which is not known for its brevity. So, I’ll just consider a few of the topics he raises and encourage those interested in this subject to read the whole thing.
As Cochrane’s title would suggest, his basic theme is that people should calm down, a theme emphasized in some early lines:
“AI poses a threat to democracy and society. It must be extensively regulated.” Or words to that effect, are a common sentiment. They must be kidding.
Have the chattering classes—us—speculating about the impact of new technology on economics, society, and politics, ever correctly envisioned the outcome? Over the centuries of innovation, from moveable type to Twitter (now X), from the steam engine to the airliner, from the farm to the factory to the office tower, from agriculture to manufacturing to services, from leeches and bleeding to cancer cures and birth control, from abacus to calculator to word processor to mainframe to internet to social media, nobody has ever foreseen the outcome, and especially the social and political consequences of new technology….
Sure, in each case one can go back and find a few Cassandras who made a correct prediction—but then they got the next one wrong. Before anyone regulates anything, we need a scientifically valid and broad-based consensus.
Yes, but there should also be presumptions (1) that innovations generally do not require custom regulation, and (2) that those instances where such regulation is required will be better identified through experience with the innovation than through guesswork ahead of time.
Cochrane provides some good examples of how regulation works infinitely better as a result of experience rather than prediction:
Most regulation takes place as we gain experience with a technology and its side effects. Many new technologies, from industrial looms to automobiles to airplanes to nuclear power, have had dangerous side effects. They were addressed as they came out, and judging costs vs. benefits. There has always been time to learn, to improve, to mitigate, to correct, and where necessary to regulate, once a concrete understanding of the problems has emerged. Would a preemptive “safety” regulator looking at airplanes in 1910 have been able to produce that long experience-based improvement, writing the rule book governing the Boeing 737, without killing air travel in the process?
Cochrane accepts that new technologies often have turbulent effects, dangers, and social or political implications, a catch-all list that extends beyond immediate physical danger. But can we realistically hope to identify them with sufficient accuracy to avoid over-regulation? “If our regulators,” he writes, “had considered Watt’s steam engine or Benz’s automobile (about where we are with AI) to pass ‘effect on society and democracy’ rules, we would still be riding horses and hand-plowing fields.”
He asks:
Is there a single example of a society that saw a new developing technology, understood ahead of time its economic effects, to say nothing of social and political effects, “regulated” its use constructively, prevented those ill effects from breaking out, but did not lose the benefits of the new technology?
And:
There are plenty of counterexamples—societies that, in excessive fear of such effects of new technologies, banned or delayed them, at great cost. The Chinese Treasure fleet is a classic story. In the 1400s, China had a new technology: fleets of ships, far larger than anything Europeans would have for centuries, traveling as far as Africa. Then, the emperors, foreseeing social and political change, “threats to their power from merchants,” (what we might call steps toward democracy) “banned oceangoing voyages in 1430.” The Europeans moved in.
That example also highlights another issue. China’s retreat inward probably served the ruling dynasty (Ming) and its successor (Qing) very well until the state’s backwardness left it at the mercy of predators, including the European states who (in part) had been given a greater opportunity to advance by China’s withdrawal. For ordinary Chinese, however, the turn away from the outside world served them poorly. Who defines “acceptable” risk matters a great deal.
The agricultural innovations now described as the green revolution would, one would think, universally be regarded as a good. But no. When I wrote about Degrowth last year, I discovered that some environmentalists were not so sure:
Its entry in Wikipedia now includes this: “Recent research demonstrates that the green revolution is a major contributor to exceeded planetary boundaries, including increased greenhouse-gas emissions.”
“Should,” I asked, “the starving have done the decent thing and died?”
Those lines have disappeared from Wikipedia (for now), but we can be sure that the thought remains. The current entry includes this:
[C]oncerns often revolve around the idea that the Green Revolution is unsustainable, and argue that humanity is now in a state of overpopulation or overshoot with regards to the sustainable carrying capacity and ecological demands on the Earth.
Then again:
A 2021 study found, contrary to the expectations of the Malthusian hypothesis, that the Green Revolution led to reduced population growth, rather than an increase in population growth.
Predicting the social, economic, or political consequences of innovation is not easy, something AI’s regulators (or would-be regulators) would do well to remember, if, of course, they care to. Regulation is not, Cochrane reminds us, devised by the “far-sighted, dispassionate, and perfectly informed,” a fact that can also help explain regulation’s many failures, as can hubris. Bank regulators’ attempt to reduce banking risk in the late 1990s by introducing elaborate “risk-weighting” did a great deal to pave the way to the financial crisis of 2008.
Much of the focus of the current drive to regulate AI revolves around Large Language Models (LLMs), its most visible face. As Cochrane explains, they:
[A]re fundamentally a new technology for communication, for making one human being’s ideas discoverable and available to another. As such, they are the next step in a long line from clay tablets, papyrus, vellum, paper, libraries, moveable type, printing machines, pamphlets, newspapers, paperback books, radio, television, telephone, internet, search engines, social networks, and more. Each development occasioned worry that the new technology would spread “misinformation” and undermine society and government, and needed to be “regulated.”
Let’s not mess around with euphemism. As Cochrane observes, “regulating” communication “means censorship.” So, who are these censors to be, and what ends are they serving? And even if they are meant to be concentrating on a (supposedly) narrow area such as the elimination of misinformation, it should not take more than a second to realize that eliminating misinformation is, at its core, about determining what is or is not “true,” a very big question indeed.
Cochrane:
Our aspiring AI regulators are fresh off the scandals revealed in Murthy v. Missouri, in which the government used the threat of regulatory harassment to censor Facebook and X. Much of the “misinformation,” especially regarding COVID-19 policy, turned out to be right. It was precisely the kind of out-of-the-box thinking, reconsidering of the scientific evidence, speaking truth to power, that we want in a vibrant democracy and a functioning public health apparatus, though it challenged verities propounded by those in power and, in their minds, threatened social stability and democracy itself. Do we really think that more regulation of “misinformation” would have sped sensible COVID-19 policies?
No. In the course of a recent article for National Review on the disinformation panic, I wrote this:
Malinformation [a variety of mis/disinformation discussed in the article], the [British] Government Communication Service recounts, “can be challenging to contest because it is difficult to inject nuance into highly polarized debates.” If it’s too challenging, that’s a sign that the real objection may be to disagreement, not to disinformation. This could be counterproductive and, in an epidemic, lethal. Crowdsourcing ideas to take advantage of the collective intelligence available online makes sense. Insisting that there can be only one answer frequently does not.
In the same article, I touched on the way that the notion of dis- or misinformation can be abused, like this, for example:
But panic over disinformation (whatever its source) has been too useful to be allowed to let drop. A helpful complement to conveniently flexible “hate,” it has been a handy rationale for greater control over internet speech. It has accelerated the rise of “fact-checkers,” who all too often are propagandists and censors masquerading as guardians of objectivity. Their biases are insufficiently examined (not that they are hard to guess).
Regulation, writes Cochrane, “naturally bends to political ends. The Biden Executive Order on AI insists that ‘all workers need a seat at the table,’ including through collective bargaining and ‘AI development should be built on the views of workers, labor unions, educators, and employers.’”
There’s more where that came from. It may be a good thing, it may be a bad thing, but it is a partisan thing. The same can be said about the U.N.’s attempts to offer its guidance on the “governance” of AI. The idea that a largely unaccountable organization packed with authoritarians, rent-seekers, and social justice warriors should be allowed anywhere near the “governance” of, in the end, speech is appalling.
AI can, as Cochrane points out (citing the example of Google’s Gemini fiasco), “be tuned to favor one or the other political view”:
Do we really want a government agency imposing a single tuning, in a democracy in which the party you don’t support eventually might win an election? The answer is, as it always has been, competition. Knowing that AI can lie produces a demand for competition and certification. AI can detect misinformation, too. People want true information, and will demand technology that can certify if something is real.
What they are not asking for (or at least they should not be asking for) is someone to decide for them whom to consult on what is true and, at the same time, to ban or otherwise restrict the circulation of any alternative views.
“Regulation,” maintains Cochrane, “is, by definition, an act of the state, and thus used by those who control the state to limit what ideas people can hear. Aristocratic paternalism of ideas is the antithesis of democracy.” Indeed. And “aristocratic paternalism” is what is driving those stepping forward to act as the AI police. Naturally, they would deny this, claiming instead that they are acting in the interest of democracy.
For example, posting on X, Sigal Samuel, a senior reporter at Vox, complains “that OpenAI is building tech that aims to totally change the world without asking if we consent. It’s undemocratic.” “We,” again. Leaving aside the fact that we do not know whether OpenAI will “totally change the world,” the company is doing what it is doing under a system of laws passed by democratic legislatures.
What Samuel wants is bespoke treatment for certain, as yet undefined, categories of innovation that certain, as yet undefined, people have deemed significant enough, according to as yet undefined criteria (“totally change the world” is lacking in rigor), to warrant special permission before any next step may be considered. It is an idea as absurd as it is sinister as it is retrograde. It conjures up images of Thomas Edison suspending his work while he waits to receive the necessary permission to proceed.
Inevitably, Cochrane turns his attention to AI’s potential to advance economic and scientific progress. He also looks at what it could mean for jobs, an area where he is less concerned than I am about the danger of AI-created unemployment and its consequences (in the 2016 article I link to I was writing about automation more generally, but my worries are only reinforced by the prospect of AI).
Cochrane correctly highlights both the futility and the danger of trying to manage the problems that AI may bring by attempting to regulate known or unknown unknowns. The danger flowing from such attempts can be seen, in part, from comments by Daron Acemoglu, an MIT economist quoted by Cochrane, including this:
Getting the most out of creative destruction requires a proper balance between pro-innovation public policies and democratic input. If we leave it to tech entrepreneurs to safeguard our institutions, we risk more destruction than we bargained for.
Cochrane retorts:
Who is to determine “proper balance”? Balancing “pro-innovation public policies and democratic input” is Orwellianly autocratic. Our task was to save democracy, not to “balance” democracy against “public policies.” Is not the effect of most “public policy” precisely to slow down innovation in order to preserve the status quo? “We” not “leave[ing] it to tech entrepreneurs” means a radical appropriation of property rights and rule of law.
Cochrane is right to be concerned. The “aristocratic paternalists” (I can think of ruder terms) are no friends of free speech, and they are no friends of the unruly democracy that largely unfettered free speech (there will always be rules governing libel, certain types of pornography, incitement, and so on) produces. The revolution in information technology of recent years has bypassed the old establishment gatekeepers’ control over who gets to have their say in the public square. The gatekeepers are pushing back. One front in that pushback has revolved around “misinformation,” and another, clearly, will focus on the control of AI.
Cochrane recognizes that “AI is not perfectly safe.” In his view, “it will lead to radical changes, most for the better but not all.” He forecasts that it will “affect society and our political system, in complex, disruptive, and unforeseen ways.” He asks how we should adapt and puts his faith in competition:
The government must enforce rule of law, not the tyranny of the regulator. Trust democracy, not paternalistic aristocracy—rule by independent, unaccountable, self-styled technocrats, insulated from the democratic political process. Remain a government of rights, not of permissions. Trust and strengthen our institutions, including all of civil society, media, and academia, not just federal regulatory agencies, to detect and remedy problems as they occur.
That’s correct, with the proviso that trust should not equate to blind faith. It is also up to users to see how AI products or AI-enhanced products work for them, and to avoid those that do not.
The Capital Record
We released the latest of our series of podcasts, the Capital Record. Follow the link to see how to subscribe (it’s free!). The Capital Record, which appears weekly, makes use of another medium to deliver Capital Matters’ defense of free markets. Financier and National Review Institute trustee David L. Bahnsen hosts discussions on economics and finance in this National Review Capital Matters podcast, sponsored by the National Review Institute. Episodes feature interviews with the nation’s top business leaders, entrepreneurs, investment professionals, and financial commentators.
In the 189th episode, David is joined by Dr. Andrew Abela, dean of the Busch School of Business at Catholic University in Washington, D.C., and author of the brand-new book, “Superhabits: The Universal System for a Successful Life.” They discuss the theology of virtues, the relevance of the topic to human flourishing, and its special application in today’s business culture. It is a discussion to dive into, with faithfulness and hope.
The Capital Matters week that was . . .
Price controls
Kamala Harris promises us that when she’s president, the government will crack down on greedy businesses that practice “price-gouging.” Donald Trump tells voters that he’ll put a cap on interest rates charged by greedy credit-card companies on unpaid balances.
Let’s see, is there any experience in human history with governmental edicts intended to keep prices down?…
The Debt
The Treasury Department released its monthly statement for August, and alarm bells should be ringing in Washington. Federal finance is quickly spiraling out of control, with interest payments on the debt now exceeding $1 trillion each year and growing. Kicking the deficit-spending can down the road is not something we can do for long. Runaway government spending is already turning us into a nation of debt slaves.
Labor
Ports on the East Coast and Gulf Coast will come to a halt on October 1 if the International Longshoremen’s Association (ILA) follows through on its plans to go on strike. Negotiations for a new six-year labor contract have been on pause since June, when the ILA walked away from the table. The ILA represents roughly 47,000 dockworkers, nearly the entire longshore workforce for ports on the Atlantic and the Gulf of Mexico…
Federal Mandates
The automobile may be the most important machine at risk of being targeted with a misguided manufacturing mandate. The 2024 Harris presidential campaign, although not the candidate herself, denies that Harris supports a federal “EV mandate” requiring that all new vehicles purchased be electric. But the Biden-Harris administration has already imposed manufacturing standards that would require half of new sales to be electric by 2030, even though the current EV sales percentage remains below 10 percent. As a senator, she co-sponsored a “zero-emission” bill that would effectively require all new car sales in the year 2040 and beyond to be electric. Several blue states are imposing EV mandates, or banning the sale of new gas vehicles, to take full effect by 2035: California, Connecticut, Maine, Maryland, Massachusetts, Minnesota, New Jersey, Nevada, New Mexico, New York, Oregon, Rhode Island, Vermont, and Washington. Similar rules apply in the U.K. and all of the European Union, with 2035, again, being the year in which they take full effect. These are all reasons to be skeptical of Democrats running for federal office who claim to oppose EV mandates…
Electric Vehicles
I knew there was a reason why the great exercise in central planning that is our electric vehicle (EV) transition was running into trouble. As so often (Comrade Stalin has some stories), it is not that there was anything wrong with the great, brilliantly conceived plan, but that it has actively been sabotaged, in this case by thieves…
In the most recent Capital Letter, I wrote about growing signs of the disaster that may be facing European carmakers as a result of the coerced transition to electric vehicles (EVs).
Italy’s government has been sounding the alarm for some time. The country’s industry minister, Adolfo Urso, has now returned to this theme, describing the EU’s ban on sales of new traditional cars after 2035 as posing a “grave crisis” for its auto sector. He wants the ban (and, it can be assumed, the forced ratcheting down of conventional sales that precedes it) to be reviewed and revised…
Inequality
Before you even watch, the text in the post alone should prompt a question: When, exactly, did the top 1 percent not have the largest share of wealth? By definition, no matter how much wealth there is, the top 1 percent will have the most wealth…
My guess is that the video didn’t mention something that has done a lot more than any Milton Friedman essay to increase measured wealth inequality: the expansion of Social Security benefits. A recent paper showed “that top wealth shares have not changed much over the last three decades when Social Security is properly accounted for.”
Immigration
Overall, immigration has both positive and negative effects, something rarely acknowledged by advocates on either side. An intelligent approach would try to minimize the negatives, for example by keeping out immigrants with ties to radical anti-Western groups or who lack employable skills, while looking to attract newcomers with the right skills, work ethic, and entrepreneurial gifts…
Energy
For decades, “Three Mile Island” has been an argument against nuclear energy. That’s changing, thanks to a deal just struck between Microsoft and Constellation Energy to restart the operation of the Three Mile Island Unit 1 reactor. While the 1979 partial nuclear meltdown at the location still clouds public memory, the benefits of bringing the reactor back online — for local jobs, clean energy, and the economy — are sure to demonstrate that the biggest accident at Three Mile Island was our taking it offline…
Inflation
Reading through Harris’s policy proposal on “price gouging,” Michael Strain concludes that it is probably a bad idea but not nearly as terrible as it sounds. A key reason for this conclusion: “Her regulations would kick in only during emergencies.” Arguably, that’s when price signals are especially important. But let’s leave that aside, and instead take a look at how Harris is promoting the plan…
Net Zero
Former British prime minister Boris Johnson was not known for his attachment to the truth, but on one occasion, at least, he was, if accidentally, accurate.
Speaking in 2021, he claimed that Britain’s pursuit of net-zero greenhouse-gas emissions was “about growth and jobs.” He was right, but not in the way he meant. Net zero is “about” growth: It destroys it. And it is “about” jobs too: It destroys them…
Tariffs
Daniel Hannan, in a speech to the House of Lords, wants to know why British citizens are being taxed extra when they buy tomatoes from Morocco.
Aside from having been a member of the European Parliament before Brexit, which he championed, Hannan is the president of the Institute for Free Trade. For all of protectionists’ talk about “strategic” tariffs on “critical” goods, actual protectionist measures currently on the books often make no sense and don’t even protect anything…
I’ve been critical of Trump’s (and Biden’s) support for tariffs, so I was interested to read a defense of the former president’s trade plans that would challenge my view. Naturally, I turned to that reliable source of soundly reasoned Trump apologetics: the Atlantic.
There, Oren Cass of American Compass has written a piece funded by the left-wing Hewlett Foundation: “Trump’s Most Misunderstood Policy Proposal: Economists aren’t telling the whole truth about tariffs.”…
The Biden-Harris administration has fired the latest volley in the ongoing trade war with China, imposing restrictions on what is called the de minimis exemption on imports. The Latin expression essentially means “too trivial to worry about,” and the exemption means that goods worth under $800 imported from abroad don’t have to go through the customs procedures that more expensive goods do. While the administration has dressed up its order in high-minded terms, the change is blatantly protectionist. The people who bear the costs will not be the Chinese government or even Chinese companies but middle-class American consumers…
To sign up for The Capital Letter, please follow this link.