


My last blog highlighted the difficulties web platforms face when censoring content that is posted on them and intended for circulation around the globe, across jurisdictions with vastly different and constantly changing laws and societal standards as to what counts as “speech”, that is, content acceptable to host and circulate.
The illustrative case, featuring a Cyprus-based betting firm, a celebrity New Zealander, and New Zealand’s gambling and internet content distribution laws, showed how Google had apparently violated the very rules it cited as justification for withdrawing the content. The censorship occurred because Google did not appear to read New Zealand law the same way as the government agency charged with implementing it.
This raises the question of whether there could be a simple set of internationally applicable principles for web platforms to use when deciding whether content should be removed.
In search of such principles, I turned to the work of two prominent free speech scholars: New York Law School’s Nadine Strossen (former President of the American Civil Liberties Union) and Danish lawyer and human rights advocate Jacob Mchangama (founder and director of the Copenhagen-based think tank Justitia). Strossen’s 2018 HATE: Why We Should Resist It With Free Speech, Not Censorship and Mchangama’s 2022 FREE SPEECH: A History from Socrates to Social Media both make a clear and cogent case for less, rather than more, online censorship.
Both take the fundamental human right of free speech, as expressed in Article 19 of the United Nations Universal Declaration of Human Rights, as the foundation of the debate. Article 19 states: “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.” Both argue that the best way to address hateful or unpleasant speech is with more speech (principled debate), through which all speakers and listeners can become better informed. If the fear is that a speaker will use a platform to amplify an unpleasant message, then drawing even more attention to that message by banning it is likely to be counterproductive.
Mchangama shows how, over time, regimes that have sought to shut down the expression of selected ideas, or speech by selected groups, have caused significant harm. Even when censorship has been imposed to protect vulnerable groups, it has in most cases ended up harming minorities and vulnerable populations. For example, laws used in Weimar Germany to prevent the distribution of Nazi propaganda were subsequently used by the Nazis, once in government, to censor not just Jewish speech but any message contrary to Nazi policy.
Imprisoning the propagandists for “hate speech” cast them as victims of a repressive government, giving them publicity in subsequent elections that not even money could buy. Such episodes led to stalwart efforts by Eleanor Roosevelt to resist exemptions enabling censorship of, or punishment for, “hate speech” when the UN Declaration was drafted. She rightly feared that such provisions “would only encourage Governments to punish all criticisms in the name of protection against religious or national hostility” and warned “not to include … any provision likely to be exploited by totalitarian States for the purpose of rendering the other articles null and void.”
Drawing on years of extensive research, Strossen argues that even though hateful messages may cause some harm, and thereby prompt calls for rules and laws tailored to specific types of speech, specifying precisely what can and cannot be shared (as the AI algorithms used to automatically censor or promote online content require) is defeated by the imprecision of language and the ease with which words can be wrenched from one context into another. Hence, university professors have been dismissed simply for quoting others’ words in class as examples, because those quotations were deemed in breach of university speech codes.
Importantly, after examining laws and cases from just about every country, Strossen finds that two U.S. principles appear to be the most resilient and effective. The viewpoint neutrality principle bars the government from regulating speech solely because the speech’s message, idea, or viewpoint is disfavored. The government may, however, regulate speech whose message inflicts independent harm, “such that there is no realistic possibility that official suppression of ideas is afoot” (hence enabling fraud, perjury, bribery, and pornography to be addressed).
Further, under the emergency test, speech can be suppressed or punished only when it “directly, demonstrably, and imminently causes certain specific, objectively ascertainable serious harms that cannot be averted by non-censorial measures”—notably counterspeech.
As media platforms increasingly come to resemble town squares, these principles seem like a sound foundation on which to build the content moderation debate.
This article originally appeared in the AEIdeas blog and is reprinted with kind permission from the American Enterprise Institute.