THE AMERICA ONE NEWS
Sep 26, 2025
Wendi Strauch Mahoney


The cost of ‘harm’: How a California bill could reshape online speech

With SB 771 now on Gov. Gavin Newsom’s desk, California is poised to enact a new approach to gatekeeping online speech.  It imposes significant penalties on large platforms for algorithmically amplifying content tied to civil rights violations — often labeled “hate speech.”  Critics warn that it will chill lawful speech; supporters call it overdue accountability.  If signed, the bill takes effect Jan. 1, 2027.

There are some misconceptions about the bill circulating online.  The bill does not directly hold individual users liable for their posts.  Instead, it targets the platforms — those with more than $100 million in annual revenue — when threats or civil rights violations are involved.  That said, the broader concern rings true: If enacted, the law will almost certainly pressure platforms to curb lawful speech.

What SB 771 Does

SB 771 makes over-removal the rational choice by tying steep fines to algorithmic promotion or “amplification.”  The bill’s ambiguity invites litigation, blurring the line between policing true threats and suppressing unpopular views.  Critics warn that it intrudes on constitutionally sensitive speech, turning political disagreements into legal threats.  In an environment where advocacy groups and politicians routinely label opposing views as “harm,” large platforms will likely be incentivized to down-rank dissent.

Opposition to the Bill

According to a policy analysis conducted by the California Legislature’s policy staff, the bill is “opposed by the California Chamber of Commerce, the Computer and Communications Industry Association, and TechNet, who raise concerns relating to free speech and federal preemption.”  These organizations sent a letter dated Sept. 16, urging Gov. Newsom to veto it.  The letter cuts to the core problem: the daunting task of accurately defining “harmful content.”

This bill’s implicit concern is harmful content. It is impossible for companies to identify and remove every potentially harmful piece of content because there’s no clear consensus on what exactly constitutes harmful content, apart from clearly illicit content.

Determining what is harmful is highly subjective and varies from person to person, making it impossible to make such judgments on behalf of millions of users. Faced with this impossible task and the liability imposed by this bill, some platforms may decide to aggressively over-restrict content that could be considered harmful.

The letter also raises First Amendment concerns:

It is well established that the companies covered by this legislation have constitutional rights related to content moderation, including the right to curate, prioritize, and remove content in accordance with their terms of service. By exposing these companies to civil liability for content they do not remove, SB 771 creates a chilling effect on their editorial discretion. The significant, prescribed civil penalties — potentially amounting to billions for each violation — would lead platforms to over-remove lawful content to mitigate legal exposure. Therefore, if this law passes, it will almost certainly be struck down in court (see NetChoice v. Paxton) because it imposes liability on social media platforms for whether certain types of third-party content are shown to users, as well as the expressive choices social media platforms make in designing the user experience. This violates the First Amendment rights of users and social media platforms.

Moreover, the proposed liability framework likely conflicts with Section 230 of the Communications Decency Act.

SCOTUS is already wrestling with state bids to control platform feeds, including Florida’s and Texas’s anti-“censorship” statutes.  The Court has signaled that forcing platforms to remove or carry content intrudes on First Amendment editorial discretion.  SB 771 tries to sidestep this by focusing on “amplification” rather than removal, but limits on distribution still burden speech.  Expect quick lawsuits and uneven compliance — i.e., broader throttling of lawful content while courts sort it out.

Supporters of SB 771

Proponents include the Children’s Advocacy Institute and its co-sponsors, as articulated in a June 16, 2025 letter to legislators.  Co-sponsors include the Consumer Federation of California, Jewish Family and Children’s Services of San Francisco, Rainbow Spaces, San Diego Democrats for Equality Executive Board, and LOMA LGBTQA+ Alumni and Allies.  They claim that “what is distributed on social media too often results [in] bloodshed, harassment, and intimidation.”  Notably, they also argued that hate speech on social media “rose by about 50% in the months” after Elon Musk purchased X (Twitter) in October 2022.

The kinds of “hate speech” these groups highlight are now permissible on Meta after its January policy shift — details they say were leaked to The Intercept.  Examples cited include “Trannies are a problem,” “Migrants are no better than vomit,” “Jews are flat-out greedier than Christians,” “Women as household objects or property,” and “Trans people are mentally ill.”  They contend that Meta’s changes don’t affect all users equally, and therefore moderation should err on the stricter side.

The coalition also cites an Amnesty International column warning that Meta’s recent policy moves “pose a grave threat to vulnerable communities globally” and could again contribute to “mass violence and gross human rights abuses,” referencing a whistleblower complaint to the SEC alleging that Meta played a significant role in atrocities against the Rohingya in Myanmar, in part through Instagram posts depicting the violence.

Alternative Solutions

The saner approach would target true threats and incitement directly.  California already criminalizes threats and provides civil remedies.  The Bane Act itself says speech alone is insufficient absent a true threat.  Require clear, objective notice-and-action processes before attaching penalties (e.g., a specific, adjudicated unlawful post tied to a real, identifiable victim).  Provide a solid good-faith safe harbor for platforms that follow those rules.

Additionally, separate algorithmic transparency from liability.  If lawmakers want visibility into how feeds work, mandate reporting and researcher access — not million-dollar fines that make “show less controversial speech” the only viable compliance strategy.

Perhaps adopt a “study and sunset” approach: Narrowly pilot the policy with tight definitions, third-party evaluation, and automatic expiration unless it demonstrates effectiveness and no measurable chill on lawful speech.

SB 771 appears to weaponize algorithmic uncertainty to pressure platforms into smothering lawful, contentious debate.  With seven-figure penalties and vague, elastic standards like “aiding,” the bill all but guarantees widespread de-amplification of exactly the speech a free society most needs to tolerate: sharp criticism, uncomfortable truths, and minority viewpoints.  California can protect people from genuine threats without deputizing Big Tech as the pre-crime speech police.

Image: Pezibear via Pixabay, Pixabay License.