


Amid allegations from both sides of the political aisle that “misinformation” by opponents may alter the results of this fall’s presidential election, Hillary Clinton recently called for the repeal of Section 230 of the Communications Decency Act.
Section 230 is the liability shield that places legal responsibility for third-party posts on the authors of the content rather than on the internet platforms and websites that host it. Signed into law by President Bill Clinton in 1996, the provision has enabled the enormous growth of large social media companies such as Facebook, YouTube, and X. Section 230’s protections have also kept user reviews and comment sections on websites of all sizes from becoming prohibitively expensive legal risks for their hosts.
But that egalitarian explosion of speech online has also produced so-called misinformation, leaving some worried about its effect on election results.
Yet what is and is not misinformation remains hard to define. Dictionary.com named “misinformation” its word of the year in 2018, defining it as “false information that is spread, regardless of whether there is intent to mislead.” That leaves little clarity for sorting through minority opinions, parody, conspiracy theories, and unsettled science.
The term, alongside “fake news,” gained prominence in the online context amid allegations of attempted Russian interference on Facebook in the 2016 presidential election. Government guidance and actions to influence social media content moderation during the coronavirus pandemic fanned the flames over what qualifies as misinformation, as did some people’s concerns over election integrity in 2020.
Social media companies have approached the challenge in different ways. Meta spent millions of dollars forming its Oversight Board to help review and establish content moderation policies. X, under the ownership of Elon Musk, introduced Community Notes. Almost all platforms have terms of service outlining what content will and will not be allowed, but the application of those terms sometimes proves controversial.
In politics, what constitutes “misinformation” may be partially in the eye of the accuser. Recently, the term was invoked to criticize allegations by former President Donald Trump that Federal Emergency Management Agency funds were diverted from Hurricane Helene victims in North Carolina to pay for services for illegal immigrants in other parts of the country. Sen. J.D. Vance (R-OH), the Republican vice presidential candidate, echoed the former president’s claim, among others critical of the current administration’s response to the hurricane, in an op-ed he penned for the Wall Street Journal.
President Joe Biden addressed the press and the nation to reject those claims, calling them “disinformation” and “un-American.” He also condemned Rep. Marjorie Taylor Greene’s (R-GA) post on X in which she wrote, “Yes they can control the weather,” apparently in reference to hurricane damage affecting the presidential election.
Sen. Tom Cotton (R-AR) took to Fox News to respond to the FEMA funding pushback, saying, “Democrats accuse something of being ‘misinformation’ if it reflects poorly on Democrats,” carrying the debate over the term onto the more traditional medium of cable news.
The latest frontier in defining misinformation involves artificial intelligence in electioneering. Beyond the debate over whether Section 230 applies to AI-generated content carried online, there is growing concern and political tussling over “deepfakes” and their impact on voters. Legislation was introduced in both chambers of Congress this session, but nothing has passed into law so far. First Amendment advocates warn against government regulatory overreach in this evolving area, arguing it could infringe on free speech.
This summer, the Federal Election Commission declined to regulate AI in political ads this election cycle, citing the need for Congress to pass a law giving the agency clear authority to act. More than half of the states, including California, are not waiting on federal action and have either passed or introduced their own measures to regulate AI in election activity.
Amid the misinformation debate, leading social media platforms sit at the heart of any solution. Richard Gingras, Google’s vice president of news, articulated the question in a recent blog post: “How do we address the key question, paradox that it is: how to manage free expression in our modern digital age?” He continued, “It is up to us, and our societies, to find the answers — whether in our laws, in our principles, or in our own thoughtful behavior.”