New York Post
7 Apr 2023


ChatGPT smeared me with false sexual harassment charges: law professor

A law professor is accusing OpenAI’s suddenly omnipresent ChatGPT bot of ushering in an age of disinformation.

Criminal defense attorney Jonathan Turley renewed growing fears over AI’s potential dangers after revealing how ChatGPT falsely accused him of sexually harassing a student.


He described the alarming claim in a viral tweetstorm and a scathing column currently blowing up online.

“I have been writing about the threat of AI to free speech,” Turley, who teaches law at George Washington University, posted to his nearly half-million followers. “Then recently I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me.”

The 61-year-old legal scholar had become aware of the AI’s false allegation after receiving an email from UCLA professor Eugene Volokh.

Volokh had reportedly asked ChatGPT to cite “five examples” of “sexual harassment” by professors at American law schools along with “quotes from relevant newspaper articles,” per the account, which was also posted to Turley’s website.


Among the examples were an alleged 2018 incident in which “Georgetown University Law Center” professor Turley was accused of sexual harassment by a former female student.

ChatGPT quoted an alleged Washington Post article, writing: “The complaint alleges that Turley made ‘sexually suggestive comments’ and ‘attempted to touch her in a sexual manner’ during a law school-sponsored trip to Alaska.”


Suffice it to say, Turley found a “number of glaring indicators that the account is false.”

“First, I have never taught at Georgetown University,” the aghast lawyer declared. “Second, there is no such Washington Post article.”

He added, “Finally, and most important, I have never taken students on a trip of any kind in 35 years of teaching, never went to Alaska with any student and I’ve never been accused of sexual harassment or assault.”

The Post has reached out to both Turley and OpenAI for further comment about the disturbing claims.


“Yesterday, President Joe Biden declared that ‘it remains to be seen’ whether Artificial Intelligence (AI) is ‘dangerous.’ I would beg to differ,” Turley tweeted on Thursday as word spread of his claims, adding: “You can be defamed by AI and these companies merely shrug that they try to be accurate. In the meantime, their false accounts metastasize across the Internet.”

Jonathan Turley, professor at the George Washington University Law Center, during a House Select Subcommittee on the Weaponization of the Federal Government hearing in Washington, DC, US, on Thursday, Feb. 9, 2023.
Bloomberg via Getty Images

Eugene Volokh.

Turley, a 61-year-old legal scholar, became aware of the AI’s false allegation after receiving an email from UCLA professor Eugene Volokh, pictured above.
Los Angeles Times via Getty Images

Meanwhile, ChatGPT wasn’t the only bot involved in defaming Turley.

This baseless claim was reportedly repeated by Microsoft’s Bing Chatbot — which is powered by the same GPT-4 tech as its OpenAI brethren — per a Washington Post investigation that vindicated the attorney.

It’s yet unclear why ChatGPT would smear Turley, but he believes that “AI algorithms are no less biased and flawed than the people who program them.”

In January, ChatGPT — the latest iteration of which is apparently more “human” than previous ones — came under fire for providing answers seemingly indicative of a “woke” ideological bias.

A stock image of ChatGPT's logo.

“Recently I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught,” wrote George Washington University law professor Jonathan Turley.
AFP via Getty Images


For instance, some users noted that the chatbot would happily joke about men, but deemed wisecracks about women “derogatory or demeaning.”

By a similar token, the bot was reportedly hunky-dory with jokes about Jesus, while making fun of Allah was verboten.

In some instances, the so-called Defamator has told outright lies on purpose.

Last month, GPT-4 tricked a human into thinking it was blind in order to cheat the online CAPTCHA test that determines if users are human.


Unlike people, who are hardly strangers to spreading misinformation, ChatGPT can spread fake news with impunity behind a false veneer of “objectivity,” Turley argues.

ChatGPT logo.

ChatGPT was previously accused of exhibiting a “woke” bias.
REUTERS

This is perhaps particularly problematic given that ChatGPT is being used in every sector from health to academia and even the courtroom.

Last month, a judge in India set the legal world alight after asking the tech if a murder and assault trial defendant should be let out on bail.