The Liberty Loft
1 Jun 2023
Bob Unruh


Judge blasts filing – after being doctored by AI – for having 'bogus' information!

A judge has blasted a recent filing in federal court in Manhattan after discovering it was doctored by artificial intelligence and contained “bogus” information.

As in “made-up” cases and citations.

According to the New York Post, the controversy is just the latest to involve AI, which has made abrupt tech advances in recent months, to the point that experts are warning its development needs to be paused for now.

The Post reported it was a lawyer from a “respected Tribeca firm” who recently conceded that his filing “was written with the help of an artificial intelligence chatbot on his behalf.”

Steven Schwartz, an attorney with Levidow, Levidow & Oberman, admitted he asked ChatGPT to find cases relevant to his own case, “only for the bot to fabricate them entirely,” the report said.

The dispute was over a case filed by Schwartz’s partner, Paul LoDuca, against the airline Avianca on behalf of Robert Mata, who claimed he was injured by a metal serving cart.

The airline asked the court to toss the action, and Schwartz “filed a brief that supposedly cited more than a half dozen relevant cases,” the report said.

But those cases, including Miller v. United Airlines, Petersen v. Iran Air and Varghese v. China Southern Airlines, were fabricated by ChatGPT, the report said.

Judge P. Kevin Castel admitted this situation created an “unprecedented circumstance.”

“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” Castel wrote. He set a June 8 hearing to consider how to handle it.

Schwartz has filed an affidavit saying his “source” has proven to be “unreliable,” and explained he was “unaware” that ChatGPT could return lies.

A report from Denver’s Channel 7 said that because of the New York fiasco, a Texas judge, Brantley Starr, ruled that any lawyer presenting a case in federal court for the Northern District of Texas “must confirm that no part of their filing was drafted by generative AI.”

If generative AI was used, the lawyer must affirm the material was fact-checked by a person.

His order said, “These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath.”

This article was originally published by the WND News Center.
