This article originally appeared on WND.com
Guest post by Bob Unruh
Cited ‘hallucinated cases’
There’s a warning here somewhere for lawyers, officials, perhaps doctors and certainly students: anyone who, as part of his or her day, submits written documents.
Don’t use ChatGPT.
That’s one conclusion after officials announced a 90-day suspension for a Colorado lawyer who submitted a motion to court that “cited case law that he found through the artificial intelligence platform, ChatGPT.”
The problem is that the lawyer, Zachariah C. Crabill, “did not read the cases he found … or otherwise attempt to verify that the citations were accurate.”
The software had given him case citations that either were “incorrect” or simply “fictitious.”
Legal columnist Eugene Volokh explained, “The presiding disciplinary judge approved the parties’ stipulation to discipline and suspended Zachariah C. Crabill (attorney registration number 56783) for one year and one day, with ninety days to be served and the remainder to be stayed upon Crabill’s successful completion of a two-year period of probation, with conditions.”
Crabill had been hired by a client to prepare a motion to set aside judgment in the client’s civil case. Crabill submitted the motion, after using ChatGPT, only to discover later that the cases were fake.
“But Crabill did not alert the court to the sham cases at the hearing. Nor did he withdraw the motion. When the judge expressed concerns about the accuracy of the cases, Crabill falsely attributed the mistakes to a legal intern. Six days after the hearing, Crabill filed an affidavit with the court, explaining that he used ChatGPT when he drafted the motion,” the report said.
That, the decision ruled, amounted to a violation of the legal requirement that a lawyer competently represent a client.
It isn’t the first time such a catastrophe has developed. WND reported a few months ago that a judge blasted a filing in federal court in Manhattan after discovering it was doctored by artificial intelligence and contained “bogus” information.
As in “made-up” cases and citations.
The New York Post reported at the time that the controversy was just the latest to involve AI, which has made rapid technological advances in recent months, to the point that experts are warning its development should be paused for now.
The Post reported it was a lawyer from a “respected Tribeca firm” who recently conceded his filing “was written with the help of an artificial intelligence chatbot on his behalf.”
It was Steven Schwartz, who is with Levidow, Levidow & Oberman, who admitted he asked ChatGPT to find cases relevant to his own case, “only for the bot to fabricate them entirely,” the report said.
The dispute was over a case filed by Schwartz’s partner, Paul LoDuca, against Avianca Airlines on behalf of Robert Mata, who claimed he was injured by a metal serving cart.
The airline asked the court to toss the action, and Schwartz “filed a brief that supposedly cited more than a half dozen relevant cases,” the report said.
But those cases, including Miller v. United Airlines, Petersen v. Iran Air and Varghese v. China Southern Airlines, were fabricated by ChatGPT, the report said.
Copyright 2023 WND News Center