The judge scolded the lawyers for doubling down on their fake citations.
So they used ChatGPT to do their work, didn’t validate it, cited made-up cases to support their arguments, and when they got caught, they lied — and all they got was a $5K fine? Wtf?
Depends on the case. They should get jailed for that.
It should be several thousand per false citation, and disbarment for any repeat offense.
which I falsely assumed was, like, a super search engine
A “super search engine” is still a search engine. If you’re incapable of validating the results, or you don’t know that you should, you shouldn’t be a lawyer at all.
Court documents are at https://www.courtlistener.com/docket/63107798/mata-v-avianca-inc/
The transcript of the hearing where the judge grilled the lawyers won’t be available to the public for another 2 weeks.
I feel like the lawyers are getting off really easy, considering.
They just have to pay $5k each and notify their client, plus every judge falsely “cited” as an author of the made-up cases, that they did an oopsy.
Oh and they lost the case, but it seems like that was foreshadowed long before the lawyers decided that ChatGPT was a court docket search engine.
The LegalEagle breakdown was thoroughly entertaining.
Link for those curious. Agreed, the breakdown does far more ‘justice’ to this story.
Everyone wonders why ChatGPT is so highly censored; this is a good example of why. However, maybe instead of “As an AI language model” it should say something like, “Large language models like me tend to hallucinate, making things up and confidently conveying them in my responses. I will leave it up to you to validate what I say.” The ultimate problem is that the general public treats LLMs like they’re super sci-fi AI, when they’re basically fantastic autocomplete.
Even if you thought it was just a search engine, it’s hard to imagine citing a case without independently validating it first.
Here’s the thing: even if you had zero intention of actually reading a case, there are STILL next steps once you get a cite. There is an entire “skill” you’re taught in law school called Shepardizing (based on an older set of books that helped with this task) where you have to see if your case has been treated as binding precedent, had distinctions drawn to limit its applicability, or was maybe even overturned. Back when I was learning, the online citators would put up handy-dandy green, yellow, and red icons next to a case, and even the laziest law student would at least make sure everything was green in a Shepardizing quiz before moving on without looking deeper. And even THAT was just for a 1-credit legal research class.
These guys were lazy, cheap (they used “Fast Case” initially when they thought they had a chance in state court; it’s a third-rate database that you get for free from your state bar and is indeed often limited to state law), and stupid. They didn’t even commit malpractice with due diligence. I can only assume that they were “playing out the string” and extracting money from their client until the Federal case was dismissed with prejudice, but they played stupid games and won stupid prizes.
Paywall.
Here’s the article on archive.is: https://archive.is/aQhso
I wonder if something like this might get overlooked one of these days.
💀
What makes you think it isn’t already?
I mean I think the above article is a pretty good indicator…
Should be more
Surprised it was only $5,000
Here lies the problem: ChatGPT is not a search engine. Instead, you can think of it as a compressed JPEG of the Internet (credit to Ted Chiang). It can get you things that LOOK right if you squint a bit, but you just can’t be sure they aren’t just some random compression artifacts.
The problem is that OpenAI is hyping ChatGPT up as something that it is not.
Hallucination is also why I don’t use AI to write code for me. I either have to check for hallucinations and fix them, or accept wrong results.
I won’t let ChatGPT write 100 lines of code, but GitHub Copilot is occasionally somewhat good.
Again?!
They need to stop doing that.