Ever since its launch, ChatGPT has seen a steady rise in popularity thanks to how useful it can be; the chatbot has even replaced some jobs already. However, two lawyers being fined $5,000 for using it to cite cases shows that it can't do everything for you.
Lawyers Get Fined
Unaware that ChatGPT has a tendency to fabricate information, two lawyers from the firm Levidow, Levidow & Oberman used the chatbot to find cases to cite, which resulted in a federal judge throwing out their case.
As mentioned in Ars Technica, the two will collectively have to pay a $5,000 fine, as well as send letters to the real judges who were falsely identified as the authors of the fake citations generated by the chatbot. Lawyers Steven Schwartz and Peter LoDuca were both called out for their actions.
US District Judge Kevin Castel stated that the two abandoned their responsibilities by submitting fake judicial opinions along with made-up quotes and citations, adding that they continued to stand by the fabricated opinions even after being told the cases could not be found.
The district judge further described the cases cited by the chatbot as "gibberish," saying that their submission not only wasted the court's time and harmed the lawyers' client, but also risked damaging the reputations of the judges and courts named in the fake citations.
The AI-generated citations were brought before a judge last month, with Schwartz admitting that he did not verify whether the information provided by the chatbot was genuine. The lawyer said that he had not used ChatGPT in any other case before the incident.
Steven Schwartz was representing the plaintiff in the Roberto Mata v. Avianca case, which was first filed in a New York state court. He had been practicing law for 30 years and claimed that he was "unaware of the possibility that its content could be false."
Peter LoDuca was involved because Schwartz was not admitted to practice in federal court. While Schwartz wrote the legal briefs for the case, they were filed under LoDuca's name, which the district judge said was done in "bad faith."
OpenAI Executive Already Admitted to the Flaw
If the lawyers had taken the time to research the OpenAI chatbot, they would have seen that the company's chief technology officer, Mira Murati, had already stated that the AI chatbot might "make up facts" when writing sentences.
As mentioned in Business Insider, Murati said that ChatGPT generates its responses by predicting the logical next word in a sentence, but what's logical to the AI might not be true.
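To make that mechanism concrete, here is a minimal sketch of next-word prediction in Python. It uses a tiny word-frequency model and a made-up three-sentence corpus, both purely illustrative assumptions, and is nothing like the neural network behind ChatGPT, but it shows the same failure mode: the model strings together statistically likely words with no regard for whether the result is true.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then always emit the most frequent follower. (The corpus and
# model here are illustrative assumptions, not ChatGPT's actual system.)
corpus = (
    "the court cited the case . "
    "the court cited the opinion . "
    "the lawyer cited the case ."
).split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def generate(start, length=5):
    """Greedily chain the most likely next word, true or not."""
    out = [start]
    word = start
    for _ in range(length):
        if word not in followers:
            break
        word = followers[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # -> "the court cited the court cited"
```

Run this way, the toy model loops on its own most common phrase, fluent in form but empty in substance, which is roughly what the judge meant by "gibberish."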
Murati made these remarks in early February, months before the court incident in May.
The OpenAI executive said that to further train the chatbot, users can "challenge responses" they believe are incorrect. This is where communicating through dialogue becomes useful, since it offers an easy way to refine the model.