The article has been automatically translated into English by Google Translate from Russian and has not been edited.

Lawyer used ChatGPT for litigation: AI ruined everything

30.05.2023

Olga Derkach


Roberto Mata is suing Avianca, alleging he was injured when a metal serving cart struck his knee on a flight to New York's Kennedy International Airport. His lawyers decided to take the easy route and used ChatGPT to prepare the case. The New York Times reported what came of it.

When Avianca asked a Manhattan federal judge to dismiss the case, Mata's lawyers objected vehemently, filing a 10-page brief that cited more than half a dozen relevant court decisions, among them Martinez v. Delta Air Lines, Zikerman v. Korean Air Lines, and Varghese v. China Southern Airlines.

There was only one catch: no one, not the airline's lawyers, not even the judge himself, could find the decisions or the quotations cited in the brief.

That is because ChatGPT had invented all of it.

The lawyer who wrote the brief, Steven Schwartz of the firm Levidow, Levidow & Oberman, threw himself on the mercy of the court, stating in an affidavit that he had used an artificial intelligence program to conduct his legal research.

Schwartz, who has been practicing law in New York for three decades, told Judge Kevin Castel that he had no intention of deceiving the court or the airline. Schwartz said he had never used ChatGPT before and "so was unaware of the possibility that the content could be spurious."

He told the judge that he had even asked the program to confirm that the cases were real, and ChatGPT answered that they were.

Schwartz said he "really regrets" relying on ChatGPT "and will never do so in the future without absolute data authentication."

Judge Castel said he had been presented with "unprecedented circumstances": a legal filing rife with "false judgments and false citations." He scheduled a hearing for June 8 to discuss possible sanctions.

As artificial intelligence spreads across the online world, it conjures dystopian visions of computers replacing not only human interaction but human labor. Many people worry that AI may take over their daily work.

Stephen Gillers, a professor of legal ethics at NYU School of Law, said the issue is particularly acute among lawyers, who have been debating the value and the dangers of artificial intelligence and the need to verify any information it provides.


“The discussion is now about how to avoid exactly what this case describes,” Gillers said. "You can't just take the AI response and paste it into court documents."

The real case of Roberto Mata v. Avianca Inc. shows that lawyers may still have some time left before the robots take over.

How it all came to light

Mata was a passenger on Avianca Flight 670 from El Salvador to New York on August 27, 2019, when an airline employee struck him with a serving cart, according to the lawsuit. After Mata sued, the airline filed paperwork asking the judge to dismiss the case because the statute of limitations had expired.

In a brief filed in March, Mata's lawyers argued that the suit should proceed, backing up their arguments with references to and quotations from numerous court decisions.

Avianca's lawyers soon wrote to the judge that they could not find the cases mentioned in the brief.

When it came to Varghese v. China Southern Airlines, they said they "could not find this case or any other case that bears any resemblance to it."

They pointed to a lengthy quotation from the purported Varghese decision contained in the brief. "The undersigned was unable to locate this quote or anything similar," Avianca's lawyers wrote.

Judge Castel ordered Mata's lawyers to provide copies of the decisions cited in their brief. The lawyers submitted a compendium of eight; in most cases they listed the court and the judges who issued the decision, the docket numbers, and the dates.

The copy of the purported Varghese decision, for example, runs six pages and says it was written by a member of a three-judge panel of the 11th Circuit. But Avianca's lawyers said they could not find it, or the other cases, in court records or legal databases.

Bart Banino, a lawyer for Avianca, said his firm, Condon & Forsyth, specializes in aviation law and that its lawyers could tell the cases in the brief were not real. He added that they suspected a chatbot might have been involved.

ChatGPT generates realistic responses by predicting which fragments of text should follow others, based on a statistical model trained on billions of examples of text taken from across the internet. In Mata's case, the program appears to have reproduced the intricate structure of a written legal argument but filled it in with names and facts drawn from a jumble of existing cases.
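The mechanism described above can be illustrated with a toy sketch: a model that picks each next word based only on which words followed which in its training text. This is a drastically simplified, hypothetical miniature (real systems like ChatGPT use neural networks trained on billions of examples), but it shows why such a model can produce fluent-sounding text with no guarantee that the result corresponds to anything real.

```python
import random
from collections import defaultdict

# A tiny "training corpus" of legal-sounding text (illustrative only).
corpus = (
    "the court held that the airline was liable "
    "the court found that the claim was time barred"
).split()

# For each word, record which words followed it and how often
# (a bigram model: the crudest form of "predict the next word").
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start, length, seed=0):
    """Extend `start` by repeatedly sampling a statistically
    plausible next word, with no notion of truth or meaning."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 6))
```

Every adjacent pair of words in the output did occur somewhere in the training text, so the result reads plausibly, yet the sentence as a whole may describe a "holding" that never existed. That is, in miniature, the failure mode the Avianca brief ran into.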

Judge Castel conducted his own investigation. He said the clerk of the 11th Circuit had confirmed that the docket number printed on the purported Varghese decision belonged to an entirely different case.

Describing the decision as "fake," Judge Castel noted that it contained internal citations and quotations that, in turn, did not exist. He said that five other decisions submitted by Mata's lawyers were also found to be false.

Mata's lawyers submitted sworn statements containing their version of what happened.

Schwartz said he had originally filed Mata's lawsuit in state court, but after the airline had it moved to federal court in Manhattan, where Schwartz is not admitted to practice, one of his colleagues, LoDuca, became the attorney of record. Schwartz said he continued to do the legal research, in which LoDuca was not involved.

Schwartz said he consulted ChatGPT "to supplement" his own work and that, in "consulting" with it, he found the half-dozen non-existent cases. He said ChatGPT had assured him that all the cases were real.
