An attorney representing a man suing an airline experienced first-hand the limitations of the artificial intelligence tool ChatGPT, which has a tendency to fabricate information. Roberto Mata filed a lawsuit against Colombian airline Avianca, claiming that his knee was injured by a metal food and beverage cart during a flight to Kennedy International Airport in New York. When Avianca asked a Manhattan judge to dismiss the case on statute-of-limitations grounds, Mata’s attorney, Steven A. Schwartz, submitted a brief based on research conducted by ChatGPT, according to an affidavit by Schwartz, who works at the law firm Levidow, Levidow & Oberman.
Although ChatGPT can provide valuable assistance to professionals across various fields, including law, its reliability and capabilities are not without limitations. In the present case, the AI tool generated fictitious court cases and presented them as factual information. The inaccuracies came to light when Avianca’s legal representatives approached the case’s judge, Kevin Castel of the Southern District of New York, informing him that the cases cited in Mata’s lawyers’ brief could not be found in any legal databases.
The non-existent court cases included in the brief were given names such as Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines, and Varghese v. China Southern Airlines. Bart Banino, an attorney with Condon & Forsyth, representing Avianca, expressed his suspicion that something was awry when none of the cases cited in the opposition brief were recognizable. Banino informed CBS MoneyWatch that they suspected that the document was created using a chatbot or similar technology.
In a recent affidavit, Schwartz acknowledged that he had “referred to” ChatGPT to enhance his legal research and that the AI tool had “demonstrated its unreliability.” Schwartz further stated that this was the first time he had employed ChatGPT for professional purposes and that he was “unaware of the potential for its content to be inaccurate.” He affirmed that he had attempted to verify the validity of the cases presented by the AI tool and that ChatGPT had confirmed their authenticity. Schwartz then asked the AI tool to provide its source.
ChatGPT responded by expressing regret for the earlier confusion and stated that the Varghese case could be found in the Westlaw and LexisNexis databases. Judge Castel has scheduled a hearing on June 8 to address the legal predicament and has instructed Schwartz and the law firm Levidow, Levidow & Oberman to explain why they should not be penalized. As of now, Levidow, Levidow & Oberman has not commented on the matter.