Last summer, an otherwise routine personal injury action vividly demonstrated the pitfalls of generative AI. Facing an unfamiliar issue in opposing a motion to dismiss, two New York attorneys turned to ChatGPT for research assistance, unaware of its propensity to “hallucinate.” The chatbot conjured up non‑existent cases supporting their position, and the attorneys relied on them in their briefing.
In an order sanctioning the attorneys, Judge Kevin Castel of the U.S. District Court for the Southern District of New York dryly noted that the chatbot’s fictitious work product “shows stylistic and reasoning flaws that do not generally appear in decisions issued by United States Courts of Appeals,” and less charitably described its legal analysis as “gibberish.” Mata v. Avianca, 678 F. Supp. 3d 443, 453 (S.D.N.Y. 2023). Yet the fabricated citations were facially plausible enough to fool two seasoned litigators working outside their usual area of expertise. That is a problem.