A Stanford preprint study, “Hallucination-free? Assessing the Reliability of Leading AI Legal Research Tools,” released last week, found that generative AI-powered tools from legal research behemoths Thomson Reuters and LexisNexis hallucinated more than 17% of the time, significantly more often than the vendors let on.
But soon after the paper was released, much of the legal community was quick to reject it, citing faulty methodology. The tumult highlights mistakes on Stanford’s part, certainly, but it also brings to light the smoke and mirrors that have long surrounded some of the legal industry’s key vendor practices.