At last year’s CodeX FutureLaw conference, shortly after OpenAI’s GPT-4 had been unleashed on the world, a group of professionals doing some of the most cutting-edge AI work in legal gathered at an invite-only workshop to discuss the future of generative AI in the profession. Among the topics debated was whether the most fruitful approach for domain-specific generative AI in the legal industry was to build a legal large language model (LLM) from scratch or to fine-tune existing models to focus on legal work.

Fast forward nearly a year, and many have focused on perfecting the fine-tuning approach, while building from scratch has proved a much harder endeavor.
