Since the launch of ChatGPT in November 2022, new enterprise use cases for generative AI and large language models (LLMs) have been emerging almost daily. One of the most promising early adopters of the technology has been the legal industry.
In fact, research shows that 80% of law firm leaders believe that generative AI can be applied to legal work, with over 50% arguing that it should be.
At a glance, the reason for the interest is that LLMs give users the ability to process large data sets and derive insights in a matter of seconds. Legal practitioners can use LLMs to draft documents and briefs, research and analyze case law, or even study competitors and potential clients.
In short, AI offers law firms the potential to improve efficiency across their entire practice.
The State of LLMs in the Legal Field
So far, the legal industry has seen significant adoption of generative AI solutions, with large firms, including Baker McKenzie, Reed Smith, and Allen & Overy, all beginning to experiment with the technology.
More broadly, legal professionals as a whole also appear enthusiastic about the capabilities of LLMs, with a 2023 LexisNexis Survey of lawyers, law students, and consumers finding that 84% of respondents in the legal profession believe generative AI will increase the efficiency of lawyers.
While the adoption of LLMs for legal use cases has been promising so far, there are some who are more reluctant to adopt due to concerns over the privacy of client data. For instance, Mishcon de Reya has banned staff from using ChatGPT entirely.
Although some professionals believe generative AI solutions should be banned at work, it appears that most are willing to experiment with the technology for the foreseeable future.
LLMs in Law and the Role of the Legal Copilot
One of the simplest ways that LLMs can provide value to legal practitioners is by acting as an automated assistant or copilot. Throughout the working day, a lawyer can ask an LLM to complete certain research tasks to reduce the amount of time they’d have to spend manually gathering information.
Ashley Binetti Armstrong, Assistant Clinical Professor at UConn School of Law, released a study at the start of this year, which argued that while ChatGPT displayed an “inability” to conduct effective legal research, it had proved effective at identifying logical flaws in contract clauses and creating prompts for legal writing assignments.
Patricia Thaine, co-founder and CEO of Private.AI, also agrees that analyzing contracts is one of the key use cases for lawyers:
“Generative AI and LLMs can help to reduce contract risk. They can analyze contracts and identify specific clauses, such as assignment and residual clauses, that need proactive management. By flagging these clauses, legal professionals can take necessary actions to mitigate potential risks and ensure compliance with contractual obligations.”
Summarizing legal documents, answering basic legal questions, and researching competitors or clients are all potential use cases for this technology, which can provide an augmented intelligence approach to law, combining human expertise with AI scalability to enhance productivity.
Barriers in the Road to Adoption: Hallucination
Even though legal firms are curious about experimenting with generative AI, there are serious roadblocks to adoption. Perhaps the most significant is AI hallucination.
LLMs are notoriously prone to hallucination, i.e., making up facts, citations, and other information that could misinform the user. As OpenAI warns users on its website, “ChatGPT may produce inaccurate information about people, places, or facts.”
If a lawyer were to request a summary of a piece of case law and received incorrect information from the LLM, then this could introduce serious legal and financial risks for the organization if left unchecked.
While AI hallucinations can be reduced over time by fine-tuning an LLM’s training data, legal practitioners can’t afford to blindly trust LLMs like GPT to deliver information with 100% accuracy.
As such, all information pertaining to legal cases, decisions, and citations should be fact-checked by a qualified professional.
What About Compliance?
Compliance is another significant concern for firms experimenting with generative AI. If a lawyer enters sensitive information or details about a client into a prompt, then that could constitute a breach of client confidentiality if that information is fed back to the software vendor (e.g. if using ChatGPT, this would be OpenAI).
This is a very real concern, as outside of the legal industry, we’ve already seen Samsung ban ChatGPT after a user inadvertently leaked sensitive data as part of their prompt.
The good news is that firms don’t have to swear off generative AI completely to protect client confidentiality. One way to address these concerns is for practitioners to avoid entering any information that isn’t already publicly available.
Another approach is to use a data-masking solution to deidentify or anonymize sensitive data so that it can’t be used to identify a client.
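As a rough illustration of the data-masking idea, the sketch below uses simple rule-based pattern matching to redact a few common identifier types before a prompt ever leaves the firm. The patterns and placeholder labels are illustrative assumptions, not a description of any vendor's product; production solutions (such as Private.AI's) typically rely on trained named-entity-recognition models rather than regular expressions.

```python
import re

# Illustrative patterns only -- real masking tools use NER models to catch
# names, addresses, and context-dependent identifiers that regexes miss.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask(text: str) -> str:
    """Replace each matched identifier with a generic placeholder label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Client reachable at jane.roe@example.com or 555-123-4567 disputes clause 4."
print(mask(prompt))
# → Client reachable at [EMAIL] or [PHONE] disputes clause 4.
```

The masked text can then be sent to an external LLM, while the mapping from placeholders back to real values stays inside the firm's own systems.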
In any case, it’s highly recommended that firms complete a risk assessment before adopting generative AI as part of their workflows.
Working Smarter With AI
In its current form, generative AI is a tool with the power to augment legal professionals. While it can’t replace qualified legal practitioners, it can help lawyers, clerks, and other staff conduct research more effectively and enhance their productivity.