Deep dive

A guide to generative AI and LLMs for in-house counsel

January 12, 2024 | Melody Chen

This is the second in our series of deep dives into AI for in-house lawyers. Learn about AI and ML basics in part 1.

In recent years, Generative AI has emerged as a transformative technology capable of generating various forms of content, including text, images, and music, based on its training data. This innovation has the potential to revolutionize the day-to-day work of in-house legal teams, making legal services more efficient and effective. According to Wolters Kluwer’s 2023 Future Ready Lawyer Survey, 73% of lawyers expect to leverage generative AI tools in their legal work within the next year.

While Generative AI can be a valuable tool for legal professionals, it should be used judiciously, with the oversight and expertise of legal experts. In this blog post, we’ll cover what in-house lawyers and legal operations professionals should know about Generative AI and Large Language Models (LLMs), the benefits they bring to in-house legal teams, and the associated risks and limitations (and how to manage them).

What is Generative AI?

Generative AI (also sometimes referred to as GenAI) is a type of AI technology that can generate new content, ranging from text to images and even music, based on the data it has been trained on. This technology is revolutionary in its ability to create original, realistic outputs that can mimic human creativity.

Generative AI can be a game-changer for law. It can draft legal documents, create legal briefs, and even generate detailed reports, saving time and resources for legal professionals. However, it’s important to note that while Generative AI can produce content, the oversight and expertise of legal professionals are crucial to ensure accuracy and relevance to specific legal contexts.

Want to learn about practical applications of AI for in-house legal? Register for the upcoming webinar with Streamline AI and Clearlaw.

What are Large Language Models (LLMs)? What is prompt engineering and fine-tuning of LLMs?

The LLM you’re most familiar with as an attorney is probably a Master of Laws degree. When talking about AI-related terms, LLMs are a type of Generative AI that specifically focuses on processing and generating human language. LLMs are machine learning models that aim to predict and generate text. These models are trained on massive datasets (thus the “large” in “large language model”) that help them understand how characters, words, and sentences interact.

However, LLMs that have only been trained on public data don’t always deliver optimal and accurate outputs, especially for the specialized needs of attorneys, where the stakes are high and the margin for error is slim. Prompt engineering and fine-tuning help improve outputs so that they are more accurate, relevant, and legally sound. Prompt engineering involves crafting specific and detailed queries to guide the LLM towards more accurate and relevant responses. This is especially important for legal professionals, where the precision and applicability of information can be critical. Fine-tuning, on the other hand, involves training the model on a specific dataset to tailor its responses to the unique needs of a legal audience.

An image of a robot holding a Master of Laws (LLM) diploma
An LLM with their LLM. Image generated by DALL-E, a text-to-image generative AI model developed by OpenAI.

Prompt engineering, as the first method, involves crafting specific and instructive prompts that guide the LLM to generate the desired outputs, like artfully steering a conversation. For in-house lawyers, this means framing questions in a way that aligns closely with legal terminology and context. 

For example, instead of asking a general question like “What are the implications of a breach of contract?” an in-house lawyer might ask, “What are the potential legal consequences under New York law for a material breach of a commercial lease agreement?” The specificity of the prompt helps the LLM focus on the relevant legal jurisdiction, the nature of the contract in question, and the specific type of breach, leading to a more precise and useful response. This method reduces the likelihood of ambiguous or generic answers, ensuring that the advice is tailored to the specific legal issue at hand.
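To make this concrete, here is a minimal sketch of what the two prompts might look like when sent to a general-purpose model programmatically. It assumes the OpenAI Python client and an illustrative model name; the system message and helper function are our own additions for illustration, not a prescribed pattern.

```python
# Minimal sketch: comparing a generic prompt with a jurisdiction-specific one.
# Assumes the OpenAI Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

generic_prompt = "What are the implications of a breach of contract?"

specific_prompt = (
    "What are the potential legal consequences under New York law for a "
    "material breach of a commercial lease agreement? Cite the statutes or "
    "doctrines you rely on, and flag anything you are unsure of."
)

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text response."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": "You assist an in-house lawyer. Be precise and cite sources."},
            {"role": "user", "content": prompt},
        ],
        temperature=0,  # lower temperature favors more deterministic answers
    )
    return response.choices[0].message.content

print(ask(specific_prompt))  # compare against ask(generic_prompt)
```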

Fine-tuning involves training the LLM on specialized legal texts, such as case law, statutes, and legal commentary, to transform a general-purpose LLM into a legal-specific AI that can offer more targeted and context-aware guidance. This process equips the LLM to understand and replicate complex legal language and nuanced legal concepts. For instance, a model fine-tuned on US contract law would be more adept at providing insights on issues like compliance, governance, and liability.

For example, suppose an in-house attorney needs to draft a liability clause in the terms and conditions of a SaaS services agreement. The LLM they use might be fine-tuned on a dataset comprising various forms of liability clauses from existing software service agreements; legal interpretations of these clauses from relevant case law; the legal department’s playbook on baseline positions, fallback clauses, and risk management strategies; and other training material on best practices and common pitfalls for liability clauses. That way, when the attorney asks, “How should I word a liability clause in a software service agreement to limit the company’s exposure to indirect damages?” the LLM’s response will be targeted and relevant.
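To illustrate what such a fine-tuning dataset might look like in practice, here is a hedged sketch that assembles chat-style training examples and writes them out as JSONL, a format many fine-tuning APIs accept. The clause language, playbook position, and file name are hypothetical; check your vendor’s documentation for its exact required format.

```python
# Minimal sketch: assembling fine-tuning examples for liability-clause drafting.
# The clause text, playbook position, and file name below are hypothetical, and
# the {"messages": [...]} layout mirrors the chat-style JSONL format commonly
# used for fine-tuning chat models; confirm the format your vendor requires.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You draft contract clauses following the legal department's playbook."},
            {"role": "user", "content": "Draft a limitation of liability clause for a SaaS agreement that excludes indirect damages."},
            {"role": "assistant", "content": (
                "In no event shall either party be liable for any indirect, incidental, "
                "special, or consequential damages arising out of or related to this "
                "Agreement, and each party's aggregate liability shall not exceed the "
                "fees paid in the twelve (12) months preceding the claim."  # hypothetical baseline position
            )},
        ]
    },
    # ...additional examples drawn from past agreements, case-law commentary,
    # and the playbook's fallback positions would follow the same structure.
]

with open("liability_clause_finetune.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```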

What are the benefits of LLMs for in-house legal departments?

Using the right LLM can offer several benefits to in-house legal departments, including improving the efficiency, accuracy, and overall quality of legal work. LLMs can play a large role in transforming the workflows of in-house legal teams by automating routine tasks, providing quick access to information, and assisting in complex legal analysis. The time and effort savings then enable lawyers to focus on more strategic and high-value aspects of their work.

Legal Research: LLMs can swiftly sift through vast amounts of legal data, including case law, statutes, and legal journals, to provide concise summaries or answers to specific legal queries. For instance, if an in-house lawyer needs to understand the nuances of intellectual property law in a specific jurisdiction, the LLM can quickly pull relevant information in minutes, saving hours of manual research.

Legal Request Intake: LLMs can be programmed to read, understand, and automatically categorize incoming legal requests based on extracted key fields or terms. This automated categorization and workflow routing at the legal front door plays a critical role in managing the flow of legal requests, ensuring they are promptly directed to the appropriate contact on the legal team. For example, when an LLM receives a request related to a sales contract, it can analyze the request text or attached contract to extract key information such as contract value, involved parties, or urgency. The legal department can set predetermined criteria for routing to ensure that each request is reviewed and approved by the right contact. The legal team might set the tool to automatically flag and route sales contracts with a value of over $100,000 to a specific team member who handles high-value contracts and/or the finance team for further review and approval.
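As a rough sketch of how such routing rules might look once the LLM has extracted the key fields, the snippet below applies hypothetical criteria; the $100,000 threshold, queue names, and field names are placeholders each legal team would define for itself.

```python
# Minimal sketch: rule-based routing applied to fields an LLM has already
# extracted from an incoming legal request. All thresholds, queue names,
# and field names are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class LegalRequest:
    request_type: str      # e.g. "sales_contract", "nda", "privacy_question"
    contract_value: float  # extracted by the LLM from the request or attachment
    urgent: bool

def route(request: LegalRequest) -> list[str]:
    """Return the queues or reviewers this request should be sent to."""
    routes = []
    if request.request_type == "sales_contract":
        if request.contract_value > 100_000:
            routes.append("high-value-contracts-counsel")  # senior reviewer
            routes.append("finance-approval")              # parallel finance review
        else:
            routes.append("commercial-contracts-queue")
    else:
        routes.append("general-legal-intake")
    if request.urgent:
        routes.append("escalation-alert")
    return routes

# Example: a $250,000 sales contract flagged as urgent
print(route(LegalRequest("sales_contract", 250_000.0, True)))
# -> ['high-value-contracts-counsel', 'finance-approval', 'escalation-alert']
```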

An image of a robot sorting legal requests into cubbies
A robot sorting legal requests. Image generated by DALL-E.

Drafting Legal Documents: LLMs can assist in drafting various legal documents such as contracts, agreements, and memos. They can suggest language, format documents according to legal standards, and even flag potential legal issues. For example, when drafting a non-disclosure agreement, an LLM can suggest standard clauses and tailor them to specific needs, ensuring compliance and efficiency.

Contract Analysis and Management: In-house legal teams often deal with a high volume of contracts. LLMs can automate parts of the contract review process, identify key clauses, and flag and highlight areas of risk or non-compliance. This capability is particularly useful in due diligence processes where quickly reviewing numerous documents is crucial.

Compliance Monitoring: LLMs can keep track of changes in laws and regulations and alert the legal team about relevant updates. For example, if there are new data protection regulations in the European Union, the LLM can summarize these changes and suggest how they might impact the company’s operations.

Legal Training and Support for Non-Legal Staff: LLMs can provide basic legal guidance and training to employees in other departments, helping them understand compliance requirements, contractual obligations, and company policies. These LLMs can serve as a dynamic knowledge management tool and reduce the legal team’s workload in addressing routine or repetitive queries.

Predictive Analysis: Advanced LLMs can analyze past legal cases and outcomes to predict potential risks and outcomes of legal decisions or strategies. For instance, before pursuing litigation, an LLM can provide an analysis of similar cases and their outcomes, aiding in strategic decision-making.

Customized Legal Solutions: With fine-tuning, LLMs can be tailored to the specific needs and language of a particular legal department, increasing the relevance and accuracy of their outputs.

The current risks and limitations of Generative AI for legal professionals

As Generative AI and LLMs continue to evolve, their potential applications in law are vast. From automating routine tasks to providing analytical insights, these technologies could redefine how legal work is done. However, lawyers must understand the scope of AI capabilities and the ethical usage of AI and maintain ongoing human oversight.

An image of a Gen Z lawyer with an AI robot lawyer
GenAI: Today's tech wonder, tomorrow's sidekick in the life of a Gen Z lawyer. Image generated by DALL-E.

Generative AI hallucinations 

A critical aspect of Generative AI for legal professionals to understand and deal with is AI hallucinations. Hallucinations refer to instances where AI systems generate factually incorrect or nonsensical information despite appearing plausible or coherent. AI hallucinations occur when the AI system generates output based on patterns it has learned from the training data rather than factual accuracy. These outputs can be misleading because they are often presented with a level of confidence that belies their unreliability. An AI tool designed to provide legal advice might “hallucinate” by confidently presenting fabricated legal precedents or incorrect interpretations of law. Generative AI tools have been known to confidently cite case law that doesn’t exist.

In 2023, a man named Roberto Mata sued Avianca Airlines, claiming a metal serving cart injured him during a flight. Avianca asked for the dismissal of the lawsuit based on the statute of limitations. Mata’s lawyer, Steven A. Schwartz, an experienced New York attorney who had practiced for three decades, used ChatGPT to perform legal research for his brief. He submitted a brief citing more than half a dozen purportedly relevant court decisions that ChatGPT gave him, including Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines, and Varghese v. China Southern Airlines. However, the airline’s lawyers and the judge could not find the decisions or case quotes cited, because they didn’t exist: ChatGPT had hallucinated the cases. When Schwartz asked ChatGPT multiple times to confirm its output, it even confidently replied, “No, the other cases I provided are real and can be found in reputable legal databases such as LexisNexis and Westlaw.” Schwartz admitted to unknowingly using these false AI-generated citations in his legal arguments, leading to sanctions by the judge. The case highlights the challenges and risks of depending on ChatGPT for legal research and emphasizes the need for caution and verification.

Finding the right Generative AI tool for in-house legal

Not all AI tools are, or should be, treated equally by in-house legal teams. General-purpose LLMs like OpenAI's ChatGPT, Google's PaLM 2, and Meta's Llama 2, while advanced and well-known (and making the news for passing the bar), are not specifically built or designed for legal work, and the lack of fine-tuning for legal-specific use cases can lead to significant inaccuracies. Researchers tested these popular models with over 200,000 legal questions and found that they hallucinated at least 75% of the time when answering questions about court rulings.

In one test, where the researchers asked the models whether two court rulings agreed with each other, the general-purpose models performed no better than random guessing. This high error rate underscores the need for AI tools that are specifically developed and fine-tuned for legal use cases. It's not only about doing things faster or more efficiently; it's about ensuring that in-house legal work is accurate and dependable, especially when a small mistake could have an outsized impact.

Confidentiality and data security risks of Generative AI tools

According to research from LayerX on 10,000 employees, at least 15% of workers are using ChatGPT and other Generative AI tools at work, and almost a quarter of visits to Generative AI apps include a data paste. However, this usage carries risks. For example, sharing sensitive or confidential information with ChatGPT can pose a risk because inputs are not erased when users close the browser or turn off the computer. OpenAI says it may use data submitted to ChatGPT’s consumer services to train and improve its AI models unless the user actively opts out (unlike ChatGPT Business, which follows ChatGPT’s API data usage policies and does not use end-user data to train models by default). A study conducted by researchers from the University of Washington, UC Berkeley, Google DeepMind, and others found that it’s possible to perform training data extraction attacks on ChatGPT and extract some of the exact data it was trained on, which could include confidential or sensitive information. In addition, in March 2023, ChatGPT was taken offline temporarily due to a bug that allowed some users to see titles from another active user’s chat history.

Legal work often involves handling sensitive and confidential information. There’s a risk that using Generative AI tools, especially cloud-based ones, could lead to data breaches or unauthorized access to sensitive business information. Ensuring that the AI system complies with data protection laws and is set up to comply with the organization’s confidentiality protocols is critical to reducing the risk of Generative AI tools.

Data control settings for ChatGPT.

Biases in AI training data

As covered more comprehensively in part 1 of our series on AI, there is a risk of bias in AI systems. AI algorithms, if trained on biased data sets, can perpetuate and amplify these biases. For instance, if an AI tool used for juror selection is fed historical data that reflects racial biases, it may replicate these biases in its selections.

Overreliance on AI

Another significant challenge is the risk of overreliance on AI. Lawyers must remember that AI is a tool to aid, not replace, human judgment. Overreliance could lead to a lack of critical oversight, potentially resulting in legal errors or ethical breaches, as well as undermining training and professional growth. Repetitive exposure to legal principles and scenarios is crucial for honing analytical skills and building intuition for legal strategy, which AI cannot replicate. If AI tools are excessively used for tasks that traditionally contribute to the development of a lawyer’s skills – such as legal research, case analysis, and document drafting – there’s a risk that lawyers may not develop the depth of legal knowledge and critical thinking skills that are essential for high-quality legal work.

Furthermore, an overreliance on AI might lead to complacency, where lawyers could become less vigilant about verifying the accuracy and applicability of AI-generated information, thereby increasing the risk of legal oversights. In-house lawyers should balance the use of AI tools with active involvement in traditional legal tasks to maintain and grow their legal expertise.

Accuracy and reliability of AI-generated content

One of the primary concerns is the accuracy and reliability of AI-generated information. In the legal field, any misinformation or erroneous interpretation provided by an AI system could lead to flawed legal advice and significantly impact business decisions. This underscores the importance of setting up a human-in-the-loop system; legal professionals should always review and verify AI-generated content to ensure its correctness and applicability. Integrating human oversight not only enhances the reliability of the AI’s output but also leverages the nuanced understanding and judgment that experienced in-house lawyers bring, which AI in its current state cannot replicate. 

A cartoon showing robots replacing lawyers
There’s a lot of hype about robots taking over legal jobs, but sometimes AI robots can’t even spell correctly, as seen in this DALL-E generated cartoon.

How can lawyers mitigate the risks of leveraging AI in their work?

To mitigate the risks of AI, it’s essential for legal professionals to:

  • Maintain a Critical Eye: Always review and critically assess AI-generated content, especially in high-stakes situations like legal advice or case analysis. Double-check references, including rules, regulations, and case law.
  • Use AI as an Augmenter, Not a Decision-Maker: AI should aid human decision-making, not serve as a standalone solution, especially in complex legal matters.
  • Continuous Training and Updates: Regularly update and train AI systems with diverse, accurate, and comprehensive data sets to reduce the likelihood of hallucinations.
  • Ethical and Responsible AI Use: Establish guidelines for the ethical use of AI, ensuring transparency and accountability in AI-driven processes.

How to integrate AI into legal departments

The integration of AI in legal practices should be a carefully managed process. It’s important to incorporate the following steps to find the right use case and fit:

Identify where AI can add value

It starts with identifying the specific needs and areas where AI can add value. This involves analyzing existing workflows to pinpoint tasks that are time-consuming, prone to human error, or could be optimized through automation, such as legal request intake, document review, legal research, or contract management. Understanding these needs ensures that the selected AI tool not only aligns with the unique requirements of the legal team but also contributes to tangible improvements in efficiency and accuracy.

Make sure there’s buy-in and executive support for the AI tool

Securing buy-in and executive support is a critical step in successfully integrating AI tools within in-house legal teams. To do so, create a compelling business case that clearly outlines the benefits of the AI tool. This involves demonstrating how the tool will improve efficiency, reduce costs, or enhance the quality of legal services. Showcase how it aligns with the broader business objectives and strategies of the organization. Quantifiable metrics, such as expected return on investment, cost savings, or time-to-value, can be particularly persuasive.
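For illustration only, a back-of-the-envelope calculation like the sketch below can make the business case concrete; every number in it is hypothetical and should be replaced with your own estimates.

```python
# Back-of-the-envelope ROI sketch with hypothetical numbers: hours the tool is
# expected to save per month, a blended hourly cost for the legal team, and an
# annual subscription price. Replace these with your own estimates.
hours_saved_per_month = 40    # e.g., intake triage plus first-pass contract review
blended_hourly_cost = 150     # fully loaded cost per attorney hour (USD)
annual_tool_cost = 30_000     # hypothetical subscription price (USD)

annual_savings = hours_saved_per_month * 12 * blended_hourly_cost   # $72,000
roi = (annual_savings - annual_tool_cost) / annual_tool_cost        # 1.4, i.e. 140%

print(f"Estimated annual savings: ${annual_savings:,}")
print(f"Estimated ROI: {roi:.0%}")
```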

Having an executive sponsor is critical. An executive sponsor can champion the tool at the highest levels of the organization, ensuring that it receives the necessary resources and attention, from budget to implementation and cross-functional rollout. This sponsor should ideally understand both the capabilities of the AI tool and the specific needs of the legal team, and be able to articulate how the tool meets these needs to other executives and stakeholders. Since some legal AI tools are not just used by legal but also by business stakeholders, having an executive sponsor who understands the benefits can aid in getting top-down buy-in, adoption, and usage from partner teams such as sales, marketing, product, and IT. 

Consider the following evaluation criteria for AI tools for in-house legal

Once these areas are identified, the next step is to select the right AI tools. It’s essential to choose tools that are not only powerful but also align with the ethical standards of the legal profession. Some factors to evaluate include:

Accuracy and Reliability: Evaluate the tool’s ability to provide accurate and reliable information for the legal team’s purpose.

Data Security and Confidentiality: Given the sensitive nature of legal work, it’s essential to ensure that the AI tool adheres to stringent data security and confidentiality standards. Verify the tool’s compliance with data protection laws and assess its encryption methods, access controls, and data storage policies.

Compliance with Legal and Ethical Standards: The tool should comply with relevant legal and ethical standards. This includes data privacy laws and any regulations specific to the legal profession in your jurisdiction. The AI should not replace legal judgment but act as a support.

Specialization and Customization: Consider whether the AI tool is specialized for legal applications or can be customized to your specific legal context. Tools tailored for legal use are likely to be more effective than general-purpose AI solutions as they’ve often undergone fine-tuning for specific in-house legal purposes.

Ease of Integration: Assess how easily the tool can be integrated into existing workflows and systems. A tool that requires significant changes to current practices or isn’t compatible with existing software might not be practical.

User-Friendly Interface: The tool should have an intuitive interface that is accessible to all members of the legal team, regardless of their technical expertise. Usability is key to ensuring that the team can fully leverage the tool’s capabilities. No-code or low-code platforms empower in-house legal teams to create and customize solutions rapidly and efficiently without needing extensive technical expertise or high maintenance costs, improving their ability to automate routine tasks and streamline legal processes.

Cost vs. Benefit Analysis: Evaluate the cost of the tool against the potential benefits and efficiencies it offers. Consider not only the direct costs but also the potential savings in time and resources.

By thoroughly assessing these factors, in-house legal teams can make informed decisions about implementing AI tools that are effective, secure, and compliant with professional standards.

Make sure that the tools have adequate internal support

For in-house legal teams, ensuring adequate internal support for AI tools is crucial for successful implementation and utilization. This begins with designating a dedicated point of contact on the in-house legal team who will oversee the implementation process, ensuring a smooth integration into existing workflows. A legal operations manager or legal counsel might handle the day-to-day coordination and implementation. After implementation, it’s equally important to set up comprehensive training sessions tailored to the team’s specific needs. This helps foster strong buy-in and commitment to usage.

Active and continuous engagement with the vendor’s customer success manager is also key; maintaining open lines of communication with them can help address any issues promptly and learn about the latest feature improvements. Getting continuous feedback from the legal team and providing it to the vendor ensures that the tool evolves in line with the team’s requirements. This comprehensive approach, combining internal coordination with proactive vendor engagement, is essential to fully leverage the capabilities of AI tools for in-house legal, ensuring they add real value and efficiency to the team’s operations.
