Generative AI & Legal Research

A guide for students and faculty on using generative AI for legal research and writing

Welcome!

In today's rapidly evolving technological landscape, artificial intelligence (AI) has become an indispensable tool for legal professionals and scholars alike. Generative AI, in particular, has gained prominence for its ability to generate human-like text, making it a valuable asset in the legal research process.

This guide serves as a basic introduction to generative AI (GenAI) text generators, with a primary focus on ChatGPT (with other generative models to come!). It highlights some of their uses in legal research and writing, while also acknowledging their flaws and limitations.

What is Artificial Intelligence?

Most talk about AI today actually refers to machine learning – a branch of AI and computer science that focuses on using data and sophisticated algorithms to imitate the way humans learn – so that's what we'll primarily focus on.

Machine learning allows practitioners to develop complex AI models that can “learn” patterns from data with little to no human direction. Machines programmed to learn from examples are often built as neural networks – a sub-field of machine learning named for the way their structure loosely mimics the networks of neurons in the human brain. Neural networks are extremely powerful tools composed of millions of interconnected nodes, able to recognize patterns and make decisions based on their training data. A common use of neural networks has been classification, like identifying whether an image shows a dog or a cat.
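
If you're curious what that looks like in practice, here is a minimal sketch of the dog-vs-cat idea in Python, using the scikit-learn library's MLPClassifier (a small neural network). The features (weight and ear length) and the tiny hand-made dataset are illustrative assumptions – a real image classifier would learn from thousands of labeled pictures – but the pattern is the same: train on examples, then classify something new.

    # A minimal sketch of neural-network classification (assumes scikit-learn is installed).
    # The features and the tiny dataset below are made up purely for illustration.
    from sklearn.neural_network import MLPClassifier

    # Toy training examples: [weight_kg, ear_length_cm] -> "cat" or "dog"
    X = [[4.0, 6.0], [3.5, 5.5], [4.5, 6.5],        # cats
         [20.0, 10.0], [25.0, 12.0], [18.0, 9.0]]   # dogs
    y = ["cat", "cat", "cat", "dog", "dog", "dog"]

    # A small network with one hidden layer of interconnected "neurons"
    model = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                          max_iter=2000, random_state=0)
    model.fit(X, y)

    # The trained network classifies a new, unseen animal
    print(model.predict([[5.0, 6.2]]))  # most likely "cat"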

The concept and use of artificial intelligence has been around for decades, and you've likely already interacted with a multitude of AI systems without realizing it – voice assistants like Alexa and Siri, and even many online customer service chatbots, are built on and powered by AI.

So how does that lead us to generative AI? 

What is Generative AI?

Text-based generative AI (GenAI) models are based on Large Language Models (LLMs) and Deep Learning (DL) neural networks. LLMs are a type of neural network trained extensively on large volumes of text so they can better predict what word is likely to come next in a sequence. More specifically, they are most commonly trained to solve common language problems like text classification, question answering, document summarization, and text generation.

For example, if you typed the phrase “Mary had a little...,” a well-trained LLM could predict “Mary had a little lamb,” based on the popular nursery rhyme. Without extensive training, the model might only come up with random guesses, none of which may include an animal. Essentially, the more data involved in an LLM's training, the more nuanced it becomes, and the more likely it is to have the insight to correctly predict what Mary had.
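
To make the idea concrete, here is a toy next-word predictor in plain Python. It simply counts which word follows which in a tiny made-up “training corpus” – real LLMs learn from billions of words using deep neural networks rather than simple counting, but the underlying goal of predicting the most likely next word is the same.

    # A toy next-word predictor; the tiny corpus is a stand-in for the
    # billions of words a real LLM is trained on.
    from collections import Counter, defaultdict

    corpus = ("mary had a little lamb "
              "mary had a little lamb its fleece was white as snow").split()

    # Count which word follows each word in the training text
    next_word_counts = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_word_counts[current][following] += 1

    def predict_next(word):
        """Return the word most often seen after `word` during training."""
        counts = next_word_counts[word]
        return counts.most_common(1)[0][0] if counts else "?"

    print(predict_next("little"))  # -> "lamb"

Note how the quality of the prediction depends entirely on the training data: ask this toy model about a word it never saw, and it has nothing useful to say.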

Okay...but what does Deep Learning mean in this context? Simply put, DL is a subfield of machine learning that uses neural networks to process more complex patterns than traditional machine learning techniques can. DL models typically have many layers of “neurons,” which is what allows them to learn such complex patterns (hence, ‘deep’ learning). There are two kinds of Deep Learning models: discriminative and generative. Discriminative models are typically used to classify or predict data and learn the relationships between specific features of the data points, similar to the predictive AI described above. Which raises the question...

How does Generative AI work?

Generative models take what they have learned from their training data and create something entirely new based on the information provided (i.e., a prompt). The end result of a generative model's training is a statistical model built from all of the data it was given. When given a prompt, the GenAI uses that statistical model to predict what an expected response might look like, generating new content in the process. In essence, text-based GenAI models are really, really good at predicting a sequence of words based on the prompt provided.
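
Continuing the toy example above, generation is just prediction run in a loop: starting from a prompt, the model repeatedly samples a likely next word. The tiny, made-up legal-flavored corpus below stands in for the vastly larger statistical model inside something like ChatGPT.

    # A sketch of generation as repeated next-word sampling.
    import random
    from collections import Counter, defaultdict

    corpus = ("the court held that the statute was valid "
              "and the court affirmed").split()

    # Build the same kind of word-following statistics as before
    model = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        model[current][following] += 1

    def generate(prompt_word, length=8):
        words = [prompt_word]
        for _ in range(length):
            counts = model[words[-1]]
            if not counts:
                break
            # Sample the next word in proportion to how often it followed
            # the previous word in training -- prediction, not understanding.
            choices, weights = zip(*counts.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # e.g., "the court held that the statute was valid and"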

For the purposes of this guide, we’ll largely be talking about ChatGPT – a text-based GenAI model that utilizes LLMs and DL. However, LLMs are just one type of generative model out there – there are also models for generating images (DALL-E), sound (AudioLM), and even video (Phenaki).

If you'd like to learn more about AI in general, I highly recommend taking a look at IBM's What is Artificial Intelligence (AI)?

Hallucinations

One of the largest problems seen when working with LLMs is hallucination, where the LLM provides false information in its output. Hallucinations often appear completely plausible because LLMs are designed to produce coherent, correctly worded and punctuated responses. The LLM has no ability to understand the underlying reality behind the prompt; instead, it applies its statistical model to the given prompt to create an output that it predicts best satisfies it.

It’s important to remember that no matter how clever and human-like an LLM may appear, it is only mimicking human language and reasoning based on its training data.  

In general, the only true way to combat hallucinations is to continually fact-check the LLM’s responses. Generative text models like ChatGPT excel at tasks like document analysis or boosting your writing creativity, but they struggle when it comes to providing facts. It should also be noted that the more obscure the topic or task you give the LLM, the more likely you are to receive a hallucination in response.

Hallucinations & Legal Research

When it comes to legal research, some of the most common hallucinations are: 

  • Citations: GenAI tools have already been shown to fabricate extremely convincing legal citations, even going so far as to attribute papers or articles to real authors to support their responses. Asking the AI to fact-check itself can prove problematic too, as it will often repeatedly defend the legitimacy of its citations. When it comes to legal research, it is absolutely vital that users double-check all citations to verify their validity.

  • Case facts: While LLMs are great at tasks like document analysis, they don’t always get the facts right. Occasionally, a given citation will lead to a real case, but the LLM may not describe the facts of the case correctly or may mix up facts from different cases. 

  • Legal doctrine: In some instances, LLMs have been known to generate inaccurate or outdated legal doctrines and principles, reinforcing the user’s need for additional research to verify the accuracy of the LLM’s response.