
Generative AI & Legal Research

A guide for students and faculty on using generative AI for legal research and writing

Prompt engineering

What is Prompt Engineering?

Prompt engineering is the practice of creating and refining text prompts to guide generative AI tools, most notably large language models (LLMs), toward desired outputs. It involves constructing specific, concise instructions for the model to process; the level of specificity in your prompt directly affects the output you receive. Most LLMs will follow just about any guiding parameters you provide: you can tell a model what tone to use, how to format its responses (bulleted list, paragraphs, etc.), and even to assume a specific role for a given output.

Prompts aren’t just one-offs, either; successful prompt engineering requires practice, patience, and a great deal of experimentation and refinement. Think of prompting as a conversation: “talking” to an LLM is an iterative process of refining your prompt to receive the best output possible. If an LLM’s output isn’t quite what you’re looking for, you can build on and refine it by giving the model feedback on what it does well and what it’s missing.

Delayed Prompts

Prompts can also have a delayed effect. LLMs remember previous prompts from the same conversation, and those earlier prompts can affect outputs later in the conversation. Start a new conversation for every new, unrelated prompt to ensure that your past prompts aren’t affecting your outputs.

For example, telling a chatbot "Summarize the following in 2 sentences: [text]" will guide the chatbot to summarize the provided text in no more than two sentences. If you then ask it to summarize more text in the same conversation with something like "Summarize the following: [text]," the chatbot will more than likely still adhere to your earlier two-sentence limit unless you specifically say otherwise.

Prompts can also span the entirety of your conversation with the LLM: for example, “From now on, do X with all of your answers,” where X can be anything from formatting every answer as a bulleted list to explaining the reasoning behind each answer it provides.
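The “memory” behavior described above exists because chat tools typically resend the entire message history with every new turn, so an earlier instruction stays in front of the model. A minimal sketch in Python (the role/content message format below mirrors a common chat-API convention; the bracketed text is placeholder, not real output):

```python
# Sketch: why earlier instructions persist across a conversation.
# Chat tools generally send the whole running history with each new
# prompt, so an early constraint like "in 2 sentences" stays in scope.

history = []

def send(role, content):
    """Append a message; the full history is what the model actually sees."""
    history.append({"role": role, "content": content})
    return history

send("user", "Summarize the following in 2 sentences: [text A]")
send("assistant", "[2-sentence summary of text A]")
send("user", "Summarize the following: [text B]")

# The model receives all three messages, so the earlier "2 sentences"
# constraint still applies to text B unless you override it.
assert any("2 sentences" in m["content"] for m in history)
```

Starting a new conversation is equivalent to starting with an empty history, which is why it clears out old constraints.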

Knowledge Limitations

If an LLM can’t produce an effective output because of its knowledge limitations, try supplying the relevant new information it needs so it can provide a sufficient output. Often you won’t have to repeat your original prompt; the chatbot will remember it and can incorporate the new information you provide.
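A sketch of what such a follow-up might look like, reusing the conversation rather than restarting it (the quoted prompts and the statute reference are purely illustrative):

```python
# Sketch: supplying new information mid-conversation instead of
# repeating the original prompt. Because the chatbot retains the
# earlier request, the follow-up only needs to carry the new material.

conversation = [
    "What is the current statutory minimum wage in State X?",   # original prompt
    "[model explains its training data ends before the latest statute]",
    "Here is the relevant 2024 statute text: [pasted text]. "
    "Answer the original question using it.",                    # follow-up with new info
]

follow_up = conversation[-1]
# The follow-up points back at the earlier request rather than restating it.
assert "original question" in follow_up
```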

Crafting good prompts

As helpful as prompt patterns are for getting your desired output from an LLM, they aren't always necessary (see the "Legal Prompt Patterns" tab for more). Prompt engineering as a whole is about crafting good prompts that achieve your desired results. There's no one-size-fits-all method for crafting effective prompts; creating good prompts takes time and experimentation to see what works and what doesn't.

Tips & Tricks:

  • Specify your goal and purpose (e.g., draft a motion, memo, etc.) 
    • Define exactly what you're looking for and want to achieve clearly and articulately
  • Be specific and use precise language
There's no such thing as too much detail when prompting an LLM; be as specific as possible in your language to avoid potential misunderstandings or ambiguities
Where possible, use relevant legal terminology that aligns with the context of your prompt so the LLM understands the full context and nuance involved: include the jurisdiction, party names, key terms, points of law, etc.
  • Indicate your expected answer format
    • If you have a specific format in mind for the output be sure to include that in your prompt, whether you'd like the LLM's response to be in the form of a bulleted list, table, paragraphs, etc. 
  • State material facts for the LLM's response to consider
    • Include all relevant context and information that the LLM needs to consider in its output 
  • Refine with follow-up prompts to narrow or expand your query
    • If you don't get the answer you need/want with your initial prompt, ask a follow-up question or iterate on the LLM's initial response
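The checklist above can be folded into a simple reusable template. A sketch, assuming nothing about any particular chatbot (the section labels, function name, and sample facts are all illustrative, not a required format):

```python
# Sketch: assembling a prompt from the checklist — goal, jurisdiction,
# material facts, and expected answer format.

def build_prompt(goal, facts, format_hint, jurisdiction=None):
    """Combine the checklist elements into a single prompt string."""
    parts = [f"Task: {goal}"]
    if jurisdiction:
        parts.append(f"Jurisdiction: {jurisdiction}")
    parts.append("Material facts:\n" + "\n".join(f"- {fact}" for fact in facts))
    parts.append(f"Format your answer as: {format_hint}")
    return "\n\n".join(parts)

prompt = build_prompt(
    goal="Draft a short internal memo on the enforceability of a non-compete clause",
    facts=["Employee signed the clause in 2021",
           "Employer operates only in Ohio"],
    format_hint="a bulleted list of issues, each with a one-sentence analysis",
    jurisdiction="Ohio",
)
print(prompt)
```

If the first response misses the mark, the refinement step is simply a follow-up message in the same conversation ("Expand the second bullet" or "Limit this to post-2020 authority") rather than a rewrite of the whole template.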

Golden Rules of Prompt Engineering

When in doubt, remember the three golden rules of prompt engineering:

  • Clarity: keep your prompts clear, concise, and unambiguous
  • Specificity: don't leave room for interpretation; the more specific your prompt the better
  • Context: providing context helps the AI understand the task and stay on it, especially for more complex tasks