Prompt engineering is the practice of creating and refining text prompts to guide generative AI tools, most notably large language models (LLMs), toward desired outputs. It involves constructing specific, concise instructions for the model to process; the level of specificity in your prompt directly affects the type of output you receive. Most LLMs will follow just about any guiding parameters you provide: you can tell a model what tone to use in its responses, how its responses should be formatted (bulleted list, paragraphs, etc.), and even instruct it to assume a specific role for a given output.
Prompts aren’t just a one-off, either; successful prompt engineering requires practice, patience, and plenty of experimentation and refinement. Think of a prompt as a conversation: “talking” to an LLM is an iterative process of refining your prompt to get the best output possible. If an LLM’s output isn’t quite what you’re looking for, you can build on and refine it by giving the model feedback on what it got right and what it’s missing to create your desired result.
Prompts can also have delayed effects. LLMs remember previous prompts from the same conversation, which can affect their outputs later in that conversation. Start a new conversation for every new, unrelated prompt to ensure that your past prompts aren’t influencing your outputs.
For example, telling a chatbot "Summarize the following in 2 sentences: [text]" will guide the chatbot to summarize the provided text in no more than 2 sentences. If you then ask it to summarize another passage in the same conversation with something like "Summarize the following: [text]," the chatbot will more than likely still adhere to your earlier request and keep its summary to 2 sentences or fewer unless you specifically state otherwise.
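This "memory" is worth understanding mechanically. If you ever reach an LLM through code rather than a chat window, the entire conversation is typically re-sent with each request, which is why earlier instructions keep constraining later answers. Below is a minimal sketch assuming the OpenAI Python SDK; the model name is a placeholder, and the bracketed text stands in for your own material.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

# The message list below IS the model's "memory": the full history is
# re-sent with every request in a chat-style API.
history = [
    {"role": "user", "content": "Summarize the following in 2 sentences: [text]"},
]
reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
history.append({"role": "assistant", "content": reply.choices[0].message.content})

# A later, looser request still travels with the earlier constraint, so the
# model will likely keep its summary to 2 sentences.
history.append({"role": "user", "content": "Summarize the following: [more text]"})
reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(reply.choices[0].message.content)
```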
Prompts can also span the entirety of your conversation with the LLM, like saying “From now on, do X with all of your answers,” where X can be anything from formatting every answer as a bulleted list to explaining the reasoning behind each answer it provides.
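In code, these conversation-wide instructions map onto the "system" role that chat-style APIs support. A minimal sketch, again assuming the OpenAI Python SDK with a placeholder model name and a hypothetical instruction:

```python
from openai import OpenAI

client = OpenAI()

# A "system" message pins an instruction to the whole conversation,
# much like telling a chatbot "From now on, do X with all of your answers."
messages = [
    {"role": "system", "content": "Format all of your answers as bulleted lists."},
    {"role": "user", "content": "What are the main steps of legal research?"},
]
reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)
```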
If an LLM can’t produce an effective output because of its knowledge limitations, try supplying the relevant new information it needs to give a sufficient output. Oftentimes you won’t have to repeat your original prompt; the chatbot will remember it and can incorporate the new information you’ve provided.
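Programmatically, supplying new information just means adding another turn to the same conversation; the original request is already in the history, so it doesn't need to be restated. A sketch under the same assumptions as above (hypothetical prompts, placeholder model name):

```python
from openai import OpenAI

client = OpenAI()

history = [
    {"role": "user", "content": "Summarize the 2025 amendment to [rule]."},
    {"role": "assistant", "content": "I don't have information about that amendment."},
    # Supply the missing material as a new turn; the original request is
    # already in the history and does not need to be repeated.
    {"role": "user", "content": "Here is the text of the amendment: [text]. "
                                "Please try the summary again."},
]
reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(reply.choices[0].message.content)
```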
As helpful as prompt patterns are at getting your desired output from an LLM, they aren't always necessary (see the "Legal Prompt Patterns" tab for more). Prompt engineering as a whole is about crafting good prompts that achieve your desired results. There's no one-size-fits-all method for writing effective prompts; creating good ones takes time and experimentation to see what works and what doesn't.
When in doubt, remember the three golden rules of prompt engineering: