Tips for Writing Effective Prompts

Introduction

As large language model (LLM) technology rapidly advances, the importance of prompt engineering is becoming increasingly evident. A prompt is essentially an input instruction given to AI, and it determines the kind of response the AI will generate. While LLMs are incredibly powerful and capable of handling a wide range of tasks, without well-crafted prompts, it's challenging to fully harness their potential. In this blog, we’ve compiled some tips on how to write effective prompts.

In the course of our research and development using LLMs, we came across the official documentation of Anthropic, an AI startup known for developing Claude, an LLM. Their documentation is very insightful, offering prompt engineering tips that can be applied not only to Claude but to any LLM. We’ve used this as a reference for our discussion.

What Happens When a Prompt is Poorly Constructed?

What issues arise from poorly written prompts, and why? Let’s briefly explore this. Poor-quality prompts can lead to problems such as hallucinations (confidently stated fabrications), responses that drift off-topic, and output in the wrong format.

The reason behind these issues is that LLMs generate text by predicting the next word or sentence based on the previous ones. In other words, they are simply producing plausible continuations based on the data (large amounts of text, etc.) they were trained on. Therefore, providing clear and detailed input is essential to reducing the likelihood of errors.

What Makes a Good Prompt?

A good prompt is one that is designed to enable the AI to generate the expected response reliably. In short, a good prompt is clear, specific, and supplies all the context the model needs for the task.

What Are the Specific Requirements for a Good Prompt?

When writing a prompt, it’s important to be mindful of the following elements. While it’s not mandatory to include all of them, you should tailor these elements to the task you want the LLM to perform.

  1. Task Context: Provide the LLM with a role or persona to help it understand the task at hand, which prevents it from going off track.
  2. Tone Specification: Specify the tone of the conversation. This allows you to control the choice of words to some extent.
  3. Background Data (Documents and Images): Also known as context. Provide all the information the LLM needs to complete the task; this keeps the task specific and raises the quality of the output.
  4. Detailed Task Description and Rules: Add detailed rules or instructions regarding interaction with the user. This helps prevent errors such as hallucinations.
  5. Examples: Provide examples of the desired output to guide the LLM.
  6. Conversation History: If there’s any past interaction between the user and the LLM, provide it to the LLM to ensure a seamless continuation of the conversation. Many LLM cloud services like ChatGPT have this functionality by default.
  7. Immediate Task Description or Request: Give clear instructions for any sub-tasks derived from the assigned role or task.
  8. Think Step-by-Step: If necessary, ask the LLM to take its time or think step-by-step. Sometimes it’s effective to have the LLM handle tasks carefully and methodically, rather than all at once.
  9. Output Formatting: If the format of the output is crucial, for example, if the output is expected to be in JSON or Markdown format, provide detailed instructions regarding the format.
  10. Prefill: If the LLM is prone to formatting mistakes, prefill the beginning of the output and instruct the LLM to continue from there.
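The ten elements above can be sketched as a prompt template. The following is a minimal illustration, not an official format; every concrete string and the `build_prompt` helper are our own invented placeholders:

```python
# Hypothetical sketch: assembling the elements above into one prompt.
def build_prompt(role, tone, context, rules, examples,
                 history, request, output_format, prefill=""):
    sections = [
        f"Role: {role}",                      # 1. task context
        f"Tone: {tone}",                      # 2. tone specification
        f"Context:\n{context}",               # 3. background data
        f"Rules:\n{rules}",                   # 4. detailed rules
        f"Examples:\n{examples}",             # 5. examples
        f"Conversation so far:\n{history}",   # 6. conversation history
        f"Task: {request}",                   # 7. immediate request
        "Think step-by-step.",                # 8. step-by-step
        f"Output format: {output_format}",    # 9. output formatting
    ]
    prompt = "\n\n".join(sections)
    if prefill:                               # 10. prefill (optional)
        prompt += "\n" + prefill
    return prompt

prompt = build_prompt(
    role="career coach", tone="friendly", context="(resume text)",
    rules="Do not invent facts.", examples="(sample answer)",
    history="(none)", request="Review this resume.",
    output_format="Markdown",
)
```

In practice you would omit the sections that don’t apply to your task, as noted above.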

Examples of Good Prompts

Here are some techniques you can use when writing prompts.

For instance, when providing task context, it’s important to clearly specify the role you want the LLM to play and the details of the task, as in the example below:

Image source: Anthropic
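As a rough illustration of task context, a role and task description might be set in the system prompt like this (the company, role, and messages are our own invented example, not Anthropic’s):

```python
# Hypothetical system prompt establishing role, audience, and constraints.
system_prompt = (
    "You are a customer-support assistant for Acme Corp. "
    "You answer billing questions for non-technical users. "
    "Stay within billing topics and keep answers under 100 words."
)

user_message = "Why was I charged twice this month?"

# Many chat APIs accept the system prompt and user message separately.
prompt = {"system": system_prompt, "user": user_message}
```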

Using XML tags is also an effective technique: they clearly separate instructions from data, make it easy to refer to specific parts of the prompt, and help the model parse its input reliably.

Image source: Anthropic
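A minimal sketch of the XML-tag technique might look like this (the tag names and document text are our own illustrative choices):

```python
# Hypothetical example: wrapping each prompt part in XML tags so the
# model can tell the instructions apart from the data it operates on.
document = "Quarterly revenue grew 12% year over year."

prompt = (
    "<instructions>Summarize the document in one sentence.</instructions>\n"
    f"<document>{document}</document>"
)
```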

Another useful technique is to prefill part of the output when you want to receive it in JSON format. By prefilling { at the beginning of the assistant’s response, you skip the preamble and have the LLM output the JSON object directly. This makes the output cleaner, more concise, and easier for programs to parse without additional processing.

Image source: Anthropic
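The prefill technique can be sketched as follows; the message contents here are our own invented example, and the `completion` string stands in for what a model might return:

```python
import json

# Hypothetical sketch of prefilling "{" so the model continues the JSON
# object directly, with no conversational preamble.
messages = [
    {"role": "user", "content": "Extract the name and age from: "
                                "'Alice is 30 years old.' Return JSON."},
    {"role": "assistant", "content": "{"},  # prefilled opening brace
]

# The model's completion is appended to the prefill, e.g.:
completion = '"name": "Alice", "age": 30}'

result = messages[-1]["content"] + completion
parsed = json.loads(result)  # parses directly, no cleanup needed
```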

Asking the LLM to think step-by-step is also an effective technique. It’s very simple—just include "Think step-by-step" in your basic prompt. Then, in a guided custom prompt, outline specific steps for the LLM to follow in its thinking process.

Before prompt tuning:

Image source: Anthropic

After prompt tuning:

Image source: Anthropic
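The before/after contrast can be sketched like this; the review and the enumerated steps are our own illustrative example, not the one from Anthropic’s documentation:

```python
# Hypothetical before/after illustration of step-by-step prompting.
review = "Great battery life, but the screen is dull."

# Before: the model is asked to answer in one shot.
before = f"Is this review positive or negative? Review: '{review}'"

# After: a guided prompt that outlines the thinking process.
after = (
    "Is this review positive or negative?\n"
    "Think step-by-step:\n"
    "1. List the positive points.\n"
    "2. List the negative points.\n"
    "3. Weigh them and give a final label.\n"
    f"Review: '{review}'"
)
```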

Conclusion

Prompt engineering is a critical skill for maximizing the capabilities of LLMs. By designing prompts that are clear, specific, and relevant, you can obtain useful, expected responses from AI. In this blog, we introduced the key requirements for good prompts along with some practical examples. As LLMs become increasingly integrated into our daily lives, now is the perfect time to learn how to write effective prompts.

References

  - Anthropic, Prompt Engineering documentation