Getting Started with Prompts: A Guide to Effective Prompt Engineering

What Is Prompt Engineering?

Prompt engineering is the skill of crafting effective prompts to get the best possible outputs from generative AI models like OpenAI’s GPT-4o or Anthropic’s Claude 3.5 Sonnet.

A prompt is the input you provide to an already-trained large language model to direct it to complete a specific task. The art lies in knowing how to compose your prompts to elicit high-quality, relevant responses aligned with your goals.

It's important to understand that a large language model's training is already done: it doesn't learn or remember information from one interaction to the next. That means each prompt needs to be self-contained, providing all the necessary context and guidance within the prompt itself.

Effective prompts are clear, specific, and give the model the information and boundaries it needs to generate a successful output.

Basic Principles of Prompt Engineering

Be precise

Precision in prompting helps guide the AI to better quality outputs. Vague, open-ended prompts leave too much room for interpretation. Instead, be as specific as possible about what you want the model to do.

For example:

  • Vague: "Use the info provided to write an email telling them about their product."

  • Precise: "Use the product description and order summary above to compose an email that reminds the customer about the product they purchased and then guides them through how to get the most out of their new product."

The more precise version gives the AI clear direction on the purpose, content, and desired outcome of the email. Being specific keeps the model on track.

Use rich vocabulary

Rich, detailed vocabulary does more than emphasize what you want; it helps the model understand more specifically what you're looking for. Think of rich vocabulary as another dimension of being precise. Vivid word choice adds color and nuance that helps the AI understand the style and tone you're aiming for.

For example:

  • Example 1:

    • Vocabulary-lacking: “Write a formal and really apologetic email.”

    • Vocabulary-rich: “Compose a sincere and deeply apologetic email that conveys genuine remorse and regret, using a tone of appropriate gravitas.”

  • Example 2:

    • Vocabulary-lacking: “Write an email to congratulate a new hire.”

    • Vocabulary-rich: “Compose a warm and welcoming email to congratulate the new hire at your company and show them how excited you are to work with them.”

The descriptive language provides much more signal to the model about how you want it to modulate its writing style to fit the circumstance. So don't be afraid to paint a picture with your words when writing prompts.

Maximizing brainpower

Language models have a fixed amount of "brainpower" to allocate per output generation, or per workflow action. Asking a model to tackle multiple complex tasks at once means dividing up that brainpower.

For optimal results, break down compound tasks into discrete steps, with a separate focused prompt for each one. This allows the model selected in your chat or workflow action to concentrate on each subtask.

For example:

  • Multiple steps in one prompt: “Write a list of possible ideas for song titles and then write the song lyrics for the best song title.”

  • Steps broken out into multiple prompts (or workflow actions):

    • Prompt 1: "Brainstorm a list of 10 possible titles for a song about the pain of heartbreak and lost love."

    • Prompt 2: "Given this list of possible ideas for song titles, identify the song lyrics for the best song title."

By separating out the steps, you allow the model to focus its full faculties on each task in sequence. The result is higher quality across both the title options and selected song lyrics.
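
When you're calling a model programmatically rather than through a chat or workflow action, the same principle applies: send one focused prompt per request and feed the first response into the second. Below is a minimal sketch using the OpenAI Python SDK; the model name and the `ask` helper are illustrative choices, not requirements.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in your environment

def ask(prompt: str) -> str:
    """Send a single, focused prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; use whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Prompt 1: one focused task -- brainstorm titles only.
titles = ask(
    "Brainstorm a list of 10 possible titles for a song about "
    "the pain of heartbreak and lost love."
)

# Prompt 2: pass the first output in as context for the next focused task.
lyrics = ask(
    f"Here is a list of possible song titles:\n{titles}\n\n"
    "Select the best one and write the song lyrics for it."
)

print(lyrics)
```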

Also, be mindful that making the AI infer missing information depletes its available brainpower. While human communication relies heavily on inference, with AI it's better to err on the side of stating things explicitly, even if it feels redundant. Use labeling and terminology consistently throughout your prompt.

For example:

  • Inconsistent: “Write an email to a prospect of ours. Make it clear you would like to schedule a follow-up with the recipient.”

  • Consistent: “Compose an email to a prospect of ours. Make it clear you would like to schedule a follow-up with the prospect.”

Consistent labeling ensures that the model executing your prompt focuses on the most important elements.

Output Structure

To understand why prompting for output structure matters when using large language models (LLMs) in your day-to-day operations, it's important to remember that these models are probabilistic by design.

LLMs operate by predicting the likelihood of a sequence of words. They are trained on extensive datasets drawn from a vast corpus of text.

During training, the model learns to predict the probability distribution of the next word in a sequence, given the words that come before it. In other words, whatever prompt you give the model, it generates its response by repeatedly predicting the next best word given the words before it.

As a result, the same input can produce slightly different outputs each time, which underscores the importance of telling the model how to structure its output so the result is functional and usable by you and your team.

There are two ways to instruct the model to structure its output:

  • Implicit structure: Providing a loose guideline or description of the desired structure for the output.

  • Explicit structure: Providing a blank template that spells out the rigid, specific format the output must follow.

Let's take a look at some examples of implicit structure and explicit structure for various common tasks:

  • Data Extraction: Extracting items or elements from a specific piece of content or unstructured data.

    • Implicit:
      Extract each key element you find from the source text in a bulleted list

    • Explicit:
      Extract only four key elements from the source text:
      Property 1: [ ]
      Property 2: [ ]
      Property 3: [ ]
      Property 4: [ ]

  • Summarization: Distilling a large amount of content or data into a shorter version/brief

    • Implicit:
      A two-sentence summary followed by a bulleted list of supporting details

    • Explicit:

      Use the following output structure for your summarization:
      Summary: [A two-sentence summary]
      [bulleted list of each supporting detail you found]

  • Creative Writing: Writing short form or long form content (e.g. a blog post or thought leadership content)

    • Implicit:
      The blog post you write should be between 800 and 1,000 words

    • Explicit:

      Use the following output structure for your blog post:
      [One paragraph Introduction]
      [Three subsections, two paragraphs each]
      [One paragraph conclusion, containing a short CTA]
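
Explicit structure is especially useful when the output feeds into another tool or workflow step, because a fixed template is easy to parse. Below is a small sketch in Python of building an explicit-structure summarization prompt and splitting the reply back apart; the field labels and source text are placeholders for illustration.

```python
# Placeholder source text -- substitute the content you actually want summarized.
source_text = "..."

# An explicit template tells the model exactly how to lay out its answer.
prompt = (
    "Summarize the source text below.\n\n"
    f"Source text:\n{source_text}\n\n"
    "Use the following output structure for your summarization:\n"
    "Summary: [A two-sentence summary]\n"
    "- [Supporting detail 1]\n"
    "- [Supporting detail 2]\n"
    "- [Supporting detail 3]\n"
)

def split_summary(reply: str) -> tuple[str, list[str]]:
    """Split a reply that follows the explicit template into summary and details."""
    lines = [line.strip() for line in reply.splitlines() if line.strip()]
    summary = lines[0].removeprefix("Summary:").strip()
    details = [line.lstrip("- ").strip() for line in lines[1:]]
    return summary, details
```

Because the reply follows a known shape, downstream steps don't have to guess where the summary ends and the supporting details begin.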

Adding Examples

Examples can greatly increase the quality of your output as well. When providing examples, include between one and three pairs showing both the inputs a specific action might see and the outputs that were composed by completing the task with those inputs.

This lets the model see how you or someone on your team completed a process, taking your inputs and transforming them into usable, production-ready outputs.

A (your inputs) → through your process all the way to → B (your desired outputs)

You can use the following bracket notation to configure examples in your prompts:

<examples>

Inputs: [example inputs for the task if available]

Output: [matching example outputs]

</examples>

If you need a boost in output quality, one to three examples should do the trick. In some cases more may be necessary, but it's important to test this first. See testing workflows for more information.
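
If you assemble prompts in code, the same bracket notation can be embedded directly in the prompt string. The sketch below shows a single worked input→output pair wrapped in the <examples> tags described above; the product details are invented purely for illustration.

```python
# One worked input -> output pair, invented for illustration.
example_input = "Product: solar garden lamp. Order: 2 units, delivered May 3."
example_output = (
    "Hi there! Your two solar garden lamps arrived on May 3. "
    "Give them a full day in direct sunlight before first use, then..."
)

# The new inputs the model should handle, labeled the same way as the example.
new_input = "Product: ceramic pour-over coffee set. Order: 1 unit, delivered June 10."

prompt = (
    "Use the product description and order summary to compose an email that "
    "reminds the customer about the product they purchased and guides them "
    "through getting the most out of it.\n\n"
    "<examples>\n"
    f"Inputs: {example_input}\n"
    f"Output: {example_output}\n"
    "</examples>\n\n"
    f"Inputs: {new_input}\n"
    "Output:"
)
```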

Causative Language

Causative language changes the way the AI uses a resource in your prompt, pushing it either to stick more closely to that resource or to transform it more.

Integrative Language

If you find the AI isn’t utilizing something you gave it, try using more integrative language.

Using integrative language means closely weaving the desired elements of your prompt into your instructions. You can do this by changing the structure of your sentences, and using syntax that makes doing the task dependent on the elements you want incorporated.

For example:

  • Non-integrative instructions: “Above are my class notes about meditation. Write a blog post about the benefits of meditation.”

  • Integrative instructions: “Use my class notes above to draft a blog post about the benefits of meditation.”

Transformative Language

If you find the AI is sticking too closely to something you gave it, using transformative language can help.

Using transformative language means explaining to the model how it should transform the elements of your prompt, rather than regurgitate or repeat them.

You can use verbs that emphasize how you want those elements used, and you can break the task up and create distance with multiple verbs. You can also identify specific aspects of the element that you want it to use, rather than speaking about the whole element itself.

For example:

  • Non-transformative language: “Use this poem to write a song about climate change.”

  • Transformative language: “Analyze the themes present in this poem and then integrate those themes into a song that emphasizes the urgency of addressing climate change. Focus especially on the imagery and emotions the poem evokes, and ensure that the song’s structure enhances the message.”

Composing Your Prompts

Here's a basic prompt template you can adapt for a variety of tasks:

  1. Context setting: Establish the scenario and goals for the task. Provide background information and introduce any key resources the model should utilize. Frame this section like you're briefing a human assistant.

  2. Provide resources: Include any reference material, data, or examples the model will need to complete the task. Clearly label each resource.

  3. Step-by-step instructions: Break down the process for the task into detailed steps. Specify any special requirements or constraints. Consider providing a template outlining the desired output format.

  4. Summarize the task: Conclude with a concise restatement of the overall objective. This gives the model a final touchpoint to refer back to.

Example prompt:

You are an expert real estate marketer tasked with analyzing buyer personas to help the sales team better target prospects.

Buyer persona: Kelly Kondo, head of facilities for mid-size tech companies, 12 years of experience in office management and vendor relations.

Analyze the buyer persona and describe the following:

  • Key priorities and motivations

  • Primary challenges and pain points

  • Potential objections during the sales process

  • Strategies for building rapport and trust

Aim to equip the sales team with actionable insights they can use to tailor their approach to this persona.
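
If your team builds prompts in code, the four-part template above can be captured in a small helper so every prompt follows the same shape. The sketch below reuses the real estate example; the function name and argument layout are just one possible arrangement.

```python
def build_prompt(context: str, resources: dict[str, str],
                 steps: list[str], objective: str) -> str:
    """Assemble a prompt from the four parts: context, resources, steps, summary."""
    resource_block = "\n\n".join(f"{label}:\n{text}" for label, text in resources.items())
    step_block = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, start=1))
    return (
        f"{context}\n\n"          # 1. context setting
        f"{resource_block}\n\n"   # 2. labeled resources
        "Instructions:\n"
        f"{step_block}\n\n"       # 3. step-by-step instructions
        f"{objective}"            # 4. summary of the task
    )

prompt = build_prompt(
    context=(
        "You are an expert real estate marketer tasked with analyzing buyer "
        "personas to help the sales team better target prospects."
    ),
    resources={
        "Buyer persona": (
            "Kelly Kondo, head of facilities for mid-size tech companies, "
            "12 years of experience in office management and vendor relations."
        )
    },
    steps=[
        "Describe the persona's key priorities and motivations.",
        "Describe their primary challenges and pain points.",
        "List potential objections during the sales process.",
        "Suggest strategies for building rapport and trust.",
    ],
    objective=(
        "Aim to equip the sales team with actionable insights they can use "
        "to tailor their approach to this persona."
    ),
)

print(prompt)
```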

Additional Tips for Success

  • Frame your instructions positively in your prompts. Avoid "don't" phrasing and instead state what you do want.

  • Use vivid, evocative action verbs: "captivate the audience" instead of "write an intro."

  • Integrate key context elements tightly into the instructions to signal their importance to the model.

  • Provide examples where helpful to demonstrate your expectations.

With practice, prompt engineering skills will enable you to get impressive results from generative AI tools. Investing time to learn effective prompting pays dividends in the quality and utility of any large language model's outputs.
