Action: Generate Text

Overview

The Generate Text action is the most versatile and flexible action, serving as a "pocket knife" that allows you to execute any custom prompt against any supported model. It gives you full control over how to leverage the model's capabilities to perform a wide variety of text generation tasks. As such, the Generate Text action is frequently used in many different types of Workflows where you need granular control over the model's inputs and outputs.

Usage Examples

  • Generate text summaries of a webpage - The generate text action can be used to create concise summaries of the content found on webpages. By providing the webpage URL or text as input, along with instructions to summarize the key points, the model will generate a brief summary capturing the main ideas.

  • Test model performance across several models - With the ability to specify which model to use, the generate text action allows you to test the same prompt across multiple models. This enables comparing and evaluating the performance of different models on a given task or use case; a conceptual sketch of this idea follows this list.

  • Generate a 70-character title tag for a webpage - Need a succinct title tag for a webpage? Leverage the generate text action by inputting the page content and specifying a 70-character limit. The model will create a title tag summarizing the page within that character constraint.

  • Reformat the output of an existing action - The output from other actions can be passed into the generate text action to reformat it as needed. For example, you could make a long output more concise by setting a lower maximum length. Or expand a brief output by increasing the length parameters.

  • Run a high quality custom k-shot prompt - For advanced use cases requiring finely tuned responses, you can input a custom k-shot prompt into the generate text action. This allows executing highly specialized prompts to obtain high quality, tailored outputs from the model.

  • Extract information from text - While not the primary use case, the flexibility of the generate text action means it can also be used for information extraction from text by crafting an appropriate prompt.
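
For readers who think in code, here is a rough sketch of the "test one prompt across several models" idea from the list above, written against an OpenAI-style chat client. The model names and page text are placeholders, and the Generate Text action performs the equivalent call for you inside a Workflow, so this is purely conceptual.

    from openai import OpenAI

    client = OpenAI()  # assumes an API key is configured in the environment

    page_text = "Fresh basil, mint, and thyme you can grow on a windowsill..."  # placeholder content
    prompt = f"Write a title tag of at most 70 characters for this page:\n\n{page_text}"

    # Run the identical prompt against two models and compare the results side by side.
    for model in ["gpt-4o", "gpt-4o-mini"]:  # placeholder model identifiers
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0.2,
        )
        print(f"--- {model} ---")
        print(response.choices[0].message.content)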

Inputs

  • Prompt - The specific task or instructions you are providing the model along with the data that is required to complete the task.

    Example: "Blog Post: Review the provided Blog Post and generate a concise and compelling summary that could be used to inform a reader on the main topics that are discussed."
  • Background - The system prompt, context, or background you are providing the model on how the task should be performed and the role you are asking the AI to play. (A conceptual sketch after these examples shows how the Prompt and Background map onto a typical model call.)

    Example: "You are a content marketer that excels at reviewing blog posts and generating summaries. Your objective is to generate a concise and compelling summary of the provided Blog Post. Your summary should use the following structure: [1-2 sentence summary of the key points discussed in the blog]. * [3-5 supporting bullets that reinforce the key points discussed]"
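
Conceptually, the Background behaves like a system message and the Prompt like a user message in a chat-style model call. The sketch below illustrates that mapping against an OpenAI-style client; it is not Copy.ai's internal implementation, and the model name is a placeholder.

    from openai import OpenAI

    client = OpenAI()

    # Background -> the system message: role, context, and desired output structure.
    background = (
        "You are a content marketer that excels at reviewing blog posts and "
        "generating summaries. Use this structure: [1-2 sentence summary of the "
        "key points] followed by 3-5 supporting bullets."
    )

    # Prompt -> the user message: the specific task plus the data needed to complete it.
    prompt = "Blog Post: <blog post text goes here>\n\nGenerate a concise and compelling summary."

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": background},
            {"role": "user", "content": prompt},
        ],
    )
    print(response.choices[0].message.content)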

Advanced Inputs

  • Model - The specific large language model to execute this request. Note that model selection is an advanced topic that should be considered carefully when designing new Workflows. Copy.ai is constantly adding new models and model providers.

  • Temperature - The creativity or inherent randomness of the model in generating your response. This is also an advanced topic, but generally, the lower the temperature, the more consistent the outputs will be between sequential executions of this step in the Workflow. The higher the temperature, the more "creative" the model will be, which can sometimes come at the cost of the model drawing incorrect conclusions or deviating from instructions. Different prompts perform best at different temperatures, but as a general rule it is best to develop your Workflows at low temperatures and experiment with raising them once the Workflow is working reliably. (See the sketch after these inputs for where Temperature fits in a typical model call.)

  • Word or Character Limitation - If you would like the response to be a particular length, this input lets you choose between a word-based or character-based limit and governs the behavior of the Min Length and Max Length inputs below:

    • Min Length - The minimum number of words or characters you would like back in the response. If no minimum length is needed, this can be left blank.

    • Max Length - The maximum number of words or characters you would like back in the response. If no maximum length is needed, this can be left blank.
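
To make the advanced inputs concrete, the sketch below shows roughly where each one would land in a typical chat-style completion request. Copy.ai sets these from the Workflow builder and handles details such as word- versus character-based limits for you; max_tokens is only a loose analogue of Max Length, and the model name is a placeholder.

    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # Model: which LLM executes the request (placeholder name)
        temperature=0.1,      # Temperature: low values favor consistent, repeatable outputs
        max_tokens=120,       # Loose analogue of Max Length (counted in tokens, not words or characters)
        messages=[
            {"role": "system", "content": "You write concise webpage title tags."},
            {"role": "user", "content": "Write a title tag of at most 70 characters for a page about indoor herb gardening."},
        ],
    )
    print(response.choices[0].message.content)

Most chat-style APIs expose no direct Min Length parameter, so a minimum is typically communicated through the instructions themselves; the Generate Text action accepts Min Length directly, so you do not need to manage that detail yourself.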

The key inputs are the Prompt and Background. The Prompt contains the specific instructions and data for the model, while the Background provides context on how to approach the task and what role to play.

The advanced inputs like Model, Temperature, and Length Limitations allow more fine-tuned control but require careful consideration, as choosing the wrong settings can impact response quality.

Outputs

The Generate Text action does not produce a specific, predefined output. Instead, the output is the text generated by the language model based on the prompt, background context, and other inputs provided. The generated text can take any form depending on the task specified in the prompt, such as summaries, titles, reformatted text, or any other type of text output.

The output is highly variable and flexible, as the action is designed to leverage language models for a wide range of text generation tasks. The quality, coherence, and truthfulness of the generated text will depend on factors like the quality of the prompt, the amount of context and examples provided, the choice of language model, and the temperature setting used.

In essence, the output is simply the model's attempt at completing the task defined by the prompt, within the constraints set by the other input parameters like length limitations. There is no predefined structure or format for the output, as it aims to give users full flexibility in leveraging language models for custom text generation needs.

Troubleshooting

  • Word or character limitations not working for specific elements - The word or character limitation applies to the entire output generated by the model. If you want to impose a specific word or character limit on a portion of the output (e.g. generating bullet points of exactly 75 characters each), you need to generate each element separately, each with its own word/character limit; see the sketch at the end of this section. Trying to limit only part of a single combined output will not work.

  • Hallucinations or inaccuracies in the output - Hallucinations, where the model generates incorrect or made-up information, are a known limitation of large language models. They are more likely when the model lacks the full context or data needed to accurately perform the requested task. Common causes include:

    • Missing context/data the model needs to complete the task accurately

    • A temperature setting that is too high (lower temperatures tend to reduce hallucinations)

    • Missing or unclear instructions on how to perform the task

    To reduce hallucinations, ensure the model has all the necessary context/data, use a low temperature setting (e.g. 0.1) when testing prompts, and provide clear instructions in the prompt. Refer to prompt writing best practices for avoiding hallucinations.
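
The per-element approach from the first troubleshooting item looks roughly like the sketch below: each bullet gets its own model call (its own Generate Text step in a Workflow) with its own length instruction, rather than one call trying to constrain pieces of a combined output. The client, model name, and topics are placeholders.

    from openai import OpenAI

    client = OpenAI()

    topics = ["pricing", "integrations", "support"]  # hypothetical bullet topics
    bullets = []
    for topic in topics:
        # One call per element, each with its own length instruction and a low
        # temperature to keep the results consistent.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            temperature=0.1,
            messages=[{
                "role": "user",
                "content": f"Write one bullet point of at most 75 characters about the product's {topic}.",
            }],
        )
        bullets.append(response.choices[0].message.content.strip())

    print("\n".join(f"* {b}" for b in bullets))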

Related Actions

  • Extract Information from Text - This action allows you to evaluate a block of text and extract a defined piece of information or answer a specific question based on that text. It complements the Generate Text action by enabling you to retrieve targeted information from existing text, rather than generating new text from scratch.

  • Extract Data from Text - This action is designed to evaluate a block of text and extract multiple structured pieces of information in a JSON format. Similar to Extract Information from Text, it focuses on extracting data from unstructured text sources. However, Extract Data from Text provides the extracted information in a more structured data format, making it suitable for scenarios where you need to work with the extracted data programmatically or feed it into other systems. This action can be used in conjunction with Generate Text to first generate text content, and then extract structured data from that generated content.
