Clearer Thinking Team

Boosting Your Productivity With AI: Important tips for using LLMs

In this section, we’re going to help you get the most out of whichever LLM (Large Language Model) you’ve chosen to use, by telling you about prompt engineering.


You see, when you’re using an LLM, you are essentially engaging in a conversation with an AI. The way you phrase your prompt (that is, the input or question you provide) plays a crucial role in the quality and relevance of the response you receive. This is where prompt engineering comes in. It’s the art and science of crafting prompts that effectively guide the LLM to produce the most useful and accurate outputs for your specific needs.




Whole courses could be written on prompt engineering, and we're only going to scratch the surface here, but these are some key aspects of prompt engineering to consider as you get started:


1️⃣ Detail and reminders:


Make your prompt as detailed as you can. A good, detailed prompt goes beyond a simple question or statement; it also:


  • provides context 

  • clarifies the intent

  • specifies the desired format

  • specifies the desired level of detail


And, if your prompt has many components, it can be helpful to end it with a reminder of the most important ones.


For example, instead of saying:
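"Write me a cover letter."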



You could say:
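"Write me a cover letter for a marketing manager position at a mid-sized software company. I have eight years of experience in digital marketing and led a team of five at my last job. Keep it under 300 words, in a confident but friendly tone, formatted as three short paragraphs. The most important things to remember: under 300 words, and tailored to a marketing manager role."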



Here’s another example. Instead of saying:
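"Give me some dinner ideas."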



You could say:
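"Suggest five vegetarian dinners I can cook on weeknights in under 30 minutes, using ingredients from an ordinary supermarket. For each one, list the main ingredients and give two or three sentences of instructions. Remember: vegetarian, under 30 minutes, brief instructions."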



Note that if you use AI to write content for you (e.g., for your blog), it's important to review it for accuracy and edit it so that it expresses ideas you agree with.



2️⃣ Avoid ambiguity:


If you ask an LLM "When was Bernoulli born?" without specifying which Bernoulli you're referring to (since there were several notable mathematicians with that last name), the model can't know which one you mean, and it probably won't ask for more detail. It's more likely to just pick one interpretation of your question and answer that. This might be fine in some situations, but that ambiguity can cause problems if you're not careful! The fix is to spell out exactly what you mean, for example: "When was Jacob Bernoulli, the mathematician who proved the first version of the law of large numbers, born?"




3️⃣ Few-shot vs. Zero-shot learning: 


In the context of prompt engineering, these terms refer to whether you include examples in your prompt or not. Few-shot learning typically improves the accuracy and reliability of LLMs, so it is a valuable method to use.


Few-shot learning: This is when your prompt includes a few examples (‘shots’) of the desired output, guiding the model to understand and replicate the format or style you need. For example: 
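Classify each piece of customer feedback as "bug", "feature request", or "praise".

Feedback: "The app crashes whenever I rotate my phone."
Label: bug

Feedback: "It would be great if I could export my notes as a PDF."
Label: feature request

Feedback: "Honestly the cleanest to-do app I've ever used."
Label: praise

Feedback: "The search bar ignores anything I type after a hyphen."
Label: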



It makes sense to do this when:


  • Specificity is required: If your task requires a specific format, style, or type of response that the LLM might not understand from its training, providing a few examples can guide the model more precisely.


  • The task is uncommon or complex: For more niche or complex queries that the LLM might not have been extensively exposed to during its training (like the one in the example above), it can help to include a few examples.


  • Quality of response is crucial: In situations where the accuracy and quality of the response are particularly important, few-shot learning can help ensure that the LLM understands and adheres to your requirements.


Zero-shot learning: This is when you write a prompt without providing any specific examples (‘shots’), relying on the model's pre-existing knowledge and training to generate a response. For example:
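"Summarize the main causes of the French Revolution in three or four sentences."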



It makes sense to do this when you’re asking:


  • General or broad inquiries on often-discussed topics: If your query is general or broad in nature, such as asking for a summary of a well-known concept or event, the LLM's pre-existing knowledge is often sufficient to generate a relevant response without the need for specific examples.


  • Creative output questions: If you’re asking for something creative like a poem, or a story, or a character design, then providing examples can end up limiting the LLM’s creativity and result in it imitating your examples more than you want.


  • Simple fact-based questions: When asking straightforward, fact-based questions, such as the definition of a term or a historical date, examples are unnecessary since the response required is direct and unambiguous.


  • Questions that require exploratory or diverse responses: If you're looking to explore a range of perspectives or diverse ideas on a topic, providing examples might narrow the LLM's responses, preventing you from seeing the full range of its capabilities.


  • And, of course, when you can’t think of any examples!



4️⃣ Iterate: 


LLMs are smart enough to remember previous prompts and replies from earlier in your conversation (up to a point: if the chat becomes long enough, they will start to forget portions of it). This means that, as long as the chat doesn't get too long, you can iterate on the replies they give you, making them better and better each time. Here's a simple example:
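You: Write a short thank-you email to a colleague who stayed late to help me prepare a big presentation.

LLM: (writes a first draft of the email)

You: That's good, but make the tone warmer, and add a line offering to return the favor.

LLM: (revises the draft)

You: Now shorten it to three sentences and give it a subject line.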




5️⃣ Ask them for prompts:


If you are struggling to write a prompt that suits your needs, or if you keep inputting prompts and not getting the results you want, you can always ask the LLM to help you write a better prompt! For example, here's an exchange in which the LLM first asks a few clarifying questions:
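You: I want your help planning my work week, but I'm not sure how to phrase the request. Ask me whatever questions you need, then write a good prompt I can use.

LLM: Happy to help. A few questions: What does a typical week look like for you? Are there fixed commitments I should plan around? And how would you like the plan formatted?

You: I'm a freelance designer with two client projects plus admin tasks. I go to the gym on Tuesday and Thursday mornings, and I want my evenings free. A simple day-by-day list would be ideal.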



And then the LLM outputs a much better prompt!
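"Act as a weekly planner for a freelance designer. Create a Monday-to-Friday plan that makes progress on two client projects and routine admin tasks, works around gym sessions on Tuesday and Thursday mornings, and leaves evenings free. Present the plan as a simple day-by-day list with two or three focused work blocks per day."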



