LLM Prompting Techniques


Prompting is an evolving art! There are many prompting techniques we can use to get the best out of large language models (LLMs). The same model can respond very differently depending on how we phrase the question.

Let’s explore these techniques.



Zero-shot prompting

Zero-shot prompting is the most common way to interact with an LLM. The user simply asks a question without providing any examples, relying on the model to understand the intent on its own.

Prompt:
Who won the FIFA World Cup in 2018?

Response:
France won the 2018 FIFA World Cup.

Most of the time, zero-shot prompting is all you need.
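
If you are calling a model from code, a zero-shot prompt is just a single user message. Below is a minimal sketch assuming the OpenAI Python SDK with an OPENAI_API_KEY in the environment; the model name is a placeholder, and any chat-completion API would look much the same.

# Zero-shot: a single user message, no examples, no persona.
# Assumes the OpenAI Python SDK with OPENAI_API_KEY set; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Who won the FIFA World Cup in 2018?"},
    ],
)
print(response.choices[0].message.content)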




Few-shot prompting

With few-shot prompting, you give the LLM a few examples so it has context for the pattern you want it to follow.

Prompt:
Give a short headline for each soccer match result:
Team A 3–1 Team B → Team A outclasses Team B in 3–1 win
Team C 0–0 Team D → Team C and Team D battle to goalless draw
Team E 2–1 Team F → ?

Response:
Team E edges past Team F with narrow 2–1 victory
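
With chat-style APIs, a common way to pass the examples is as alternating user/assistant messages placed before the real question. Here is a sketch under the same assumptions as above (OpenAI Python SDK, placeholder model name).

# Few-shot: the examples are supplied as prior user/assistant turns,
# and the model infers the pattern from them.
from openai import OpenAI

client = OpenAI()

task = "Give a short headline for this soccer match result: "

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": task + "Team A 3-1 Team B"},
        {"role": "assistant", "content": "Team A outclasses Team B in 3-1 win"},
        {"role": "user", "content": task + "Team C 0-0 Team D"},
        {"role": "assistant", "content": "Team C and Team D battle to goalless draw"},
        # The new case the model should answer in the same style.
        {"role": "user", "content": task + "Team E 2-1 Team F"},
    ],
)
print(response.choices[0].message.content)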




Chain-of-thought prompting

Chain-of-thought (CoT) prompting is a way of asking a model to solve problems step by step. This makes the answer clearer, and you can see how the model reached its conclusion.

Prompt: A soccer team scored 1 goal in the first half and 2 goals in the second half. Their opponent scored 2 goals total. Who won? Show your steps.

Response:
First half: Team A scored 1 goal.
Second half: Team A scored 2 more goals.
Total for Team A = 1 + 2 = 3 goals.
Opponent = 2 goals.
Since 3 > 2, Team A won the match.

Obviously, this is a very basic example, but you can see where this is going.
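
In code, the only change from zero-shot is the instruction asking the model to show its steps. Again, a rough sketch assuming the OpenAI Python SDK and a placeholder model name.

# Chain-of-thought: the "Show your steps." instruction is what asks the model
# to reason step by step instead of answering directly.
from openai import OpenAI

client = OpenAI()

question = (
    "A soccer team scored 1 goal in the first half and 2 goals in the second half. "
    "Their opponent scored 2 goals total. Who won?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question + " Show your steps."}],
)
print(response.choices[0].message.content)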




Tree-of-thought prompting

Tree-of-Thought (ToT) prompting is an extension of Chain-of-Thought. Here, instead of following one straight line of reasoning (step by step), the model explores multiple possible reasoning paths. At each step, it considers different options, evaluates them, and then chooses the best path forward.

This is useful for decision-making, planning, and creative problem solving, where more than one possible solution exists.

Prompt:
Your team is losing 1–0 with 10 minutes left. Suggest different strategies to equalize, explain pros and cons, then choose the best one.

Response:
Branch 1: Attack aggressively
Pros: Higher chance of scoring quickly.
Cons: Risk of conceding another goal on counterattack.

Branch 2: Play through midfield patiently
Pros: Keeps possession, builds pressure.
Cons: Time may run out before creating chances.

Branch 3: Bring on fresh attacking substitutes
Pros: New energy can change the game.
Cons: Substitutes may need time to adapt.

Decision:
The best option is Branch 3 — bring on attacking substitutes, combine with more aggressive play, to maximize scoring chances while balancing risks.
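
One simple way to approximate this in code is to split the work into two calls: one to generate the branches and one to evaluate them and choose a path. This is only a sketch of the idea; full tree-of-thought implementations search over many partial reasoning steps. Same assumptions as before (OpenAI Python SDK, placeholder model name).

# Simplified tree-of-thought: explore several branches, then evaluate and choose.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name


def ask(prompt: str) -> str:
    """Send a single user prompt and return the model's reply as text."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


situation = "Your team is losing 1-0 with 10 minutes left."

# Step 1: explore several candidate branches.
branches = ask(
    f"{situation} Suggest three different strategies to equalize, "
    "with pros and cons for each."
)

# Step 2: evaluate the branches and choose the best path forward.
decision = ask(
    f"{situation}\n\nCandidate strategies:\n{branches}\n\n"
    "Evaluate these options and choose the single best one, explaining why."
)
print(decision)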




Persona prompting

Persona prompting is like zero-shot prompting, except that you also assign the model a persona or role to adopt.

Prompt: You are a soccer coach giving halftime advice. My team is losing 2–0. What should I tell the players?

As you can see, it’s really zero-shot prompting with additional persona-related context.
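
With chat APIs, the persona typically goes in the system message rather than the user prompt. A minimal sketch, again assuming the OpenAI Python SDK and a placeholder model name.

# Persona prompting: the role/persona is set in the system message,
# and the user message carries the actual question.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a soccer coach giving halftime advice."},
        {"role": "user", "content": "My team is losing 2-0. What should I tell the players?"},
    ],
)
print(response.choices[0].message.content)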




Conclusion

In practice, prompting techniques are tools to get the best results from large language models. Zero-shot prompts let you ask questions directly, while few-shot prompts show the model examples so it can follow a pattern. Chain-of-thought prompting helps the model reason step by step, making its answers clearer and easier to follow. Other approaches—like persona prompting or asking the model to consider multiple options—help guide responses, keep them structured, and improve reliability. Choosing the right technique depends on your goal: whether you want accurate answers, creative output, or transparent reasoning.


