Chapter 5 of 24

🎯 Chapter 5: Prompting: Zero-Shot, Few-Shot & Long Context

When to use no examples, few examples, or long context

Zero-shot prompting means you give the model only the task and the input, with no example outputs; the model relies entirely on its training (e.g. "Classify this as positive or negative"). It works well for common, well-defined tasks.

Few-shot prompting includes 1–5 example input→output pairs in the prompt so the model can mimic their format and style. Use it when you need consistent structure (e.g. JSON, bullet lists) or when the task is niche and the model might otherwise guess wrong.

Long context means you put a long document (or many retrieved chunks) in the prompt so the model can reason over it. Use it when the answer depends on the full content, and watch token limits and cost.

The choice between zero-shot, few-shot, and long context affects accuracy, token usage, and latency.

Zero-shot

User: "Classify: This product is great"
LLM (no examples) → "Positive"

Few-shot

User: examples + "Classify: Terrible service"
LLM (mimics format) → "Negative"

Zero-shot

No examples — just the task. The model relies on its training. Good for simple, well-defined tasks.

Task: "Classify this tweet as positive, negative, or neutral. Tweet: I love this product!"
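The task above can be sketched as a prompt-building step. A minimal sketch; the helper name `build_zero_shot_prompt` is hypothetical, and you would pass the resulting string to whatever LLM client you use:

```python
# A zero-shot prompt is just the task description plus the input -- no examples.
def build_zero_shot_prompt(tweet: str) -> str:
    # The model falls back on its training to produce the label.
    return (
        "Classify this tweet as positive, negative, or neutral.\n"
        f"Tweet: {tweet}\n"
        "Label:"
    )

print(build_zero_shot_prompt("I love this product!"))
```

Ending the prompt with "Label:" nudges the model to answer with just the label rather than a full sentence.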

Few-shot

You give 1–5 example input→output pairs in the prompt. The model mimics the format and style. Good for consistent output format or niche tasks.

Example 1: Input: "Great day" → Output: positive

Example 2: Input: "Terrible service" → Output: negative

Now: Input: "It was okay" → Output: ?
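The pattern above can be assembled programmatically. A minimal sketch; `build_few_shot_prompt` is a hypothetical helper, and the example pairs are the ones from this section:

```python
# Assemble a few-shot prompt from input->output example pairs.
def build_few_shot_prompt(examples, new_input):
    # Each example shows the model the exact output format to mimic.
    lines = [f'Input: "{text}" -> Output: {label}' for text, label in examples]
    # End with the new input and an empty output slot for the model to fill.
    lines.append(f'Input: "{new_input}" -> Output:')
    return "\n".join(lines)

examples = [("Great day", "positive"), ("Terrible service", "negative")]
print(build_few_shot_prompt(examples, "It was okay"))
```

Keeping the examples in one place like this also makes it easy to swap or add pairs when the model keeps getting a particular case wrong.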

Long context

You put a long document (or many chunks) in the prompt so the model can reason over it. Use when the answer depends on full content. Watch token limits and cost.
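One way to respect token limits is to pack chunks into the prompt until a budget is spent. A rough sketch, assuming ~4 characters per token as a heuristic (real token counts vary by tokenizer); `pack_chunks` is a hypothetical helper:

```python
# Fit retrieved chunks into an approximate token budget before a long-context call.
def pack_chunks(chunks, question, max_tokens=4000, chars_per_token=4):
    budget = max_tokens * chars_per_token          # approximate character budget
    header = "Answer using only the context below.\n\n"
    footer = f"\n\nQuestion: {question}\nAnswer:"
    budget -= len(header) + len(footer)            # reserve room for instructions
    kept = []
    for chunk in chunks:
        cost = len(chunk) + 2                      # chunk plus separator
        if cost > budget:
            break                                  # stop once the budget is spent
        kept.append(chunk)
        budget -= cost
    return header + "\n\n".join(kept) + footer

chunks = ["First section of the document.", "Second section.", "Third section."]
print(pack_chunks(chunks, "What does the document say?", max_tokens=60))
```

For production use, a real tokenizer gives exact counts; the character heuristic is only a cheap approximation.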

When to use which

Zero-shot: simple, generic tasks.

Few-shot: custom formats or rare tasks.

Long context: Q&A over docs, summarization of long text, or when you need to inject many chunks (e.g. RAG).