Zero-shot prompting gives the model only the task and the input, with no example outputs; the model relies entirely on its training (e.g. "Classify this as positive or negative"). It works well for common, well-defined tasks. Few-shot prompting includes 1–5 example input→output pairs in the prompt so the model can mimic their format and style; use it when you need consistent structure (e.g. JSON, bullet lists) or when the task is niche and the model might otherwise guess wrong. Long-context prompting puts a long document (or many retrieved chunks) in the prompt so the model can reason over it; use it when the answer depends on the full content, and watch token limits and cost. The choice between zero-shot, few-shot, and long context affects accuracy, token usage, and latency.
Zero-shot
No examples — just the task. The model relies on its training. Good for simple, well-defined tasks.
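A zero-shot prompt is just the task plus the input. As a minimal sketch (the wording and the `build_zero_shot` helper are illustrative, not from any specific library):

```python
# Sketch of a zero-shot prompt: state the task, give the input, no examples.
# build_zero_shot is a hypothetical helper name.
def build_zero_shot(text: str) -> str:
    return (
        "Classify the sentiment of the following review as positive or negative.\n"
        f'Review: "{text}"\n'
        "Sentiment:"
    )

prompt = build_zero_shot("Great day")
print(prompt)
```

The model sees only the instruction and the input, so everything else comes from its training.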
Few-shot
You give 1–5 example input→output pairs in the prompt. The model mimics the format and style. Good for consistent output format or niche tasks.
Example 1: Input: "Great day" → Output: positive
Example 2: Input: "Terrible service" → Output: negative
Now: Input: "It was okay" → Output: ?
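The example pairs above can be assembled into a prompt programmatically. A minimal sketch, assuming a simple `Input: ... -> Output: ...` format (the helper name and format are illustrative):

```python
# Sketch: build a few-shot prompt from example input -> output pairs.
# The examples come from the text above; the format is an assumption.
EXAMPLES = [
    ("Great day", "positive"),
    ("Terrible service", "negative"),
]

def build_few_shot(text: str) -> str:
    lines = ["Classify the sentiment as positive or negative.\n"]
    for inp, out in EXAMPLES:
        lines.append(f'Input: "{inp}" -> Output: {out}')
    # The final line is left open for the model to complete.
    lines.append(f'Input: "{text}" -> Output:')
    return "\n".join(lines)

print(build_few_shot("It was okay"))
```

Because the examples fix the format, the model is likely to answer with a single label rather than a free-form sentence.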
Long context
You put a long document (or many chunks) in the prompt so the model can reason over it. Use when the answer depends on full content. Watch token limits and cost.
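Watching the token limit usually means budgeting before the document goes into the prompt. A minimal sketch, using a rough ~4 characters-per-token heuristic (an assumption; in practice you would count with the model's actual tokenizer, and chunking or retrieval beats naive truncation):

```python
# Sketch: fit a long document into a token budget before prompting.
# CHARS_PER_TOKEN = 4 is a rough heuristic, not an exact rule.
CHARS_PER_TOKEN = 4

def fit_to_budget(document: str, max_tokens: int) -> str:
    max_chars = max_tokens * CHARS_PER_TOKEN
    if len(document) <= max_chars:
        return document
    # Naive truncation; real pipelines chunk and retrieve instead.
    return document[:max_chars]

long_doc = "word " * 10_000  # stand-in for a long document
prompt = (
    "Answer using only the document below.\n\n"
    f"{fit_to_budget(long_doc, max_tokens=1000)}\n\n"
    "Question: ...\n"
)
```

Truncation is the crudest option; it exists here only to make the token/cost trade-off concrete.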
When to use which
Zero-shot for common, well-defined tasks; few-shot when the output format must be consistent or the task is niche; long context when the answer depends on a full document. Each step up tends to add tokens, cost, and latency.
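The trade-offs above can be sketched as a small decision helper (the criteria and names are illustrative assumptions, not fixed rules):

```python
# Sketch: pick a prompting strategy from a few task properties.
# The decision order reflects the trade-offs in the text; thresholds
# and criteria here are assumptions for illustration.
def choose_strategy(needs_full_document: bool,
                    needs_strict_format: bool,
                    task_is_common: bool) -> str:
    if needs_full_document:
        return "long-context"  # answer depends on the whole document
    if needs_strict_format or not task_is_common:
        return "few-shot"      # examples pin down format or niche behavior
    return "zero-shot"         # common, well-defined task

print(choose_strategy(False, True, True))  # few-shot
```

In practice these strategies also combine: few-shot examples can sit alongside a long document when both format and full content matter.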