🤖 Using AI & AI Agents

Chapter 6 of 24

⛓️ Chapter 6: Prompt Chaining

Multi-step prompts: output of one step becomes input to the next

Prompt chaining means running multiple prompts in sequence: the output of one step becomes the input to the next. For example, step 1: "Summarize this article" → a summary. Step 2: "From this summary, extract three key points" (the summary is the input). Step 3: "Turn these points into a tweet" (the points are the input). Each step is a separate LLM call; you pass the previous result into the next prompt. Use chaining when a task naturally breaks into stages (summarize → extract → format) or when you want to check or refine an intermediate result before continuing.
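The three-step chain above can be sketched in a few lines of Python. Here `call_llm` is a hypothetical stand-in for a real model call (e.g. an OpenAI or Anthropic client); it is stubbed out so the control flow runs without an API key:

```python
def call_llm(prompt: str) -> str:
    # Stub: a real implementation would send `prompt` to a model
    # and return its completion. Faked here so the sketch is runnable.
    return f"[model output for: {prompt[:40]}]"

article = "Long article text goes here..."

# Step 1: summarize the article.
summary = call_llm(f"Summarize this article:\n\n{article}")

# Step 2: the summary (output 1) becomes input to the next prompt.
points = call_llm(f"From this summary, extract three key points:\n\n{summary}")

# Step 3: the points (output 2) feed the final formatting step.
tweet = call_llm(f"Turn these points into a tweet:\n\n{points}")

print(tweet)
```

The key idea is that each variable (`summary`, `points`) is both the output of one call and part of the prompt for the next.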

Prompt chaining flow

Step 1: User or system sends "Summarize this article" → LLM returns a summary (output 1)
Step 2: "From this summary, extract 3 bullet points" (summary as input) → LLM returns bullet points (output 2)
Step 3: Use the bullet points in the next step, e.g. generate an email, fill a form, or feed another agent

Each step is a single prompt; the output of step N becomes part of the input to step N+1. This pattern suits complex tasks you can split into stages.
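Because every step has the same shape ("previous output in, new output out"), a chain can be expressed as a reusable helper. This is a minimal sketch, not a library API: `run_chain` and the `{input}` placeholder convention are assumptions made for illustration, and the lambda is a fake LLM that just tags its prompt so the chaining is visible:

```python
def run_chain(llm, templates, first_input):
    """Run prompt templates in sequence; each template receives the
    previous step's output via the {input} placeholder."""
    result = first_input
    for template in templates:
        result = llm(template.format(input=result))
    return result

# Fake LLM standing in for a real model call.
fake_llm = lambda prompt: f"OUT({prompt})"

final = run_chain(
    fake_llm,
    [
        "Summarize this article:\n{input}",
        "From this summary, extract 3 bullet points:\n{input}",
        "Turn these bullets into a short email:\n{input}",
    ],
    "article text",
)
```

Swapping `fake_llm` for a real client call turns this into a working three-step chain.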

Example: When to chain

Good for: multi-step writing (draft → edit → adjust tone), analysis (summarize → classify → recommend), or form-filling (extract entities → validate → map to schema). Avoid it when a single clear prompt is enough — each extra step adds latency and cost.
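The "extract → validate" case is where chaining pays off most: because each step is ordinary code, you can check an intermediate result before spending another LLM call on it. A minimal sketch, assuming a hypothetical `extract_step` helper and a stub LLM that returns well-formed JSON in place of a real model:

```python
import json

def extract_step(llm, text):
    # Ask for JSON so the intermediate result can be validated in code
    # before the chain continues to the next step.
    raw = llm(f"Extract name and email as JSON from:\n{text}")
    data = json.loads(raw)  # fails fast on malformed model output
    if "email" not in data:
        raise ValueError("validation failed: missing email")
    return data

# Stub standing in for a real model call (a real LLM may need a retry
# loop here, since it can return malformed JSON).
stub = lambda prompt: '{"name": "Ada", "email": "ada@example.com"}'

entities = extract_step(stub, "Contact Ada at ada@example.com")
```

Only after `entities` passes validation would you feed it to the next step (e.g. mapping it onto a form schema).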