
🤖 Using AI & AI Agents

Chapter 17 of 24

📑 Chapter 17: Introduction to MCP

Model Context Protocol: what and why

MCP (Model Context Protocol) is an open standard that lets any app (IDE, chatbot, agent framework) talk to external services in a uniform way. Instead of each app implementing its own integration for "read files", "search the web", or "query my DB", an MCP server exposes these capabilities as tools, resources, or prompts. The client (your app) connects to one or more MCP servers, discovers what they offer, and passes that to the LLM. When the model wants to use a tool or read a resource, the client sends a request to the right server and gets back the result. The payoff: one protocol, many servers. Cursor can use a filesystem server, a browser server, or your custom API server without writing custom code per integration.
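On the wire, MCP messages are JSON-RPC 2.0. A minimal sketch of what discovery and a tool call look like, using the spec's `tools/list` and `tools/call` method names (the `search_flights` tool itself is a made-up example, not part of the spec):

```python
import json

# The client discovers what a connected server offers:
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# A server might answer with a tool description like this
# (hypothetical tool, shape follows the MCP spec):
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "search_flights",
            "description": "Search flights to a destination",
            "inputSchema": {
                "type": "object",
                "properties": {"destination": {"type": "string"}},
                "required": ["destination"],
            },
        }]
    },
}

# When the LLM decides to use the tool, the client sends "tools/call":
call_request = {
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "search_flights",
               "arguments": {"destination": "Tokyo"}},
}

print(json.dumps(call_request, indent=2))
```

The `inputSchema` is plain JSON Schema, which is what lets any MCP client hand the tool straight to an LLM's tool-calling API without per-server glue.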

MCP conversation flow

1. User asks in the app (e.g. Cursor): "Search for flights to Tokyo"
2. The app (MCP client) already has the tool list from connected MCP server(s)
3. The app sends the user message + tool list to the LLM
4. The LLM returns a tool call: search_flights(destination: Tokyo)
5. The app sends the tool call to the MCP server → the server runs it (e.g. calls an airline API) → returns the result
6. The app sends the observation back to the LLM; the LLM may call more tools or give a final answer
7. The user sees: "Here are 3 flights to Tokyo…"
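The loop above can be sketched in a few lines, assuming a stubbed LLM and a stubbed MCP server (a real client speaks JSON-RPC to the server and calls an actual model API instead of these placeholder functions):

```python
def fake_llm(messages, tools):
    """Stand-in for the model: requests a tool first, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "search_flights",
                              "arguments": {"destination": "Tokyo"}}}
    return {"content": "Here are 3 flights to Tokyo..."}

def fake_mcp_server(name, arguments):
    """Stand-in for the MCP server executing the tool (e.g. airline API)."""
    return f"3 flights found to {arguments['destination']}"

tools = [{"name": "search_flights",
          "description": "Search flights to a destination"}]
messages = [{"role": "user", "content": "Search for flights to Tokyo"}]

while True:
    reply = fake_llm(messages, tools)      # steps 3-4: message + tools to LLM
    if "tool_call" not in reply:
        print(reply["content"])            # step 7: final answer to the user
        break
    call = reply["tool_call"]
    result = fake_mcp_server(call["name"], call["arguments"])  # step 5
    messages.append({"role": "tool", "content": result})       # step 6
```

Note that the LLM never talks to the server directly: the app sits in the middle, executing each requested call and appending the observation to the conversation.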

MCP: Model Context Protocol

A standard way for an LLM app (client) to talk to external capabilities (server).

Client (App / IDE) ↔ MCP Server

What the server can expose

  • MCP server: exposes tools, resources, and prompts to the client
  • Tools: actions the model can invoke (e.g. search, run code)
  • Resources: read-only data (files, docs) the model can read
  • Prompts: pre-defined prompt templates from the server
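Resources and prompts have their own request shapes, sketched below. The `resources/read` and `prompts/get` method names come from the MCP spec; the file URI and prompt name are hypothetical examples:

```python
# Reading a resource: addressed by URI, returned with content + MIME type.
read_request = {
    "jsonrpc": "2.0", "id": 3, "method": "resources/read",
    "params": {"uri": "file:///docs/handbook.md"},  # hypothetical URI
}
read_response = {
    "jsonrpc": "2.0", "id": 3,
    "result": {"contents": [{"uri": "file:///docs/handbook.md",
                             "mimeType": "text/markdown",
                             "text": "# Handbook\n..."}]},
}

# Fetching a server-defined prompt template, filled with arguments.
prompt_request = {
    "jsonrpc": "2.0", "id": 4, "method": "prompts/get",
    "params": {"name": "summarize_doc",             # hypothetical prompt
               "arguments": {"doc": "handbook.md"}},
}
```

The distinction matters for control: tools are model-invoked actions, resources are data the app chooses to attach as context, and prompts are templates the user picks from.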

Why it matters

Build one MCP server (e.g. "company docs") and any MCP-compatible app can use it. The LLM sees a consistent shape: tools have names and parameters; resources have URIs and content. That reduces integration work and keeps behavior predictable.