Components of an AI agent
Every useful agent is built from four building blocks. Memory is what the agent "remembers": the current conversation and tool outputs (short-term) and, in advanced setups, a vector store or database for facts across sessions (long-term). Prompting is the system prompt and user message: they define the agent's role, rules, and the task. Tools are callable actions (search, calculator, run code, APIs); the model chooses which to call and with what arguments. Resources are read-only data (files, docs, databases) the agent can pull in, often via RAG or MCP.
Memory
Short-term = current context (recent messages, tool results). Long-term = vector store or DB for facts and preferences across sessions.
Example: Agent remembers 'user asked for London weather' in this turn; long-term stores 'user prefers Celsius' for future turns.
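The split above can be sketched in a few lines. This is a minimal illustration with hypothetical names (`short_term`, `long_term`, `remember_turn`, `store_fact`); in a real agent the long-term store would be a vector DB or database, not a dict.

```python
# Minimal sketch of agent memory. A plain dict stands in for a real
# long-term store (vector DB or database).

short_term = []   # current context: this session's messages and tool results
long_term = {}    # persists across sessions

def remember_turn(role, content):
    """Append a message to the short-term context for this session."""
    short_term.append({"role": role, "content": content})

def store_fact(key, value):
    """Persist a fact or preference for future sessions."""
    long_term[key] = value

# This turn: the agent remembers the user asked for London weather.
remember_turn("user", "What's the weather in London?")
# Long-term: the unit preference survives into future turns.
store_fact("preferred_units", "Celsius")
```

On the next session, `short_term` starts empty again, but `preferred_units` is still available.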
Prompting
System prompt (role, rules), user instructions, and optionally few-shot examples. This sets how the agent reasons and what it can do.
Example: System: 'You are a travel assistant. Always cite sources.' User: 'Best time to visit Japan?'
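Assembling the prompt can be sketched as building a message list. Role-tagged messages (`system`, `user`, `assistant`) are a common convention across model APIs, but the exact schema depends on the provider; `build_messages` is a hypothetical helper.

```python
# Minimal sketch of prompt assembly: system prompt, optional few-shot
# examples, then the user's message.

def build_messages(system_prompt, user_message, few_shot=()):
    messages = [{"role": "system", "content": system_prompt}]
    for example_user, example_assistant in few_shot:
        messages.append({"role": "user", "content": example_user})
        messages.append({"role": "assistant", "content": example_assistant})
    messages.append({"role": "user", "content": user_message})
    return messages

msgs = build_messages(
    "You are a travel assistant. Always cite sources.",
    "Best time to visit Japan?",
)
```

The resulting list is what gets sent to the model; everything the agent "is" lives in that system message.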
Tools
Functions the agent can call: search, calculator, run code, call APIs (e.g. weather, calendar). The model outputs a tool call; the app runs it and returns the result.
Example: Model returns get_weather(city='Paris'); app calls API, gets 18°C; result is sent back so the model can say 'It’s 18°C in Paris.'
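The model-proposes, app-executes loop looks roughly like this. Both the `get_weather` tool and the model's output are simulated stand-ins; in a real agent the tool call comes back from an LLM response and `get_weather` would hit an actual weather API.

```python
# Minimal sketch of the tool-call loop: the model outputs a tool call,
# the app dispatches it and returns the result.

def get_weather(city):
    """Stand-in for a real weather API call."""
    return {"city": city, "temp_c": 18}

TOOLS = {"get_weather": get_weather}

def run_tool_call(call):
    """App side: look up the named tool and run it with the model's arguments."""
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Simulated model output: it chose the tool and the arguments.
call = {"name": "get_weather", "arguments": {"city": "Paris"}}
result = run_tool_call(call)
# result is sent back to the model, which can now say "It's 18°C in Paris."
```

Note the division of labor: the model only decides *which* tool and *what* arguments; the app owns execution, which is where you enforce validation and permissions.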
Resources
Read-only data the agent can access: files, docs, databases. Often exposed via RAG (retrieve relevant chunks) or MCP (Model Context Protocol) resources.
Example: Agent needs the company policy doc; app retrieves relevant chunks from a vector DB and adds them to the prompt.
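The retrieve-and-inject step can be sketched as below. Naive word overlap stands in for embedding similarity, and the policy chunks are made up; a real pipeline would embed the query and search a vector DB.

```python
# Minimal sketch of RAG over a toy chunk store: rank chunks by overlap
# with the query, then prepend the top hits to the prompt.

POLICY_CHUNKS = [
    "Expenses over $100 require manager approval.",
    "Remote work is allowed up to three days per week.",
    "All travel must be booked through the company portal.",
]

def retrieve(query, chunks, k=1):
    """Rank chunks by word overlap with the query (embedding stand-in)."""
    q = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

context = retrieve("What is the policy on remote work days?", POLICY_CHUNKS)
prompt = ("Context:\n" + "\n".join(context) +
          "\n\nQuestion: How many remote days are allowed?")
```

The retrieved chunks never change; the agent only reads them into the prompt, which is what makes resources distinct from tools.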
Why each matters