Grounding means tying the model’s answer to real data — documents, search results, or APIs — instead of relying only on its training. You retrieve relevant content (e.g., via RAG or web search), add it to the prompt, and instruct the model to answer only from that content and to cite sources. That reduces hallucination and lets users verify the answer against the sources. Grounding is what RAG does: retrieve → inject into context → generate with citations. Some APIs offer built-in grounding (e.g., search over the web or your own index) so you don’t have to build retrieval yourself.
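The retrieve → inject → generate loop reduces to assembling a prompt that carries the retrieved text plus an answer-only-from-sources instruction. A minimal sketch, where `build_grounded_prompt` and the exact instruction wording are illustrative choices, not a fixed API:

```python
# Sketch of the "inject into context" step of grounding: number the
# retrieved chunks, prepend an answer-only-from-sources instruction,
# and ask for citations by chunk number.

def build_grounded_prompt(question, chunks):
    """Assemble a prompt that restricts the model to the given sources."""
    sources = "\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))
    return (
        "Answer the question using ONLY the sources below. "
        "Cite sources by number, e.g. [1]. "
        "If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}\nAnswer:"
    )

chunks = ["Refunds are accepted within 30 days of purchase."]
prompt = build_grounded_prompt("What is our refund policy?", chunks)
# `prompt` now carries the retrieved text and the grounding instruction;
# it is what you send to the model.
```

The "say so if the sources don't contain the answer" clause matters: without it, the model tends to fall back on its training data when retrieval misses.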
Grounding: tie answers to real data
Grounding = RAG, web search, or other retrieval so the model answers from real sources instead of relying only on training. Reduces hallucination and supports verification.
Example: RAG = grounding
User asks "What is our refund policy?" You search the policy doc (or vector DB), get the relevant chunk, put it in the prompt, and ask the model to answer using only that text and to cite the section. The answer is grounded in your data, not the model’s memory.