
Retrieval-Augmented Generation (RAG) combines a large language model (LLM) with a vector database. At query time it:
- Retrieves the most relevant company documents
- Injects that context into the prompt
- Generates a grounded answer with citations
This grounding reduces hallucinations, improves factual accuracy, and builds the trust needed for enterprise use cases.
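The retrieve-inject-generate loop above can be sketched in a few lines of Python. This is a minimal, self-contained toy: the document set, the bag-of-words "embedding", and the cosine ranking stand in for a real embedding model and vector database, and the final LLM call is omitted because it is provider-specific.

```python
from collections import Counter
import math

# Toy in-memory corpus standing in for indexed company documents (hypothetical data).
DOCS = {
    "doc-1": "Our refund policy gives customers 30 days to request a refund.",
    "doc-2": "Support hours are 9am to 5pm, Monday through Friday.",
    "doc-3": "Enterprise plans include SSO and a dedicated account manager.",
}

def embed(text: str) -> Counter:
    # Bag-of-words term counts; a real system would use a dense embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
    # Step 1: rank documents by similarity to the query, return the top k.
    q = embed(query)
    ranked = sorted(DOCS.items(), key=lambda kv: cosine(q, embed(kv[1])), reverse=True)
    return ranked[:k]

def build_prompt(query: str, hits: list[tuple[str, str]]) -> str:
    # Step 2: inject the retrieved context into the prompt, tagged for citation.
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in hits)
    return (
        "Answer using only the context below and cite sources by id.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

query = "How many days do customers have for a refund?"
prompt = build_prompt(query, retrieve(query, k=1))
# Step 3 (not shown): send `prompt` to the LLM, which generates a grounded,
# cited answer because the relevant document travels inside the prompt.
```

Swapping the toy pieces for a production embedding model and vector store leaves the control flow unchanged: retrieval narrows the corpus, prompt construction grounds the model, and generation produces the cited answer.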