
toolkit

Internal Company Chatbot

RAG + LLM

RAG · LLM · Supabase · Embeddings
01

What It Is

A retrieval-augmented generation chatbot that answers employee questions using the company's own documentation — not the open internet. HR policies, onboarding guides, product specs, SOPs. It gives accurate, sourced answers and eliminates the 'ask someone who knows' bottleneck that slows down every team.

02

How It Works

Ingestion

Company documents (PDFs, Notion pages, Google Docs) are parsed, chunked into overlapping segments, and stored in Postgres as a vector store using Supabase's pgvector extension.
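The overlapping-chunk step can be sketched as below. This is a minimal character-based splitter for illustration; the function name and the size/overlap values are assumptions — production pipelines typically split on token counts (often ~500–1,000 tokens) and respect sentence or section boundaries.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks.

    Overlap ensures a sentence cut at a chunk boundary still appears
    intact in the neighboring chunk, so retrieval doesn't miss it.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # how far the window advances each time
    return [text[i:i + chunk_size]
            for i in range(0, len(text), step)
            if text[i:i + chunk_size]]
```

Each chunk's trailing `overlap` characters repeat at the start of the next chunk, which is the property that keeps boundary-spanning sentences retrievable.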

Embedding

Each chunk is converted into a vector embedding using OpenAI or a local model. At query time, the user's question is embedded with the same model, so question and chunks live in the same vector space and can be compared semantically.
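Semantic matching between an embedded query and embedded chunks usually comes down to cosine similarity. A minimal sketch, using toy 2-D vectors in place of real model embeddings (which are typically hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction,
    0.0 = unrelated, -1.0 = opposite. Direction, not magnitude, carries
    the semantic signal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

In the actual system this comparison happens inside pgvector (e.g. its cosine-distance operator) rather than in application code, but the math is the same.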

Retrieval

On each user question, the top-k most semantically similar chunks are retrieved from the vector store and assembled into a context window.
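Top-k retrieval reduces to ranking stored chunks by similarity to the query embedding and keeping the best k. A minimal in-memory sketch — the function name and the `store` record shape are assumptions; in production this sort happens as an indexed pgvector query, not a Python loop:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def retrieve_top_k(query_embedding: list[float], store: list[dict], k: int = 3) -> list[dict]:
    """Return the k stored chunks most similar to the query embedding."""
    ranked = sorted(store,
                    key=lambda row: cosine_similarity(query_embedding, row["embedding"]),
                    reverse=True)
    return ranked[:k]
```

The returned chunks are then concatenated into the context window passed to the LLM in the generation step.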

Generation

The LLM generates a grounded, source-attributed answer using only the retrieved context, which sharply reduces hallucination compared with answering from general training data alone.
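Grounding is enforced largely through how the prompt is assembled: retrieved chunks are labeled with their sources, and the instructions restrict the model to that context. A minimal sketch — the function name, record fields, and instruction wording are illustrative assumptions, not the system's actual prompt:

```python
def build_prompt(question: str, retrieved: list[dict]) -> str:
    """Assemble a grounded prompt from retrieved chunks.

    Each chunk is prefixed with its source so the model can cite it,
    and the instructions forbid answering from outside the context.
    """
    context = "\n\n".join(f"[{c['source']}] {c['text']}" for c in retrieved)
    return (
        "Answer the question using ONLY the context below. "
        "Cite the bracketed source for each claim. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```

The "say so" fallback matters as much as the citations: it gives the model a sanctioned exit instead of inventing an answer when retrieval comes back empty.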

03

When to Use It

When the same questions get asked to the same people every week. When onboarding takes too long because institutional knowledge lives in scattered docs nobody can find. When a growing team is about to lose its collective memory to employee turnover.

Tech Stack

RAG · LLM · Supabase · Embeddings