AI Knowledge Agent Day.
RAG over your docs, with citations.

1 day, on-site · Up to 8 attendees · Pricing on request · UK, US, Australia

A hands-on intensive for ops, technical, or knowledge teams that want to replace "ask Sarah from Operations" with a knowledge agent that cites the source. By 5pm the team has a working RAG agent indexing your Notion, Confluence, Drive, or SharePoint, answering internal questions with traceable citations to the source pages.


Three shapes the team chooses between.

Internal HR knowledge agent

Answers "what is my parental leave allowance" or "what is the expense policy" with cited internal handbook references.

Engineering or product knowledge agent

Indexes architecture docs, decision records, and runbooks. Answers "how have we solved X before" with cited internal docs.

Customer support knowledge agent

Sits inside your support tools, answering "how have we resolved this before" against past tickets and the help centre.


If the agent cannot cite, the agent does not answer.

Most production knowledge agents fail the same way: they generate plausible-sounding answers and the team trusts them. Three months in, somebody asks for the source and the agent points to nothing. Trust collapses, the project dies.

We build with citation discipline by default. Every substantive answer traces back to a source page, paragraph, or row. The user can click through to verify, and the agent says "I do not know" when it cannot find a confident source. Same standard we apply on our paid Data AI builds, demonstrated in production on the AI Job Impact Calculator's methodology page.
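That discipline can be enforced mechanically: gate every answer on whether it carries at least one source reference. A minimal sketch in Python; the `Answer` type and `gated_answer` name are illustrative, not part of the day's materials.

```python
from dataclasses import dataclass, field


@dataclass
class Answer:
    text: str
    # IDs of the source pages backing the answer, e.g. "handbook/leave-policy"
    sources: list[str] = field(default_factory=list)


def gated_answer(draft: Answer, min_sources: int = 1) -> str:
    """Return the draft only if it carries at least one citation;
    otherwise refuse rather than emit an unsourced claim."""
    if len(draft.sources) >= min_sources:
        refs = ", ".join(draft.sources)
        return f"{draft.text} [sources: {refs}]"
    return "I do not know - no confident source found."
```

The key design choice is that the refusal path is the default: an answer has to earn its way out by citing something the user can click through to.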


Hour by hour.

Morning

Connect to your source (Notion, Confluence, Drive, or SharePoint). Set up the chunking strategy and choose the embedding model with the team, weighing cost, quality, and privacy trade-offs. First index running before lunch.
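A common starting point for the chunking step is fixed-size windows with overlap, so a sentence cut at one boundary still appears whole in a neighbouring chunk. A minimal sketch, with the window sizes as assumed defaults rather than the values tuned on the day:

```python
def chunk_text(text: str, size: int = 500, overlap: int = 100) -> list[str]:
    """Split a document into overlapping character windows.

    The overlap means content straddling a chunk boundary is still
    retrievable as a contiguous span in the next chunk.
    """
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks
```

In practice the sizes get tuned against your corpus: handbooks with short policy sections often want smaller chunks than long-form architecture docs.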

Midday

Wire in retrieval, reranking, and the citation flow. Test with the real questions your team asks every week. Tune retrieval until answers cite the right source documents, not adjacent ones.
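The shape of the retrieval step can be sketched with simple token-overlap scoring standing in for the vector search and reranker used on the day; the `retrieve` function and corpus layout here are illustrative assumptions.

```python
def retrieve(query: str, corpus: dict[str, str], k: int = 3) -> list[tuple[str, float]]:
    """Rank source pages by token overlap with the query.

    A stand-in for embedding search plus a reranker: each hit keeps its
    page ID so the answer can cite the exact source it came from.
    """
    q_tokens = set(query.lower().split())
    scored = []
    for page_id, text in corpus.items():
        page_tokens = set(text.lower().split())
        score = len(q_tokens & page_tokens) / (len(q_tokens) or 1)
        scored.append((page_id, score))
    scored.sort(key=lambda hit: -hit[1])
    return scored[:k]
```

The tuning work is in that ranking: the goal is that the top hit is the page that actually answers the question, not a page that merely mentions the same words.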

Afternoon

Build the "I do not know" pattern. Add usage logging so the team sees which questions get asked, which answers get cited. Set up retrieval evaluation for accuracy as the corpus grows.
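The evaluation harness can be as simple as recall@k over a fixed set of test questions with known-correct source pages. A minimal sketch, assuming a `results` mapping of question to retrieved page IDs and an `expected` mapping of question to the right page:

```python
def recall_at_k(results: dict[str, list[str]],
                expected: dict[str, str],
                k: int = 3) -> float:
    """Fraction of test questions whose known-correct source page
    appears in the top-k retrieved results."""
    hits = sum(
        1 for question, pages in results.items()
        if expected[question] in pages[:k]
    )
    return hits / len(expected)
```

Run against the same question set after every corpus or chunking change, this catches retrieval regressions before users do.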

End of day

Working knowledge agent connected to your source, deployed in your environment. Documentation, architecture diagram, monitoring setup. 30 days of email support included.


The full RAG stack.

Embedding

OpenAI · Voyage · Cohere · Local Llama 3

Vector store

Pinecone · pgvector · Qdrant · Azure AI Search

Reasoning

Claude API · OpenAI · Citation patterns

Sources & eval

Notion / Confluence · Drive / SharePoint · Retrieval-quality harness

Book Knowledge Agent Day.

Discovery call first. We confirm the source system, agree the date, and send a one-page brief two weeks before.