
Emmimal Alexander

I Built a Personalized AI Tutor Using RAG – Here's How It Actually Works (And the Code)

I've been playing around with RAG setups for a while now, mostly for fun projects like chatbots that don't make stuff up. But recently, I thought: what if I used it for something useful, like an actual tutor?

You know how generic chat models are great at explaining things... until they confidently spit out wrong facts? Yeah, that's the hallucination problem. For real learning, especially with specific subjects, you need answers grounded in reliable sources.

That's where Retrieval-Augmented Generation (RAG) comes in. It grabs relevant bits from your own documents (textbooks, notes, whatever) before the LLM answers. No more guessing.

I followed this approach to build a simple but solid AI tutor in Python. It uses LangChain to glue everything together, Groq for super-fast LLM calls (Llama 3.1 runs like lightning there), Pinecone as the vector store, and Streamlit for a quick chat interface.

The whole thing runs locally-ish (except the APIs), and you can feed it any PDFs you want.

Repo here if you want to clone and try it: https://github.com/Emmimal/ai-powered-tutor-rag-vector-db/

(Full disclosure: this is based on a longer guide I wrote on my blog – link in the canonical URL below.)

Quick Setup

First, spin up a venv and install the deps:

```shell
pip install langchain langchain-groq langchain-community pinecone-client streamlit python-dotenv sentence-transformers pypdf
```

Grab free API keys from Groq and Pinecone, toss them in a .env file.
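The `.env` file is just `KEY=VALUE` pairs that python-dotenv's `load_dotenv()` pushes into the environment at startup. As far as I know, these are the variable names the Groq and Pinecone clients pick up by default; adjust if your code reads something else:

```
GROQ_API_KEY=your-groq-key
PINECONE_API_KEY=your-pinecone-key
```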

Loading and Chunking Your Knowledge Base

Pick a subject – I used a Python intro textbook PDF for testing.

```python
# community imports (the old `langchain.document_loaders` path is deprecated)
from langchain_community.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

loader = PyPDFLoader("data/your_book.pdf")
docs = loader.load()

splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_documents(docs)
```

The 200-character overlap means text near a split boundary appears in both neighboring chunks, so a sentence that straddles a split isn't cut off from its context in either one.
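To make the overlap idea concrete, here's a toy character-level sliding window. This is *not* LangChain's actual splitter (which is smarter about breaking on paragraphs and sentences first), just a sketch of the mechanic: each chunk starts `chunk_size - overlap` characters after the previous one.

```python
def sliding_chunks(text: str, chunk_size: int = 1000, overlap: int = 200):
    """Toy chunker: fixed-size windows that share `overlap` characters."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = sliding_chunks("abcdefghij", chunk_size=4, overlap=2)
# Consecutive chunks share their boundary characters
print(chunks)  # ['abcd', 'cdef', 'efgh', 'ghij', 'ij']
```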

Embeddings and Vector Store

I went with the open-source BAAI/bge-small-en-v1.5 – it's fast and good enough for English text.

```python
# community import (the old `langchain.embeddings` path is deprecated)
from langchain_community.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(model_name="BAAI/bge-small-en-v1.5")
```

Then upsert into Pinecone (serverless index, easy peasy):
Create the index in their dashboard (384 dims for this model), then:

```python
from langchain_community.vectorstores import Pinecone as LangchainPinecone
# ... Pinecone client setup ...

vectorstore = LangchainPinecone.from_documents(chunks, embeddings, index_name="ai-tutor-index")
```
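Under the hood, "retrieval" is just nearest-neighbor search over embedding vectors. Pinecone does this at scale with an index, but the core operation is small enough to sketch in plain Python (toy 3-dim vectors here; the real model emits 384-dim ones):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def top_k(query_vec, docs, k=5):
    """docs: list of (text, vector). Returns the k texts closest to the query."""
    scored = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in scored[:k]]

docs = [
    ("loops",     [0.9, 0.1, 0.0]),
    ("recursion", [0.1, 0.9, 0.0]),
    ("files",     [0.0, 0.1, 0.9]),
]
print(top_k([1.0, 0.0, 0.0], docs, k=1))  # ['loops']
```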

The RAG Chain Itself

Retriever pulls top 5 chunks, prompt tells the LLM to be a nice tutor.

```python
from langchain_groq import ChatGroq
from langchain.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

llm = ChatGroq(model_name="llama-3.1-70b-versatile", temperature=0.7)

retriever = vectorstore.as_retriever(search_kwargs={"k": 5})

template = """You are a kind, patient tutor. Use only the context below to answer.

Context: {context}

Question: {question}

Be encouraging and explain step by step."""

prompt = ChatPromptTemplate.from_template(template)

# Chain it up: the retriever fills {context}, the raw question passes straight through
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
```

Streamlit Chat UI

Super basic app.py – chat history, streaming responses. Feels like a real tutor app.
Run streamlit run app.py and you're chatting with your custom tutor.
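For reference, the skeleton of my app.py looks roughly like this. It's a sketch, not the exact file: `rag_chain` is a hypothetical module name for wherever you put the chain built above, and I've left streaming out to keep it short.

```python
import sys

def trim_history(messages, max_messages=20):
    """Cap stored chat history so Streamlit session state stays small."""
    return messages[-max_messages:]

def render_app():
    import streamlit as st
    from rag_chain import chain  # hypothetical module exposing the chain built earlier

    st.title("AI Tutor")
    if "messages" not in st.session_state:
        st.session_state.messages = []

    # Replay history: Streamlit reruns the whole script on every interaction
    for msg in st.session_state.messages:
        st.chat_message(msg["role"]).write(msg["content"])

    if question := st.chat_input("Ask me anything about the material"):
        st.chat_message("user").write(question)
        answer = chain.invoke(question)
        st.chat_message("assistant").write(answer)
        st.session_state.messages += [
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
        st.session_state.messages = trim_history(st.session_state.messages)

# `streamlit run app.py` imports streamlit before executing this file,
# so this only renders under Streamlit, not under plain `python app.py`
if "streamlit" in sys.modules:
    render_app()
```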

What I Learned / Gotchas

  • Chunk size matters a ton. Too big and retrieval drags in irrelevant text that dilutes the prompt; too small and chunks lose the surrounding context they need to make sense.
  • Groq is stupid fast compared to other providers I tried.
  • For production, add evaluation (Ragas is cool) and caching.
  • Ethics bit: use diverse sources, don't log student struggles without consent.
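On the caching point: during development, even a plain dict keyed on the normalized question saves a lot of repeated LLM calls. A minimal sketch, where `answer_fn` stands in for something like `chain.invoke`:

```python
def make_cached(answer_fn):
    """Wrap an answer function with an in-memory cache keyed on the normalized question."""
    cache = {}

    def cached(question: str) -> str:
        key = " ".join(question.lower().split())  # normalize case and whitespace
        if key not in cache:
            cache[key] = answer_fn(question)
        return cache[key]

    return cached

calls = []
def fake_llm(q):
    calls.append(q)
    return f"answer to: {q}"

ask = make_cached(fake_llm)
ask("What is a list?")
ask("what is a  list?")  # cache hit: same normalized key, no second call
print(len(calls))  # 1
```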

If you're into education tech or just want a private tutor that knows your materials, give this a spin. I threw in a sample flow for multimodal stuff later (images/videos in the knowledge base).

Questions? Hit me in the comments – happy to help debug or suggest tweaks.

Original longer version with diagrams and ethics deep-dive:
https://emitechlogic.com/building-an-ai-powered-tutor-with-rag-and-vector-databases/
