A fully local AI research assistant that answers questions about AI, ML, and LLM research papers using Retrieval-Augmented Generation (RAG).
Built with Python, LangChain, Ollama, and Chroma, it retrieves the papers most relevant to a question and generates answers grounded in them. It runs entirely offline, with no network round-trips, keeping your data private and under full local control.
- Local and private: runs fully on your machine, with no external API calls.
- Uses RAG to retrieve relevant passages from a corpus of 110+ AI/ML research papers and ground its answers in them.
- Supports local Ollama models such as LLaMA 3.2.
- Built with LangChain for orchestration and Chroma for embedding storage and retrieval (see the minimal sketch below).
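
To give a sense of how the pieces fit together, here is a minimal sketch of a LangChain + Ollama + Chroma pipeline of this kind. The paper path, chunk sizes, model name, persistence directory, and retriever settings are illustrative assumptions, not the project's actual configuration:

```python
from langchain.chains import RetrievalQA
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma

# 1. Load and chunk a paper (path and chunking parameters are assumptions).
docs = PyPDFLoader("papers/attention_is_all_you_need.pdf").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=200
).split_documents(docs)

# 2. Embed the chunks locally via Ollama and persist them in Chroma.
embeddings = OllamaEmbeddings(model="llama3.2")
vectorstore = Chroma.from_documents(
    chunks, embeddings, persist_directory="./chroma_db"
)

# 3. Wire the retriever and a local LLaMA 3.2 model into a RetrievalQA chain.
qa = RetrievalQA.from_chain_type(
    llm=Ollama(model="llama3.2"),
    retriever=vectorstore.as_retriever(search_kwargs={"k": 4}),
)

# 4. Ask a question; the answer is generated from the retrieved chunks.
answer = qa.invoke({"query": "What problem does multi-head attention solve?"})
print(answer["result"])
```

Because both the embeddings and the generation go through Ollama, the whole loop works without any network access once the model has been pulled.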