A Streamlit-based chatbot that answers questions from a FAQ database, using LangChain and Ollama for local LLM inference.
- 🤖 Local LLM inference using Ollama
- 💾 Persistent vector storage using Chroma
- 🔍 Semantic search for finding relevant answers
- 🌐 Interactive web interface using Streamlit
- 💬 Chat history with expandable Q&A pairs
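
To make the flow concrete, here is a minimal sketch of how these pieces typically fit together, assuming the `langchain-ollama` and `langchain-chroma` integration packages. The collection name and storage path are illustrative, not taken from this repo; the actual implementation lives in `faq_chatbot.py`.

```python
# Minimal sketch (not the actual faq_chatbot.py): embed the question,
# fetch the closest FAQ entries from Chroma, and have the local Ollama
# model answer using only that retrieved context.
from langchain_chroma import Chroma
from langchain_ollama import ChatOllama, OllamaEmbeddings

embeddings = OllamaEmbeddings(model="nomic-embed-text")
vectorstore = Chroma(
    collection_name="faq",            # hypothetical collection name
    embedding_function=embeddings,
    persist_directory="./chroma_db",  # hypothetical storage path
)
llm = ChatOllama(model="llama3.2")

def answer(question: str) -> str:
    # Semantic search: the 3 FAQ entries most similar to the question.
    docs = vectorstore.similarity_search(question, k=3)
    context = "\n\n".join(doc.page_content for doc in docs)
    prompt = (
        "Answer the question using only the FAQ context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm.invoke(prompt).content
```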
- Python 3.8 or higher
- Ollama installed on your machine with the following models:
  - `llama3.2` - for generating responses
  - `nomic-embed-text` - for text embeddings
After installing Ollama, run these commands to pull the required models:
```bash
ollama pull llama3.2
ollama pull nomic-embed-text
```
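
You can confirm that both models were pulled successfully with:

```bash
ollama list
```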
- Clone the repository:
  ```bash
  git clone https://github.com/deepak786/faq-chatbot.git
  cd faq-chatbot
  ```
- Create a virtual environment and activate it:
  ```bash
  python3 -m venv .venv
  source .venv/bin/activate  # On Windows, use `.venv\Scripts\activate`
  ```
- Install the required packages:
  ```bash
  pip install -r requirements.txt
  ```
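  The authoritative, pinned dependency list is in `requirements.txt`; given the stack described above, it will contain packages along these lines (an illustrative sketch, not the exact file):

  ```text
  # Illustrative only - see requirements.txt for the real pinned versions
  streamlit
  langchain
  langchain-ollama
  langchain-chroma
  ```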
- Make sure Ollama is running on your machine:
  ```bash
  ollama serve
  ```
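  By default, Ollama listens on port 11434, so a quick way to confirm the server is up:

  ```bash
  curl http://localhost:11434
  # Expected output: "Ollama is running"
  ```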
- In a new terminal, start the Streamlit application:
  ```bash
  streamlit run faq_chatbot.py
  ```
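  Streamlit prints a local URL when it starts; by default the app is served at http://localhost:8501.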