# FAQ Chatbot

A Streamlit-based chatbot that answers questions from a FAQ database, using LangChain and Ollama for local LLM inference.

## Features

- 🤖 Local LLM inference using Ollama
- 💾 Persistent vector storage using Chroma
- 🔍 Semantic search for finding relevant answers
- 🌐 Interactive web interface built with Streamlit
- 💬 Chat history with expandable Q&A pairs
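The retrieval step behind the semantic search feature can be sketched in plain Python: embed each FAQ entry and the user's question, then pick the stored entry with the highest cosine similarity. The app itself uses Ollama's `nomic-embed-text` embeddings persisted in Chroma; the toy bag-of-words embedding, the sample FAQ entries, and the function names below are illustrative stand-ins so the idea runs without a model server:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lower-cased word counts (stand-in for nomic-embed-text)."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_match(question, faq):
    """Return the stored (question, answer) pair closest to the user's question."""
    q_vec = embed(question)
    best = max(faq, key=lambda stored: cosine(q_vec, embed(stored)))
    return best, faq[best]

# Hypothetical FAQ entries for illustration (not taken from the repo):
faq = {
    "How do I reset my password?": "Use the 'Forgot password' link on the login page.",
    "What are your opening hours?": "We are open 9am to 5pm on weekdays.",
}
matched_q, answer = best_match("how can I reset the password", faq)
```

In the real app, Chroma performs this nearest-neighbour lookup over the persisted embeddings, and the matched entries are passed to `llama3.2` as context for generating the answer.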

## Prerequisites

- Python 3.8 or higher
- Ollama installed on your machine, with the following models:
  - `llama3.2` - for generating responses
  - `nomic-embed-text` - for text embeddings

## Installing Ollama Models

After installing Ollama, run these commands to pull the required models:

```shell
ollama pull llama3.2
ollama pull nomic-embed-text
```

## Setup

1. Clone the repository:

   ```shell
   git clone https://github.com/deepak786/faq-chatbot.git
   cd faq-chatbot
   ```

2. Create a virtual environment and activate it:

   ```shell
   python3 -m venv .venv
   source .venv/bin/activate  # On Windows, use `.venv\Scripts\activate`
   ```

3. Install the required packages:

   ```shell
   pip install -r requirements.txt
   ```

## Running the Application

1. Make sure Ollama is running on your machine:

   ```shell
   ollama serve
   ```

2. In a new terminal, start the Streamlit application:

   ```shell
   streamlit run faq_chatbot.py
   ```
