This project provides a RESTful API that answers natural language questions using a RAG (Retrieval-Augmented Generation) pipeline powered by:
- Google Gemini (`gemini-1.5-pro` or `gemini-2.5-flash-preview`)
- LangChain
- FAISS
- Confluence Cloud as a document source
- 🔐 Loads page content directly from a Confluence page using API Token authentication
- 📄 Splits and embeds content for semantic retrieval
- 🤖 Uses Gemini LLM to generate context-aware answers
- 🛡 Built with FastAPI
- 🔌 Ready for local deployment and Colab prototyping
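The "splits and embeds" step above can be illustrated with a minimal, dependency-free sketch of fixed-size chunking with overlap. The project itself presumably uses a LangChain text splitter, so treat this as an illustration of the idea rather than the actual implementation:

```python
def split_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks that overlap, so content
    cut at a chunk boundary still appears whole in the next chunk."""
    chunks = []
    step = chunk_size - overlap
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks
```

Each chunk is then embedded (e.g. with a Gemini embedding model) and written to the FAISS index; at query time the question is embedded the same way and the nearest chunks are passed to the LLM as context.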
```bash
git clone https://github.com/temolzin/pythonRAG
cd pythonRAG
python3 -m venv venv

# On Linux/macOS:
source venv/bin/activate
# On Windows:
venv\Scripts\activate

pip install -r requirements.txt
cp .env.example .env
```
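The `.env` file holds the credentials for Gemini and the Confluence API Token authentication mentioned above. The variable names below are illustrative guesses — use the keys that `.env.example` actually defines:

```ini
# Hypothetical variable names — check .env.example for the real keys.
GOOGLE_API_KEY=your-gemini-api-key
CONFLUENCE_URL=https://your-domain.atlassian.net/wiki
CONFLUENCE_USERNAME=you@example.com
CONFLUENCE_API_TOKEN=your-confluence-api-token
CONFLUENCE_PAGE_ID=123456
```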
```bash
# Replace main with your actual script/module name (e.g., rag_api:app).
uvicorn main:app --reload
```
Once running, go to:
http://localhost:8000/docs
From there, you can test the POST /query endpoint interactively with Swagger UI.
Example request:

```json
{
  "query": "Who is responsible for changing the task status to 'READY FOR DEPLOY'?"
}
```
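You can also call the endpoint outside Swagger UI. Below is a stdlib-only sketch that builds the same `POST /query` request; the endpoint path and default port follow the Swagger UI shown above, and the helper name is hypothetical:

```python
import json
from urllib.request import Request, urlopen

def build_query_request(question: str, base_url: str = "http://localhost:8000") -> Request:
    """Build a POST /query request carrying a JSON body like the example above."""
    body = json.dumps({"query": question}).encode("utf-8")
    return Request(
        f"{base_url}/query",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_query_request(
    "Who is responsible for changing the task status to 'READY FOR DEPLOY'?"
)
# With the server running: answer = json.load(urlopen(req))
```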