Web chat interface with mem0 integration for Ollama
mem0-Ollama is a web-based chat interface that integrates mem0 for memory management with Ollama for local LLM inference. This project enables context-aware conversations with local models through a responsive web UI.
- Memory Management: Context-aware conversations using mem0 vector storage
- Active/Inactive Memory States: Dynamic memory tracking for relevant context
- Web Interface: Responsive chat UI with memory visualization
- Ollama Integration: Works with any model available in Ollama
- Docker Support: Container-ready deployment with proper networking
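Conceptually, the memory features above amount to a retrieve-then-chat loop: look up stored memories relevant to the new message, pass them to the model as context, then store the new exchange. The sketch below illustrates that general pattern only; it is not this project's actual code, and it assumes the mem0 Python package plus Ollama's default REST endpoint on port 11434.

```python
# Illustrative sketch of memory-augmented chat with mem0 + Ollama.
# Not the project's code: names and defaults here are assumptions.
import requests
from mem0 import Memory

OLLAMA_HOST = "http://localhost:11434"  # default Ollama API endpoint

# Memory() uses mem0's default backends; the real app presumably configures
# mem0 (vector store, embedder, LLM) for a local setup instead.
memory = Memory()

def chat(user_id: str, prompt: str, model: str = "llama3") -> str:
    # 1. Retrieve memories relevant to the new prompt (the "active" memories).
    hits = memory.search(query=prompt, user_id=user_id)
    # Return shape differs across mem0 versions; normalize to a list of dicts.
    items = hits.get("results", []) if isinstance(hits, dict) else hits
    context = "\n".join(item.get("memory", "") for item in items)

    # 2. Ask Ollama, passing the retrieved memories as system context.
    resp = requests.post(
        f"{OLLAMA_HOST}/api/chat",
        json={
            "model": model,
            "stream": False,
            "messages": [
                {"role": "system", "content": f"Relevant memories:\n{context}"},
                {"role": "user", "content": prompt},
            ],
        },
        timeout=120,
    )
    resp.raise_for_status()
    answer = resp.json()["message"]["content"]

    # 3. Store the exchange so later turns can recall it.
    memory.add(f"User: {prompt}\nAssistant: {answer}", user_id=user_id)
    return answer
```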
- To install on Windows, download the installation script:
curl -o install_windows.ps1 https://raw.githubusercontent.com/KhryptorGraphics/mem0-ollama/main/install_windows.ps1
- Run the script as Administrator:
powershell -ExecutionPolicy Bypass -File install_windows.ps1
- Follow the on-screen instructions to complete the installation.
- To install on Ubuntu, download the installation script:
curl -o install_ubuntu.sh https://raw.githubusercontent.com/KhryptorGraphics/mem0-ollama/main/install_ubuntu.sh
- Make the script executable:
chmod +x install_ubuntu.sh
- Run the script:
./install_ubuntu.sh
- Follow the on-screen instructions to complete the installation.
- To install and run manually, clone the repository and change into it:
git clone https://github.com/KhryptorGraphics/mem0-ollama.git
cd mem0-ollama
- Install dependencies:
pip install -r requirements.txt
- Make sure Ollama is running locally, or adjust OLLAMA_HOST in config.py to point at your Ollama instance (a quick reachability check is sketched after these steps).
- Run the application:
python main.py
- Open your browser and navigate to http://localhost:8000
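If you are unsure whether Ollama is reachable at the configured host, a quick check against its standard /api/tags endpoint (assuming the default port 11434) looks like this:

```python
# Check that the Ollama API is reachable and list locally available models.
import requests

OLLAMA_HOST = "http://localhost:11434"  # should match OLLAMA_HOST in config.py

resp = requests.get(f"{OLLAMA_HOST}/api/tags", timeout=5)
resp.raise_for_status()
print([m["name"] for m in resp.json().get("models", [])])
```

If the model you want to use is missing from the list, pull it into Ollama first (for example, `ollama pull llama3`).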
You can also run the application with Docker:
docker-compose up -d
This will start both the mem0-ollama service and the mem0 vector database.
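After the containers come up, you can verify that the services respond. The ports below are assumptions: 8000 for the web UI as used elsewhere in this README, and Ollama's standard 11434 if Ollama runs on the host; the mem0 vector database's port is not specified here, so it is omitted.

```python
# Verify the web UI and Ollama API respond after `docker-compose up -d`.
import requests

endpoints = {
    "mem0-ollama web UI": "http://localhost:8000/",
    "Ollama API": "http://localhost:11434/api/tags",
}

for name, url in endpoints.items():
    try:
        requests.get(url, timeout=5).raise_for_status()
        print(f"{name}: OK")
    except requests.RequestException as exc:
        print(f"{name}: not reachable ({exc})")
```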
The web interface provides two modes:
- Memory-enabled Chat: access at http://localhost:8000/
  - Features memory management for context-aware conversations
  - Visualizes active memories being used for context
  - Maintains conversation history with memory persistence
- Direct Ollama Chat: access at http://localhost:8000/direct
  - Direct communication with Ollama without mem0 integration
  - Simpler interface for direct model testing
  - No memory persistence between conversations
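For comparison with the memory-enabled sketch earlier, talking to Ollama directly (which is essentially what the direct mode does, minus the UI) is a single request to Ollama's generate endpoint. This is a generic Ollama API call, not code from this project, and the model name is only an example; use any model you have pulled.

```python
# Direct, memory-free call to Ollama's generate endpoint.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Say hello in one sentence.", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # nothing is stored; each call starts fresh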
Adjust settings in config.py to customize:
- Ollama host URL
- Default model
- System prompt
- Memory settings
- Server port
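A config.py along these lines would cover the settings listed above. Apart from OLLAMA_HOST, which the installation steps mention, the variable names and values here are illustrative assumptions rather than the project's actual definitions:

```python
# Hypothetical config.py sketch; names other than OLLAMA_HOST are assumptions.
OLLAMA_HOST = "http://localhost:11434"   # Ollama host URL
DEFAULT_MODEL = "llama3"                 # default model (any model available in Ollama)
SYSTEM_PROMPT = "You are a helpful assistant with long-term memory."
MEMORY_TOP_K = 5                         # memory settings: memories retrieved per turn
SERVER_PORT = 8000                       # server port for the web UI
```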
The project requires:
- Python 3.12
- Flask and required Python packages
- Ollama running locally or in a container
- mem0 vector database (included in Docker setup)
We welcome contributions! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.