This repository contains a collection of experimental programs exploring different AI technologies and APIs.
A modular, interactive chat application that uses the Ollama API and adopts a music lover persona. Features include:
- Interactive command-line interface with command history
- Up/Down arrow key navigation through previous commands
- Streaming responses from the Ollama API (see the sketch after this list)
- Uses the `gemma3:4b` model by default
- Implements a knowledgeable music enthusiast personality
- Enhanced error handling and connection verification
- Configurable timeout settings
- Intelligent music-focused query enhancement
- Genre-specific terminology and context
- Music theory integration
- Debug mode for query transformations
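For orientation, here is a minimal sketch of what streaming and connection verification against the Ollama HTTP API can look like. The real client lives in `src/ollama_chat/ollama_client.py`; the function names and details below are illustrative, not the shipped code:

```python
# Minimal sketch, not the shipped client: streams tokens from Ollama's
# /api/generate endpoint and checks the server before chatting.
import json
import requests

OLLAMA_HOST = "http://localhost:11434"  # default Ollama port

def verify_connection(host=OLLAMA_HOST, timeout=5.0):
    """Return True if the Ollama server answers on /api/tags."""
    try:
        requests.get(f"{host}/api/tags", timeout=timeout).raise_for_status()
        return True
    except requests.RequestException:
        return False

def stream_chat(prompt, model="gemma3:4b", host=OLLAMA_HOST, timeout=30):
    """Yield response chunks as the model generates them."""
    payload = {"model": model, "prompt": prompt, "stream": True}
    with requests.post(f"{host}/api/generate", json=payload,
                       stream=True, timeout=timeout) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line:
                continue
            chunk = json.loads(line)
            yield chunk.get("response", "")
            if chunk.get("done"):
                break
```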
The chat application includes sophisticated music-focused enhancements (a sketch follows this list):
- **Genre Recognition & Context**
  - Automatically detects musical genres in queries
  - Adds genre-specific context and terminology
  - Supports: Classical, Jazz, Rock, Electronic, Hip-Hop, Folk
  - Example: "Tell me about power chords" → "Tell me about power chords in the rock style"
- **Music Theory Integration**
  - Enhances queries with relevant music theory concepts
  - Categories include: Tempo, Dynamics, Structure, Harmony, Rhythm
  - Adds appropriate theoretical context based on genre
  - Example: "How do you write a chorus?" → "How do you write a chorus (considering aspects like verse)?"
- **Enhanced Music Terminology**
  - Converts casual terms to precise musical language
  - Maintains natural conversation flow while adding precision
  - Examples:
    - "song" → "musical piece"
    - "band" → "musical group"
    - "singer" → "vocalist"
- **Intelligent Query Processing**
  - Adds musical context to non-musical queries
  - Preserves original meaning while adding musical perspective
  - Examples:
    - "What makes summer special?" → "From a musical perspective, what makes summer special?"
    - "Tell me about the 1960s" → "Tell me about the musical aspects of the 1960s"
```text
.
├── src/
│   └── ollama_chat/
│       ├── __init__.py
│       ├── app.py                  # Main application
│       ├── command_history.py      # Command history management
│       ├── interactive_prompt.py   # Interactive CLI with arrow navigation
│       └── ollama_client.py        # Ollama API client
├── tests/
│   └── ollama_chat/
│       ├── conftest.py
│       ├── test_app.py
│       ├── test_command_history.py
│       ├── test_interactive_prompt.py
│       └── test_ollama_client.py
├── requirements.txt
└── setup.py
```
- Python 3.7+
- Ollama server running locally (default port: 11434)
- Linux-based system (uses the `tty` module for terminal interaction)
- `gemma3:4b` model installed in Ollama (or another model of your choice)
- Install Ollama by following the instructions at Ollama's website
- Start the Ollama server:

  ```bash
  systemctl start ollama   # If using systemd
  # or
  ollama serve             # If running manually
  ```
- Pull the required model:

  ```bash
  ollama pull gemma3:4b
  ```
- Create and activate a virtual environment:

  ```bash
  python -m venv venv
  source venv/bin/activate   # On Unix/Linux
  ```
- Install the package in development mode:

  ```bash
  pip install -e ".[dev]"
  ```
The simplest way to run the chat:

```bash
ollama-chat
```
Or run it directly with Python:

```bash
python -m ollama_chat.app
```
You can customize the model and server:

```bash
# Use a different model
ollama-chat --model codellama

# Connect to a remote Ollama server
ollama-chat --host http://remote-server:11434

# Set custom timeout
ollama-chat --timeout 30

# Enable debug mode to see query enhancements
ollama-chat --debug
```
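A plausible sketch of how these flags could be wired up with `argparse`. The flags match the ones documented above, but the defaults and wiring here are assumptions about `app.py`, not its actual code:

```python
# Hypothetical CLI wiring; see src/ollama_chat/app.py for the real entry point.
import argparse

def parse_args():
    parser = argparse.ArgumentParser(
        prog="ollama-chat",
        description="Music-focused chat over the Ollama API")
    parser.add_argument("--model", default="gemma3:4b",
                        help="Ollama model to use")
    parser.add_argument("--host", default="http://localhost:11434",
                        help="Ollama server URL")
    parser.add_argument("--timeout", type=float, default=30.0,
                        help="Request timeout in seconds")
    parser.add_argument("--debug", action="store_true",
                        help="Show query enhancements as they happen")
    return parser.parse_args()

if __name__ == "__main__":
    print(parse_args())
```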
- The application will first verify the connection to Ollama and check model availability
- Once connected, you can start chatting about music!
- Use the up and down arrow keys to navigate through your command history
- Your queries will be automatically enhanced with musical context
- Try different types of questions:
  - Direct music questions: "What makes a good melody?"
  - Genre-specific: "Explain jazz improvisation"
  - Theory-focused: "How do dynamics work in classical music?"
  - General topics: "How does weather affect mood?" (a musical perspective will be added)
- Use debug mode to see how your queries are enhanced
- Press Ctrl+C to exit
Here are some examples of how the chat enhances your queries:
- **Genre-Specific Enhancement**
  - Input: "Tell me about power chords"
  - Enhanced: "Tell me about power chords in the rock style"
- **Theory Integration**
  - Input: "How do you write a chorus?"
  - Enhanced: "How do you write a chorus (considering aspects like verse)?"
- **Terminology Precision**
  - Input: "What makes a good song?"
  - Enhanced: "What makes a good musical piece?"
- **Adding Musical Context**
  - Input: "What makes summer special?"
  - Enhanced: "From a musical perspective, what makes summer special?"
If you encounter issues:
- **Connection Problems**
  - Verify Ollama is running:

    ```bash
    systemctl status ollama
    ```

  - Check the server port:

    ```bash
    netstat -tuln | grep 11434
    ```

  - Ensure the model is installed:

    ```bash
    ollama list
    ```
- **Model Issues**
  - Try pulling the model again:

    ```bash
    ollama pull gemma3:4b
    ```

  - Check model status:

    ```bash
    ollama show gemma3:4b
    ```
- **Application Errors**
  - Check that the Python environment is activated
  - Verify all dependencies are installed
  - Look for error messages in the output
  - Try running with the `--debug` flag to see query processing
The project includes a REST API for programmatic access to the Ollama (Gemma) model using FastAPI.
- `src/ollama_chat/fastapi_ollama.py`: Defines the FastAPI app and endpoints
- `src/ollama_chat/fastapi_run.py`: Entrypoint to run the FastAPI server with Uvicorn
- `src/ollama_chat/example_client.py`: Example Python client for the API
Start the FastAPI server with:

```bash
python src/ollama_chat/fastapi_run.py
```

The server will be available at http://localhost:8000 by default.
- Swagger UI: http://localhost:8000/docs
- ReDoc UI: http://localhost:8000/redoc
**POST /chat**: Send a chat message to the model and receive a response.

- Request body:

  ```json
  {
    "message": "Your message here",
    "system_prompt": "Optional system prompt"
  }
  ```

- Response:

  ```json
  {
    "response": "Model's response as a string"
  }
  ```
A health-check endpoint reports whether the Ollama server and model are available.

- Response:

  ```json
  {
    "status": "ok",
    "message": "Connected to Ollama ..."
  }
  ```
See `src/ollama_chat/example_client.py` for a usage example:

```python
import requests

API_URL = "http://localhost:8000/chat"

payload = {
    "message": "Hello, what can you tell me about the Gemma model?",
    # "system_prompt": "You are a helpful assistant."
}

response = requests.post(API_URL, json=payload)
if response.ok:
    print(response.json()["response"])
else:
    print(f"Error: {response.status_code}")
    print(response.text)
```
Run the test suite:

```bash
pytest tests/
```

Run with coverage:

```bash
pytest tests/ --cov=src/ollama_chat
```
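Tests that exercise the client should not require a live Ollama server. Below is a hedged sketch of a `conftest.py`-style fixture that stubs the HTTP layer; the repository's actual fixtures in `tests/ollama_chat/conftest.py` may take a different approach:

```python
# Illustrative fixture; not necessarily what the repo's conftest.py does.
import pytest
import requests

@pytest.fixture
def fake_ollama(monkeypatch):
    """Stub requests.post so tests run without a live Ollama server."""
    class FakeResponse:
        ok = True
        status_code = 200
        def raise_for_status(self):
            pass
        def json(self):
            return {"response": "stub reply", "done": True}

    monkeypatch.setattr(requests, "post", lambda *args, **kwargs: FakeResponse())
    return FakeResponse()

def test_stubbed_post(fake_ollama):
    resp = requests.post("http://localhost:11434/api/generate", json={})
    assert resp.json()["response"] == "stub reply"
```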
- `command_history.py`: Implements a doubly-linked list for command history (see the sketch below)
- `interactive_prompt.py`: Handles terminal input and command-history navigation, including music-focused query enhancement
- `ollama_client.py`: Manages communication with the Ollama API, including error handling and connection verification
- `app.py`: Ties everything together into a cohesive application
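As a rough illustration of the doubly-linked design in `command_history.py` (the class and method names here are assumptions, not the module's actual API):

```python
# Illustrative sketch of a doubly-linked command history; not the shipped code.
from typing import Optional

class _Node:
    def __init__(self, command):
        self.command = command
        self.prev: Optional["_Node"] = None
        self.next: Optional["_Node"] = None

class CommandHistory:
    """Up arrow walks toward older entries, Down toward newer ones."""

    def __init__(self):
        self.tail: Optional[_Node] = None    # most recent command
        self.cursor: Optional[_Node] = None  # position while navigating

    def add(self, command):
        node = _Node(command)
        node.prev = self.tail
        if self.tail is not None:
            self.tail.next = node
        self.tail = node
        self.cursor = None  # a new entry resets navigation

    def previous(self):
        # Up arrow: step back, stopping at the oldest entry.
        if self.cursor is None:
            self.cursor = self.tail
        elif self.cursor.prev is not None:
            self.cursor = self.cursor.prev
        return self.cursor.command if self.cursor else None

    def next(self):
        # Down arrow: step toward the newest entry; past it, return None.
        if self.cursor is None:
            return None
        self.cursor = self.cursor.next
        return self.cursor.command if self.cursor else None
```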