An MCP server that bridges Claude with local LLMs running in LM Studio.
This tool allows Claude to interact with your local LLMs, providing:
- Access to list all available models in LM Studio
- The ability to generate text using your local LLMs
- Support for chat completions through your local models
- A health check tool to verify connectivity with LM Studio
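Under the hood, these features map onto LM Studio's OpenAI-compatible REST API (served at http://127.0.0.1:1234/v1 by default). As a rough sketch, not the bridge's actual code, listing models boils down to something like this (the helper name and hard-coded address are illustrative):

```python
import httpx

LMSTUDIO_URL = "http://127.0.0.1:1234/v1"  # LM Studio's default API address

def list_models():
    """Return the IDs of the models LM Studio currently exposes."""
    response = httpx.get(f"{LMSTUDIO_URL}/models", timeout=10.0)
    response.raise_for_status()
    # The endpoint mirrors OpenAI's /v1/models response shape
    return [model["id"] for model in response.json()["data"]]
```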
You'll need:
- Claude Desktop with MCP support
- LM Studio installed and running locally with API server enabled
- Python 3.8+ installed
Quick setup (macOS/Linux):
- Clone the repository
git clone https://github.com/infinitimeless/claude-lmstudio-bridge.git
cd claude-lmstudio-bridge
- Run the setup script
chmod +x setup.sh
./setup.sh
- Follow the setup script's instructions to configure Claude Desktop
Quick setup (Windows):
- Clone the repository
git clone https://github.com/infinitimeless/claude-lmstudio-bridge.git
cd claude-lmstudio-bridge
- Run the setup script
setup.bat
- Follow the setup script's instructions to configure Claude Desktop
If you prefer to set things up manually:
- Create a virtual environment (optional but recommended)
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
- Install the required packages
pip install -r requirements.txt
- Configure Claude Desktop:
- Open Claude Desktop preferences
- Navigate to the 'MCP Servers' section
- Add a new MCP server with the following configuration:
- Name: lmstudio-bridge
- Command: /bin/bash (on macOS/Linux) or cmd.exe (on Windows)
- Arguments:
- macOS/Linux: /path/to/claude-lmstudio-bridge/run_server.sh
- Windows: /c C:\path\to\claude-lmstudio-bridge\run_server.bat
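If your Claude Desktop version is configured through a claude_desktop_config.json file rather than a preferences pane, the equivalent entry would look roughly like this (paths are placeholders; adjust for your install):

```json
{
  "mcpServers": {
    "lmstudio-bridge": {
      "command": "/bin/bash",
      "args": ["/path/to/claude-lmstudio-bridge/run_server.sh"]
    }
  }
}
```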
After setting up the bridge, you can use the following commands in Claude:
- Check the connection to LM Studio:
Can you check if my LM Studio server is running?
- List available models:
List the available models in my local LM Studio
- Generate text with a local model:
Generate a short poem about spring using my local LLM
- Send a chat completion (the underlying request is sketched after this list):
Ask my local LLM: "What are the main features of transformers in machine learning?"
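Behind a prompt like that last one, the bridge issues a chat-completion request against LM Studio's OpenAI-compatible endpoint. A minimal sketch of such a call (the helper and model name are illustrative; LM Studio generally answers with whichever model is loaded):

```python
import httpx

LMSTUDIO_URL = "http://127.0.0.1:1234/v1"  # default host/port; see the .env settings below

def chat(prompt, model="local-model"):
    """Send a single-turn chat completion to LM Studio and return the reply text."""
    response = httpx.post(
        f"{LMSTUDIO_URL}/chat/completions",
        json={
            "model": model,  # LM Studio typically uses whichever model is loaded
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.7,
        },
        timeout=120.0,  # local generation can be slow on large models
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

print(chat("What are the main features of transformers in machine learning?"))
```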
Use the included debugging tool to check your LM Studio connection:
python debug_lmstudio.py
For more detailed tests:
python debug_lmstudio.py --test-chat --verbose
"Cannot connect to LM Studio API"
- Make sure LM Studio is running
- Verify the API server is enabled in LM Studio (Settings > API Server)
- Check that the port (default: 1234) matches what's in your .env file
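If it's still unclear whether the server is reachable, this one-liner queries the API directly from Python (adjust host and port to match your .env; httpx is installed from requirements.txt):

```python
import httpx

# Prints the model list if LM Studio's API server is up; raises on connection failure
print(httpx.get("http://127.0.0.1:1234/v1/models", timeout=5.0).json())
```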
"No models are loaded"
- Open LM Studio and load a model
- Verify the model is running successfully
"MCP package not found"
- Try reinstalling:
pip install "mcp[cli]" httpx python-dotenv
- Make sure you're using Python 3.8 or later
"Claude can't find the bridge"
- Check Claude Desktop configuration
- Make sure the path to run_server.sh or run_server.bat is correct and absolute
- On macOS/Linux, verify the server script is executable:
chmod +x run_server.sh
You can customize the bridge's behavior by creating a .env file with these settings:
LMSTUDIO_HOST=127.0.0.1
LMSTUDIO_PORT=1234
DEBUG=false
Set DEBUG=true to enable verbose logging for troubleshooting.
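For reference, python-dotenv (installed from requirements.txt) is the usual way to read these values; a sketch of how the bridge might consume them, using the defaults documented above:

```python
import os
from dotenv import load_dotenv

load_dotenv()  # picks up a .env file from the working directory, if present

LMSTUDIO_HOST = os.getenv("LMSTUDIO_HOST", "127.0.0.1")
LMSTUDIO_PORT = os.getenv("LMSTUDIO_PORT", "1234")
DEBUG = os.getenv("DEBUG", "false").lower() == "true"

BASE_URL = f"http://{LMSTUDIO_HOST}:{LMSTUDIO_PORT}/v1"
```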
License: MIT