GitHub - eliasjudin/claude-code-gemini: Run Claude Code on Gemini models
Let's create the claude-code-gemini project. It adapts the claude-code-openai proxy to use Google's Gemini models via the google-genai SDK.

We'll reuse the structure and much of the logic, focusing on replacing the litellm calls with google-genai calls and implementing the necessary request/response translations.

```
claude-code-gemini/
├── README.md
├── pyproject.toml
├── server.py
└── tests.py
```

File: `pyproject.toml`

```toml
[project]
name = "claude-code-gemini"
version = "0.1.0"
description = "Proxy that translates between Anthropic API and Google Gemini API"
readme = "README.md"
requires-python = ">=3.10"
dependencies = [
    "fastapi[standard]>=0.115.11",
    "uvicorn>=0.34.0",
    "httpx>=0.25.0",
    "pydantic>=2.0.0",
    "google-generativeai>=0.7.0", # Changed from litellm
    "python-dotenv>=1.0.0",
    "pillow>=9.0.0" # google-genai uses this for image handling, good to include
]

[tool.uvicorn]
host = "0.0.0.0"
port = 8083 # Changed port to avoid conflict with OpenAI version
log_level = "info"
reload = true
```
  • Dependencies: Replaced litellm with google-generativeai. Added pillow as it's often used by google-genai.
  • Port: Changed the default port to 8083 to avoid conflicts if running both proxies.

File: `README.md`

# Claude Code but with Google Gemini Models 🧙‍♂️🔄♊️

**Use Claude Code with Google Gemini Models** 🤝

A proxy server that lets you use the `claude-code` CLI with Google's Gemini models like `gemini-1.5-pro-latest` and `gemini-1.5-flash-latest`. 🌉

## Quick Start ⚡

### Prerequisites

-   Google API Key (obtainable from [Google AI Studio](https://aistudio.google.com/app/apikey)) 🔑

### Setup 🛠️

1.  **Clone this repository**:
    ```bash
    git clone https://github.com/eliasjudin/claude-code-gemini.git
    cd claude-code-gemini
    ```

2.  **Install UV** (if you don't have it):
    ```bash
     curl -LsSf https://astral.sh/uv/install.sh | sh
     source $HOME/.cargo/env # Or restart your shell
    ```

3.  **Create a virtual environment and install dependencies**:
    ```bash
    uv venv
    source .venv/bin/activate # Or '.\.venv\Scripts\activate' on Windows
    uv pip install -e .
    ```
    *(Note: dependencies are declared in `pyproject.toml`, so there is no `requirements.txt`; use `uv pip install .`, or `uv pip install -e .` for editable mode.)*

4.  **Configure your API key**:
    Create a `.env` file in the project root with your Google API Key:
    ```dotenv
    GOOGLE_API_KEY=your-google-api-key

    # Optional: customize which Gemini models are used
    # BIG_MODEL=gemini-1.5-pro-latest
    # SMALL_MODEL=gemini-1.5-flash-latest
    ```

5.  **Start the proxy server**:
    ```bash
    uvicorn server:app --host 0.0.0.0 --port 8083 --reload
    ```
    *(The server will run on `http://0.0.0.0:8083`)*

### Using with Claude Code 🎮

1.  **Install Claude Code** (if you haven't already):
    ```bash
    npm install -g @anthropic-ai/claude-code
    ```

2.  **Connect to your proxy**:
    ```bash
    ANTHROPIC_BASE_URL=http://localhost:8083 claude
    ```
    *(Make sure the port matches the one the server is running on)*

3.  **That's it!** Your Claude Code client will now use Google Gemini models through the proxy. 🎯

## Model Mapping 🗺️

The proxy automatically maps Claude models to Gemini models:

| Claude Model             | Gemini Model (Default)       |
| :----------------------- | :--------------------------- |
| `claude-3-haiku-...`     | `gemini-1.5-flash-latest`    |
| `claude-3-sonnet-...`    | `gemini-1.5-pro-latest`      |
| `claude-3-opus-...`      | `gemini-1.5-pro-latest`      |
| Any other Claude model | `gemini-1.5-pro-latest`      |

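The default mapping above can be sketched as a simple substring check on the incoming model name. This is an illustrative sketch, not the proxy's verbatim code; the `map_model` helper name is hypothetical:

```python
import os

# Defaults mirror the table above and are overridable via environment variables.
BIG_MODEL = os.environ.get("BIG_MODEL", "gemini-1.5-pro-latest")
SMALL_MODEL = os.environ.get("SMALL_MODEL", "gemini-1.5-flash-latest")

def map_model(claude_model: str) -> str:
    """Map an incoming Claude model name to a Gemini model name."""
    if "haiku" in claude_model:
        return SMALL_MODEL
    # Sonnet, Opus, and any unrecognized model fall through to the big model.
    return BIG_MODEL
```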
### Customizing Model Mapping

You can customize which Gemini models are used via environment variables in your `.env` file:

-   `BIG_MODEL`: The Gemini model to use for Claude Sonnet/Opus models (default: "gemini-1.5-pro-latest")
-   `SMALL_MODEL`: The Gemini model to use for Claude Haiku models (default: "gemini-1.5-flash-latest")

Example `.env`:
```dotenv
GOOGLE_API_KEY=your-google-api-key
BIG_MODEL=gemini-1.5-pro-latest
SMALL_MODEL=gemini-1.5-flash-latest
```

## How It Works 🧩

This proxy works by:

1.  Receiving requests in Anthropic's API format (`/v1/messages`) 📥
2.  Translating the requests to the format expected by the `google-genai` SDK 🔄
3.  Sending the translated request to the Google Gemini API using `google-genai` 📤
4.  Converting the Gemini API response back to Anthropic format 🔄
5.  Returning the formatted response (or stream) to the `claude-code` client ✅

The proxy handles both standard and streaming responses, maintaining compatibility with the `claude-code` client. 🌊
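The message translation in step 2 can be sketched roughly as follows. This is a simplified illustration that handles only text blocks; tool use, images, and system prompts need extra handling, and the `anthropic_to_gemini` helper name is hypothetical:

```python
def anthropic_to_gemini(messages: list[dict]) -> list[dict]:
    """Convert Anthropic-style messages into Gemini 'contents' entries.

    Anthropic uses roles 'user'/'assistant'; Gemini expects 'user'/'model'.
    Anthropic content may be a plain string or a list of typed blocks.
    """
    contents = []
    for msg in messages:
        role = "model" if msg["role"] == "assistant" else "user"
        content = msg["content"]
        if isinstance(content, str):
            parts = [{"text": content}]
        else:
            # Keep only text blocks in this sketch; other block types are dropped.
            parts = [{"text": b["text"]} for b in content if b.get("type") == "text"]
        contents.append({"role": role, "parts": parts})
    return contents
```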

## Limitations

-   **API Differences**: While the proxy aims for compatibility, subtle differences between Anthropic's Claude and Google's Gemini APIs might lead to variations in behavior or capabilities (e.g., exact tool use implementation, system prompt handling nuances, image input support details).
-   **Error Mapping**: Errors from the Gemini API are translated, but might not perfectly match Anthropic's error types.
-   **Token Counting**: Token counts are based on Gemini's counting and might differ from Claude's.
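As an example of the error-mapping caveat, a minimal translation layer might shape upstream failures into Anthropic-style error bodies like this. The helper is hypothetical, though the error type names follow Anthropic's documented categories:

```python
def to_anthropic_error(status_code: int, message: str) -> dict:
    """Shape an upstream Gemini error into an Anthropic-style error body."""
    types = {
        400: "invalid_request_error",
        401: "authentication_error",
        403: "permission_error",
        404: "not_found_error",
        429: "rate_limit_error",
    }
    # Anything unmapped (including 5xx) is reported as a generic api_error.
    return {
        "type": "error",
        "error": {"type": types.get(status_code, "api_error"), "message": message},
    }
```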

## Contributing 🤝

Contributions are welcome! Please feel free to submit a Pull Request. 🎁
