AI Python Lab - OpenRouter Client

A simple and easy-to-use Python client for the OpenRouter API that provides access to various AI models.

Installation

pip install ai-python-lab

Quick Start

Basic Usage

from ai_python_lab import OpenRouterClient

# Initialize client
client = OpenRouterClient()

# Simple chat
response = client.simple_chat("What is the meaning of life?")
print(response)

# Chat with conversation history
messages = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing well, thank you! How can I help you today?"},
    {"role": "user", "content": "Can you explain quantum computing?"}
]

response = client.chat(messages)
print(response)

Using Different Models

# Use a specific model
response = client.simple_chat(
    "Explain machine learning in simple terms",
    model="anthropic/claude-3-haiku"
)

# Set default model for the client
client = OpenRouterClient(default_model="openai/gpt-4")

Quick Functions

For simple one-off requests:

from ai_python_lab import quick_chat, ask_ai

# Quick chat function
response = quick_chat("Tell me a joke")
print(response)

# Even simpler alias
response = ask_ai("What's the weather like?")
print(response)

Error Handling

from ai_python_lab import OpenRouterClient
from ai_python_lab.exceptions import AuthenticationError, APIError, RateLimitError

try:
    client = OpenRouterClient()
    response = client.simple_chat("Hello")
except AuthenticationError as e:
    print(f"Authentication failed: {e}")
except RateLimitError as e:
    print(f"Rate limit exceeded: {e}")
except APIError as e:
    print(f"API error: {e}")

Getting Available Models

# Get list of available models
models = client.get_models()
for model in models:
    print(f"Model: {model['id']}")
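Since each entry returned by get_models() carries an 'id' of the form provider/model-name (as in the examples above), the list can be grouped by provider. This is a hedged sketch: group_by_provider is a hypothetical helper, not part of ai-python-lab, and the sample list stands in for a real get_models() response.

```python
# Group model IDs by provider prefix, e.g. "anthropic/claude-3-haiku" -> "anthropic".
def group_by_provider(models):
    groups = {}
    for model in models:
        provider = model["id"].split("/", 1)[0]
        groups.setdefault(provider, []).append(model["id"])
    return groups

# Sample data standing in for the list returned by client.get_models().
sample = [
    {"id": "anthropic/claude-3-haiku"},
    {"id": "openai/gpt-4"},
    {"id": "openai/gpt-3.5-turbo"},
]
print(group_by_provider(sample))
# {'anthropic': ['anthropic/claude-3-haiku'], 'openai': ['openai/gpt-4', 'openai/gpt-3.5-turbo']}
```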

API Reference

OpenRouterClient

Main client class for interacting with OpenRouter API.

Methods

  • chat(messages, model=None, **kwargs) - Send chat completion request
  • simple_chat(message, model=None, **kwargs) - Send simple message
  • get_models() - Get available models

Parameters

  • messages: List of message dictionaries with 'role' and 'content'
  • model: Model to use (optional, uses default if not specified)
  • temperature: Sampling temperature (0.0 to 2.0)
  • max_tokens: Maximum tokens to generate
  • top_p: Nucleus sampling parameter
  • frequency_penalty: Frequency penalty (-2.0 to 2.0)
  • presence_penalty: Presence penalty (-2.0 to 2.0)
  • stop: Stop sequences
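The documented ranges can be checked client-side before a request is sent, so an out-of-range value fails fast instead of round-tripping to the API. This is only a sketch: validate_params is a hypothetical helper based on the ranges listed above, not part of ai-python-lab.

```python
# Client-side sanity check for the documented parameter ranges.
# validate_params is a hypothetical helper, not part of ai-python-lab.
def validate_params(temperature=1.0, frequency_penalty=0.0, presence_penalty=0.0):
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0.0 and 2.0")
    for name, value in [("frequency_penalty", frequency_penalty),
                        ("presence_penalty", presence_penalty)]:
        if not -2.0 <= value <= 2.0:
            raise ValueError(f"{name} must be between -2.0 and 2.0")
    return True

print(validate_params(temperature=0.7))  # True
```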

Quick Functions

  • quick_chat(message, api_key=None, model="openrouter/cypher-alpha:free", **kwargs)
  • ask_ai(message, **kwargs) - Alias for quick_chat

Exceptions

  • OpenRouterError - Base exception
  • APIError - API-related errors
  • AuthenticationError - Authentication failures
  • RateLimitError - Rate limit exceeded
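A common pattern around RateLimitError is retrying with exponential backoff. The helper below is a generic sketch, not part of ai-python-lab; FlakyError stands in for the library's RateLimitError so the example runs without network access.

```python
import time

def retry_with_backoff(func, retry_on, max_attempts=3, base_delay=1.0):
    """Call func(), retrying on the given exception type with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return func()
        except retry_on:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Stand-in for RateLimitError so the sketch runs standalone.
class FlakyError(Exception):
    pass

calls = {"count": 0}
def flaky():
    calls["count"] += 1
    if calls["count"] < 3:
        raise FlakyError("rate limited")
    return "ok"

print(retry_with_backoff(flaky, FlakyError, base_delay=0.01))  # ok
```

With the real client you would pass RateLimitError as retry_on and wrap the call, e.g. lambda: client.simple_chat("Hello").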

Examples

Basic Chat Bot

from ai_python_lab import OpenRouterClient

client = OpenRouterClient()

while True:
    user_input = input("You: ")
    if user_input.lower() in ['quit', 'exit']:
        break
    
    response = client.simple_chat(user_input)
    print(f"AI: {response}")

Multi-turn Conversation

from ai_python_lab import OpenRouterClient

client = OpenRouterClient()
conversation = []

while True:
    user_input = input("You: ")
    if user_input.lower() in ['quit', 'exit']:
        break
    
    # Add user message to conversation
    conversation.append({"role": "user", "content": user_input})
    
    # Get AI response
    response = client.chat(conversation)
    print(f"AI: {response}")
    
    # Add AI response to conversation
    conversation.append({"role": "assistant", "content": response})
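Note that the conversation list above grows without bound, so long sessions will eventually send very large requests. One simple mitigation is to keep only the most recent messages; trim_history is a hypothetical helper sketched here, not part of ai-python-lab.

```python
def trim_history(conversation, max_messages=10):
    """Keep only the most recent messages so request size stays bounded."""
    return conversation[-max_messages:]

# Simulate a long conversation of 25 user messages.
history = [{"role": "user", "content": f"message {i}"} for i in range(25)]
trimmed = trim_history(history, max_messages=10)
print(len(trimmed))  # 10
print(trimmed[-1]["content"])  # message 24
```

In the loop above, you would call client.chat(trim_history(conversation)) instead of client.chat(conversation).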

Using with Different Models

from ai_python_lab import OpenRouterClient

client = OpenRouterClient()

# Try different models
models = [
    "openrouter/cypher-alpha:free",
    "anthropic/claude-3-haiku",
    "openai/gpt-3.5-turbo"
]

question = "What is artificial intelligence?"

for model in models:
    try:
        response = client.simple_chat(question, model=model)
        print(f"\n{model}:")
        print(response)
    except Exception as e:
        print(f"Error with {model}: {e}")

Requirements

  • Python 3.7+
  • openai >= 1.0.0
  • requests >= 2.25.1

License

MIT License

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Support

If you encounter any issues or have questions, please open an issue on GitHub.
